Privacy-preserving federated learning framework with dynamic weight aggregation

There are two problems with privacy-preserving federated learning frameworks under an unreliable central server. ① A fixed weight, typically the size of each participant’s dataset, is used when aggregating distributed learning models on the central server. However, different participants have non-ind...


Bibliographic Details
Main Authors: Zuobin YING, Yichen FANG, Yiwen ZHANG
Format: Article
Language:English
Published: POSTS&TELECOM PRESS Co., LTD 2022-10-01
Series:网络与信息安全学报
Subjects:
Online Access:http://www.cjnis.com.cn/thesisDetails#10.11959/j.issn.2096-109x.2022069
_version_ 1841529690018283520
author Zuobin YING
Yichen FANG
Yiwen ZHANG
author_facet Zuobin YING
Yichen FANG
Yiwen ZHANG
author_sort Zuobin YING
collection DOAJ
description There are two problems with privacy-preserving federated learning frameworks under an unreliable central server. ① A fixed weight, typically the size of each participant’s dataset, is used when aggregating distributed learning models on the central server. However, different participants hold non-independent and identically distributed (non-IID) data, so fixed aggregation weights prevent the global model from achieving optimal utility. ② Existing frameworks are built on the assumption that the central server is honest, and do not consider the leakage of participants’ data privacy caused by an untrustworthy central server. To address these issues, a privacy-preserving federated learning algorithm with dynamic weight aggregation under a non-trusted central server, DP-DFL, was proposed based on the popular DP-FedAvg algorithm. The proposed algorithm learns the model aggregation weights in federated learning directly from the data of different participants, and is therefore applicable to non-IID data environments. In addition, the privacy of model parameters is protected by adding noise in the local model privacy-protection phase, which satisfies the untrustworthy-central-server setting and thus reduces the risk of privacy leakage when local participants upload model parameters. Experiments on the CIFAR-10 dataset demonstrate that the DP-DFL algorithm not only provides local privacy guarantees but also achieves higher accuracy, with an average improvement of 2.09% over DP-FedAvg models.
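The two mechanisms the description names — noising local model updates before upload so an untrusted server never sees raw parameters, and aggregating with data-driven rather than fixed dataset-size weights — can be sketched as follows. This is a minimal illustration, not the authors' DP-DFL implementation; the clipping bound, noise scale, and softmax-of-scores weighting are all illustrative assumptions.

```python
import numpy as np

def dp_protect(update, clip_norm=1.0, noise_scale=0.1, rng=None):
    """Clip a local model update to a norm bound and add Gaussian noise,
    so the (possibly untrusted) central server never sees the raw update."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_scale * clip_norm, size=update.shape)

def dynamic_aggregate(updates, scores):
    """Aggregate noised updates with weights derived from per-participant
    scores (softmax), instead of fixed dataset-size weights as in FedAvg."""
    w = np.exp(scores - np.max(scores))  # numerically stable softmax
    w /= w.sum()
    return sum(wi * ui for wi, ui in zip(w, updates))
```

In DP-DFL the aggregation weights are learned from the participants' data during training; here the per-participant `scores` simply stand in for whatever quantity that learning produces.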
format Article
id doaj-art-3fcff7065c68435ab6330676e01c25fe
institution Kabale University
issn 2096-109X
language English
publishDate 2022-10-01
publisher POSTS&TELECOM PRESS Co., LTD
record_format Article
series 网络与信息安全学报
spellingShingle Zuobin YING
Yichen FANG
Yiwen ZHANG
Privacy-preserving federated learning framework with dynamic weight aggregation
网络与信息安全学报
federated learning
differential privacy
dynamic aggregation weight
non-independent and identically distributed data
title Privacy-preserving federated learning framework with dynamic weight aggregation
title_full Privacy-preserving federated learning framework with dynamic weight aggregation
title_fullStr Privacy-preserving federated learning framework with dynamic weight aggregation
title_full_unstemmed Privacy-preserving federated learning framework with dynamic weight aggregation
title_short Privacy-preserving federated learning framework with dynamic weight aggregation
title_sort privacy preserving federated learning framework with dynamic weight aggregation
topic federated learning
differential privacy
dynamic aggregation weight
non-independent and identically distributed data
url http://www.cjnis.com.cn/thesisDetails#10.11959/j.issn.2096-109x.2022069
work_keys_str_mv AT zuobinying privacypreservingfederatedlearningframeworkwithdynamicweightaggregation
AT yichenfang privacypreservingfederatedlearningframeworkwithdynamicweightaggregation
AT yiwenzhang privacypreservingfederatedlearningframeworkwithdynamicweightaggregation