BalancedSecAgg: Toward Fast Secure Aggregation for Federated Learning
Federated learning is a promising collaborative learning system from the perspective of training data privacy preservation; however, there is a risk of privacy leakage from individual local models of users. Secure aggregation protocols based on local model masking are a promising solution to prevent privacy leakage.
| Main Authors: | Hiroki Masuda, Kentaro Kita, Yuki Koizumi, Junji Takemasa, Toru Hasegawa |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Access |
| Subjects: | Dropout tolerance; federated learning; privacy preservation; secure aggregation |
| Online Access: | https://ieeexplore.ieee.org/document/10744018/ |
| author | Hiroki Masuda; Kentaro Kita; Yuki Koizumi; Junji Takemasa; Toru Hasegawa |
|---|---|
| collection | DOAJ |
| description | Federated learning is a promising collaborative learning system from the perspective of training data privacy preservation; however, there is a risk of privacy leakage from individual local models of users. Secure aggregation protocols based on local model masking are a promising solution to prevent privacy leakage. Existing secure aggregation protocols sacrifice either computation or communication costs to tolerate user dropouts. A naive secure aggregation protocol achieves a small communication cost by secretly sharing random seeds instead of random masks. However, it requires that a server incurs a substantial computation cost to reconstruct the random masks from the random seeds of dropout users. To avoid such a reconstruction, a state-of-the-art secure aggregation protocol secretly shares random masks. Although this approach avoids the computation cost of mask reconstruction, it incurs a large communication cost due to secretly sharing random masks. In this paper, we design a secure aggregation protocol to mitigate the tradeoff between the computation cost and the communication cost by complementing both types of secure aggregation protocols. In our experiments, our protocol is up to 11.41 times faster than the existing protocols while achieving the same level of privacy preservation and dropout tolerance. |
| institution | Kabale University |
| issn | 2169-3536 |
| doi | 10.1109/ACCESS.2024.3491779 |
| volume | 12 |
| pages | 165265-165279 |
| author ORCIDs | Hiroki Masuda: 0009-0002-3948-3671; Kentaro Kita: 0000-0002-7982-3530; Yuki Koizumi: 0000-0002-9254-6558; Junji Takemasa: 0000-0002-5361-1855; Toru Hasegawa: 0000-0002-8925-1732 |
| affiliations | Graduate School of Information Science and Technology, Osaka University, Osaka, Japan (Masuda, Kita, Koizumi, Takemasa); Faculty of Materials for Energy, Shimane University, Matsue, Shimane, Japan (Hasegawa) |
| title | BalancedSecAgg: Toward Fast Secure Aggregation for Federated Learning |
| topic | Dropout tolerance; federated learning; privacy preservation; secure aggregation |
| url | https://ieeexplore.ieee.org/document/10744018/ |
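The tradeoff described in the abstract, sharing short random seeds (cheap to send, but the server must re-expand dropped users' seeds into full-length masks) versus sharing the masks themselves, can be illustrated with a toy additive-masking sketch. This is a minimal illustration only, not the paper's actual BalancedSecAgg protocol; the PRG, ring size, and all names below are assumptions for demonstration.

```python
import random

DIM = 4        # toy model dimension
MOD = 2**16    # arithmetic over a finite ring, as in masking-based aggregation

def prg(seed, dim=DIM):
    """Expand a short random seed into a full-length mask (stand-in PRG)."""
    rng = random.Random(seed)
    return [rng.randrange(MOD) for _ in range(dim)]

def mask(model, seed):
    """User side: hide the local model under a one-time additive mask."""
    return [(x + m) % MOD for x, m in zip(model, prg(seed))]

# Three users with toy integer models and secret seeds.
models = {"u1": [1, 2, 3, 4], "u2": [5, 6, 7, 8], "u3": [9, 10, 11, 12]}
seeds = {"u1": 11, "u2": 22, "u3": 33}
masked = {u: mask(models[u], seeds[u]) for u in models}

# Server sums the masked models it received.
total = [sum(col) % MOD for col in zip(*(masked[u] for u in models))]

# To unmask, the server re-expands each seed (in the seed-based design,
# seeds of dropped users are recovered via secret sharing) back into a
# full-length mask and subtracts it. This PRG expansion per dropped user
# is the computation cost the abstract refers to; sharing full masks
# avoids it but multiplies the communication cost by the model dimension.
for u in models:
    total = [(t - mi) % MOD for t, mi in zip(total, prg(seeds[u]))]

print(total)  # sum of the plain models: [15, 18, 21, 24]
```

Note that only seed transmission is sketched here; a real protocol additionally secret-shares the seeds (or masks) among users so that dropouts can be tolerated without revealing any individual model.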