Shuffle Model of Differential Privacy: Numerical Composition for Federated Learning

Bibliographic Details
Main Authors: Shaowei Wang, Sufen Zeng, Jin Li, Shaozheng Huang, Yuyang Chen
Format: Article
Language: English
Published: MDPI AG, 2025-02-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/15/3/1595
Description: In decentralized scenarios without fully trusted parties (e.g., in mobile edge computing or IoT environments), the shuffle model has recently emerged as a promising paradigm for differentially private federated learning. Despite many efforts at privacy accounting for federated learning over many sequential rounds in the shuffle model, existing methods fall short in both generality and tightness. For example, they target single-message shuffle protocols (which face intrinsic utility barriers compared to multi-message ones) and are not tight for the commonly used vector randomized response randomizer. To address these issues, we first present a tight total variation characterization of vector randomized response randomizers in the shuffle model, which yields over 20% budget savings. We then unify the representation of single-message and multi-message shuffle protocols and derive their privacy loss distributions (PLDs). Finally, the PLDs are composed via Fourier analysis to obtain the overall privacy loss over many sequential rounds in the shuffle model. Through simulations on federated decision tree building and federated deep learning, we show that our approach saves up to 80% of the privacy budget compared to existing methods.
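To make the PLD-composition idea in the abstract concrete, the following is a minimal sketch, not the paper's method: it uses plain binary randomized response (not the paper's vector randomizer) and direct convolution of the discrete privacy loss distribution (Fourier methods accelerate the same operation). The names `rr_pld`, `compose_pld`, and `delta_at` are illustrative, not from the paper.

```python
import numpy as np

def rr_pld(eps0):
    """PLD of binary eps0-randomized response: loss values and probabilities.

    The privacy loss is +eps0 with probability e^eps0/(1+e^eps0)
    and -eps0 otherwise.
    """
    p = np.exp(eps0) / (1.0 + np.exp(eps0))  # probability of reporting the true bit
    losses = np.array([eps0, -eps0])
    probs = np.array([p, 1.0 - p])
    return losses, probs

def compose_pld(probs, n):
    """n-fold composition: convolve the loss distribution with itself n times."""
    out = np.array([1.0])
    for _ in range(n):
        out = np.convolve(out, probs)
    return out

def delta_at(eps, losses, probs):
    """delta(eps) = sum over losses l > eps of P(l) * (1 - e^(eps - l))."""
    mask = losses > eps
    return float(np.sum(probs[mask] * (1.0 - np.exp(eps - losses[mask]))))

# Example: 10 rounds of eps0 = 0.5 randomized response.
eps0, n = 0.5, 10
_, probs = rr_pld(eps0)
probs_n = compose_pld(probs, n)                 # binomial weights over loss values
losses_n = eps0 * (n - 2.0 * np.arange(n + 1))  # support: n*eps0 down to -n*eps0
print(delta_at(2.0, losses_n, probs_n))         # delta at eps = 2.0 after 10 rounds
```

A useful sanity check: for a single round, `delta_at(0, ...)` equals the total variation distance tanh(eps0/2), and `delta_at(eps0, ...)` is exactly 0, as expected for a pure eps0-DP mechanism.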
ISSN: 2076-3417
Indexed in: DOAJ (record ID doaj-art-9d06ce6f89ae42098e43a7bb290dddcf)
DOI: 10.3390/app15031595
Author Affiliations: all five authors are with the School of Artificial Intelligence, Guangzhou University, Guangzhou 510700, China
Subjects: differential privacy; federated learning; decision trees; privacy amplification; privacy composition