Privacy preserving federated learning with convolutional variational bottlenecks

Abstract: Gradient Inversion (GI) attacks are a ubiquitous threat in Federated Learning, as they exploit gradient leakage to reconstruct supposedly private training data. Recent work has proposed to prevent gradient leakage without loss of model utility by incorporating a PRivacy EnhanCing mODulE (PRECODE) …
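For intuition, below is a minimal PyTorch sketch of what a convolutional variational bottleneck of this kind could look like: feature maps are re-encoded as a Gaussian distribution and a stochastic sample is passed downstream, so gradients no longer determine the input uniquely. The class name, layer sizes, and placement in the network are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class ConvVariationalBottleneck(nn.Module):
    """Illustrative variational bottleneck over convolutional feature maps."""
    def __init__(self, channels: int):
        super().__init__()
        # Two 1x1 convolutions parameterize a Gaussian over the feature maps.
        self.mu = nn.Conv2d(channels, channels, kernel_size=1)
        self.log_var = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mu, log_var = self.mu(x), self.log_var(x)
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
        eps = torch.randn_like(mu)
        return mu + torch.exp(0.5 * log_var) * eps

# Hypothetical usage: insert the bottleneck between a feature extractor
# and a classifier head, here for 32x32 RGB inputs and 10 classes.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    ConvVariationalBottleneck(16),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),
)
logits = model(torch.randn(1, 3, 32, 32))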

Bibliographic Details
Main Authors: Daniel Scheliga, Patrick Mäder, Marco Seeland
Format: Article
Language: English
Published: SpringerOpen 2025-05-01
Series: Cybersecurity
Online Access: https://doi.org/10.1186/s42400-024-00295-9