Private Data Protection With Machine Unlearning for Next-Generation Networks
| Main Authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Open Journal of the Communications Society |
| Online Access: | https://ieeexplore.ieee.org/document/10804198/ |
| Summary: | In next-generation networks, distributed clients collaborate to produce an aggregated global model tailored to various vertical applications. This convenience, however, comes at the cost of potential privacy risks, since personal information may be exposed during global model aggregation. In response, the right to be forgotten was introduced, granting individuals the right to withdraw consent for the processing of their personal information. To address this challenge, machine unlearning has been developed, enabling models to erase any memory of private data. Previous approaches, such as retraining or incremental learning, often require additional storage or are difficult to implement in neural networks. Our method instead applies a small perturbation to the model's weights, guiding it iteratively toward a model trained only on the remaining data subset until the contribution of the unlearned data is completely removed. In our approach, machine unlearning is conceptualized as a process that iteratively adjusts the initial model to remove any trace of the forgotten data. Our key contribution is the introduction of a reference model, trained on a subset of the remaining data, which guides the target unlearning model toward successfully forgetting the data. Additionally, we discuss two evaluation methods, membership inference and backdoor evaluation, that verify whether the private data has truly been forgotten by the target unlearning model. Through experiments on five datasets, we demonstrate the effectiveness of our approach, which is $15\times$ faster than the traditional retraining method. |
| ISSN: | 2644-125X |
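
The iterative adjustment described in the summary can be pictured as repeatedly interpolating the target model's weights toward the reference model trained on the retained data, stopping once a membership-inference check on the forgotten data drops to chance. The sketch below is illustrative only, assuming a PyTorch setup; `unlearn_step`, `membership_inference_accuracy`, and all parameter values are hypothetical names chosen for this sketch, not the authors' published code.

```python
# Illustrative sketch (hypothetical, PyTorch assumed): nudge the target
# model's weights toward a reference model trained only on the retained
# data, one small perturbation per step, until a membership-inference
# proxy on the forgotten data drops to chance level.
import torch

@torch.no_grad()
def membership_inference_accuracy(model, forget_loader, threshold=0.9):
    """Toy confidence-based membership test: the fraction of forgotten
    examples on which the model remains highly confident (a stand-in for
    the paper's membership-inference evaluation)."""
    model.eval()
    hits, total = 0, 0
    for x, y in forget_loader:
        probs = torch.softmax(model(x), dim=1)
        conf = probs.gather(1, y.unsqueeze(1)).squeeze(1)
        hits += (conf > threshold).sum().item()
        total += y.numel()
    return hits / max(total, 1)

@torch.no_grad()
def unlearn_step(target_model, reference_model, step_size=0.05):
    """Move each target weight a small fraction of the way toward the
    corresponding reference weight (an assumed interpolation scheme)."""
    for p_t, p_r in zip(target_model.parameters(),
                        reference_model.parameters()):
        p_t.add_(step_size * (p_r - p_t))

def unlearn(target_model, reference_model, forget_loader,
            max_steps=100, chance_level=0.5):
    """Apply small perturbations until the forgotten data looks like
    non-member data to the membership test."""
    for _ in range(max_steps):
        unlearn_step(target_model, reference_model)
        if membership_inference_accuracy(target_model,
                                         forget_loader) <= chance_level:
            break
    return target_model
```

In this reading, the reference model acts as an attractor: each step pulls the target model toward weights consistent with training on the remaining data alone, so only a few cheap interpolation steps are needed rather than a full retraining pass, which would be consistent with the reported speedup.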