Membership Inference Attacks Fueled by Few-Shot Learning to Detect Privacy Leakage and Address Data Integrity
Deep learning models have an intrinsic privacy issue: they memorize parts of their training data, which creates a risk of privacy leakage. Membership inference attacks (MIAs) exploit this leakage to extract confidential information about the data used for training. They can be repurposed as...
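The abstract describes how MIAs exploit memorization to decide whether a given record was part of a model's training set. The snippet below is a minimal, illustrative sketch of a simple confidence-threshold MIA in Python; the dataset, model, and threshold are assumptions chosen for illustration and do not reproduce the few-shot method proposed in the article.

```python
# Illustrative confidence-threshold membership inference attack (not the
# article's few-shot method). All model/dataset/threshold choices are
# assumptions made for this sketch.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Train a target model that tends to (over)fit its training split.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)
target = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=500, random_state=0)
target.fit(X_train, y_train)

def infer_membership(model, X_cand, y_cand, threshold=0.9):
    """Guess 'member' when the model's confidence on the true label is high,
    exploiting the tendency of memorized training points to receive higher
    confidence than unseen points."""
    probs = model.predict_proba(X_cand)
    conf_true_label = probs[np.arange(len(y_cand)), y_cand]
    return conf_true_label >= threshold

# Members should be flagged more often than non-members if the model leaks.
members_flagged = infer_membership(target, X_train, y_train).mean()
nonmembers_flagged = infer_membership(target, X_test, y_test).mean()
print(f"Flagged as members (actual members):     {members_flagged:.2f}")
print(f"Flagged as members (actual non-members): {nonmembers_flagged:.2f}")
```

The gap between the two flag rates is one crude indicator of privacy leakage; the article's contribution is a more data-efficient, few-shot variant of this general idea.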
| Main Authors: | Daniel Jiménez-López, Nuria Rodríguez-Barroso, M. Victoria Luzón, Javier Del Ser, Francisco Herrera |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-05-01 |
| Series: | Machine Learning and Knowledge Extraction |
| Online Access: | https://www.mdpi.com/2504-4990/7/2/43 |
Similar Items
- A survey on membership inference attacks and defenses in machine learning
  by: Jun Niu, et al.
  Published: (2024-09-01)
- Membership inference attacks against transfer learning for generalized model
  by: Jinyin CHEN, et al.
  Published: (2021-10-01)
- Few-shot Named Entity Recognition via encoder and class intervention
  by: Long Ding, et al.
  Published: (2024-01-01)
- PCA-based membership inference attack for machine learning models
  by: Changgen PENG, et al.
  Published: (2022-01-01)
- Membership inference attack and defense method in federated learning based on GAN
  by: Jiale ZHANG, et al.
  Published: (2023-05-01)