Membership Inference Attacks Fueled by Few-Shot Learning to Detect Privacy Leakage and Address Data Integrity

Deep learning models have an intrinsic privacy issue: they memorize parts of their training data, creating a privacy leakage. Membership inference attacks (MIAs) exploit this to extract confidential information about the data used for training. They can be repurposed as...

Bibliographic Details
Main Authors: Daniel Jiménez-López, Nuria Rodríguez-Barroso, M. Victoria Luzón, Javier Del Ser, Francisco Herrera
Format: Article
Language: English
Published: MDPI AG 2025-05-01
Series: Machine Learning and Knowledge Extraction
Online Access: https://www.mdpi.com/2504-4990/7/2/43