A Backdoor Approach With Inverted Labels Using Dirty Label-Flipping Attacks
Audio-based machine learning systems frequently rely on public or third-party data, which may be unreliable. This exposes deep neural network (DNN) models trained on such data to data poisoning attacks. In this type of attack, adversaries can train the DNN model on poisoned data, potential...
| Main Author: | Orson Mengara |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/10483076/ |
Similar Items
- Effective defense against physically embedded backdoor attacks via clustering-based filtering
  by: Mohammed Kutbi
  Published: (2025-04-01)
- FLARE: A Backdoor Attack to Federated Learning with Refined Evasion
  by: Qingya Wang, et al.
  Published: (2024-11-01)
- A4FL: Federated Adversarial Defense via Adversarial Training and Pruning Against Backdoor Attack
  by: Saeed-Uz-Zaman, et al.
  Published: (2025-01-01)
- A Backdoor Attack Against LSTM-Based Text Classification Systems
  by: Jiazhu Dai, et al.
  Published: (2019-01-01)
- Improved Distributed Backdoor Attacks in Federated Learning by Density-Adaptive Data Poisoning and Projection-Based Gradient Updating
  by: Jian Wang, et al.
  Published: (2025-01-01)