Data Poisoning Attack on Black-Box Neural Machine Translation to Truncate Translation
Neural machine translation (NMT) systems have achieved outstanding performance and have been widely deployed in the real world. However, the under-translation problem caused by the distribution of high-translation-entropy words in source sentences still exists, and can be aggravated by poisoning attacks…
| Main Authors: | Lingfang Li, Weijian Hu, Mingxing Luo |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG (2024-12-01) |
| Series: | Entropy |
| Online Access: | https://www.mdpi.com/1099-4300/26/12/1081 |
Similar Items
- A Backdoor Approach With Inverted Labels Using Dirty Label-Flipping Attacks
  by: Orson Mengara
  Published: (2025-01-01)
- A Backdoor Attack Against LSTM-Based Text Classification Systems
  by: Jiazhu Dai, et al.
  Published: (2019-01-01)
- FLARE: A Backdoor Attack to Federated Learning with Refined Evasion
  by: Qingya Wang, et al.
  Published: (2024-11-01)
- A survey of backdoor attacks and defences: From deep neural networks to large language models
  by: Ling-Xin Jin, et al.
  Published: (2025-09-01)
- Multi-Targeted Textual Backdoor Attack: Model-Specific Misrecognition via Trigger Position and Word Choice
  by: Taehwa Lee, et al.
  Published: (2025-01-01)