Batch-in-Batch: a new adversarial training framework for initial perturbation and sample selection

Bibliographic Details
Main Authors: Yinting Wu, Pai Peng, Bo Cai, Le Li
Format: Article
Language: English
Published: Springer 2025-01-01
Series: Complex & Intelligent Systems
Subjects: Adversarial training, Sample selection, Computer vision, Robustness
Online Access: https://doi.org/10.1007/s40747-024-01704-9
collection DOAJ
description Abstract Adversarial training methods commonly generate initial perturbations that are independent across epochs, and obtain subsequent adversarial training samples without selection. Consequently, such methods may limit thorough probing of the vicinity around the original samples and possibly lead to unnecessary or even detrimental training. In this work, a simple yet effective training framework, called Batch-in-Batch (BB), is proposed to refine adversarial training from these two perspectives. The framework jointly generates m sets of initial perturbations for each original sample, seeking to provide high-quality adversarial samples by fully exploring the vicinity. Then, it incorporates a sample selection procedure to prioritize training on higher-quality adversarial samples. Through extensive experiments on three benchmark datasets with two network architectures in both single-step (Noise-Fast Gradient Sign Method, N-FGSM) and multi-step (Projected Gradient Descent, PGD) scenarios, models trained within the BB framework consistently demonstrate superior adversarial accuracy across various adversarial settings, notably achieving an improvement of more than 13% on the SVHN dataset with an attack radius of 8/255 compared to N-FGSM. The analysis further demonstrates the efficiency and mechanisms of the proposed initial perturbation design and sample selection strategies. Finally, results concerning training time indicate that the BB framework is computationally efficient, even with a relatively large m.
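The core mechanics described in the abstract can be sketched for a single sample: draw m random initial perturbations inside the attack ball, take one FGSM-style step from each, and keep only the candidate with the highest loss. The sketch below is a minimal illustration, not the authors' implementation: the linear model, the finite-difference gradient, and all function names (`bb_adversarial`, `grad_loss`) are hypothetical stand-ins; a real setup would use a deep network with backpropagated gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a classifier: a fixed linear model with
# softmax cross-entropy loss, used only to make the sketch runnable.
W = rng.normal(size=(4, 3))

def loss(x, y):
    # Cross-entropy loss of one sample under the toy linear model.
    logits = x @ W
    logits = logits - logits.max()           # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(p[y] + 1e-12))

def grad_loss(x, y, h=1e-5):
    # Finite-difference gradient of the loss w.r.t. the input; a real
    # implementation would use autograd/backprop instead.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (loss(x + e, y) - loss(x - e, y)) / (2 * h)
    return g

def bb_adversarial(x, y, eps=0.1, m=4):
    """Batch-in-Batch sketch for one sample: m random initial
    perturbations in the L-inf ball of radius eps, one sign step from
    each, then keep the highest-loss candidate (the selection idea)."""
    best_x, best_loss = None, -np.inf
    for _ in range(m):
        delta = rng.uniform(-eps, eps, size=x.shape)      # initial perturbation
        x_adv = x + delta + eps * np.sign(grad_loss(x + delta, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)          # project back into ball
        cand_loss = loss(x_adv, y)
        if cand_loss > best_loss:
            best_x, best_loss = x_adv, cand_loss
    return best_x, best_loss
```

In an actual training loop, this candidate generation would run batched on the GPU, and the model would then be updated only on the selected (hardest) adversarial samples, which is what keeps the per-epoch cost modest even for a relatively large m.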
id doaj-art-d65f3b6dab824d84b69927a02abb3a79
institution Kabale University
issn 2199-4536, 2198-6053
Author affiliations:
Yinting Wu: Key Lab NAA-MOE, School of Mathematics and Statistics, Central China Normal University
Pai Peng: School of Mathematics and Computer Science, Jianghan University
Bo Cai: Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, and School of Cyber Science and Engineering, Wuhan University
Le Li: Key Lab NAA-MOE, School of Mathematics and Statistics, Central China Normal University
topic Adversarial training
Sample selection
Computer vision
Robustness