Few-Shot Intelligent Anti-Jamming Access with Fast Convergence: A GAN-Enhanced Deep Reinforcement Learning Approach
To address the small-sample training bottleneck and inadequate convergence efficiency of Deep Reinforcement Learning (DRL)-based communication anti-jamming methods in complex electromagnetic environments, this paper proposes a Generative Adversarial Network-enhanced Deep Q-Network (GA-DQN) anti-jamming method.
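The abstract's screening step (keeping only GAN-generated time–frequency samples that correlate strongly with real jamming observations, via the Pearson correlation coefficient) can be sketched as below. This is a minimal illustration, not the authors' implementation: the function name, the 8×8 grid size, and the 0.9 threshold are all assumptions.

```python
import numpy as np

def screen_samples(generated, reference, threshold=0.9):
    """Keep generated time-frequency samples whose Pearson correlation
    with a real reference sample meets `threshold`.
    Names and the 0.9 cutoff are illustrative assumptions."""
    ref = reference.ravel()
    kept = []
    for s in generated:
        # Pearson correlation coefficient between the two flattened grids
        r = np.corrcoef(s.ravel(), ref)[0, 1]
        if r >= threshold:
            kept.append(s)
    return kept

# Toy data: 8x8 "time-frequency" grids standing in for jamming spectrograms
rng = np.random.default_rng(0)
real = rng.random((8, 8))
fakes = [real + 0.05 * rng.standard_normal((8, 8)) for _ in range(5)]  # close copies
fakes += [rng.random((8, 8)) for _ in range(5)]  # uncorrelated noise
passed = screen_samples(fakes, real)  # only the close copies survive
```

In this toy run the five lightly perturbed copies correlate near 1 with the reference and pass, while the five independent noise grids do not.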
Saved in:
| Main Authors: | Tianxiao Wang, Yingtao Niu, Zhanyang Zhou |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-08-01 |
| Series: | Applied Sciences |
| Subjects: | wireless communication; anti-jamming; generative adversarial network; deep reinforcement learning; Deep Q-Network |
| Online Access: | https://www.mdpi.com/2076-3417/15/15/8654 |
| _version_ | 1849770557170515968 |
|---|---|
| author | Tianxiao Wang; Yingtao Niu; Zhanyang Zhou |
| author_facet | Tianxiao Wang; Yingtao Niu; Zhanyang Zhou |
| author_sort | Tianxiao Wang |
| collection | DOAJ |
| description | To address the small-sample training bottleneck and inadequate convergence efficiency of Deep Reinforcement Learning (DRL)-based communication anti-jamming methods in complex electromagnetic environments, this paper proposes a Generative Adversarial Network-enhanced Deep Q-Network (GA-DQN) anti-jamming method. The method constructs a Generative Adversarial Network (GAN) to learn the time–frequency distribution characteristics of short-period jamming and to generate high-fidelity mixed samples. Furthermore, it screens qualified samples using the Pearson correlation coefficient to form a sample set, which is input into the DQN network model for pre-training to expand the experience replay buffer, effectively improving the convergence speed and decision accuracy of DQN. Our simulation results show that under periodic jamming, compared with the DQN algorithm, this algorithm significantly reduces the number of interference occurrences in the early communication stage and improves the convergence speed, to a certain extent. Under dynamic jamming and intelligent jamming, the algorithm significantly outperforms the DQN, Proximal Policy Optimization (PPO), and Q-learning (QL) algorithms. |
| format | Article |
| id | doaj-art-4900c360bbeb40c4bcc89c2f4ac9fdfa |
| institution | DOAJ |
| issn | 2076-3417 |
| language | English |
| publishDate | 2025-08-01 |
| publisher | MDPI AG |
| record_format | Article |
| series | Applied Sciences |
| spelling | doaj-art-4900c360bbeb40c4bcc89c2f4ac9fdfa (indexed 2025-08-20T03:02:57Z); eng; MDPI AG; Applied Sciences; ISSN 2076-3417; 2025-08-01; vol. 15, iss. 15, art. 8654; doi:10.3390/app15158654; Few-Shot Intelligent Anti-Jamming Access with Fast Convergence: A GAN-Enhanced Deep Reinforcement Learning Approach; Tianxiao Wang (The Sixty-Third Research Institute, National University of Defense Technology, Nanjing 210007, China), Yingtao Niu (The College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China), Zhanyang Zhou (The Sixty-Third Research Institute, National University of Defense Technology, Nanjing 210007, China); To address the small-sample training bottleneck and inadequate convergence efficiency of Deep Reinforcement Learning (DRL)-based communication anti-jamming methods in complex electromagnetic environments, this paper proposes a Generative Adversarial Network-enhanced Deep Q-Network (GA-DQN) anti-jamming method. The method constructs a Generative Adversarial Network (GAN) to learn the time–frequency distribution characteristics of short-period jamming and to generate high-fidelity mixed samples. Furthermore, it screens qualified samples using the Pearson correlation coefficient to form a sample set, which is input into the DQN network model for pre-training to expand the experience replay buffer, effectively improving the convergence speed and decision accuracy of DQN. Our simulation results show that under periodic jamming, compared with the DQN algorithm, this algorithm significantly reduces the number of interference occurrences in the early communication stage and improves the convergence speed, to a certain extent. Under dynamic jamming and intelligent jamming, the algorithm significantly outperforms the DQN, Proximal Policy Optimization (PPO), and Q-learning (QL) algorithms. https://www.mdpi.com/2076-3417/15/15/8654; wireless communication; anti-jamming; generative adversarial network; deep reinforcement learning; Deep Q-Network |
| spellingShingle | Tianxiao Wang; Yingtao Niu; Zhanyang Zhou; Few-Shot Intelligent Anti-Jamming Access with Fast Convergence: A GAN-Enhanced Deep Reinforcement Learning Approach; Applied Sciences; wireless communication; anti-jamming; generative adversarial network; deep reinforcement learning; Deep Q-Network |
| title | Few-Shot Intelligent Anti-Jamming Access with Fast Convergence: A GAN-Enhanced Deep Reinforcement Learning Approach |
| title_full | Few-Shot Intelligent Anti-Jamming Access with Fast Convergence: A GAN-Enhanced Deep Reinforcement Learning Approach |
| title_fullStr | Few-Shot Intelligent Anti-Jamming Access with Fast Convergence: A GAN-Enhanced Deep Reinforcement Learning Approach |
| title_full_unstemmed | Few-Shot Intelligent Anti-Jamming Access with Fast Convergence: A GAN-Enhanced Deep Reinforcement Learning Approach |
| title_short | Few-Shot Intelligent Anti-Jamming Access with Fast Convergence: A GAN-Enhanced Deep Reinforcement Learning Approach |
| title_sort | few shot intelligent anti jamming access with fast convergence a gan enhanced deep reinforcement learning approach |
| topic | wireless communication; anti-jamming; generative adversarial network; deep reinforcement learning; Deep Q-Network |
| url | https://www.mdpi.com/2076-3417/15/15/8654 |
| work_keys_str_mv | AT tianxiaowang fewshotintelligentantijammingaccesswithfastconvergenceaganenhanceddeepreinforcementlearningapproach AT yingtaoniu fewshotintelligentantijammingaccesswithfastconvergenceaganenhanceddeepreinforcementlearningapproach AT zhanyangzhou fewshotintelligentantijammingaccesswithfastconvergenceaganenhanceddeepreinforcementlearningapproach |
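The other algorithmic idea in the record's description, pre-training the DQN by expanding its experience replay buffer with screened synthetic samples, can be sketched as follows. This is a hypothetical illustration under assumed names (`ReplayBuffer`, `prefill_from_gan`) and toy transitions; the paper's actual network architecture and transition format are not given in this record.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity experience replay buffer, as used by DQN."""
    def __init__(self, capacity=10_000):
        self.buf = deque(maxlen=capacity)  # oldest transitions drop out first

    def push(self, transition):
        self.buf.append(transition)

    def sample(self, batch_size):
        # Uniform random minibatch for a Q-network update
        return random.sample(self.buf, batch_size)

    def __len__(self):
        return len(self.buf)

def prefill_from_gan(buffer, gan_transitions):
    """Expand the replay buffer with screened synthetic transitions
    before any live interaction, so early updates have data to draw on."""
    for t in gan_transitions:
        buffer.push(t)

# Toy (state, action, reward, next_state) transitions standing in for
# transitions derived from screened GAN samples
rng = random.Random(0)
synthetic = [((rng.random(),), rng.randrange(4), rng.random(), (rng.random(),))
             for _ in range(64)]

buf = ReplayBuffer()
prefill_from_gan(buf, synthetic)
batch = buf.sample(32)  # a pre-training minibatch is available immediately
```

The design point this illustrates is the one the abstract claims credit for: because the buffer is non-empty before the agent's first real interaction, the early-stage policy is trained on (screened) synthetic experience rather than acting blindly, which is what reduces interference occurrences in the early communication stage.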