FE-SKViT: A Feature-Enhanced ViT Model with Skip Attention for Automatic Modulation Recognition

Bibliographic Details
Main Authors: Guangyao Zheng, Bo Zang, Penghui Yang, Wenbo Zhang, Bin Li
Format: Article
Language: English
Published: MDPI AG 2024-11-01
Series: Remote Sensing
Subjects:
Online Access: https://www.mdpi.com/2072-4292/16/22/4204
Description
Summary: Automatic modulation recognition (AMR) is widely employed in communication systems, yet recent studies show that recognition accuracy remains limited under low signal-to-noise ratio (SNR) conditions. In this work, we introduce a novel, transformer-inspired network architecture tailored for AMR, called the Feature-Enhanced Transformer with skip-attention (FE-SKViT). The design combines the advantages of translation-variant convolution with the Transformer framework, handling intra-signal variance and small cross-signal variance to achieve improved recognition accuracy. Experimental results on the RadioML2016.10a, RadioML2016.10b, and RML22 datasets demonstrate that FE-SKViT outperforms other methods, particularly under low SNR conditions ranging from −4 to 6 dB.
ISSN: 2072-4292
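The abstract gives no implementation details, but the PyTorch sketch below illustrates one plausible reading of the two ideas it names: a convolutional "feature enhancement" stem over the raw I/Q sequence, and a ViT-style encoder in which some blocks reuse (skip) the attention map computed by an earlier block. All layer names, sizes, and the skip schedule here are hypothetical assumptions for illustration, not the authors' released architecture.

```python
# Hypothetical sketch of a feature-enhanced, skip-attention ViT for AMR.
# NOT the paper's code: the stem, block sizes, and the alternating skip
# schedule are assumptions made purely to illustrate the idea.
import torch
import torch.nn as nn


class FeatureEnhanceStem(nn.Module):
    """Convolutional stem turning a (batch, 2, L) I/Q frame into tokens."""

    def __init__(self, embed_dim: int = 64, patch: int = 8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(2, embed_dim, kernel_size=patch, stride=patch),
            nn.BatchNorm1d(embed_dim),
            nn.GELU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (B, 2, L) -> (B, tokens, embed_dim)
        return self.conv(x).transpose(1, 2)


class SkipAttentionBlock(nn.Module):
    """Transformer block that can reuse the attention map of an earlier block
    instead of recomputing it (one common reading of "skip attention")."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x, prev_attn=None):
        h = self.norm1(x)
        if prev_attn is None:
            # Standard self-attention; keep the averaged map for later reuse.
            out, attn = self.attn(h, h, h, need_weights=True,
                                  average_attn_weights=True)
        else:
            # Skip the attention computation and reuse the previous map.
            out, attn = prev_attn @ h, prev_attn
        x = x + out
        x = x + self.mlp(self.norm2(x))
        return x, attn


# Usage sketch: classify 128-sample I/Q frames into 11 classes
# (RadioML2016.10a has 11 modulation classes).
stem = FeatureEnhanceStem()
blocks = nn.ModuleList([SkipAttentionBlock() for _ in range(4)])
head = nn.Linear(64, 11)

x = torch.randn(32, 2, 128)               # (batch, I/Q channels, samples)
tokens, attn = stem(x), None
for i, blk in enumerate(blocks):
    # Recompute attention in even blocks, reuse it in odd ones (assumed schedule).
    tokens, attn = blk(tokens, prev_attn=attn if i % 2 else None)
logits = head(tokens.mean(dim=1))          # (32, 11)
```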