Multi-representation domain attentive contrastive learning based unsupervised automatic modulation recognition
Abstract The rapid advancement of B5G/6G and wireless technologies, combined with rising end-user numbers, has intensified radio spectrum congestion. Automatic modulation recognition, crucial for spectrum sensing in cognitive radio, traditionally relies on supervised methods requiring extensive labeled data. However, acquiring reliable labels is challenging. Here, we propose an unsupervised framework, Multi-Representation Domain Attentive Contrastive Learning, which extracts high-quality signal features from unlabeled data via cross-domain contrastive learning. Inter-domain and intra-domain contrastive mechanisms enhance mutual modulation feature extraction across domains while preserving source-domain self-information. The domain attention module dynamically selects representation domains at the feature level, improving adaptability. Experiments on public datasets show that the proposed method outperforms existing modulation recognition methods and can be extended to accommodate various representation domains. This study bridges the gap between unsupervised and supervised learning for radio signals, advancing Internet of Things and cognitive radio development.
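The abstract's inter-domain and intra-domain contrastive mechanisms, and the feature-level domain attention, can be illustrated with a minimal sketch. This is not the authors' implementation: the use of an InfoNCE-style loss, the choice of I/Q and time-frequency as the two representation domains, and all shapes and variable names here are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.1):
    """InfoNCE-style contrastive loss: matched rows of z_a and z_b are
    positives; all other pairings in the batch act as negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature        # (N, N) similarity matrix
    targets = torch.arange(z_a.size(0))         # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy embeddings for two representation domains of the same unlabeled
# signals, e.g. raw I/Q samples and a time-frequency transform
# (hypothetical domains; shapes are illustrative).
torch.manual_seed(0)
z_iq = torch.randn(8, 64)       # features from the I/Q branch
z_tf = torch.randn(8, 64)       # features from the time-frequency branch
z_iq_aug = torch.randn(8, 64)   # augmented view within the I/Q domain

# Inter-domain term pulls the two domains' views of each signal together;
# an intra-domain term preserves the source domain's self-information.
loss = info_nce(z_iq, z_tf) + info_nce(z_iq, z_iq_aug)

# A feature-level "domain attention" could be sketched as a learned
# softmax weighting over per-domain features before classification.
w = torch.softmax(torch.randn(2), dim=0)        # one weight per domain
fused = w[0] * z_iq + w[1] * z_tf               # (8, 64) fused features
```

In a full pipeline the attention weights and encoder parameters would be trained jointly by minimizing the combined contrastive loss; the sketch only shows the shape of the computation.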
| Main Authors: | Yu Li, Xiaoran Shi, Haoyue Tan, Zhenxi Zhang, Xinyao Yang, Feng Zhou |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-07-01 |
| Series: | Nature Communications |
| Online Access: | https://doi.org/10.1038/s41467-025-60921-z |
| ISSN: | 2041-1723 |
| Author Affiliations: | Key Laboratory of Electronic Information Countermeasure and Simulation Technology, School of Electronic Engineering, Xidian University (all authors) |