A Composite Recognition Method Based on Multimode Mutual Attention Fusion Network

Bibliographic Details
Main Authors: Xing Ding, Xiangrong Zhang, Chao Liang, Bo Liu, Lanjie Niu
Author Affiliations: Xing Ding, Xiangrong Zhang, Chao Liang, Bo Liu: School of Artificial Intelligence, Xidian University, Xi’an, China; Lanjie Niu: Key Laboratory of Defense Science and Technology for Dynamic Characteristics of Fuze, Xi’an Electromechanical Information Technology Research Institute, Xi’an, China
Format: Article
Language: English
Published: Taylor & Francis Group, 2025-12-01
Series: Applied Artificial Intelligence, Vol. 39, No. 1
ISSN: 0883-9514, 1087-6545
DOI: 10.1080/08839514.2025.2462371
Collection: DOAJ
Online Access: https://www.tandfonline.com/doi/10.1080/08839514.2025.2462371
Description: To address the vulnerability of single-mode recognition to complex environments, a multimode fusion network with mutual attention is proposed. The network combines laser, infrared and millimeter-wave modalities to exploit the advantages of each mode in different environments, increasing resilience to interference. The study first constructs pixel-level fusion networks, feature-weighted fusion networks and the multimode mutual attention fusion network; the mutual attention network is introduced in detail and compared with the other two. The models are then trained and evaluated on data from glide-rocket and drone experiments, and the anti-outlier interference capability of the multimode mutual attention fusion network is analyzed. The test results show that the multimode mutual attention fusion network, which incorporates a feature-fusion attention mechanism, achieves the highest detection performance and anti-interference ability: without interference it reaches an accuracy of 0.98 for multi-target recognition, and it maintains an accuracy of 0.96 across various interference environments. In addition, the introduction of multi-scale fusion improves adaptability to the rocket’s speed by about 75%.
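
The record does not include any of the authors' code. As a rough, illustrative sketch of the mutual-attention fusion idea summarized in the description above, the following PyTorch snippet lets three modality branches (laser, infrared, millimeter-wave) re-weight one another through cross-attention before classification. All module names, feature dimensions, and the use of torch.nn.MultiheadAttention are assumptions made for illustration; they are not taken from the paper.

# Minimal sketch of mutual-attention fusion over three modality feature streams.
# Dimensions and layer choices are illustrative assumptions, not the authors' design.
import torch
import torch.nn as nn


class MutualAttentionFusion(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4, num_classes: int = 2):
        super().__init__()
        # One cross-attention block per modality: each branch queries the
        # concatenation of the other two, so it is re-weighted by the
        # information available in its counterparts.
        self.cross_attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(3)]
        )
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(3 * dim, num_classes)

    def forward(self, laser, infrared, mmwave):
        # Each input: (batch, tokens, dim) features from a modality encoder.
        feats = [laser, infrared, mmwave]
        fused = []
        for i, q in enumerate(feats):
            # Keys/values come from the other two modalities.
            kv = torch.cat([f for j, f in enumerate(feats) if j != i], dim=1)
            attended, _ = self.cross_attn[i](q, kv, kv)
            # Residual connection keeps the original branch features, so a
            # degraded (interfered) modality cannot erase the others' evidence.
            fused.append(self.norm(q + attended))
        # Pool each fused branch and classify on the concatenated descriptor.
        pooled = torch.cat([f.mean(dim=1) for f in fused], dim=-1)
        return self.head(pooled)


if __name__ == "__main__":
    model = MutualAttentionFusion()
    x = [torch.randn(2, 16, 128) for _ in range(3)]  # dummy batch of two samples
    print(model(*x).shape)  # torch.Size([2, 2])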