MCADFusion: a novel multi-scale convolutional attention decomposition method for enhanced infrared and visible light image fusion

This paper presents MCADFusion, a feature decomposition method designed for the fusion of infrared and visible images that preserves both target radiance and detailed texture. MCADFusion employs a two-branch architecture that effectively extracts and decomposes local and global features from the different source images, thereby enhancing the processing of image feature information. The method begins with a multi-scale feature extraction module and a reconstructor module to obtain rich local and global feature information from the source images. The local and global features of the different source images are then decomposed using a channel attention module (CAM) and a spatial attention module (SAM), and feature fusion is performed through a two-channel attention merging method. Finally, image reconstruction is achieved using the Restormer module. During the training phase, MCADFusion employs a two-stage strategy to optimize the network parameters, resulting in high-quality fused images. Experimental results on the publicly available TNO and MSRS datasets demonstrate that MCADFusion surpasses existing techniques in both subjective visual evaluation and objective assessment.
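To make the pipeline described in the abstract concrete, the following is a minimal sketch in PyTorch of the two-branch decomposition-and-fusion idea: multi-scale convolutional feature extraction, CAM/SAM decomposition of each modality, attention-based merging, and reconstruction. All class names, channel widths, and the plain convolutional reconstructor are illustrative assumptions and not the authors' published implementation, which additionally relies on a Restormer block and a two-stage training strategy.

# Minimal sketch (assumption: PyTorch). Names, channel widths, and the plain
# convolutional reconstructor are illustrative, not the published MCADFusion code.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    # CAM-style branch: reweights channels using a global descriptor.
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))


class SpatialAttention(nn.Module):
    # SAM-style branch: reweights each spatial location from pooled channel statistics.
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class TwoBranchFusion(nn.Module):
    # Shared multi-scale encoder -> CAM/SAM decomposition per modality ->
    # merge both modalities -> reconstruct a single fused image.
    def __init__(self, channels=16):
        super().__init__()
        # Multi-scale features: parallel convolutions with different receptive fields.
        self.enc = nn.ModuleList(
            nn.Conv2d(1, channels, k, padding=k // 2) for k in (3, 5, 7)
        )
        feat = 3 * channels
        self.cam = ChannelAttention(feat)
        self.sam = SpatialAttention()
        self.merge = nn.Conv2d(2 * feat, feat, kernel_size=1)
        self.decode = nn.Conv2d(feat, 1, kernel_size=3, padding=1)

    def extract(self, x):
        return torch.cat([conv(x) for conv in self.enc], dim=1)

    def forward(self, ir, vis):
        f_ir, f_vis = self.extract(ir), self.extract(vis)
        # Decompose each modality with channel and spatial attention, then merge.
        d_ir = self.cam(f_ir) + self.sam(f_ir)
        d_vis = self.cam(f_vis) + self.sam(f_vis)
        fused = self.merge(torch.cat([d_ir, d_vis], dim=1))
        return torch.sigmoid(self.decode(fused))


# Example: fuse a pair of single-channel 256x256 images.
if __name__ == "__main__":
    model = TwoBranchFusion()
    ir = torch.rand(1, 1, 256, 256)
    vis = torch.rand(1, 1, 256, 256)
    print(model(ir, vis).shape)  # torch.Size([1, 1, 256, 256])

In the actual method, the encoder and decoder here would correspond to the paper's multi-scale extraction and Restormer modules; the sketch only illustrates how channel and spatial attention can separate and recombine modality-specific and shared features before reconstruction.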

Bibliographic Details
Main Authors: Wangwei Zhang, Menghao Dai, Bin Zhou, Changhai Wang
Format: Article
Language: English
Published: AIMS Press, 2024-08-01
Series: Electronic Research Archive
Subjects: image fusion; multi-scale; convolutional attention decomposition; modal specificity; shared features
Online Access: https://www.aimspress.com/article/doi/10.3934/era.2024233
Collection: DOAJ
ISSN: 2688-1594
Volume/Issue: Electronic Research Archive, Vol. 32, Issue 8 (2024), pp. 5067-5089
DOI: 10.3934/era.2024233
Author Affiliations:
Wangwei Zhang: Software Engineering College, Zhengzhou University of Light Industry, No. 136 Science Road, Zhengzhou 450000, China
Menghao Dai: Software Engineering College, Zhengzhou University of Light Industry, No. 136 Science Road, Zhengzhou 450000, China
Bin Zhou: Electronics and Electrical Engineering College, Zhengzhou University of Science and Technology, No. 1 Xueyuan Road, Zhengzhou 450064, China
Changhai Wang: Software Engineering College, Zhengzhou University of Light Industry, No. 136 Science Road, Zhengzhou 450000, China