Texture-preserving and information loss minimization method for infrared and visible image fusion
| Main Authors: | , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-07-01 |
| Series: | Scientific Reports |
| Subjects: | |
| Online Access: | https://doi.org/10.1038/s41598-025-11482-0 |
| Summary: | Abstract In the task of infrared and visible image fusion, achieving high-quality results typically requires preserving detailed texture and minimizing information loss while maintaining high contrast and clear edges; however, existing methods often struggle to balance these objectives, leading to texture degradation and information loss during fusion. To address these challenges, we propose TPFusion, a texture-preserving and information loss minimization method for infrared and visible image fusion. TPFusion consists of the following key components: a multi-scale feature extraction module that enhances the network's feature-capturing capability; a texture enhancement module and a contrast enhancement module, which help preserve fine-grained textures and extract salient contours and contrast information; a dual-attention fusion module that fuses the features extracted from the source images; and an information-content-based loss function that minimizes the feature discrepancy between the fused image and the source images, effectively reducing information loss. Extensive evaluations demonstrate that TPFusion achieves superior fusion performance. Across three datasets, TPFusion delivers the best results over the second-best method: on the TNO dataset, it raises AG by 2.69% and QAB/F by 0.75%; on the MSRS dataset, it lifts AG by 9.99% and CC by 9.46%; and on the M3FD dataset, it boosts SCD by 1.58% and EN by 2.93%. In downstream tasks, TPFusion attains the highest mean average precision in object detection and achieves the second-highest accuracy in semantic segmentation. |
|---|---|
| ISSN: | 2045-2322 |
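The abstract describes an information-content-based loss that weights each source's reconstruction term by how much information that source carries. The paper's exact formulation is not given in this record, so the sketch below is only an illustrative assumption: it measures information content by Shannon entropy and combines per-source mean-squared reconstruction terms with entropy-derived weights. The function names (`image_entropy`, `info_weighted_loss`) are hypothetical, not from the paper.

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of a grayscale image with values in [0, 1]."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins before taking the log
    return -np.sum(p * np.log2(p))

def info_weighted_loss(fused, ir, vis):
    """Illustrative information-content-weighted reconstruction loss:
    the source with higher entropy (more information) contributes a
    larger weight, pulling the fused image toward it."""
    e_ir, e_vis = image_entropy(ir), image_entropy(vis)
    w_ir = e_ir / (e_ir + e_vis)
    w_vis = 1.0 - w_ir
    l_ir = np.mean((fused - ir) ** 2)   # discrepancy to the infrared source
    l_vis = np.mean((fused - vis) ** 2) # discrepancy to the visible source
    return w_ir * l_ir + w_vis * l_vis
```

Under this assumed weighting, a fused image close to both sources (e.g. their average) scores a lower loss than one that ignores them, which is the qualitative behavior the abstract attributes to the information-content loss.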