Model-based Bayesian Fusion-Net for infrared and visible image fusion
Abstract: Infrared and visible image fusion aims to generate fused images that maintain the advantages of each source, such as temperature information and detailed textures. This paper presents Bayesian Model-based Fusion-Net, a novel approach for infrared and visible image fusion. By formulating image fusion as an inverse problem within a hierarchical Bayesian framework, our method leverages physical priors and data-driven techniques to enhance model interpretability and transferability. Compared to traditional and deep learning-based fusion methods, the proposed Bayesian Model-based Fusion-Net achieves promising performance with significantly reduced computational complexity (0.07G FLOPs). Extensive experiments on multiple datasets, including an industrial public dataset, demonstrate the effectiveness of the proposed method in preserving texture details, maintaining structural integrity, and enhancing feature clarity. Furthermore, our approach exhibits robustness when trained with limited data, maintaining consistent performance even when using only 10% of the training dataset. These characteristics make the proposed Bayesian Fusion-Net particularly suitable for industrial monitoring applications where computational resources and training data are limited.
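To make the "fusion as an inverse problem" idea in the abstract concrete, the following is a minimal NumPy sketch of a MAP-style fusion: the fused image is estimated by balancing fidelity to the infrared and visible inputs against a simple smoothness prior. The function name `bayesian_fusion`, the weights `lam_ir`, `lam_vis`, `tau`, and the plain gradient-descent solver are illustrative assumptions for this sketch; they are not the paper's hierarchical Bayesian model or its deep-unfolded Fusion-Net.

```python
import numpy as np

def bayesian_fusion(ir, vis, lam_ir=1.0, lam_vis=1.0, tau=0.1,
                    iters=200, step=0.1):
    """Toy MAP fusion of two same-sized images normalized to [0, 1].

    Minimizes  lam_ir*||f - ir||^2 + lam_vis*||f - vis||^2 + tau*||grad f||^2
    by plain gradient descent. Illustrative stand-in for a Bayesian
    inverse-problem formulation; not the paper's Fusion-Net.
    """
    f = 0.5 * (ir + vis)  # initialize with the average of the two sources
    for _ in range(iters):
        # Gradient of the two quadratic data-fidelity terms
        grad = 2.0 * lam_ir * (f - ir) + 2.0 * lam_vis * (f - vis)
        # Gradient of the Gaussian smoothness prior: -2 * tau * Laplacian(f)
        lap = (-4.0 * f
               + np.roll(f, 1, axis=0) + np.roll(f, -1, axis=0)
               + np.roll(f, 1, axis=1) + np.roll(f, -1, axis=1))
        grad -= 2.0 * tau * lap
        f -= step * grad
    return np.clip(f, 0.0, 1.0)

# Hypothetical usage with random stand-ins for normalized IR and visible images
ir = np.random.rand(64, 64)
vis = np.random.rand(64, 64)
fused = bayesian_fusion(ir, vis)
```

Increasing `tau` favors smoother fused images, while the ratio of `lam_ir` to `lam_vis` controls how strongly the result follows the infrared versus the visible source; the paper's learned, deep-unfolded formulation replaces such hand-set weights.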
| Main Authors: | Wang Li, Kuang Yafang, Cai Ziyi, Chu Ning, Mohammad-Djafari Ali |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | SpringerOpen, 2025-08-01 |
| Series: | EURASIP Journal on Image and Video Processing |
| Subjects: | Infrared and visible image fusion; Bayesian inference; Deep unfolding; Industrial abnormal detection |
| Online Access: | https://doi.org/10.1186/s13640-025-00680-5 |
| author | Wang Li; Kuang Yafang; Cai Ziyi; Chu Ning; Mohammad-Djafari Ali |
|---|---|
| collection | DOAJ |
| format | Article |
| id | doaj-art-4b75228005ea4bdab02a884b9cbfa50e |
| institution | Kabale University |
| issn | 1687-5281 |
| language | English |
| publishDate | 2025-08-01 |
| publisher | SpringerOpen |
| record_format | Article |
| series | EURASIP Journal on Image and Video Processing |
| spelling | Wang Li, Kuang Yafang, Cai Ziyi (School of Mathematics and Statistics, Central South University); Chu Ning, Mohammad-Djafari Ali (Ningbo Institute of Digital Twin (IDT), Ningbo Eastern Institute of Technology). Model-based Bayesian Fusion-Net for infrared and visible image fusion. EURASIP Journal on Image and Video Processing, SpringerOpen, 2025-08-01. ISSN 1687-5281. https://doi.org/10.1186/s13640-025-00680-5 |
| title | Model-based Bayesian Fusion-Net for infrared and visible image fusion |
| topic | Infrared and visible image fusion Bayesian inference Deep unfolding Industrial abnormal detection |
| url | https://doi.org/10.1186/s13640-025-00680-5 |