Semantic Segmentation-Driven Knowledge Distillation-Based Infrared Visible Image Fusion Framework

The goal of infrared and visible image fusion is to generate a fused image that integrates both prominent targets and fine textures. However, many existing fusion algorithms overly emphasize visual quality and traditional statistical evaluation metrics while neglecting the requirements of real-world applications, especially in high-level vision tasks. To address this issue, this paper proposes a semantic segmentation-driven image fusion framework based on knowledge distillation. By incorporating a distributed structure of teacher and student networks, the framework leverages knowledge distillation to reduce network complexity, ensuring that the fused images are not only visually enhanced but also well-suited for downstream high-level vision tasks. Additionally, the introduction of two discriminators further optimizes the overall quality of the fused images, while the integration of a semantic segmentation module ensures that the fused images provide valuable support for advanced vision tasks. To enhance both fusion performance and segmentation capability, this paper proposes a joint training strategy that enables the fusion and segmentation networks to mutually improve during training. Experimental results on three public datasets demonstrate that the proposed method outperforms nine state-of-the-art fusion approaches in terms of visual quality, evaluation metrics, and semantic segmentation performance. Finally, ablation studies on the segmentation network further validate the effectiveness of the proposed method.
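The record contains only the abstract, so the following is a minimal, hypothetical PyTorch-style sketch of how the components it describes (a teacher and a compact student fusion network, two discriminators, a segmentation network, and a joint loss) might be wired into a single training step. Every module name, loss form, and weight below is an assumption for illustration, not the paper's actual implementation.

import torch
import torch.nn.functional as F

# Hypothetical modules standing in for the networks named in the abstract:
# a large teacher fusion network, a compact student fusion network,
# two discriminators (one per source modality), and a segmentation network.
def joint_training_step(teacher, student, disc_ir, disc_vis, seg_net,
                        ir, vis, seg_labels, opt_student, opt_seg,
                        w_distill=1.0, w_adv=0.1, w_seg=0.5):
    """One generator-side joint step; the loss weights are illustrative only."""
    with torch.no_grad():
        fused_teacher = teacher(ir, vis)      # frozen teacher output as soft target

    fused_student = student(ir, vis)

    # Distillation loss: the student mimics the teacher's fused image.
    loss_distill = F.l1_loss(fused_student, fused_teacher)

    # Adversarial terms from the two discriminators (non-saturating form assumed).
    loss_adv = (F.softplus(-disc_ir(fused_student)).mean()
                + F.softplus(-disc_vis(fused_student)).mean())

    # Segmentation-driven loss: the fused image must support the downstream task.
    seg_logits = seg_net(fused_student)                   # (N, C, H, W) logits
    loss_seg = F.cross_entropy(seg_logits, seg_labels)    # (N, H, W) class labels

    loss = w_distill * loss_distill + w_adv * loss_adv + w_seg * loss_seg

    opt_student.zero_grad()
    opt_seg.zero_grad()
    loss.backward()
    opt_student.step()
    opt_seg.step()    # fusion and segmentation networks are updated jointly
    return loss.item()

In this sketch the teacher stays frozen while the student and segmentation network are optimized together, loosely mirroring the joint training strategy the abstract describes; the discriminators would be updated in a separate step not shown here.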

Bibliographic Details
Main Author: Xingshuo Wang
Format: Article
Language:English
Published: IEEE 2025-01-01
Series:IEEE Access
Subjects:
Online Access:https://ieeexplore.ieee.org/document/10982250/
_version_ 1850271555752296448
author Xingshuo Wang
author_facet Xingshuo Wang
author_sort Xingshuo Wang
collection DOAJ
description The goal of infrared and visible image fusion is to generate a fused image that integrates both prominent targets and fine textures. However, many existing fusion algorithms overly emphasize visual quality and traditional statistical evaluation metrics while neglecting the requirements of real-world applications, especially in high-level vision tasks. To address this issue, this paper proposes a semantic segmentation-driven image fusion framework based on knowledge distillation. By incorporating a distributed structure of teacher and student networks, the framework leverages knowledge distillation to reduce network complexity, ensuring that the fused images are not only visually enhanced but also well-suited for downstream high-level vision tasks. Additionally, the introduction of two discriminators further optimizes the overall quality of the fused images, while the integration of a semantic segmentation module ensures that the fused images provide valuable support for advanced vision tasks. To enhance both fusion performance and segmentation capability, this paper proposes a joint training strategy that enables the fusion and segmentation networks to mutually improve during training. Experimental results on three public datasets demonstrate that the proposed method outperforms nine state-of-the-art fusion approaches in terms of visual quality, evaluation metrics, and semantic segmentation performance. Finally, ablation studies on the segmentation network further validate the effectiveness of the proposed method.
format Article
id doaj-art-6334c1e161884ecb9ab37c09de79fa4a
institution OA Journals
issn 2169-3536
language English
publishDate 2025-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj-art-6334c1e161884ecb9ab37c09de79fa4a 2025-08-20T01:52:12Z eng; IEEE; IEEE Access; ISSN 2169-3536; 2025-01-01; vol. 13, pp. 83408-83425; doi: 10.1109/ACCESS.2025.3566436; article 10982250. Semantic Segmentation-Driven Knowledge Distillation-Based Infrared Visible Image Fusion Framework. Xingshuo Wang (https://orcid.org/0009-0007-8923-280X), School of Information Science and Engineering, Shandong Normal University, Jinan, Shandong, China. https://ieeexplore.ieee.org/document/10982250/ Keywords: Infrared and visible image fusion; knowledge distillation; dual discriminators; semantic segmentation loss.
spellingShingle Xingshuo Wang
Semantic Segmentation-Driven Knowledge Distillation-Based Infrared Visible Image Fusion Framework
IEEE Access
Infrared and visible image fusion
knowledge distillation
dual discriminators
semantic segmentation loss
title Semantic Segmentation-Driven Knowledge Distillation-Based Infrared Visible Image Fusion Framework
title_full Semantic Segmentation-Driven Knowledge Distillation-Based Infrared Visible Image Fusion Framework
title_fullStr Semantic Segmentation-Driven Knowledge Distillation-Based Infrared Visible Image Fusion Framework
title_full_unstemmed Semantic Segmentation-Driven Knowledge Distillation-Based Infrared Visible Image Fusion Framework
title_short Semantic Segmentation-Driven Knowledge Distillation-Based Infrared Visible Image Fusion Framework
title_sort semantic segmentation driven knowledge distillation based infrared visible image fusion framework
topic Infrared and visible image fusion
knowledge distillation
dual discriminators
semantic segmentation loss
url https://ieeexplore.ieee.org/document/10982250/
work_keys_str_mv AT xingshuowang semanticsegmentationdrivenknowledgedistillationbasedinfraredvisibleimagefusionframework