Explainable Feature-Injected Diffusion Model for Medical Image Translation

The integration of computed tomography (CT) and magnetic resonance (MR) imaging is crucial for accurate medical diagnosis and treatment planning. However, translating images between CT and MR remains challenging due to significant differences between the two imaging modalities. To address this problem, we propose an Explainable Feature-Injected Diffusion Model (EIDM) for unsupervised CT-to-MR image translation. EIDM comprises a feature synthesis module and a diffusion-based latent space learning framework. The model captures frequency representations of the original CT images using the Fast Fourier Transform (FFT) and applies high-pass filters to restore anatomical structures lost during diffusion. It also integrates weighted heatmaps generated by explainable AI models and employs a cross-attention mechanism to achieve unbiased image synthesis. We quantitatively evaluated EIDM against recent approaches using four metrics. Experimental results demonstrate that EIDM outperforms the latest Generative Adversarial Networks (GANs) and diffusion models, generating realistic MR images that preserve anatomical integrity, as evidenced by improved scores across all evaluation metrics. This work highlights the effectiveness of jointly learning explainable features and contour regions for translating CT to MR images.
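The abstract describes the feature synthesis module as extracting frequency representations of CT images with the FFT and applying high-pass filters to recover anatomical (contour) structure. As an illustration only, not the authors' implementation, a minimal NumPy sketch of FFT-based high-pass filtering might look like this (the function name and the `cutoff_frac` parameter are assumptions made for the example):

```python
import numpy as np

def highpass_contours(image, cutoff_frac=0.1):
    """Keep only high spatial frequencies of a 2-D image.

    FFT -> zero out a centered low-frequency block -> inverse FFT.
    `cutoff_frac` sets the half-width of the suppressed band as a
    fraction of each dimension (an illustrative choice, not a value
    taken from the paper).
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))  # center the spectrum
    h, w = image.shape
    ch, cw = h // 2, w // 2
    rh, rw = int(h * cutoff_frac), int(w * cutoff_frac)
    spectrum[ch - rh:ch + rh, cw - rw:cw + rw] = 0  # suppress low frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))

# A constant image carries only the zero-frequency (DC) component,
# so its high-pass output is essentially all zeros, while edges in
# a real CT slice would survive the filter.
flat = np.ones((64, 64))
edges = highpass_contours(flat)
```

Because low frequencies encode smooth intensity variation and high frequencies encode sharp transitions, this kind of filter passes exactly the edge and contour information the paper says is injected back into the diffusion process.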

Bibliographic Details
Main Authors: Jung Su Ahn, Ki Hoon Kwak, Young-Rae Cho
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects: Medical images; image translation; diffusion; cross-attention; feature synthesis
Online Access: https://ieeexplore.ieee.org/document/10945355/
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2025.3555585
Published in: IEEE Access, vol. 13, pp. 57255-57265 (2025)
Author details:
Jung Su Ahn (ORCID: 0009-0006-5791-3136), Division of Software, Yonsei University, Mirae Campus, Wonju-si, Gangwon-do, Republic of Korea
Ki Hoon Kwak, Division of Software, Yonsei University, Mirae Campus, Wonju-si, Gangwon-do, Republic of Korea
Young-Rae Cho (ORCID: 0000-0002-4645-2542), Division of Software, Yonsei University, Mirae Campus, Wonju-si, Gangwon-do, Republic of Korea