Diffused Multi-scale Generative Adversarial Network for low-dose PET images reconstruction

Abstract

Purpose: This study aims to convert low-dose PET (L-PET) images to full-dose PET (F-PET) images using our Diffused Multi-scale Generative Adversarial Network (DMGAN), offering a potential balance between reducing radiation exposure and maintaining diagnostic performance.

Methods: The proposed method comprises two modules: a diffusion generator and a U-Net discriminator. The generator extracts information at multiple scales, which improves its ability to generalize across images and stabilizes training. The generated images are then fed into the U-Net discriminator, which assesses them from both global and local perspectives to improve the quality of the synthesized F-PET images. Evaluation included both qualitative assessment and quantitative comparison; for the latter, two metrics were used: the structural similarity index measure (SSIM) and the peak signal-to-noise ratio (PSNR).

Results: The proposed method achieved the highest PSNR and SSIM scores among the compared methods, improving PSNR by at least 6.2% over the alternatives. The synthesized full-dose PET images also show a more accurate voxel-wise metabolic intensity distribution, yielding a clearer depiction of the epilepsy focus.

Conclusions: The proposed method restores the original details of low-dose PET images more faithfully than other models trained on the same datasets, offering a potential balance between minimizing radiation exposure and preserving diagnostic performance.
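The abstract reports PSNR and SSIM as the quantitative metrics. For reference, the sketch below shows how these two metrics are commonly computed for a reference/synthesized image pair using scikit-image; the function name, normalization, and array handling are assumptions for illustration and are not the authors' evaluation code.

```python
# Illustrative only: typical PSNR/SSIM computation for a paired
# full-dose reference slice and a synthesized slice.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_pair(full_dose: np.ndarray, synthesized: np.ndarray) -> tuple[float, float]:
    """Return (PSNR in dB, SSIM) for one reference/synthesized 2D image pair."""
    # Normalize both images against the reference's intensity range so that
    # data_range=1.0 is valid for both metrics.
    lo, hi = float(full_dose.min()), float(full_dose.max())
    ref = (full_dose - lo) / (hi - lo)
    gen = np.clip((synthesized - lo) / (hi - lo), 0.0, 1.0)

    psnr = peak_signal_noise_ratio(ref, gen, data_range=1.0)
    ssim = structural_similarity(ref, gen, data_range=1.0)
    return psnr, ssim
```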

Bibliographic Details
Main Authors: Xiang Yu, Daoyan Hu, Qiong Yao, Yu Fu, Yan Zhong, Jing Wang, Mei Tian, Hong Zhang
Format: Article
Language: English
Published: BMC, 2025-02-01
Series: BioMedical Engineering OnLine
ISSN: 1475-925X
Subjects: Positron emission tomography; Deep learning; Image reconstruction; Low-dose PET
Online Access:https://doi.org/10.1186/s12938-025-01348-x

Author affiliations:
Xiang Yu: Polytechnic Institute, Zhejiang University
Daoyan Hu: The College of Biomedical Engineering and Instrument Science of Zhejiang University
Qiong Yao: Department of Nuclear Medicine and Medical PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine
Yu Fu: College of Information Science and Electronic Engineering, Zhejiang University
Yan Zhong: Department of Nuclear Medicine and Medical PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine
Jing Wang: Department of Nuclear Medicine and Medical PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine
Mei Tian: Human Phenome Institute, Fudan University
Hong Zhang: The College of Biomedical Engineering and Instrument Science of Zhejiang University