Development and validation of 3D super-resolution convolutional neural network for 18F-FDG-PET images

Bibliographic Details
Main Authors: Hiroki Endo, Kenji Hirata, Keiichi Magota, Takaaki Yoshimura, Chietsugu Katoh, Kohsuke Kudo
Format: Article
Language:English
Published: SpringerOpen 2025-08-01
Series:EJNMMI Physics
Online Access:https://doi.org/10.1186/s40658-025-00791-y
Collection: DOAJ
Description:
Background: Positron emission tomography (PET) is a valuable tool for cancer diagnosis but generally has a lower spatial resolution than computed tomography (CT) or magnetic resonance imaging (MRI). High-resolution PET scanners that use silicon photomultipliers and time-of-flight measurements are expensive, so cost-effective software-based super-resolution methods are required. This study proposes a novel approach for enhancing whole-body PET image resolution by applying a 2.5-dimensional Super-Resolution Convolutional Neural Network (2.5D-SRCNN) combined with logarithmic transformation preprocessing. The method aims to improve image quality while maintaining quantitative accuracy, particularly for standardized uptake value (SUV) measurements, and addresses two challenges: providing a memory-efficient alternative to full three-dimensional processing and managing the wide dynamic range of tracer uptake in PET images.
Methods: We analyzed data from 90 patients who underwent whole-body FDG-PET/CT examinations and reconstructed low-resolution slices with a voxel size of 4 × 4 × 4 mm and corresponding high-resolution slices with a voxel size of 2 × 2 × 2 mm. The proposed 2.5D-SRCNN model, based on the conventional 2D-SRCNN structure, incorporates information from adjacent slices to generate a high-resolution output. Logarithmic transformation of the voxel values was applied to manage the large dynamic range caused by physiological tracer accumulation in the bladder. Performance was assessed using the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), and the quantitative accuracy of SUV was validated in a phantom study.
Results: The 2.5D-SRCNN with logarithmic transformation significantly outperformed the conventional 2D-SRCNN in terms of PSNR and SSIM (p < 0.0001). The proposed method also showed improved depiction of small spheres in the phantom while maintaining SUV accuracy.
Conclusions: A super-resolution model for whole-body PET images using the 2.5D approach and logarithmic transformation may be effective in generating super-resolution images with lower spatial error and better quantitative accuracy.
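The preprocessing steps named in the abstract (logarithmic transformation to compress the dynamic range, 2.5D stacking of adjacent slices, and PSNR as a quality metric) can be illustrated with a minimal sketch. This is not the authors' implementation: the epsilon offset, the neighbor count, and all function names are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the preprocessing described in the abstract.
# The eps offset and n_neighbors count are illustrative assumptions,
# not values reported by the authors.

def log_transform(volume: np.ndarray, eps: float = 1.0) -> np.ndarray:
    """Compress the wide dynamic range of PET voxel values
    (e.g. intense physiological uptake in the bladder)."""
    return np.log(volume + eps)

def inverse_log_transform(volume: np.ndarray, eps: float = 1.0) -> np.ndarray:
    """Map network output back to the original intensity scale
    so SUV quantitation can be preserved."""
    return np.exp(volume) - eps

def stack_adjacent_slices(volume: np.ndarray, index: int,
                          n_neighbors: int = 1) -> np.ndarray:
    """Build a 2.5D input: the target axial slice plus its neighbors
    along z, stacked as channels; edge slices are clamped."""
    z = volume.shape[0]
    idxs = np.clip(np.arange(index - n_neighbors, index + n_neighbors + 1),
                   0, z - 1)
    return volume[idxs]  # shape: (2 * n_neighbors + 1, H, W)

def psnr(reference: np.ndarray, test: np.ndarray) -> float:
    """Peak signal-to-noise ratio, one of the two image-quality
    metrics used in the study (alongside SSIM)."""
    mse = float(np.mean((reference - test) ** 2))
    data_range = float(reference.max() - reference.min())
    return 10.0 * np.log10(data_range ** 2 / mse)
```

The inverse transform is the design point: training in log space tames the bladder's extreme uptake, while mapping predictions back through `exp` keeps voxel values on the original SUV scale.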
ISSN: 2197-7364
Author affiliations:
Hiroki Endo: Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University
Kenji Hirata: Department of Diagnostic Imaging, Faculty of Medicine, Hokkaido University
Keiichi Magota: Division of Medical Imaging and Technology, Hokkaido University Hospital
Takaaki Yoshimura: Division of Medical AI Education and Research, Hokkaido University
Chietsugu Katoh: Department of Nuclear Medicine, Hokkaido University Hospital
Kohsuke Kudo: Department of Diagnostic Imaging, Faculty of Medicine, Hokkaido University
Subjects: PET; Super-resolution; Deep learning; Convolutional neural network; SiPM-PET/CT