Retinal-ESRGAN: A Hybrid GAN Model Approach for Retinal Image Super-Resolution Coupled With Reduced Training Time and Computational Resources for Improved Diagnostic Accuracy
| Main Authors: | , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10935353/ |
| Summary: | Medical image super-resolution has long been a subject of interest in medical image processing. In particular, super-resolved retinal images are a requisite tool for clinicians to properly diagnose and treat ophthalmic diseases, yet acquiring high-quality images is challenging owing to several factors, including hardware limitations, high cost, operator skill, data compatibility, and maintenance issues. This paper proposes Retinal-ESRGAN, a novel hybrid unsupervised GAN model designed specifically for retinal image super-resolution. The model incorporates architectural modifications to the generator and discriminator networks and is implemented in Google Colaboratory with TensorFlow 2.0 to limit resource usage. To further address resource constraints, a training strategy that pauses and resumes training in batches is implemented (a minimal checkpoint-based sketch of this strategy follows this record). Experiments demonstrate Retinal-ESRGAN's potential: it achieved an average PSNR of 35.22 dB and SSIM of 0.916, outperforming both SRGAN and ESRGAN, with PSNR improvements of 4.8% over SRGAN and 10.5% over ESRGAN, and SSIM improvements of 5.7% over SRGAN and 22.4% over ESRGAN. It also attained an Inception Score of 6.02, a Fréchet Inception Distance of 25.31, and an accuracy of 94.98% while requiring significantly less training time and fewer computational resources. |
| ISSN: | 2169-3536 |
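
The pause-and-resume training strategy mentioned in the summary is not detailed in this record; the following is a minimal sketch of how such batch-wise checkpointing could look in TensorFlow 2 on Google Colaboratory. The stand-in model, dummy dataset, placeholder loss, and checkpoint directory name are assumptions for illustration, not the paper's actual Retinal-ESRGAN implementation.

```python
# Minimal sketch of batch-wise pause-and-resume training with TensorFlow 2
# checkpoints. The tiny stand-in model, dummy data, placeholder loss, and
# directory name are illustrative assumptions, not the paper's implementation.
import tensorflow as tf

# Stand-in "generator"; the real Retinal-ESRGAN generator (and discriminator,
# checkpointed the same way) would replace this.
generator = tf.keras.Sequential([tf.keras.layers.Conv2D(3, 3, padding="same")])
g_opt = tf.keras.optimizers.Adam(1e-4)

ckpt = tf.train.Checkpoint(step=tf.Variable(0), generator=generator, g_opt=g_opt)
manager = tf.train.CheckpointManager(ckpt, "./retinal_esrgan_ckpts", max_to_keep=3)
if manager.latest_checkpoint:
    ckpt.restore(manager.latest_checkpoint)  # resume where the last session stopped

# Dummy low-res / high-res pairs standing in for the retinal image batches.
lr = tf.random.uniform((8, 32, 32, 3))
hr = tf.random.uniform((8, 32, 32, 3))
dataset = tf.data.Dataset.from_tensor_slices((lr, hr)).batch(2)

for lr_batch, hr_batch in dataset:            # one bounded session of batches
    with tf.GradientTape() as tape:
        sr_batch = generator(lr_batch, training=True)
        loss = tf.reduce_mean(tf.square(sr_batch - hr_batch))  # placeholder loss
    grads = tape.gradient(loss, generator.trainable_variables)
    g_opt.apply_gradients(zip(grads, generator.trainable_variables))
    ckpt.step.assign_add(1)

manager.save()  # pause point: training can stop here and resume in a later session
```

Saving through a `CheckpointManager` at the end of each bounded session (or every few hundred batches) is one way to fit long GAN training into the limited, interruptible sessions of a free Colab runtime, which is the constraint the summary describes.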