OPTNet: Optimized Pixel-Transformer Model for Adaptive Retinal Fundus Image Enhancement
Low-quality retinal images present significant challenges for the accurate diagnosis and monitoring of eye diseases by obscuring critical anatomical features and reducing analytical precision. This study introduces OPTNet, an optimized pixel-wise transformer model designed to efficiently enhance degraded or low-quality retinal images. The proposed approach consists of three main stages: 1) pre-processing to standardize image dimensions and balance color channels; 2) model development, in which a lightweight ANN-based feature extractor learns retinal structures and generates self-measured quality labels; and 3) pixel-level transformation guided by these predicted labels to perform localized enhancement. The performance of OPTNet was evaluated using statistical metrics across various architectures during training and testing, and benchmarked on six public retinal datasets: DRIVE, CHASE-DB1, HRF, DRHAGIS, FIRE, and FIVES. A comprehensive evaluation was conducted using both full-reference and no-reference quality assessment (QA) metrics, supported by qualitative analysis. OPTNet achieved competitive results, including a 21.3% improvement in NIQE and a 17.8% reduction in BRISQUE compared with existing methods. The final scores included SSIM (0.1925), VIF (0.1911), BIF (1.3018), EME (11.1704), NIQE (4.0730), and BRISQUE (30.3003), indicating perceptual and structural enhancement. Additionally, it effectively preserved brightness and anatomical fidelity while minimizing distortion (CD = 0.4214), blur (0.0889), and artifacts (0.2903). In conclusion, OPTNet outperforms state-of-the-art enhancement techniques by striking a robust balance between quality improvement and artifact suppression, demonstrating its strong potential for integration into clinical ophthalmic diagnostic pipelines.
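The first stage of the pipeline (standardizing image dimensions and balancing color channels) can be sketched as follows. This is an illustrative NumPy-only example, not the authors' implementation: the target size, nearest-neighbor resizing, and gray-world channel balancing are assumptions, since the abstract does not specify these details.

```python
import numpy as np

def preprocess(img: np.ndarray, size: tuple = (256, 256)) -> np.ndarray:
    """Standardize an RGB fundus image's dimensions and balance its color
    channels. Illustrative sketch only (assumed target size, nearest-neighbor
    resize, gray-world balance); OPTNet's actual pre-processing may differ."""
    h, w, _ = img.shape
    # Nearest-neighbor resize: pick source rows/columns for each target pixel.
    rows = (np.arange(size[0]) * h // size[0]).clip(0, h - 1)
    cols = (np.arange(size[1]) * w // size[1]).clip(0, w - 1)
    resized = img[rows][:, cols].astype(np.float64)
    # Gray-world balancing: scale each channel so its mean matches the
    # global mean intensity, then clip back to the valid 8-bit range.
    means = resized.reshape(-1, 3).mean(axis=0)
    balanced = resized * (means.mean() / np.maximum(means, 1e-6))
    return balanced.clip(0, 255).astype(np.uint8)
```

After this step every image in a dataset has identical dimensions and roughly equal per-channel mean intensity, which simplifies batching and removes gross color casts before feature extraction.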
| Main Authors: | Faisal Majed Alqahtani, Somaya Adwan, Mohd Yazed Ahmad, Salmah Binti Karman |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | Retinal image; low contrast; pixel-transformer; adaptive enhancement; quantitative metrics; visual assessment |
| Online Access: | https://ieeexplore.ieee.org/document/11113289/ |
| author | Faisal Majed Alqahtani (ORCID: 0009-0004-2676-1825); Somaya Adwan; Mohd Yazed Ahmad (ORCID: 0000-0002-0674-2609); Salmah Binti Karman (ORCID: 0000-0002-8635-5368) |
|---|---|
| affiliations | Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia (Alqahtani, Ahmad, Karman); Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia (Adwan) |
| collection | DOAJ |
| format | Article |
| issn | 2169-3536 |
| language | English |
| publishDate | 2025-01-01 |
| publisher | IEEE |
| series | IEEE Access |
| volume, pages | 13, 145416-145441 |
| doi | 10.1109/ACCESS.2025.3596045 |
| article number | 11113289 |
| topic | Retinal image; low contrast; pixel-transformer; adaptive enhancement; quantitative metrics; visual assessment |
| url | https://ieeexplore.ieee.org/document/11113289/ |