Efficient microaneurysm segmentation in retinal images via a lightweight Attention U-Net for early DR diagnosis
| Main Authors: | , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2025-10-01 |
| Series: | SLAS Technology |
| Subjects: | |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S2472630325000810 |
| Summary: | Diabetic Retinopathy (DR) is a complication of diabetes that can cause vision impairment and lead to permanent blindness if left undiagnosed. The increasing number of diabetic patients, coupled with a shortage of ophthalmologists, highlights the urgent need for automated screening tools for early DR diagnosis. Among the earliest and most detectable signs of DR are microaneurysms (MAs). However, detecting MAs in fundus images remains challenging due to several factors, including image quality limitations, the subtle appearance of MA features, and the wide variability in color, shape, and texture. To address these challenges, we propose a novel preprocessing pipeline that enhances overall image quality, facilitating feature learning and improving the detection of subtle MA features in low-quality fundus images. Building on this preprocessing technique, we further develop a lightweight Attention U-Net model that significantly reduces the number of model parameters while achieving superior performance. By incorporating an attention mechanism, the model focuses on the subtle features of MAs, leading to more precise segmentation results. We evaluated our method on the IDRiD dataset, achieving a sensitivity of 0.81 and a specificity of 0.99, outperforming existing MA segmentation models. To validate its generalizability, we tested it on the e-ophtha dataset, where it achieved a sensitivity of 0.59 and a specificity of 0.99. Despite its lightweight design, our model demonstrates robust performance under challenging conditions such as noise and varying lighting, making it a promising tool for clinical applications and large-scale DR screening. |
|---|---|
| ISSN: | 2472-6303 |
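The abstract's attention mechanism refers to the gating scheme of Attention U-Net, in which each skip connection is modulated by a learned spatial attention map so the decoder emphasizes small structures such as microaneurysms. The article does not give implementation details, so the following is only a minimal NumPy sketch of a generic additive attention gate; the function name, weight shapes, and channel sizes are illustrative assumptions, not the authors' code.

```python
import numpy as np

def attention_gate(x, g, W_x, W_g, psi):
    """Sketch of an additive attention gate (Attention U-Net style).

    x   : skip-connection feature map, shape (H, W, Cx)
    g   : gating signal from the coarser decoder level, shape (H, W, Cg)
    W_x : projection of x to a shared intermediate space, shape (Cx, Ci)
    W_g : projection of g to the same space, shape (Cg, Ci)
    psi : maps the joint features to a single channel, shape (Ci, 1)

    Returns (gated_x, alpha): x scaled element-wise by a spatial
    attention map alpha in (0, 1), and alpha itself.
    """
    # ReLU(W_x x + W_g g): joint features where skip and gate agree
    inter = np.maximum(x @ W_x + g @ W_g, 0.0)
    # sigmoid(psi^T inter): per-pixel attention coefficient
    alpha = 1.0 / (1.0 + np.exp(-(inter @ psi)))   # shape (H, W, 1)
    return x * alpha, alpha

# Illustrative usage with random weights (channel sizes are assumptions)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 4, 8))      # skip features
g = rng.normal(size=(4, 4, 16))     # gating signal
W_x = rng.normal(size=(8, 8))
W_g = rng.normal(size=(16, 8))
psi = rng.normal(size=(8, 1))
gated, alpha = attention_gate(x, g, W_x, W_g, psi)
```

In a trained network the weights would be learned and the gating signal upsampled to match the skip resolution; the sketch omits those details to show only the gating arithmetic that lets the decoder suppress background pixels and concentrate on subtle lesion features.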