Boundary-Aware Transformer for Optic Cup and Disc Segmentation in Fundus Images


Bibliographic Details
Main Authors: Soohyun Wang, Byoungkug Kim, Doo-Seop Eom
Format: Article
Language: English
Published: MDPI AG 2025-05-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/15/9/5165
Description
Summary: Segmentation of the Optic Disc (OD) and Optic Cup (OC) boundaries in fundus images is a critical step for early glaucoma diagnosis, but accurate segmentation is challenging due to low boundary contrast and significant anatomical variability. To address these challenges, this study proposes a novel segmentation framework that integrates structure-preserving data augmentation, Boundary-aware Transformer Attention (BAT), and Geometry-aware Loss. We enhance data diversity while preserving vascular and tissue structures through truncated Gaussian-based sampling and colormap transformations. BAT strengthens boundary recognition by globally learning the inclusion relationship between the OD and OC within the skip connection paths of U-Net. Additionally, Geometry-aware Loss, which combines the normalized Hausdorff Distance with the Dice Loss, reduces fine-grained boundary errors and improves boundary precision. The proposed model outperforms existing state-of-the-art models across five public datasets—DRIONS-DB, Drishti-GS, REFUGE, G1020, and ORIGA—and achieves Dice scores of 0.9127 on Drishti-GS and 0.9014 on REFUGE for OC segmentation. For joint segmentation of the OD and OC, it attains high Dice scores of 0.9892 on REFUGE, 0.9782 on G1020, and 0.9879 on ORIGA. Ablation studies validate the independent contributions of each component and demonstrate their synergistic effect when combined. Furthermore, the proposed model more accurately captures the relative size and spatial alignment of the OD and OC and produces smooth and consistent boundary predictions in clinically significant regions such as the region of interest (ROI). These results support the clinical applicability of the proposed method in medical image analysis tasks requiring precise, boundary-focused segmentation.
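The abstract describes a Geometry-aware Loss that combines the Dice Loss with a normalized Hausdorff Distance, but this record does not give the exact formulation. The sketch below is a minimal NumPy illustration of that idea on binary masks, assuming equal weighting between the two terms, boundary extraction via 4-connectivity, and normalization of the Hausdorff distance by the image diagonal; the function names (`dice_loss`, `geometry_aware_loss`, etc.) are hypothetical and not taken from the paper.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """1 - Dice coefficient between two binary masks."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def boundary_points(mask):
    """Collect (row, col) coordinates of boundary pixels: foreground
    pixels with at least one background 4-neighbor (or on the image edge)."""
    pts = []
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                on_edge = i == 0 or j == 0 or i == h - 1 or j == w - 1
                if on_edge or not (mask[i - 1, j] and mask[i + 1, j]
                                   and mask[i, j - 1] and mask[i, j + 1]):
                    pts.append((i, j))
    return np.asarray(pts, dtype=float)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (N,2) and (M,2)."""
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def geometry_aware_loss(pred, target, lam=0.5):
    """Assumed combination: lam * Dice loss + (1 - lam) * Hausdorff
    distance normalized by the image diagonal."""
    hd = hausdorff(boundary_points(pred), boundary_points(target))
    diag = np.hypot(*pred.shape)
    return lam * dice_loss(pred, target) + (1.0 - lam) * hd / diag
```

On identical masks both terms vanish, while a spatially shifted prediction is penalized by the Hausdorff term even when the Dice overlap remains high, which is the boundary-focused behavior the abstract attributes to this loss. A practical implementation would use a differentiable surrogate (e.g., distance-transform-based) rather than the exact Hausdorff distance shown here.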
ISSN: 2076-3417