HEE-SegGAN: A holistically-nested edge-enhanced GAN for pulmonary nodule segmentation.


Bibliographic Details
Main Authors: Yong Wang, Seri Mastura Mustaza, Mohammad Syuhaimi Ab-Rahman, Siti Salasiah Mokri
Format: Article
Language: English
Published: Public Library of Science (PLoS) 2025-01-01
Series: PLoS ONE
Online Access:https://doi.org/10.1371/journal.pone.0328629
Description
Summary: Accurate segmentation of pulmonary nodules plays a critical role in monitoring disease progression and enabling early lung cancer screening. However, the task remains challenging due to the complex morphological variability of pulmonary nodules in CT images and the limited availability of well-annotated datasets. In this study, we proposed HEE-SegGAN, a holistically-nested edge-enhanced generative adversarial network, which integrated HED-U-Net into a GAN framework to improve model robustness and edge segmentation accuracy. To incorporate spatial continuity, we constructed pseudo-color CT images by merging three consecutive lung CT slices into the RGB channels. The generator adopted HED-U-Net, while the discriminator was implemented as a convolutional neural network. Two inverted residual modules were embedded within HED-U-Net to fuse inter-slice spatial information and enhance salient features through a channel attention mechanism. Furthermore, we exploited the side outputs of HED-U-Net for deep supervision, ensuring that the generated results align with the statistical characteristics of real data. To mitigate mode collapse, we incorporated minibatch discrimination into the discriminator, encouraging diversity in the generated samples. We also refined the loss function to better capture edge-level details and improve segmentation precision in edge regions. Finally, a series of ablation experiments on the LUNA16 dataset demonstrated the effectiveness of the proposed method. Compared with traditional 3D methods, our approach extracted features more efficiently while preserving spatial information and reducing computational requirements. The multi-scale feature maps of HED-U-Net enabled deeply supervised GAN training, and the combination of feature matching and minibatch discrimination further improved model stability and segmentation performance.
Overall, the proposed pipeline exhibited strong potential for accurate segmentation across a wide range of medical imaging tasks.
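The pseudo-color construction described in the summary — merging three consecutive CT slices into the RGB channels — can be sketched as follows. This is an illustrative reconstruction, not the authors' released code; the function name and the lung CT intensity window (-1000 to 400 HU) are assumptions.

```python
import numpy as np

def make_pseudo_color(volume, i):
    """Merge slices i-1, i, i+1 of a CT volume (D, H, W) into one
    pseudo-color RGB image (H, W, 3), clamping at volume boundaries."""
    d = volume.shape[0]
    idx = [max(i - 1, 0), i, min(i + 1, d - 1)]
    slices = volume[idx].astype(np.float32)          # (3, H, W)
    # Normalize to [0, 1] with a typical lung window (assumed values).
    lo, hi = -1000.0, 400.0
    slices = np.clip((slices - lo) / (hi - lo), 0.0, 1.0)
    return np.transpose(slices, (1, 2, 0))           # (H, W, 3)

# Toy example: a 5-slice volume of Hounsfield-unit integers.
vol = np.random.randint(-1000, 400, size=(5, 64, 64))
rgb = make_pseudo_color(vol, 2)
print(rgb.shape)  # (64, 64, 3)
```

Feeding the stacked slices through the RGB channels lets a 2D network see inter-slice context at a fraction of the cost of a full 3D convolutional model.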
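The minibatch discrimination used to mitigate mode collapse can be illustrated with the standard formulation: each sample's intermediate discriminator features are projected through a learned tensor, and per-sample statistics of L1 distances to the rest of the batch are appended, letting the discriminator detect batches that lack diversity. This numpy sketch uses a random projection tensor in place of a learned one; all names and shapes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def minibatch_discrimination(features, T):
    """features: (N, A) discriminator features for a batch of N samples.
    T: (A, B, C) projection tensor (learned in practice, random here).
    Returns (N, B) cross-sample similarity statistics to concatenate
    onto `features` before the discriminator's final layers."""
    M = np.einsum('na,abc->nbc', features, T)          # (N, B, C)
    # L1 distance between every pair of samples, per projection row b.
    diff = np.abs(M[:, None] - M[None, :]).sum(-1)     # (N, N, B)
    # Negative-exponential similarity, summed over the batch.
    return np.exp(-diff).sum(axis=1)                   # (N, B)

feats = rng.normal(size=(8, 16))     # batch of 8 feature vectors
T = rng.normal(size=(16, 4, 3))
o = minibatch_discrimination(feats, T)
print(o.shape)  # (8, 4)
```

Because each sample's statistic includes its own self-distance term exp(0) = 1, every output entry is at least 1; a generator collapsing to near-identical samples pushes these statistics up, which the discriminator can exploit.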
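Deep supervision over the HED-U-Net side outputs amounts to a weighted sum of per-scale segmentation losses against the ground-truth mask. A minimal sketch, assuming binary cross-entropy per side output and maps already upsampled to the mask's resolution (the weighting scheme here is an assumption, not the paper's exact loss):

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy averaged over the map."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-(target * np.log(pred)
                   + (1 - target) * np.log(1 - pred)).mean())

def deep_supervision_loss(side_outputs, target, weights=None):
    """Weighted sum of BCE between each side output and the mask.
    Equal weights by default; learned or tuned weights in practice."""
    if weights is None:
        weights = [1.0 / len(side_outputs)] * len(side_outputs)
    return sum(w * bce(p, target) for w, p in zip(weights, side_outputs))

rng = np.random.default_rng(0)
mask = (rng.random((32, 32)) > 0.5).astype(np.float32)
sides = [rng.random((32, 32)) for _ in range(4)]   # 4 side outputs
loss = deep_supervision_loss(sides, mask)
print(loss > 0.0)  # True
```

Supervising every scale keeps gradients flowing to the encoder's shallow layers, which is what lets the multi-scale side outputs stabilize adversarial training as the summary describes.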
ISSN:1932-6203