SAM-Guided Accurate Pulmonary Nodule Image Segmentation
Addressing the challenges of inaccurate lung nodule segmentation caused by significant scale variations, indistinct boundary textures, and intense background noise, this study introduces a Segment Anything Model (SAM)-based feature-enhanced U-Net algorithm for lung nodule segmentation that combines the advantages of a Transformer structure with the traditional 3D U-Net framework. Specifically, the Transformer globally extracts structural features of lung nodules and adjacent tissues, whereas a shallow 3D U-Net focuses on capturing image texture characteristics. This approach leverages both structural and texture features for feature enhancement, which is essential for accurately segmenting lung nodules. Furthermore, guided by the SAM, the U-Net architecture is refined to assimilate multi-scale information after feature enhancement, with deep semantic information from the U-Net decoder being re-utilized to achieve precise lung nodule segmentation. The proposed model is evaluated on the LUNA16 (Lung Nodule Analysis 2016) and LNDb (Lung Nodule Database) lung nodule segmentation datasets, yielding promising results on LUNA16 in particular: Precision reaches 91.20%, Sensitivity reaches 89.95%, and the Dice Similarity Coefficient (DSC) reaches 98.90%. These findings not only showcase the model's strong performance in lung nodule segmentation but also underscore its potential to significantly improve segmentation accuracy compared with existing methods.
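The record stops at the abstract and contains none of the paper's code, so the PyTorch sketch below is only a hypothetical reading of the dual-branch "feature enhancement" the summary describes: a Transformer branch extracting global structural context and a shallow 3D U-Net branch extracting local texture, fused before mask prediction. All class names, layer sizes, and the fusion rule are illustrative assumptions, and the SAM-guided refinement stage is omitted entirely.

```python
# Hypothetical sketch of the dual-branch feature enhancement described in the
# abstract. Layer sizes, names, and the fusion rule are illustrative
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class ShallowUNet3D(nn.Module):
    """Two-level 3D U-Net branch that keeps local texture detail."""

    def __init__(self, in_ch=1, ch=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv3d(in_ch, ch, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool3d(2)
        self.enc2 = nn.Sequential(nn.Conv3d(ch, 2 * ch, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose3d(2 * ch, ch, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv3d(2 * ch, ch, 3, padding=1), nn.ReLU())

    def forward(self, x):
        e1 = self.enc1(x)              # full-resolution texture features
        e2 = self.enc2(self.down(e1))  # coarser context
        return self.dec(torch.cat([self.up(e2), e1], dim=1))


class TransformerBranch(nn.Module):
    """Patch-token Transformer branch for global nodule/tissue structure."""

    def __init__(self, in_ch=1, dim=16, patch=4, depth=2, heads=4):
        super().__init__()
        self.embed = nn.Conv3d(in_ch, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(dim, heads, dim_feedforward=4 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.patch = patch

    def forward(self, x):
        tok = self.embed(x)                    # (B, dim, D', H', W') patch tokens
        b, c, d, h, w = tok.shape
        seq = self.encoder(tok.flatten(2).transpose(1, 2))  # global attention
        tok = seq.transpose(1, 2).view(b, c, d, h, w)
        # upsample global features back to the input resolution for fusion
        return nn.functional.interpolate(tok, scale_factor=self.patch,
                                         mode="trilinear", align_corners=False)


class FeatureEnhancedSegmenter(nn.Module):
    """Fuses texture and structure features, then predicts a nodule mask."""

    def __init__(self, in_ch=1, ch=16):
        super().__init__()
        self.texture = ShallowUNet3D(in_ch, ch)
        self.structure = TransformerBranch(in_ch, ch)
        self.head = nn.Conv3d(2 * ch, 1, 1)    # 1x1x1 fusion head

    def forward(self, x):
        fused = torch.cat([self.texture(x), self.structure(x)], dim=1)
        return torch.sigmoid(self.head(fused))  # per-voxel nodule probability


if __name__ == "__main__":
    ct_patch = torch.randn(1, 1, 32, 32, 32)   # toy CT sub-volume
    mask = FeatureEnhancedSegmenter()(ct_patch)
    print(mask.shape)                           # torch.Size([1, 1, 32, 32, 32])
```

The channel concatenation followed by a 1×1×1 convolution is just one plausible reading of "feature enhancement"; the paper itself should be consulted for the actual fusion mechanism and the SAM guidance step.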
| Main Authors: | , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/11030594/ |
| ISSN: | 2169-3536 |
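The Precision (91.20%), Sensitivity (89.95%), and Dice Similarity Coefficient (98.90%) quoted in the summary follow their standard voxel-wise definitions. The short NumPy sketch below, an illustrative assumption since the record includes no evaluation code, shows how such figures are typically computed from binary masks.

```python
# Standard voxel-wise segmentation metrics; an illustrative sketch, not the
# paper's evaluation code.
import numpy as np


def segmentation_metrics(pred: np.ndarray, truth: np.ndarray, eps=1e-8):
    """Compute voxel-wise Precision, Sensitivity, and DSC for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # true positives
    fp = np.logical_and(pred, ~truth).sum()   # false positives
    fn = np.logical_and(~pred, truth).sum()   # false negatives
    precision = tp / (tp + fp + eps)          # TP / (TP + FP)
    sensitivity = tp / (tp + fn + eps)        # TP / (TP + FN), a.k.a. recall
    dsc = 2 * tp / (2 * tp + fp + fn + eps)   # 2·TP / (2·TP + FP + FN)
    return precision, sensitivity, dsc


# Toy example: two overlapping random 3D masks.
rng = np.random.default_rng(0)
truth = rng.random((32, 32, 32)) > 0.7
pred = np.logical_or(truth, rng.random((32, 32, 32)) > 0.95)
print(segmentation_metrics(pred, truth))
```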