CDSE-UNet: Enhancing COVID-19 CT Image Segmentation With Canny Edge Detection and Dual-Path SENet Feature Fusion

Bibliographic Details
Main Authors: Jiao Ding, Jie Chang, Renrui Han, Li Yang
Format: Article
Language: English
Published: Wiley 2025-01-01
Series: International Journal of Biomedical Imaging
Online Access: http://dx.doi.org/10.1155/ijbi/9175473
Description
Summary: Accurate segmentation of COVID-19 CT images is crucial for reducing the severity and mortality rates associated with COVID-19 infections. In response to the blurred boundaries and high variability that characterize lesion areas in COVID-19 CT images, we introduce CDSE-UNet: a novel UNet-based segmentation model that integrates Canny edge detection and a Dual-Path SENet Feature Fusion Block (DSBlock). The model enhances the standard UNet architecture by applying the Canny operator to detect edges in the input images and processing the resulting edge maps in a path that parallels a similar network structure for semantic feature extraction. A key innovation is the DSBlock, applied at corresponding network layers to effectively combine features from the two paths. Moreover, we developed a Multiscale Convolution Block (MSCovBlock) that replaces the standard convolution in UNet to adapt to varied lesion sizes and shapes. This addition not only aids in accurately classifying lesion edge pixels but also significantly improves channel differentiation and expands the capacity of the model. Our evaluations on public datasets demonstrate CDSE-UNet’s superior performance over other leading models. Specifically, CDSE-UNet achieved an accuracy of 0.9929, a recall of 0.9604, a DSC of 0.9063, and an IoU of 0.8286, outperforming UNet, Attention-UNet, Trans-Unet, Swin-Unet, and Dense-UNet on these metrics.
ISSN: 1687-4196
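
The summary describes three architectural ideas: a Canny-derived edge path running in parallel with the semantic UNet path, a Dual-Path SENet Feature Fusion Block (DSBlock) that merges the two paths at corresponding layers, and a Multiscale Convolution Block (MSCovBlock) that replaces UNet's standard convolutions. The record does not give the authors' exact designs, so the PyTorch sketch below is only one plausible reading of those names; the squeeze-and-excitation layout, the concatenate-then-1x1 fusion rule, the 3x3/5x5/7x7 branch kernels, and all hyperparameters are assumptions for illustration, not the published implementation.

import torch
import torch.nn as nn


class SEBlock(nn.Module):
    # Standard squeeze-and-excitation channel attention (Hu et al., 2018).
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w


class DSBlock(nn.Module):
    # Assumed form of the Dual-Path SENet Feature Fusion Block: recalibrate
    # the semantic-path and edge-path features with SE attention, then merge
    # them with a 1x1 convolution.
    def __init__(self, channels: int):
        super().__init__()
        self.se_semantic = SEBlock(channels)
        self.se_edge = SEBlock(channels)
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, semantic_feat, edge_feat):
        fused = torch.cat([self.se_semantic(semantic_feat),
                           self.se_edge(edge_feat)], dim=1)
        return self.merge(fused)


class MSCovBlock(nn.Module):
    # Assumed form of the Multiscale Convolution Block: parallel 3x3/5x5/7x7
    # branches whose outputs are concatenated and projected, so lesions of
    # varied size and shape are covered by different receptive fields.
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        branch_ch = out_ch // 3
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, k, padding=k // 2),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            )
            for k in (3, 5, 7)
        ])
        self.project = nn.Conv2d(3 * branch_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


if __name__ == "__main__":
    # Hypothetical shapes: 64-channel, 128x128 feature maps on each path.
    semantic = torch.randn(1, 64, 128, 128)
    edge = torch.randn(1, 64, 128, 128)
    print(DSBlock(64)(semantic, edge).shape)                     # (1, 64, 128, 128)
    print(MSCovBlock(1, 64)(torch.randn(1, 1, 128, 128)).shape)  # (1, 64, 128, 128)

In a full model, an edge map (for example, from OpenCV's cv2.Canny applied to each CT slice) would feed the edge-path encoder while the raw slice feeds the semantic path; the thresholds and the way the edge map is encoded are not specified in this record.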