Precision and efficiency in skin cancer segmentation through a dual encoder deep learning model

Bibliographic Details
Main Authors: Asaad Ahmed, Guangmin Sun, Anas Bilal, Yu Li, Shouki A. Ebad
Format: Article
Language: English
Published: Nature Portfolio, 2025-02-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-025-88753-3
Description
Summary: Skin cancer is a prevalent health concern, and accurate segmentation of skin lesions is crucial for early diagnosis. Existing methods for skin lesion segmentation often face trade-offs between efficiency and feature extraction capabilities. This paper proposes Dual Skin Segmentation (DuaSkinSeg), a deep-learning model, to address this gap by utilizing dual encoders for improved performance. DuaSkinSeg leverages a pre-trained MobileNetV2 for efficient local feature extraction. Subsequently, a Vision Transformer-Convolutional Neural Network (ViT-CNN) encoder-decoder architecture extracts higher-level features focusing on long-range dependencies. This approach aims to combine the efficiency of MobileNetV2 with the feature extraction capabilities of the ViT encoder for improved segmentation performance. To evaluate DuaSkinSeg's effectiveness, we conducted experiments on three publicly available benchmark datasets: ISIC 2016, ISIC 2017, and ISIC 2018. The results demonstrate that DuaSkinSeg achieves competitive performance compared to existing methods, highlighting the potential of the dual encoder architecture for accurate skin lesion segmentation.
ISSN: 2045-2322
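To make the dual-encoder idea described in the summary concrete, the sketch below shows one way such a model can be wired up in PyTorch: a MobileNetV2 branch for efficient local features, a Transformer branch standing in for the ViT encoder to capture long-range dependencies, and a small decoder that fuses both. The class name, channel sizes, patch size, and concatenation-based fusion are illustrative assumptions only; they are not the published DuaSkinSeg implementation.

```python
# Minimal sketch of a dual-encoder skin-lesion segmentation model (assumed design,
# not the authors' code). Requires torch and torchvision.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2


class DualEncoderSeg(nn.Module):
    def __init__(self, num_classes: int = 1, embed_dim: int = 256, patch: int = 16):
        super().__init__()
        # Branch 1: efficient local features from MobileNetV2 (1280 channels, stride 32).
        # In practice this backbone would be loaded with ImageNet weights, as the paper
        # describes a pre-trained MobileNetV2; weights=None keeps the sketch offline.
        self.cnn = mobilenet_v2(weights=None).features
        self.cnn_proj = nn.Conv2d(1280, embed_dim, kernel_size=1)

        # Branch 2: patch embedding plus a small Transformer encoder for global context.
        self.patch_embed = nn.Conv2d(3, embed_dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=4)

        # Simple decoder: fuse both branches, then predict per-pixel lesion logits.
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * embed_dim, embed_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(embed_dim, num_classes, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]

        # Local-feature branch: (B, embed_dim, h/32, w/32).
        local = self.cnn_proj(self.cnn(x))

        # Global-context branch: tokens through the Transformer, reshaped back to a map.
        tokens = self.patch_embed(x)                      # (B, embed_dim, h/16, w/16)
        b, c, th, tw = tokens.shape
        tokens = self.transformer(tokens.flatten(2).transpose(1, 2))
        glob = tokens.transpose(1, 2).reshape(b, c, th, tw)

        # Resize the CNN map to match, concatenate, decode, and upsample to input size.
        local = nn.functional.interpolate(local, size=(th, tw),
                                          mode="bilinear", align_corners=False)
        logits = self.decoder(torch.cat([local, glob], dim=1))
        return nn.functional.interpolate(logits, size=(h, w),
                                         mode="bilinear", align_corners=False)


if __name__ == "__main__":
    model = DualEncoderSeg()
    mask_logits = model(torch.randn(1, 3, 224, 224))
    print(mask_logits.shape)  # torch.Size([1, 1, 224, 224])
```

The per-pixel logits would typically be trained against the ISIC binary lesion masks with a Dice or binary cross-entropy loss; the fusion step here is a plain concatenation, whereas the paper's ViT-CNN encoder-decoder may combine the two feature streams differently.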