Multiclass skin lesion classification and localization from dermoscopic images using a novel network-level fused deep architecture and explainable artificial intelligence
| Main Authors: | , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | BMC, 2025-07-01 |
| Series: | BMC Medical Informatics and Decision Making |
| Subjects: | |
| Online Access: | https://doi.org/10.1186/s12911-025-03051-2 |
| Summary: | Abstract Background and objective Early detection and classification of skin cancer are critical for improving patient outcomes. Dermoscopic image analysis using Computer-Aided Diagnostics (CAD) is a powerful tool to assist dermatologists in identifying and classifying skin lesions. Traditional machine learning models require extensive feature engineering, which is time-consuming and less effective for complex data such as skin lesions. This study proposes a deep learning-based network-level fusion architecture that integrates multiple deep models to enhance the classification and localization of skin lesions in dermoscopic images. The goal is to address challenges such as irregular lesion shapes, inter-class similarity, and class imbalance while providing explainability through artificial intelligence. Methods A novel hybrid contrast enhancement technique was applied for pre-processing and dataset augmentation. Two deep learning models, a 5-block inverted residual network and a 6-block inverted bottleneck network, were designed and fused at the network level using a depth-concatenation approach. The models were trained using Bayesian optimization for hyperparameter tuning. Feature extraction was performed with a global average pooling layer, and shallow neural networks were used for final classification. Explainable AI techniques, including LIME, were used to interpret model predictions and localize lesion regions. Experiments were conducted on two publicly available datasets, HAM10000 and ISIC2018, which were split into training and testing sets. Results The proposed fused architecture achieved high classification accuracy: 91.3% on HAM10000 and 90.7% on ISIC2018. Sensitivity, precision, and F1-scores improved significantly after data augmentation, with precision of up to 90.91%. The explainable AI component effectively localized lesion areas with high confidence, enhancing the model's interpretability. Conclusions The network-level fusion architecture combined with explainable AI techniques significantly improved the classification and localization of skin lesions. The augmentation and contrast enhancement processes enhanced lesion visibility, while fusion of the models optimized classification accuracy. This approach shows potential for implementation in CAD systems for skin cancer diagnosis, although future work is required to address limitations in computational resource requirements and training time. Clinical trial number Not applicable. |
| ISSN: | 1472-6947 |
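The fusion step the abstract describes — concatenating the feature maps of two backbone networks along the channel (depth) axis, then collapsing them with global average pooling before a shallow classifier — can be sketched in NumPy. The feature-map shapes below are illustrative assumptions, not values reported by the paper:

```python
import numpy as np

def depth_concat_fusion(feat_a, feat_b):
    """Network-level fusion: stack two feature maps along the depth axis.

    feat_a, feat_b: arrays of shape (H, W, C1) and (H, W, C2) with
    matching spatial dimensions. Returns shape (H, W, C1 + C2).
    """
    assert feat_a.shape[:2] == feat_b.shape[:2], "spatial dims must match"
    return np.concatenate([feat_a, feat_b], axis=-1)

def global_average_pool(feat):
    """Average each channel over the spatial grid, yielding one
    feature vector per image for the shallow classifier."""
    return feat.mean(axis=(0, 1))

# Toy outputs standing in for the two backbones (shapes are assumptions):
a = np.random.rand(7, 7, 128)  # e.g. 5-block inverted residual branch
b = np.random.rand(7, 7, 160)  # e.g. 6-block inverted bottleneck branch

fused = depth_concat_fusion(a, b)    # shape (7, 7, 288)
vector = global_average_pool(fused)  # shape (288,)
```

Depth concatenation keeps both branches' representations intact rather than averaging them away, which is why the pooled vector's length is the sum of the two channel counts.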