Stepwise self-knowledge distillation for skin lesion image classification
| Main Authors: | , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-07-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-025-10717-4 |
| Summary: | Abstract Self-knowledge distillation, which uses the same network structure for both the teacher and student models, has gained considerable attention in medical image classification. This approach enables knowledge distillation without pre-training a teacher model. However, current self-knowledge distillation methods struggle to determine appropriate learning objectives for the next stage, which limits how far the student model can improve. In this paper, we present a Stepwise Self-Knowledge Distillation framework, SW-SKD, to enhance dermatological image classification. Our framework incorporates a stepwise distillation strategy that efficiently explores learning objectives via a feature rectification block (FRB) and a logit rectification block (LRB). In the FRB, we extract the attention of the last stage of the network backbone and treat the attention-corrected features as the learning objective; stepwise distillation based on the FRB is then accomplished by performing attention-based intermediate feature distillation from back to front. The LRB implements logit-based knowledge distillation by adjusting the maximum value of the logit prediction output to match the correct class index; this rectified output serves as the learning objective for the next stage, again progressing from back to front. Our proposed SW-SKD framework effectively improves dermatological image classification. To demonstrate its effectiveness, extensive experiments are conducted on the HAM10000, ISIC2019, and Dermnet datasets. On HAM10000, with ResNet50 and ResNet101 as the baseline networks, Precision improves over the second-best method by 0.8% and 1.4%, and Recall by 2.1% and 0.9%, after weighted averaging. On ISIC2019 with the same baseline networks, average Precision improves by 0.5% and 0.9%, and average Recall by 1.1% and 0.7%. SW-SKD also outperforms other mainstream methods. The results show that SW-SKD can significantly enhance the student model’s performance in dermatological classification. |
| ISSN: | 2045-2322 |
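The logit rectification block described in the summary can be read as swapping the maximum logit with the logit at the ground-truth index, so that the corrected output predicts the right class while keeping the same set of logit values. A minimal sketch of that reading follows; this is an illustrative interpretation, not the authors' published implementation, and the function name and swap rule are assumptions:

```python
def rectify_logits(logits, target_idx):
    """Illustrative logit rectification: swap the maximum logit with the
    logit at the ground-truth index, so the rectified prediction's argmax
    lands on the correct class while the set of values is preserved."""
    rectified = list(logits)
    max_idx = max(range(len(rectified)), key=rectified.__getitem__)
    rectified[max_idx], rectified[target_idx] = rectified[target_idx], rectified[max_idx]
    return rectified

# Mispredicted sample: class 0 has the largest logit, but class 2 is correct.
print(rectify_logits([2.0, 0.5, 1.0], 2))  # [1.0, 0.5, 2.0]
# Already-correct sample: the swap is a no-op.
print(rectify_logits([0.1, 3.0, 0.2], 1))  # [0.1, 3.0, 0.2]
```

Under this reading, the rectified logits would then serve as the soft target for the preceding stage in the back-to-front stepwise distillation.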