The Role of Teacher Calibration in Knowledge Distillation
Knowledge Distillation (KD) has emerged as an effective model compression technique in deep learning, enabling the transfer of knowledge from a large teacher model to a compact student model. While KD has demonstrated significant success, it is not yet fully understood which factors contribute to im...
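As background for the abstract above, the following is a minimal sketch of the standard knowledge-distillation objective (Hinton-style soft targets with temperature scaling), the setting in which teacher calibration is typically studied. It is a generic illustration, not the method proposed in this article; the function and parameter names are placeholders.

```python
# Generic sketch of the standard KD loss (soft targets with temperature T).
# Not the specific method from this article; all names are placeholders.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Combine cross-entropy on hard labels with KL divergence to the
    temperature-softened teacher distribution."""
    # Hard-label term: ordinary cross-entropy for the student.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label term: KL between temperature-scaled distributions.
    # A teacher's calibration (how soft/confident its outputs are)
    # directly shapes the signal carried by this term.
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    kl = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    return alpha * ce + (1.0 - alpha) * kl
```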
| Main Authors: | Suyoung Kim, Seonguk Park, Junhoo Lee, Nojun Kwak |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/11062864/ |
Similar Items
- Decoupled Time-Dimensional Progressive Self-Distillation With Knowledge Calibration for Edge Computing-Enabled AIoT
  by: Yingchao Wang, et al.
  Published: (2024-01-01)
- Aligning to the teacher: multilevel feature-aligned knowledge distillation
  by: Yang Zhang, et al.
  Published: (2025-08-01)
- LGFA-MTKD: Enhancing Multi-Teacher Knowledge Distillation with Local and Global Frequency Attention
  by: Xin Cheng, et al.
  Published: (2024-11-01)
- Autocorrelation Matrix Knowledge Distillation: A Task-Specific Distillation Method for BERT Models
  by: Kai Zhang, et al.
  Published: (2024-10-01)
- Transformer-Guided Serial Knowledge Distillation for High-Precision Anomaly Detection
  by: Danyang Wang, et al.
  Published: (2025-01-01)