Simultaneous Learning Knowledge Distillation for Image Restoration: Efficient Model Compression for Drones
Deploying high-performance image restoration models on drones is critical for applications like autonomous navigation, surveillance, and environmental monitoring. However, the computational and memory limitations of drones pose significant challenges to utilizing complex image restoration models in real-world scenarios.
Saved in:
| Main Author: | Yongheng Zhang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-03-01 |
| Series: | Drones |
| Subjects: | knowledge distillation; model compression; drone-view image restoration |
| Online Access: | https://www.mdpi.com/2504-446X/9/3/209 |
| Field | Value |
|---|---|
| _version_ | 1850089896996241408 |
| author | Yongheng Zhang |
| author_facet | Yongheng Zhang |
| author_sort | Yongheng Zhang |
| collection | DOAJ |
| description | Deploying high-performance image restoration models on drones is critical for applications like autonomous navigation, surveillance, and environmental monitoring. However, the computational and memory limitations of drones pose significant challenges to utilizing complex image restoration models in real-world scenarios. To address this issue, we propose the <b>S</b>imultaneous <b>L</b>earning <b>K</b>nowledge <b>D</b>istillation (<b>SLKD</b>) framework, specifically designed to compress image restoration models for resource-constrained drones. SLKD introduces a dual-teacher, single-student architecture that integrates two complementary learning strategies: <b>D</b>egradation <b>R</b>emoval <b>L</b>earning (<b>DRL</b>) and <b>I</b>mage <b>R</b>econstruction <b>L</b>earning (<b>IRL</b>). In DRL, the student encoder learns to eliminate degradation factors by mimicking Teacher A, which processes degraded images utilizing a BRISQUE-based extractor to capture degradation-sensitive natural scene statistics. Concurrently, in IRL, the student decoder reconstructs clean images by learning from Teacher B, which processes clean images, guided by a PIQE-based extractor that emphasizes the preservation of edge and texture features essential for high-quality reconstruction. This dual-teacher approach enables the student model to learn from both degraded and clean images simultaneously, achieving robust image restoration while significantly reducing computational complexity. Experimental evaluations across five benchmark datasets and three restoration tasks—deraining, deblurring, and dehazing—demonstrate that, compared to the teacher models, the SLKD student models achieve an average reduction of 85.4% in FLOPs and 85.8% in model parameters, with only a slight average decrease of 2.6% in PSNR and 0.9% in SSIM. These results highlight the practicality of integrating SLKD-compressed models into autonomous systems, offering efficient and real-time image restoration for aerial platforms operating in challenging environments. |
| format | Article |
| id | doaj-art-d47aab5a207f430980f8802b5eb72d3a |
| institution | DOAJ |
| issn | 2504-446X |
| language | English |
| publishDate | 2025-03-01 |
| publisher | MDPI AG |
| record_format | Article |
| series | Drones |
| spelling | doaj-art-d47aab5a207f430980f8802b5eb72d3a 2025-08-20T02:42:40Z; eng; MDPI AG; Drones; 2504-446X; 2025-03-01; 9(3):209; 10.3390/drones9030209; Simultaneous Learning Knowledge Distillation for Image Restoration: Efficient Model Compression for Drones; Yongheng Zhang, School of Computer, Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Haidian District, Beijing 100876, China; https://www.mdpi.com/2504-446X/9/3/209; knowledge distillation; model compression; drone-view image restoration |
| spellingShingle | Yongheng Zhang; Simultaneous Learning Knowledge Distillation for Image Restoration: Efficient Model Compression for Drones; Drones; knowledge distillation; model compression; drone-view image restoration |
| title | Simultaneous Learning Knowledge Distillation for Image Restoration: Efficient Model Compression for Drones |
| title_full | Simultaneous Learning Knowledge Distillation for Image Restoration: Efficient Model Compression for Drones |
| title_fullStr | Simultaneous Learning Knowledge Distillation for Image Restoration: Efficient Model Compression for Drones |
| title_full_unstemmed | Simultaneous Learning Knowledge Distillation for Image Restoration: Efficient Model Compression for Drones |
| title_short | Simultaneous Learning Knowledge Distillation for Image Restoration: Efficient Model Compression for Drones |
| title_sort | simultaneous learning knowledge distillation for image restoration efficient model compression for drones |
| topic | knowledge distillation; model compression; drone-view image restoration |
| url | https://www.mdpi.com/2504-446X/9/3/209 |
| work_keys_str_mv | AT yonghengzhang simultaneouslearningknowledgedistillationforimagerestorationefficientmodelcompressionfordrones |
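The abstract describes a dual-teacher, single-student objective: the student encoder mimics Teacher A's degraded-image features (DRL), the student decoder mimics Teacher B's clean-image features (IRL), and the student output is compared against the ground-truth clean image. The sketch below is a hypothetical illustration of how such a combined distillation loss could be composed, not the paper's implementation; the feature shapes, loss weights, and the use of a plain L1 distance are all illustrative assumptions, and the BRISQUE/PIQE-guided feature extractors are stubbed out as ordinary feature maps.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two feature maps (illustrative choice)."""
    return float(np.mean(np.abs(a - b)))

def slkd_loss(student_enc_feat, teacher_a_feat,
              student_dec_feat, teacher_b_feat,
              student_out, clean_img,
              w_drl=1.0, w_irl=1.0, w_rec=1.0):
    """Hypothetical combination of the three terms named in the abstract:
    - DRL term: student encoder features vs. Teacher A (degraded branch)
    - IRL term: student decoder features vs. Teacher B (clean branch)
    - reconstruction term: student output vs. ground-truth clean image
    Weights w_drl, w_irl, w_rec are assumed hyperparameters."""
    drl = l1(student_enc_feat, teacher_a_feat)
    irl = l1(student_dec_feat, teacher_b_feat)
    rec = l1(student_out, clean_img)
    return w_drl * drl + w_irl * irl + w_rec * rec

# Toy usage with random arrays standing in for feature maps and images.
rng = np.random.default_rng(0)
f = lambda: rng.standard_normal((8, 16, 16))
loss = slkd_loss(f(), f(), f(), f(), f(), f())
```

In a real training loop the two teachers would be frozen pretrained restoration networks and only the compact student would receive gradients, which is what yields the reported FLOPs and parameter reductions.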