Optimizing Deep Learning Models for Resource-Constrained Environments With Cluster-Quantized Knowledge Distillation
Abstract: Deep convolutional neural networks (CNNs) are highly effective in computer vision tasks but remain challenging to deploy in resource-constrained environments due to their high computational and memory requirements. Conventional model compression techniques, such as pruning and post-training...
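The abstract above names cluster-quantized knowledge distillation but is truncated before describing the method, so the following is only a minimal sketch of the two standard ingredients the title suggests: k-means (cluster) quantization of a layer's weights and Hinton-style distillation from a teacher's soft targets. Everything here, including the function names `cluster_quantize` and `distillation_loss`, the cluster count, the temperature `T`, and the mixing weight `alpha`, is an illustrative assumption rather than the paper's actual algorithm.

```python
# Minimal sketch only: k-means (cluster) weight quantization plus a standard
# Hinton-style knowledge-distillation loss. This is NOT the cited paper's
# method, whose details are not available in this truncated record.
import torch
import torch.nn as nn
import torch.nn.functional as F

def cluster_quantize(weight: torch.Tensor, n_clusters: int = 16, iters: int = 10) -> torch.Tensor:
    """Replace each weight with its nearest k-means centroid (Lloyd's algorithm)."""
    flat = weight.detach().flatten()
    # Initialize centroids uniformly over the observed weight range.
    centroids = torch.linspace(flat.min().item(), flat.max().item(), n_clusters)
    for _ in range(iters):
        # Assign every weight to its closest centroid, then recompute centroids.
        assign = (flat.unsqueeze(1) - centroids.unsqueeze(0)).abs().argmin(dim=1)
        for k in range(n_clusters):
            members = flat[assign == k]
            if members.numel() > 0:
                centroids[k] = members.mean()
    assign = (flat.unsqueeze(1) - centroids.unsqueeze(0)).abs().argmin(dim=1)
    return centroids[assign].view_as(weight)

def distillation_loss(student_logits, teacher_logits, labels, T: float = 4.0, alpha: float = 0.7):
    """Blend soft-target KL divergence (teacher) with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # T^2 keeps gradient magnitudes comparable across temperatures
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Usage: quantize a small student layer once, then distill against a teacher.
teacher, student = nn.Linear(8, 4), nn.Linear(8, 4)
with torch.no_grad():
    student.weight.copy_(cluster_quantize(student.weight, n_clusters=8))
x, labels = torch.randn(2, 8), torch.tensor([0, 3])
loss = distillation_loss(student(x), teacher(x).detach(), labels)
loss.backward()
```

In a real compression pipeline the quantization would typically be re-applied (or made differentiable) during distillation training so the student learns under its quantized weights; the sketch applies it once only to keep the example short.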
| Main Authors: | Niaz Ashraf Khan, A. M. Saadman Rafat |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2025-05-01 |
| Series: | Engineering Reports |
| Online Access: | https://doi.org/10.1002/eng2.70187 |
Similar Items
- Fully Quantized Neural Networks for Audio Source Separation, by Elad Cohen, et al. Published: 2024-01-01
- TCL: Time-Dependent Clustering Loss for Optimizing Post-Training Feature Map Quantization for Partitioned DNNs, by Oscar Artur Bernd Berg, et al. Published: 2025-01-01
- Knowledge Distillation in Object Detection for Resource-Constrained Edge Computing, by Arief Setyanto, et al. Published: 2025-01-01
- Optimal Knowledge Distillation through Non-Heuristic Control of Dark Knowledge, by Darian Onchis, et al. Published: 2024-08-01
- Mixed precision quantization based on information entropy, by Ting Qin, et al. Published: 2025-04-01