Reducing Memory and Computational Cost for Deep Neural Network Training with Quantized Parameter Updates
For embedded devices, both memory and computational efficiency are essential due to their constrained resources. However, neural network training remains both computation- and memory-intensive. Although many existing studies apply quantization schemes to mitigate memory overhead, they often employ st...
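The record only carries the truncated abstract above, but as a rough illustration of what "quantized parameter updates" can mean during training, the sketch below quantizes each SGD update to a signed 8-bit grid before applying it to the weights. The helper names (`quantize_update`, `apply_quantized_update`) and the per-tensor max-abs scaling are assumptions made for this illustration, not the scheme proposed in the article.

```python
import numpy as np

def quantize_update(update, num_bits=8):
    """Uniformly quantize a parameter update to a signed low-bit grid.

    Per-tensor max-abs scaling is an assumption of this sketch, not
    necessarily what the article's method uses.
    """
    qmax = 2 ** (num_bits - 1) - 1                  # 127 for signed 8-bit values
    max_abs = float(np.max(np.abs(update)))
    scale = max_abs / qmax if max_abs > 0 else 1.0  # avoid division by zero
    q = np.clip(np.round(update / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def apply_quantized_update(weights, grad, lr=0.01, num_bits=8):
    """One SGD step where the update is quantized before being applied."""
    q, scale = quantize_update(-lr * grad, num_bits=num_bits)
    return weights + q.astype(weights.dtype) * scale

# Toy usage: a single quantized update step on a small weight tensor.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
g = rng.standard_normal((4, 4)).astype(np.float32)
w = apply_quantized_update(w, g)
```

Storing and transmitting updates in this low-bit form is what reduces the memory traffic of training; the trade-off is the rounding error introduced by the quantization grid.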
| Main Authors: | Leo Buron, Andreas Erbslöh, Gregor Schiele |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Graz University of Technology, 2025-08-01 |
| Series: | Journal of Universal Computer Science |
| Online Access: | https://lib.jucs.org/article/164737/download/pdf/ |
Similar Items
- Smoothed per-tensor weight quantization: a robust solution for neural network deployment
  by: Xin Chang
  Published: (2025-07-01)
- Quantization for a Condensation System
  by: Shivam Dubey, et al.
  Published: (2025-04-01)
- Randomized Quantization for Privacy in Resource Constrained Machine Learning at-the-Edge and Federated Learning
  by: Ce Feng, et al.
  Published: (2025-01-01)
- Enhanced Vector Quantization for Embedded Machine Learning: A Post-Training Approach With Incremental Clustering
  by: Thommas K. S. Flores, et al.
  Published: (2025-01-01)
- ClipQ: Clipping Optimization for the Post-Training Quantization of Convolutional Neural Network
  by: Yiming Chen, et al.
  Published: (2025-04-01)