Custom Network Quantization Method for Lightweight CNN Acceleration on FPGAs
Low-bit quantization can effectively reduce deep neural network storage as well as computation costs. Existing quantization methods have yielded unsatisfactory results when applied to lightweight networks. Additionally, following network quantization, the differences in data types...
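To make the storage-reduction claim concrete, here is a minimal sketch of generic symmetric uniform quantization — an illustrative baseline technique, not the custom method proposed in this article. The bit width, weight values, and function names are assumptions for illustration.

```python
# Sketch of symmetric uniform low-bit quantization (generic technique,
# not the article's custom method). Floats are mapped to signed
# integers in [-(2^(bits-1)-1), 2^(bits-1)-1] with a single scale.

def quantize(weights, bits=4):
    """Quantize a list of floats to signed `bits`-bit integers plus a scale."""
    qmax = 2 ** (bits - 1) - 1
    # Scale chosen so the largest-magnitude weight maps to +/-qmax.
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return [v * scale for v in q]

# Example: 4-bit codes cut 32-bit float storage by 8x, at the cost
# of a small reconstruction error per weight.
weights = [0.62, -0.31, 0.04, -0.7]
q, s = quantize(weights, bits=4)
approx = dequantize(q, s)
```

Lightweight networks are more sensitive to this reconstruction error than over-parameterized ones, which is the gap the article's method targets.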
Saved in:
| Main Authors: | Lingjie Yi, Xianzhong Xie, Yi Wan, Bo Jiang, Junfan Chen |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2024-01-01 |
| Series: | International Journal of Distributed Sensor Networks |
| Online Access: | http://dx.doi.org/10.1155/2024/8018810 |
Similar Items
- FPGA-QNN: Quantized Neural Network Hardware Acceleration on FPGAs
  by: Mustafa Tasci, et al.
  Published: (2025-01-01)
- Better Scalability: Improvement of Block-Based CNN Accelerator for FPGAs
  by: Yan Chen, et al.
  Published: (2024-01-01)
- A Configurable Accelerator for CNN-Based Remote Sensing Object Detection on FPGAs
  by: Yingzhao Shao, et al.
  Published: (2024-01-01)
- Accelerating machine learning at the edge with approximate computing on FPGAs
  by: Luis Gerardo León-Vega, et al.
  Published: (2022-11-01)
- FPGAs for Domain Experts
  by: Wim Vanderbauwhede, et al.
  Published: (2020-01-01)