A Hardware Accelerator for the Inference of a Convolutional Neural Network
Convolutional Neural Networks (CNNs) are becoming increasingly popular in deep learning applications, e.g. image classification, speech recognition, and medicine, to name a few. However, CNN inference is computationally intensive and demands a large amount of memory resources. In this work is prop...
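As context for the compute cost the abstract refers to, the sketch below (illustrative only, not taken from the paper) implements a direct 2D convolution over a single channel and counts the multiply-accumulate (MAC) operations, the workload a hardware accelerator is built to speed up:

```python
# Illustrative sketch (not from the paper): direct single-channel 2D
# convolution with "valid" padding, counting multiply-accumulate (MAC) ops.
def conv2d(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1  # output size for "valid" padding
    macs = 0
    out = [[0.0] * ow for _ in range(oh)]
    for r in range(oh):
        for c in range(ow):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
                    macs += 1
            out[r][c] = acc
    return out, macs

# Even a tiny 8x8 input with a 3x3 kernel needs 6*6*9 = 324 MACs;
# real CNN layers scale this by channels, filters, and batch size.
img = [[1.0] * 8 for _ in range(8)]
ker = [[1.0] * 3 for _ in range(3)]
out, macs = conv2d(img, ker)
print(macs)  # 324
```

The nested-loop structure is what makes the operation both compute- and memory-bound, which motivates dedicated hardware for inference.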
| Main Authors: | Edwin González, Walter D. Villamizar Luna, Carlos Augusto Fajardo Ariza |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Editorial Neogranadina, 2019-11-01 |
| Series: | Ciencia e Ingeniería Neogranadina |
| Online Access: | https://revistasunimilitareduco.biteca.online/index.php/rcin/article/view/4194 |
Similar Items
- FPGA-QNN: Quantized Neural Network Hardware Acceleration on FPGAs
  by: Mustafa Tasci, et al.
  Published: (2025-01-01)
- A Pipelined Hardware Design of FNTT and INTT of CRYSTALS-Kyber PQC Algorithm
  by: Muhammad Rashid, et al.
  Published: (2024-12-01)
- An efficient loop tiling framework for convolutional neural network inference accelerators
  by: Hongmin Huang, et al.
  Published: (2022-01-01)
- A Survey on Hardware Accelerators for Large Language Models
  by: Christoforos Kachris
  Published: (2025-01-01)
- CNN Accelerator Performance Dependence on Loop Tiling and the Optimum Resource-Constrained Loop Tiling
  by: Chester Sungchung Park, et al.
  Published: (2025-01-01)