FPG-AI RNN: A Technology-Agnostic Framework for the Automatic Acceleration of LSTM/GRU-Based Models on FPGAs

Bibliographic Details
Main Authors: Tommaso Pacini, Pietro Nannipieri, Silvia Moranti, Luca Fanucci
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/11027895/
Description
Summary: Recurrent Neural Networks (RNNs) are pivotal in artificial intelligence, excelling in tasks involving sequential data across fields such as natural language processing and time-series forecasting. FPGAs have emerged as an efficient technology for accelerating these algorithms, especially on resource- and power-constrained platforms such as edge devices. To improve the accessibility of this technology, both academia and industry are exploring the design of automation toolflows. This article proposes FPG-AI RNN, a novel technology-agnostic RNN-to-FPGA framework that enables the fast deployment of LSTM- and GRU-based models on FPGAs from different vendors and with diverse resource budgets, outclassing state-of-the-art solutions in terms of device portability. The toolflow leverages post-training compression techniques to reduce model complexity and streamline implementation. The developed accelerator is a highly tunable Hardware Description Language (HDL)-based architecture featuring no third-party Intellectual Properties (IPs). An iterative algorithm explores the parameter space of the underlying architecture, selecting a point that meets the user-defined constraints on the target RNN-FPGA pair, resource consumption, and performance. To demonstrate the technology independence of our solution, we collect results for a heterogeneous set of models on low-, mid-, and high-range FPGAs from AMD Xilinx, Intel, and Microchip. A comparison with state-of-the-art solutions targeting an LSTM-based model for sentence classification highlights the unmatched device portability of FPG-AI RNN and shows metrics (inference time, resource utilization, power consumption, post-quantization accuracy) on par with the best available solutions.
ISSN: 2169-3536