Proposal of an open-source accelerators library for inference of transformer networks in edge devices based on Linux


Bibliographic Details
Main Authors: Alejandro Araya-Núñez, Justin Fernández-Badilla, Daniel González-Vargas, Jimena León-Huertas, Erick-Andrés Obregón-Fonseca, Danny Xie-Li
Format: Article
Language: English
Published: Instituto Tecnológico de Costa Rica, 2024-06-01
Series: Tecnología en Marcha
Online Access: https://revistas.tec.ac.cr/index.php/tec_marcha/article/view/7225
Description
Summary: Transformer networks have been a major milestone in the natural language processing field and have powered technologies like ChatGPT, which are undeniably changing people's lives. This article discusses the characteristics and computational complexity of Transformer networks, as well as the potential for improving their performance in low-resource environments through the use of hardware accelerators. This research has the potential to significantly improve the performance of Transformers on edge and low-end devices. In addition, Edge Artificial Intelligence, Hardware Acceleration, and Tiny Machine Learning algorithms are explored. The proposed methodology includes a software layer and a hardware layer, with a minimal Linux-based image built on top of a synthesized RTL design. The proposal also includes a library of hardware accelerators that can be customized by selecting the desired accelerators according to the device's resources and the operations to be accelerated.
ISSN: 0379-3982
2215-3241
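
Illustrative sketch: the abstract describes the software layer, the synthesized RTL, and the customizable accelerator library only at a high level, and the article's actual API is not given here. Purely as a hedged illustration of how a Linux userspace program on an edge device might drive one memory-mapped accelerator, the sketch below maps a hypothetical control/status register pair through /dev/mem and polls for completion. The base address, register offsets, and register semantics are assumptions made for this example, not part of the published proposal.

/* Hypothetical userspace access to a memory-mapped accelerator on embedded Linux.
 * All addresses and register meanings are illustrative assumptions. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define ACCEL_BASE  0x43C00000u  /* assumed base address of the RTL accelerator block */
#define ACCEL_SPAN  0x1000u      /* assumed size of its register window */
#define REG_CTRL    0x00u        /* assumed control register: bit 0 = start */
#define REG_STATUS  0x04u        /* assumed status register: bit 0 = done */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    /* Map the accelerator's register window into this process. */
    volatile uint32_t *regs = mmap(NULL, ACCEL_SPAN, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, ACCEL_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    regs[REG_CTRL / 4] = 1;                   /* start the accelerated operation */
    while ((regs[REG_STATUS / 4] & 1) == 0)   /* busy-wait until the hardware reports done */
        ;

    munmap((void *)regs, ACCEL_SPAN);
    close(fd);
    return 0;
}

A production library of this kind would more likely expose each accelerator through a kernel driver or the Linux UIO framework rather than raw /dev/mem access; the snippet only shows the basic register-level interaction between the software and hardware layers.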