A Low Power Memory-Integrated Hardware BNN-MLP Model on an FPGA for Current Signals in a Biosensor


Bibliographic Details
Main Authors: Geon-Hoe Kim, Dong-Gyun Kim, Sung-Jae Lee, Jong-Han Kim, Da-Yeong An, Hyejin Kim, Young-Gun Pu, Heejeong Jasmine Lee, Jun-Eun Park, Kang-Yoon Lee
Format: Article
Language:English
Published: IEEE 2025-01-01
Series:IEEE Access
Subjects:
Online Access:https://ieeexplore.ieee.org/document/11008630/
Description
Summary:This paper presents a method for processing and digitizing the current signal information output from biosensors, together with a hardware Artificial Intelligence (AI) model design that classifies the data using a low-power, compact AI algorithm, minimizing the high power consumption and on-chip area of the Convolutional Neural Network (CNN) models used in biosensors. The dataset was built using a data sampling method that digitizes the current signal values from the biosensor by sampling them 16 times with a 16-bit Analog-to-Digital Converter (ADC), enabling feature extraction in advance. To reduce power consumption and area, a BNN-MLP model without an extraction layer was designed, and to improve accuracy, a dense layer was added as the final layer. This approach enhances accuracy while using binary weights. The BNN-MLP model was designed in TensorFlow for the software model and implemented in hardware at the Register Transfer Level (RTL). To verify the similarity between the hardware-implemented BNN-MLP model and the software BNN-MLP model, classification was performed on a test dataset of 1,000 samples using a Field Programmable Gate Array (FPGA), and the classification accuracy for each class was compared. Implemented on an FPGA, the BNN-MLP model utilizes 7,994 Look-Up Tables (LUTs) and 9,780 Flip-Flops (FFs), occupying fewer resources than previous studies. It also operates at a lower power consumption of 0.157 W, achieving the highest power efficiency of 407.6 GOPS/W. Finally, the inference time for a single data point was 0.018 ms, much faster than in previous studies, confirming its potential for low-power, compact biosensor applications.
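The inference scheme the abstract describes — hidden layers with binary (±1) weights and sign activations, followed by a full-precision dense output layer — can be sketched in NumPy as follows. This is only an illustration of the general BNN-MLP technique, not the authors' implementation; the layer sizes, the 16-sample input width, and the class count are assumptions chosen for the example.

```python
import numpy as np

def binarize(w):
    # Deterministic sign binarization: real-valued weights become +1 / -1
    return np.where(w >= 0, 1.0, -1.0)

def bnn_mlp_infer(x, hidden_weights, out_weights):
    # Hidden BNN layers: binary weights with a sign activation
    a = x
    for w in hidden_weights:
        a = np.sign(a @ binarize(w))
        a[a == 0] = 1.0  # map sign(0) to +1 so activations stay binary
    # Full-precision dense output layer (added for accuracy, per the abstract)
    logits = a @ out_weights
    return int(np.argmax(logits))

rng = np.random.default_rng(0)
# Assumed dimensions: 16 ADC samples in, two 64-unit hidden layers, 4 classes
W1 = rng.standard_normal((16, 64))
W2 = rng.standard_normal((64, 64))
Wo = rng.standard_normal((64, 4))
x = rng.standard_normal(16)
print(bnn_mlp_infer(x, [W1, W2], Wo))
```

Because the binarized matrix products reduce to additions and subtractions of ±1 values, this style of layer maps naturally onto XNOR/popcount logic in RTL, which is what makes the FPGA implementation so small and power-efficient.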
ISSN:2169-3536