Efficient nonlinear function approximation in analog resistive crossbars for recurrent neural networks

Abstract: Analog in-memory computing (IMC) has enabled energy-efficient, low-latency implementations of convolutional and fully-connected layers in deep neural networks (DNNs) by using the physics of parallel resistive memory arrays for computation. However, recurrent neural networks (RNNs), which are widely used for speech recognition and natural language processing, have seen only limited success with this approach, largely because of the significant time and energy penalties incurred in implementing the nonlinear activation functions that are abundant in such models. In this work, we experimentally demonstrate a nonlinear activation function integrated with a ramp analog-to-digital converter (ADC) at the periphery of the memory to improve in-memory implementations of RNNs. Our approach uses an extra column of memristors to produce an appropriately pre-distorted ramp voltage, so that the comparator output directly approximates the desired nonlinear function. We experimentally demonstrate programming of different nonlinear functions on a memristive array and, in simulation, incorporate the scheme into RNNs for keyword spotting and language-modelling tasks. Compared with other approaches, the in-memory, programmable ramp generator removes digital processing overhead and yields manifold improvements in area efficiency, energy efficiency, and throughput.
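To make the mechanism described in the abstract concrete, below is a minimal, software-only sketch (not the authors' implementation) of how a ramp ADC whose ramp is pre-distorted by the inverse of a target activation returns the activated value directly as its output code. The function names (predistorted_ramp, ramp_adc_activation), the 256-step ramp, and the sigmoid/logit choice are illustrative assumptions made for this example.

```python
import numpy as np

def predistorted_ramp(f_inv, n_steps, out_lo, out_hi):
    """Ramp samples are the inverse of the target activation, evaluated at
    uniformly spaced output codes. In hardware these samples would be set by
    the conductances of the extra memristor column driving the ramp generator
    (illustrative model only)."""
    codes = np.linspace(out_lo, out_hi, n_steps)
    return f_inv(codes), codes

def ramp_adc_activation(x, ramp, codes):
    """Single-slope conversion: the comparator output is high while the ramp
    stays below the analog input x; the code latched at the crossing
    approximates f(x) because the ramp was pre-distorted by f^-1."""
    below = ramp < x                      # comparator output per time step
    k = int(np.count_nonzero(below))      # counter value at the crossing
    return codes[min(k, len(codes) - 1)]

# Example: fold a sigmoid activation into the conversion step.
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
logit = lambda p: np.log(p / (1.0 - p))   # inverse of the sigmoid

ramp, codes = predistorted_ramp(logit, n_steps=256, out_lo=0.01, out_hi=0.99)

for x in (-2.0, 0.0, 2.0):                # stand-ins for analog MAC results from the crossbar
    y = ramp_adc_activation(x, ramp, codes)
    print(f"x={x:+.1f}  ramp-ADC output ~ {y:.3f}  ideal sigmoid = {sigmoid(x):.3f}")
```

Because the ramp samples would be stored in the extra memristor column, reprogramming those conductances changes the approximated activation without any digital post-processing, which is the source of the area, energy, and throughput gains the abstract claims.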

Bibliographic Details
Main Authors: Junyi Yang, Ruibin Mao, Mingrui Jiang, Yichuan Cheng, Pao-Sheng Vincent Sun, Shuai Dong, Giacomo Pedretti, Xia Sheng, Jim Ignowski, Haoliang Li, Can Li, Arindam Basu
Format: Article
Language: English
Published: Nature Portfolio 2025-01-01
Series: Nature Communications
Online Access:https://doi.org/10.1038/s41467-025-56254-6
collection DOAJ
id doaj-art-b5c6ab6ced4645848d10e4702514652c
institution Kabale University
issn 2041-1723
affiliations Junyi Yang, Yichuan Cheng, Pao-Sheng Vincent Sun, Shuai Dong, Haoliang Li, Arindam Basu: Department of Electrical Engineering, City University of Hong Kong; Ruibin Mao, Mingrui Jiang, Can Li: Department of Electrical and Electronic Engineering, The University of Hong Kong; Giacomo Pedretti, Xia Sheng, Jim Ignowski: Hewlett Packard Labs, Hewlett Packard Enterprise
citation Nature Communications, vol. 16, iss. 1, pp. 1-15 (2025-01-01), https://doi.org/10.1038/s41467-025-56254-6