A Sliding‐Kernel Computation‐In‐Memory Architecture for Convolutional Neural Network

Abstract: Presently described is a sliding-kernel computation-in-memory (SKCIM) architecture conceptually involving two overlapping layers of functional arrays: one contains memory elements and artificial synapses for neuromorphic computation; the other stores and slides convolutional kernel matrices. A low-temperature metal-oxide thin-film transistor (TFT) technology capable of monolithically integrating single-gate TFTs, dual-gate TFTs, and memory capacitors is deployed to construct a physical SKCIM system. A 32 × 32 SKCIM system is applied to common convolution tasks, exhibiting an 88% reduction in memory-access operations compared with state-of-the-art systems. A more involved demonstration applies a 5-layer, SKCIM-based convolutional neural network to the classification of the Modified National Institute of Standards and Technology (MNIST) dataset of handwritten numerals, achieving an accuracy of over 95%.
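The core arithmetic the abstract refers to, a kernel matrix sliding over an input feature map, can be sketched in plain Python. This is an illustrative sketch only: `slide_convolve` and the sample image and kernel are invented here for demonstration and do not reflect the paper's SKCIM circuit, which performs this operation in memory.

```python
def slide_convolve(image, kernel):
    """Valid-mode (no padding), stride-1 sliding-kernel convolution.

    `image` and `kernel` are lists of lists of numbers. At each output
    position the kernel overlaps one patch of the image; the output value
    is the sum of elementwise products over that patch.
    """
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1  # output shrinks by kernel size - 1
    out = [[0] * ow for _ in range(oh)]
    for r in range(oh):          # slide the kernel down the image
        for c in range(ow):      # slide the kernel across the image
            acc = 0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            out[r][c] = acc
    return out

# Example: a 3x3 vertical-edge kernel over a 4x4 map gives a 2x2 output.
image = [
    [1, 2, 3, 0],
    [4, 5, 6, 1],
    [7, 8, 9, 2],
    [0, 1, 2, 3],
]
kernel = [
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
]
print(slide_convolve(image, kernel))
```

Each output element requires re-reading an overlapping input patch, which is exactly the repeated memory traffic that an in-memory sliding-kernel scheme is designed to cut down.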

Bibliographic Details
Main Authors: Yushen Hu, Xinying Xie, Tengteng Lei, Runxiao Shi, Man Wong
Format: Article
Language: English
Published: Wiley, 2024-12-01
Series: Advanced Science
Subjects: convolutional computing; convolutional neural network; metal-oxide; neuromorphic computing; thin film transistor
Online Access: https://doi.org/10.1002/advs.202407440
collection DOAJ
id doaj-art-c4e1c44d2e694cb39749d844fc7832e3
issn 2198-3844
Author affiliations (all five authors): State Key Laboratory of Advanced Displays and Optoelectronics Technologies, Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology (HKUST), Hong Kong, China
Published in Advanced Science, vol. 11, no. 46 (2024-12-01); doi:10.1002/advs.202407440
Keywords: convolutional computing; convolutional neural network; metal-oxide; neuromorphic computing; thin film transistor