Disturbance‐Aware On‐Chip Training with Mitigation Schemes for Massively Parallel Computing in Analog Deep Learning Accelerator

Abstract: On‐chip training in analog in‐memory computing (AIMC) holds great promise for reducing data latency and enabling user‐specific learning. However, analog synaptic devices face significant challenges, particularly during parallel weight updates in crossbar arrays, where non‐uniform programming and disturbances often arise. Despite their importance, the disturbances that occur during training are difficult to quantify through a clear mechanism, so their impact on training performance remains underexplored. This work precisely identifies and quantifies the disturbance effects in 6T1C synaptic devices based on oxide semiconductors and capacitors, whose endurance and variation have been validated but which suffer worsening disturbance as the devices scale down. After clarifying the disturbance mechanism, three simple operational schemes are proposed to mitigate these effects, and their efficacy is validated through device array measurements. Furthermore, to evaluate learning feasibility in large‐scale arrays, real‐time disturbance‐aware training simulations are conducted by mapping synaptic arrays to convolutional neural networks trained on the CIFAR‐10 dataset. Software‐equivalent accuracy is achieved even under intensified disturbance, using a cell capacitor of 50 fF, comparable to dynamic random‐access memory. Combined with the device's inherent endurance and variation advantages, this approach offers a practical route to hardware‐based deep learning built on the 6T1C synaptic array.
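The abstract's core mechanism, a parallel (outer‐product) weight update during which half‐selected cells, those sharing an active row or column line with a programmed cell, are disturbed, can be illustrated with a short simulation. The Python/NumPy sketch below is illustrative only and is not the authors' code: the function name, the multiplicative charge‐leak disturbance model, and the disturb_rate parameter are all assumptions made for this example.

import numpy as np

def parallel_update_with_disturbance(W, x, delta, lr=0.01, disturb_rate=1e-3):
    # Intended parallel (rank-1, outer-product) update: W += lr * delta x^T.
    update = lr * np.outer(delta, x)
    # Fully selected cells sit at the crossing of an active row and an active
    # column; half-selected cells see exactly one active line (XOR below).
    active_rows = delta != 0
    active_cols = x != 0
    half_selected = active_rows[:, None] ^ active_cols[None, :]
    W = W + update
    # Assumed disturbance model: each half-select event leaks a fixed
    # fraction of the charge stored on the cell capacitor.
    return np.where(half_selected, W * (1.0 - disturb_rate), W)

# Toy usage on a 4x4 synaptic array.
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(4, 4))
x = np.array([1.0, 0.0, 1.0, 0.0])       # input activations on column lines
delta = np.array([0.0, 0.5, 0.0, -0.5])  # error signals on row lines
W = parallel_update_with_disturbance(W, x, delta)

Under this toy model, repeated updates make half‐selected weights decay toward zero, which is the kind of accuracy loss the paper's mitigation schemes and disturbance‐aware training are meant to counter.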

Bibliographic Details
Main Authors: Jaehyeon Kang, Jongun Won, Narae Han, Sangjun Hong, Jee‐Eun Yang, Sangwook Kim, Sangbum Kim
Format: Article
Language: English
Published: Wiley, 2025-06-01 (Advanced Science, vol. 12, no. 23)
Series: Advanced Science
ISSN: 2198-3844
Subjects: Analog in‐memory computing; Disturbance; Disturbance‐aware training; Half‐selected; IGZO TFT; Neuromorphic
Online Access: https://doi.org/10.1002/advs.202417635

Author Affiliations:
Jaehyeon Kang, Jongun Won, Narae Han, Sangbum Kim: Department of Material Science & Engineering, Inter‐university Semiconductor Research Center (ISRC), Research Institute of Advanced Materials (RIAM), Seoul National University, Seoul 08826, Republic of Korea
Sangjun Hong: Device Solutions, Samsung Electronics, Hwaseong 18479, Republic of Korea
Jee‐Eun Yang, Sangwook Kim: Samsung Advanced Institute of Technology (SAIT), Samsung Electronics, Suwon‐si 16678, Republic of Korea