Disturbance‐Aware On‐Chip Training with Mitigation Schemes for Massively Parallel Computing in Analog Deep Learning Accelerator

Abstract: On‐chip training in analog in‐memory computing (AIMC) holds great promise for reducing data latency and enabling user‐specific learning. However, analog synaptic devices face significant challenges, particularly during parallel weight updates in crossbar arrays, where non‐uniform programming ...
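
As context for the parallel weight updates mentioned in the abstract, below is a minimal NumPy sketch of the standard outer‐product (rank‐1) update applied to a whole crossbar array in one step, with a simple illustrative non‐uniformity (update asymmetry plus cycle‐to‐cycle noise). The function names, array sizes, and the disturbance model are assumptions for illustration only and do not reproduce the mitigation schemes proposed in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

rows, cols = 64, 32                              # illustrative crossbar dimensions
W = rng.normal(0.0, 0.1, size=(rows, cols))      # simplified analog weight array

def parallel_update(W, x, d, lr=0.01, asymmetry=0.05, noise=0.02, rng=rng):
    """Apply a rank-1 (outer-product) update to the entire array at once,
    as done in pulse-coincidence schemes for analog crossbar training.

    x : input activation vector (length = rows)
    d : backpropagated error vector (length = cols)

    The ideal update is -lr * outer(x, d). Two illustrative non-idealities
    (assumed here, not taken from the article) are added:
      * update asymmetry: potentiation and depression have different gains
      * cycle-to-cycle noise: each device deviates randomly from its target step
    """
    dw_ideal = -lr * np.outer(x, d)
    gain = np.where(dw_ideal >= 0, 1.0 + asymmetry, 1.0 - asymmetry)
    dw_real = gain * dw_ideal * (1.0 + noise * rng.standard_normal(dw_ideal.shape))
    return W + dw_real

# one illustrative training step
x = rng.standard_normal(rows)
d = rng.standard_normal(cols)
W = parallel_update(W, x, d)
```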


Bibliographic Details
Main Authors: Jaehyeon Kang, Jongun Won, Narae Han, Sangjun Hong, Jee‐Eun Yang, Sangwook Kim, Sangbum Kim
Format: Article
Language: English
Published: Wiley 2025-06-01
Series: Advanced Science
Online Access: https://doi.org/10.1002/advs.202417635