Device Specifications for Neural Network Training with Analog Resistive Cross‐Point Arrays Using Tiki‐Taka Algorithms

Recently, specialized training algorithms for analog cross‐point array‐based neural network accelerators have been introduced to counteract device non‐idealities such as update asymmetry and cycle‐to‐cycle variation, achieving software‐level performance in neural network training. However, a quantitative analysis of how these algorithms affect the relaxation of device specifications is yet to be conducted. This study provides a detailed analysis by elucidating the device prerequisites for training with the Tiki‐Taka algorithm versions 1 (TTv1) and 2 (TTv2), which leverage the dynamics between multiple arrays to compensate for device non‐idealities. A multiparameter simulation is conducted to assess the impact of device non‐idealities, including asymmetry, retention, number of pulses, and cycle‐to‐cycle variation, on neural network training. Using pattern‐recognition accuracy as a performance metric, the required device specifications for each algorithm are revealed. The results demonstrate that the standard stochastic gradient descent algorithm requires stringent device specifications. Conversely, TTv2 permits more lenient device specifications than the TTv1 across all examined non‐idealities. The analysis provides guidelines for the development, optimization, and utilization of devices for high‐performance neural network training using Tiki‐Taka algorithms.
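The abstract turns on how the Tiki‐Taka algorithms use the coupled dynamics of multiple arrays to absorb device non‐idealities. The plain‐NumPy sketch below illustrates the general idea of such a two‐array scheme: noisy, asymmetric pulsed updates accumulate on a fast array A, while a slow array C receives periodic transfers and holds the effective weights. The soft‐bounds device model, the mixing factor gamma, the transfer schedule, and the surrogate training signal are all illustrative assumptions, not the paper's simulator or its settings.

    # Minimal sketch of a Tiki-Taka v1 (TTv1)-style two-array update on an
    # assumed asymmetric "soft-bounds" device model. All constants and the
    # training signal are illustrative, not taken from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    def pulse_update(w, pulses, dw0=0.01, bound=1.0, c2c=0.05):
        # Soft-bounds device: the step size shrinks as the conductance nears
        # its bound (update asymmetry), and each pulse train carries
        # multiplicative cycle-to-cycle noise. Updates w in place.
        up = np.maximum(pulses, 0)        # positive pulse counts
        dn = np.maximum(-pulses, 0)       # negative pulse counts
        noise = 1.0 + c2c * rng.standard_normal(np.shape(w))
        w += dw0 * noise * (up * (1.0 - w / bound) - dn * (1.0 + w / bound))
        return w

    n_out, n_in = 4, 8
    A = np.zeros((n_out, n_in))               # fast array: gradient accumulator
    C = rng.normal(0.0, 0.1, (n_out, n_in))   # slow array: holds the weights
    gamma = 0.5                                # effective weight W = gamma*A + C
    transfer_every = 10                        # steps between column transfers

    for step in range(200):
        x = rng.normal(size=n_in)              # input vector
        target = rng.normal(size=n_out)        # surrogate target
        err = (gamma * A + C) @ x - target     # stand-in for a backprop delta
        # The SGD-style outer-product update lands on A, not on the weights
        # directly; on an asymmetric device, A also drifts back toward its
        # symmetry point between updates, low-pass filtering gradient noise.
        pulse_update(A, np.rint(-np.outer(err, x)).astype(int))
        if step % transfer_every == 0:
            # Transfer: read one column of A and re-apply it to C as pulses.
            j = (step // transfer_every) % n_in
            pulse_update(C[:, j], np.rint(A[:, j] / 0.01).astype(int))

TTv2 is not sketched here; it reportedly inserts an additional digital low‐pass filtering stage between the read of A and the write to C, which fits the abstract's finding that TTv2 tolerates looser device specifications than TTv1 across all examined non‐idealities.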

Bibliographic Details
Main Authors: Jinho Byun, Seungkun Kim, Doyoon Kim, Jimin Lee, Wonjae Ji, Seyoung Kim (all: Department of Materials Science and Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea)
Format: Article
Language: English
Published: Wiley, 2025-05-01
Series: Advanced Intelligent Systems, Vol. 7, Issue 5 (2025)
ISSN: 2640-4567
Collection: DOAJ
Subjects: analog in‐memory computing; deep learning accelerator; device specification; neural network; Tiki‐Taka algorithm
Online Access: https://doi.org/10.1002/aisy.202400543