Distributed representations enable robust multi-timescale symbolic computation in neuromorphic hardware
Programming recurrent spiking neural networks (RSNNs) to robustly perform multi-timescale computation remains a difficult challenge. To address this, we describe a single-shot weight learning scheme to embed robust multi-timescale dynamics into attractor-based RSNNs, by exploiting the properties of...
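The abstract's reference to a single-shot weight-learning scheme for embedding attractor dynamics into recurrent networks can be illustrated, very loosely, with a generic one-shot outer-product (Hopfield-style) construction over random distributed patterns. The Python sketch below is an illustrative assumption only, not the authors' actual scheme (whose details, including the spiking dynamics and the specific distributed-representation construction, are truncated in this record); all names and parameters in it are hypothetical.

```python
import numpy as np

# Illustrative sketch only: one-shot (outer-product) embedding of attractor
# states into a recurrent network using random bipolar "distributed" patterns.
# This is a generic Hopfield-style construction, NOT the article's scheme.

rng = np.random.default_rng(0)
N = 2000   # number of neurons (high-dimensional distributed code)
P = 10     # number of attractor states to embed

# Random dense bipolar patterns serve as distributed representations.
patterns = rng.choice([-1.0, 1.0], size=(P, N))

# Single-shot weight assignment: sum of outer products, zero self-coupling.
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def recall(x, steps=20):
    """Synchronous sign-threshold dynamics; settles into a stored attractor."""
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1.0
    return x

# Corrupt a stored pattern with 20% sign flips and check robust recovery.
probe = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
probe[flip] *= -1.0
recovered = recall(probe)
print("overlap with stored state:", float(recovered @ patterns[0]) / N)
```

Because the patterns are dense and high-dimensional, the recalled state's overlap with the stored pattern is close to 1 despite the corruption, which is the kind of robustness the distributed-representation approach is meant to provide.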
Main Authors: Madison Cotteret, Hugh Greatorex, Alpha Renner, Junren Chen, Emre Neftci, Huaqiang Wu, Giacomo Indiveri, Martin Ziegler, Elisabetta Chicca
Format: Article
Language: English
Published: IOP Publishing, 2025-01-01
Series: Neuromorphic Computing and Engineering
Online Access: https://doi.org/10.1088/2634-4386/ada851
Similar Items

- Approximate CNN Hardware Accelerators for Resource Constrained Devices
  by: P Thejaswini, et al.
  Published: (2025-01-01)
- Efficient Hardware Implementation of a Multi-Layer Gradient-Free Online-Trainable Spiking Neural Network on FPGA
  by: Ali Mehrabi, et al.
  Published: (2024-01-01)
- Control and diagnostics of faults in hardware-software complex
  by: D. A. Pankov, et al.
  Published: (2018-04-01)
- The use of the SWARA method for the selection of cryptocurrency hardware
  by: Trišić Marko
  Published: (2018-01-01)
- Hybrid multi‐level hardware Trojan detection platform for gate‐level netlists based on XGBoost
  by: Ying Zhang, et al.
  Published: (2022-03-01)