Comparison of FORCE trained spiking and rate neural networks shows spiking networks learn slowly with noisy, cross-trial firing rates.


Bibliographic Details
Main Authors: Thomas Robert Newton, Wilten Nicola
Format: Article
Language:English
Published: Public Library of Science (PLoS) 2025-07-01
Series:PLoS Computational Biology
Online Access:https://doi.org/10.1371/journal.pcbi.1013224
_version_ 1849228223883247616
author Thomas Robert Newton
Wilten Nicola
author_facet Thomas Robert Newton
Wilten Nicola
author_sort Thomas Robert Newton
collection DOAJ
description Training spiking recurrent neural networks (SRNNs) presents significant challenges compared to standard recurrent neural networks (RNNs) that model neural firing rates more directly. Here, we investigate the origins of these difficulties by training networks of spiking neurons and their parameter-matched instantaneous rate-based RNNs on supervised learning tasks. We applied FORCE training to leaky integrate-and-fire spiking networks and their matched rate-based counterparts across various dynamical tasks, keeping the FORCE hyperparameters identical. We found that at slow learning rates, spiking and rate networks behaved similarly: FORCE training identified highly correlated weight matrix solutions, and both network types exhibited overlapping hyperparameter regions for successful convergence. Remarkably, these weight solutions were largely interchangeable: weights trained in the spiking network could be transferred to the rate network and vice versa while preserving correct dynamical decoding. However, at fast learning rates, the correlation between learned solutions dropped sharply, and the solutions were no longer fully interchangeable. Despite this, rate networks still functioned well when their weight matrices were replaced with those learned from spiking networks. Additionally, the two network types exhibited distinct behaviours across different learning-rate magnitudes: faster learning improved performance in rate networks but had little effect in spiking networks, aside from increasing instability. Through analytic derivation, we further show that slower learning rates in FORCE effectively act as a low-pass filter on the principal components of the neural bases, selectively stabilizing the dominant correlated components across spiking and rate networks. Our results indicate that some of the difficulties in training spiking networks stem from the inherent spike-time variability in spiking systems, variability that is not present in rate networks. These challenges can be mitigated in FORCE training by selecting appropriately slow learning rates. Moreover, our findings suggest that the decoding solutions learned by FORCE for spiking networks approximate a cross-trial firing rate-based decoding.
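
For readers unfamiliar with the training method named in the description: FORCE updates a linear readout of the recurrent activity online with recursive least squares (RLS) while the decoded output is fed back into the network. The sketch below, in Python, shows that loop for a small rate RNN only as an illustration; the network size, gains, the regularizer alpha (which sets the effective learning speed), and the sinusoidal target task are assumptions made here and are not taken from the article.

# Minimal sketch of FORCE/RLS decoder training for a rate RNN.
# All parameter values, names, and the target task are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N     = 500        # number of rate units (assumed)
dt    = 5e-4       # integration step (s)
tau   = 1e-2       # unit time constant (s)
G     = 1.5        # static recurrent gain
Q     = 1.0        # feedback gain
alpha = 1.0        # RLS regularizer; larger alpha = slower effective learning rate
T     = 2.0        # simulated time (s)

omega0 = G * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # fixed random recurrence
eta    = Q * rng.uniform(-1.0, 1.0, N)                    # fixed feedback encoder
phi    = np.zeros(N)                                      # learned linear decoder
P      = np.eye(N) / alpha                                # RLS inverse-correlation estimate, P(0) = I/alpha

x      = rng.normal(0.0, 0.5, N)                          # unit activation variables
steps  = int(T / dt)
target = np.sin(2 * np.pi * 5 * np.arange(steps) * dt)    # assumed 5 Hz sinusoid task

for k in range(steps):
    r = np.tanh(x)                        # instantaneous firing rates
    z = phi @ r                           # decoded network output
    # leaky rate dynamics: static recurrence plus learned feedback of the decoded output
    x += dt / tau * (-x + omega0 @ r + eta * z)

    if k % 20 == 0:                       # apply the RLS update every few steps, FORCE-style
        err = z - target[k]               # instantaneous decoding error
        Pr  = P @ r
        c   = 1.0 / (1.0 + r @ Pr)
        P  -= np.outer(Pr, Pr) * c        # rank-1 update of the inverse correlation matrix
        phi -= err * c * Pr               # decoder step, equivalent to phi -= err * (P_new @ r)

print("final decoding error:", abs(z - target[-1]))

In standard RLS the decoder converges toward the ridge-regression solution, so a principal component of the rate correlation matrix with eigenvalue lambda_i is retained with weight lambda_i / (lambda_i + alpha); a larger alpha (slower effective learning) therefore suppresses weak, noisy components while keeping the dominant ones, which is consistent with the low-pass-filter interpretation described in the abstract above. This sketch is not the authors' implementation, only a generic FORCE/RLS loop under the stated assumptions.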
format Article
id doaj-art-5f52e2cd4fd140a99c9e34623f1694a3
institution Kabale University
issn 1553-734X
1553-7358
language English
publishDate 2025-07-01
publisher Public Library of Science (PLoS)
record_format Article
series PLoS Computational Biology
spelling doaj-art-5f52e2cd4fd140a99c9e34623f1694a3 | 2025-08-23T05:31:15Z | eng | Public Library of Science (PLoS) | PLoS Computational Biology | 1553-734X; 1553-7358 | 2025-07-01 | vol. 21, iss. 7, e1013224 | 10.1371/journal.pcbi.1013224 | Comparison of FORCE trained spiking and rate neural networks shows spiking networks learn slowly with noisy, cross-trial firing rates. | Thomas Robert Newton; Wilten Nicola | [abstract identical to the description field above] | https://doi.org/10.1371/journal.pcbi.1013224
spellingShingle Thomas Robert Newton
Wilten Nicola
Comparison of FORCE trained spiking and rate neural networks shows spiking networks learn slowly with noisy, cross-trial firing rates.
PLoS Computational Biology
title Comparison of FORCE trained spiking and rate neural networks shows spiking networks learn slowly with noisy, cross-trial firing rates.
title_full Comparison of FORCE trained spiking and rate neural networks shows spiking networks learn slowly with noisy, cross-trial firing rates.
title_fullStr Comparison of FORCE trained spiking and rate neural networks shows spiking networks learn slowly with noisy, cross-trial firing rates.
title_full_unstemmed Comparison of FORCE trained spiking and rate neural networks shows spiking networks learn slowly with noisy, cross-trial firing rates.
title_short Comparison of FORCE trained spiking and rate neural networks shows spiking networks learn slowly with noisy, cross-trial firing rates.
title_sort comparison of force trained spiking and rate neural networks shows spiking networks learn slowly with noisy cross trial firing rates
url https://doi.org/10.1371/journal.pcbi.1013224
work_keys_str_mv AT thomasrobertnewton comparisonofforcetrainedspikingandrateneuralnetworksshowsspikingnetworkslearnslowlywithnoisycrosstrialfiringrates
AT wiltennicola comparisonofforcetrainedspikingandrateneuralnetworksshowsspikingnetworkslearnslowlywithnoisycrosstrialfiringrates