Optimizing the learning rate for adaptive estimation of neural encoding models.

Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates neural activity to the brain state and is then used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, an analytical approach for its selection is largely lacking, and existing signal processing methods typically tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that the learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of the learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of the learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.
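
To make the error/convergence trade-off described above concrete, here is a minimal sketch for the continuous-valued (linear Gaussian) case. It uses a simple least-mean-squares (LMS) update, a stochastic-gradient stand-in for the adaptive Bayesian filters in the abstract, in which the same trade-off appears: a larger learning rate `eta` converges faster but settles at a larger steady-state error, while a smaller `eta` converges more slowly but settles lower. All names and settings are invented for illustration; this is not the paper's calibration algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

W_TRUE = 2.0    # ground-truth encoding weight (hypothetical)
NOISE_SD = 1.0  # std of the Gaussian observation noise
T = 5000        # number of time steps

def adapt(eta):
    """LMS tracking of W_TRUE with learning rate eta.

    Returns the steady-state mean squared error (over the last quarter
    of the run) and the first step at which the squared error dips
    below a threshold, as a crude convergence-time proxy.
    """
    w_hat = 0.0
    err = np.empty(T)
    for t in range(T):
        x = rng.normal()                          # brain state (regressor)
        y = W_TRUE * x + NOISE_SD * rng.normal()  # continuous neural observation
        w_hat += eta * x * (y - x * w_hat)        # gradient step on squared error
        err[t] = (w_hat - W_TRUE) ** 2
    below = np.nonzero(err < 0.1)[0]              # crude convergence criterion
    conv = int(below[0]) if below.size else None
    return err[-T // 4:].mean(), conv

for eta in (0.5, 0.05, 0.005):
    mse, conv = adapt(eta)
    print(f"eta={eta}: steady-state MSE={mse:.4f}, steps to err<0.1: {conv}")
```

In the paper's setting, the calibration algorithm instead chooses the learning rate analytically, so that one side of this trade-off meets a user-specified bound while the other side is minimized.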

Bibliographic Details
Main Authors: Han-Lin Hsieh, Maryam M Shanechi
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2018-05-01
Series: PLoS Computational Biology 14(5): e1006168
ISSN: 1553-734X, 1553-7358
DOI: 10.1371/journal.pcbi.1006168
Online Access: https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1006168&type=printable
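
The abstract's other observation model is discrete spikes treated as a point process whose intensity depends nonlinearly (here, exponentially) on the brain state. Below is a similarly hedged stochastic-gradient sketch of adaptively estimating the point-process parameters with a learning rate `eta`; again, a generic illustration with invented settings, not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(1)

B_TRUE = np.array([-1.0, 1.0])  # true [baseline, modulation] (hypothetical)
DT = 0.01                       # bin width in seconds
T = 100_000                     # number of bins

def adapt_pp(eta):
    """Stochastic-gradient estimation of a point-process encoding model.

    Spike counts per bin are Poisson with mean exp(b0 + b1 * x) * DT,
    and each update ascends the per-bin Poisson log likelihood with
    learning rate eta.
    """
    b = np.zeros(2)
    for _ in range(T):
        x = rng.normal()                          # brain state in this bin
        z = np.array([1.0, x])                    # regressors: baseline + state
        n = rng.poisson(np.exp(B_TRUE @ z) * DT)  # observed spike count
        # model intensity; clip the exponent to avoid overflow early on
        lam_hat = np.exp(np.clip(b @ z, -10.0, 10.0))
        b += eta * (n - lam_hat * DT) * z         # Poisson log-likelihood gradient
    return b

for eta in (0.2, 0.02, 0.002):
    print(f"eta={eta}: b_hat={np.round(adapt_pp(eta), 2)}, true={B_TRUE}")
```

As in the Gaussian sketch, a larger eta adapts quickly but jitters around the true parameters, while a smaller eta is smoother but slower to converge; this is the trade-off the paper's calibration algorithm resolves analytically for both observation models.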