Reward-optimizing learning using stochastic release plasticity

Bibliographic Details
Main Authors: Yuhao Sun, Wantong Liao, Jinhao Li, Xinche Zhang, Guan Wang, Zhiyuan Ma, Sen Song
Format: Article
Language: English
Published: Frontiers Media S.A. 2025-08-01
Series: Frontiers in Neural Circuits
Online Access: https://www.frontiersin.org/articles/10.3389/fncir.2025.1618506/full
Description
Summary: Synaptic plasticity underlies adaptive learning in neural systems, offering a biologically plausible framework for reward-driven learning. However, a question remains: how can plasticity rules achieve robustness and effectiveness comparable to error backpropagation? In this study, we introduce Reward-Optimized Stochastic Release Plasticity (RSRP), a learning framework in which synaptic release is modeled as a parameterized distribution. Using natural gradient estimation, we derive a synaptic plasticity rule that adapts effectively to maximize reward signals. Our approach achieves competitive performance and stability in reinforcement learning, comparable to Proximal Policy Optimization (PPO), while attaining accuracy on par with error backpropagation in digit classification. Additionally, we identify reward regularization as a key stabilizing mechanism and validate our method in biologically plausible networks. Our findings suggest that RSRP offers a robust and effective plasticity learning rule, especially in discontinuous reinforcement learning paradigms, with potential implications for both artificial intelligence and experimental neuroscience.
ISSN: 1662-5110
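
The mechanism the abstract describes (synaptic release drawn from a parameterized distribution, updated by a reward-modulated natural gradient, with reward regularization for stability) can be illustrated with a minimal sketch. The Gaussian parameterization, the toy regression task, and all names below are assumptions chosen for illustration, not the paper's implementation; the update follows the natural-evolution-strategies form that such a scheme suggests.

```python
import numpy as np

# Illustrative sketch only: each synapse's release weight is sampled from a
# Gaussian whose mean (mu) and log-std (log_sigma) are the learned parameters.
# A scalar reward, standardized across trials (the "reward regularization"
# the abstract highlights), modulates a natural-gradient-style update.

rng = np.random.default_rng(0)

n_in, n_out = 4, 2
mu = rng.normal(scale=0.1, size=(n_in, n_out))   # mean release weight
log_sigma = np.full((n_in, n_out), np.log(0.1))  # log std of release

def forward(x, w):
    # Simple one-layer network with sampled release weights.
    return np.tanh(x @ w)

def trial(x, target):
    # Sample stochastic release weights for this trial and score them.
    eps = rng.standard_normal(mu.shape)
    w = mu + np.exp(log_sigma) * eps
    y = forward(x, w)
    reward = -np.mean((y - target) ** 2)  # scalar reward: negative error
    return reward, eps

lr = 0.05
n_samples = 32
x = rng.normal(size=n_in)
target = np.array([0.5, -0.5])

for step in range(200):
    rewards, epsilons = [], []
    for _ in range(n_samples):
        r, eps = trial(x, target)
        rewards.append(r)
        epsilons.append(eps)
    rewards = np.asarray(rewards)
    # Reward regularization: standardize rewards across the trial batch.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    # For a Gaussian, the natural gradient w.r.t. mu reduces to a
    # reward-weighted average of the sampled perturbations, scaled by sigma.
    grad_mu = sum(a * e for a, e in zip(adv, epsilons)) / n_samples
    mu += lr * np.exp(log_sigma) * grad_mu
```

In this form the update needs only locally available quantities (the sampled perturbation and a global reward signal), which is what makes such release-based rules biologically plausible compared with error backpropagation.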