Directed Equilibrium Propagation Revisited


Bibliographic Details
Main Authors: Pedro Costa, Pedro A. Santos
Format: Article
Language: English
Published: MDPI AG 2025-06-01
Series: Mathematics
Subjects:
Online Access: https://www.mdpi.com/2227-7390/13/11/1866
Description
Summary: Equilibrium Propagation (EP) offers a biologically inspired alternative to backpropagation for training recurrent neural networks, but its reliance on symmetric feedback connections and its stability limitations hinder practical adoption. The DirEcted EP (DEEP) model relaxes the symmetry constraint, yet suffers from convergence issues and lacks a principled learning guarantee. In this work, we generalize DEEP by incorporating neuronal leakage, providing new convergence criteria for the network’s dynamics. We additionally propose a novel local learning rule closely linked to the objective function’s gradient and establish sufficient conditions for reliable learning in small networks. Our results resolve longstanding stability challenges and bring energy-based learning models closer to biologically plausible and provably effective neural computation.
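To illustrate the kind of dynamics the summary refers to, the sketch below simulates a generic leaky recurrent network with an asymmetric (directed) weight matrix relaxing to a fixed point. This is an assumption-laden toy, not the paper's DEEP model: the leak coefficient, weight scale, and update scheme are illustrative choices; a sufficiently large leak relative to the weight norm is what makes the dynamics contractive here.

```python
import numpy as np

# Toy sketch (NOT the authors' exact model): leaky recurrent dynamics
#   ds/dt = -leak * s + W @ tanh(s) + x_in
# with a directed (asymmetric) weight matrix W. With tanh 1-Lipschitz,
# leak > ||W||_2 makes the map contractive, so the state converges
# to a unique equilibrium regardless of symmetry in W.
rng = np.random.default_rng(0)
n = 8
W = 0.1 * rng.standard_normal((n, n))   # asymmetric: W != W.T in general
x_in = rng.standard_normal(n)           # fixed external input
leak = 1.0                              # neuronal leakage (assumed value)

s = np.zeros(n)
dt = 0.1
for _ in range(2000):                   # Euler steps toward the fixed point
    s = s + dt * (-leak * s + W @ np.tanh(s) + x_in)

# At equilibrium the right-hand side vanishes:
residual = np.linalg.norm(-leak * s + W @ np.tanh(s) + x_in)
print(residual)
```

With the leak set above the spectral norm of `W`, the residual shrinks toward zero; dropping the leak term (the original DEEP setting generalized by this paper) can break that guarantee.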
ISSN: 2227-7390