Directed Equilibrium Propagation Revisited

Equilibrium Propagation (EP) offers a biologically inspired alternative to backpropagation for training recurrent neural networks, but its reliance on symmetric feedback connections and its stability limitations hinder practical adoption. The DirEcted EP (DEEP) model relaxes the symmetry constraint, yet suffers from convergence issues and lacks a principled learning guarantee. In this work, we generalize DEEP by incorporating neuronal leakage, providing new convergence criteria for the network’s dynamics. We additionally propose a novel local learning rule closely linked to the objective function’s gradient and establish sufficient conditions for reliable learning in small networks. Our results resolve longstanding stability challenges and bring energy-based learning models closer to biologically plausible and provably effective neural computation.
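As a rough illustration of the kind of dynamics the abstract describes, the sketch below simulates generic EP-style leaky recurrent dynamics with directed (asymmetric) weights, relaxes them to a fixed point, and applies the classic EP contrastive weight update. Everything here is an assumption for illustration: the leakage rate, hard-sigmoid activation, nudging scheme, and update rule follow the standard EP literature, not the paper's specific model or its new local learning rule.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8            # number of neurons (illustrative)
leak = 1.0       # leakage rate (illustrative)
# Directed, asymmetric weights, scaled small so the leak term dominates
# and the dynamics are contractive.
W = rng.normal(scale=0.2 / np.sqrt(n), size=(n, n))
x = rng.normal(size=n)   # constant external input

def rho(s):
    # Hard-sigmoid activation, a common choice in EP models.
    return np.clip(s, 0.0, 1.0)

def relax(s, beta=0.0, target=None, steps=500, dt=0.1):
    # Euler-integrate ds/dt = -leak*s + W @ rho(s) + x (+ beta*(target - s)).
    for _ in range(steps):
        ds = -leak * s + W @ rho(s) + x
        if beta:
            ds = ds + beta * (target - s)   # nudge the state toward the target
        s = s + dt * ds
    return s

# Free phase: with small ||W|| the leak makes the iteration a contraction,
# so the state settles to a fixed point (residual of the dynamics ~ 0).
s_free = relax(np.zeros(n))
resid = np.linalg.norm(-leak * s_free + W @ rho(s_free) + x)

# Nudged phase plus the classic EP contrastive update (NOT the paper's
# directed rule): compare activity statistics at the two equilibria.
target = rho(s_free) + 0.1            # hypothetical target; all units nudged
s_nudged = relax(s_free.copy(), beta=0.5, target=target)
dW = 0.01 * (np.outer(rho(s_nudged), rho(s_nudged))
             - np.outer(rho(s_free), rho(s_free)))
```

The convergence criterion alluded to in the abstract corresponds, in this toy setting, to the leak term dominating the recurrent gain so that the relaxation contracts to a unique equilibrium.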

Bibliographic Details
Main Authors: Pedro Costa, Pedro A. Santos
Format: Article
Language: English
Published: MDPI AG, 2025-06-01
Series: Mathematics
Subjects: recurrent neural networks; equilibrium propagation; biologically plausible algorithms
Online Access: https://www.mdpi.com/2227-7390/13/11/1866
Collection: DOAJ (OA Journals)
ISSN: 2227-7390
DOI: 10.3390/math13111866
Volume/Issue: Mathematics 13(11), article 1866
Author Affiliations: Instituto Superior Técnico, University of Lisbon, 1049-001 Lisbon, Portugal (both authors)