Stochastic Gradient Descent-like relaxation is equivalent to Metropolis dynamics in discrete optimization and inference problems
Abstract: Is Stochastic Gradient Descent (SGD) substantially different from Metropolis Monte Carlo dynamics? This is a fundamental question for understanding the most widely used training algorithm in Machine Learning, yet until now it had received no answer. Here we show that in discrete...
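
The abstract's central object of comparison is Metropolis Monte Carlo dynamics over discrete variables. As a point of reference, the following is a minimal Python sketch of single-spin-flip Metropolis dynamics on a toy Ising cost function; the instance, the inverse temperature `beta`, and the step count are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's setup): single-spin-flip Metropolis
# dynamics minimizing a random Ising energy
#   E(s) = -1/2 * s^T J s,  with s_i in {-1, +1}.
rng = np.random.default_rng(0)

N = 100                                    # number of binary variables (assumed)
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
J = np.triu(J, 1)
J = J + J.T                                # symmetric couplings, zero diagonal

def energy(s):
    """Ising energy E(s) = -1/2 s^T J s (the 1/2 avoids double counting)."""
    return -0.5 * s @ J @ s

def metropolis(s, beta, steps):
    """Single-spin-flip Metropolis dynamics at inverse temperature beta."""
    E = energy(s)
    for _ in range(steps):
        i = rng.integers(N)                # propose flipping one spin
        dE = 2.0 * s[i] * (J[i] @ s)       # energy change of flipping s_i
        # Accept with probability min(1, exp(-beta * dE))
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i] = -s[i]
            E += dE
    return s, E

s = rng.choice([-1, 1], size=N)            # random initial configuration
s, E = metropolis(s, beta=2.0, steps=50_000)
print(f"final energy per variable: {E / N:.3f}")
```

At low temperature this acceptance rule mostly takes downhill moves while occasionally accepting uphill ones; this relaxation behavior is what the paper compares against SGD-like updates on discrete problems.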
| Main Authors: | Maria Chiara Angelini, Angelo Giorgio Cavaliere, Raffaele Marino, Federico Ricci-Tersenghi |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2024-05-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-024-62625-8 |
Similar Items
- Stochastic Gradient Descent for Kernel-Based Maximum Correntropy Criterion
  by: Tiankai Li, et al.
  Published: (2024-12-01)
- An Improvement of Stochastic Gradient Descent Approach for Mean-Variance Portfolio Optimization Problem
  by: Stephanie S. W. Su, et al.
  Published: (2021-01-01)
- A Novel Sine Step Size for Warm-Restart Stochastic Gradient Descent
  by: Mahsa Soheil Shamaee, et al.
  Published: (2024-12-01)
- Stochastic gradient descent algorithm preserving differential privacy in MapReduce framework
  by: Yihan YU, et al.
  Published: (2018-01-01)
- Non-Iterative Phase-Only Hologram Generation via Stochastic Gradient Descent Optimization
  by: Alejandro Velez-Zea, et al.
  Published: (2025-05-01)