Stochastic Gradient Descent-like relaxation is equivalent to Metropolis dynamics in discrete optimization and inference problems
Abstract: Is Stochastic Gradient Descent (SGD) substantially different from Metropolis Monte Carlo dynamics? This is a fundamental question for understanding the most widely used training algorithm in Machine Learning, yet it had received no answer until now. Here we show that in discret...
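The abstract compares SGD-like relaxation with Metropolis dynamics on discrete problems. As background, a minimal sketch of Metropolis Monte Carlo on a toy discrete energy (a one-dimensional ferromagnetic spin chain, an illustrative choice not taken from the article): propose a single-spin flip and accept it with probability min(1, exp(-ΔE/T)).

```python
import math
import random

def metropolis_step(state, energy, temperature, rng):
    """One Metropolis update: flip a random spin, accept with prob min(1, exp(-dE/T))."""
    i = rng.randrange(len(state))
    proposal = state.copy()
    proposal[i] *= -1
    dE = energy(proposal) - energy(state)
    if dE <= 0 or rng.random() < math.exp(-dE / temperature):
        return proposal  # downhill moves always accepted, uphill with Boltzmann weight
    return state

def run(n_spins=20, steps=2000, temperature=0.5, seed=0):
    rng = random.Random(seed)

    # Toy energy: ferromagnetic chain, E(s) = -sum_i s_i * s_{i+1};
    # the ground states are the two aligned configurations with E = -(n_spins - 1).
    def energy(s):
        return -sum(s[i] * s[i + 1] for i in range(len(s) - 1))

    state = [rng.choice([-1, 1]) for _ in range(n_spins)]
    for _ in range(steps):
        state = metropolis_step(state, energy, temperature, rng)
    return energy(state)
```

At low temperature the chain relaxes toward low-energy configurations, which is the kind of discrete relaxation dynamics the article compares against SGD.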
Format: Article
Language: English
Published: Nature Portfolio, 2024-05-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-024-62625-8