Comparison of the efficiency of zero- and first-order minimization methods in neural networks
To minimize the objective function in neural networks, first-order methods are usually used, which involve repeated calculation of the gradient. The number of variables in modern neural networks can reach many thousands or even millions. Numerous experiments show that the analytical calculation ti...
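As a minimal sketch of the distinction the abstract draws (assuming a toy quadratic objective, not the article's experiments), the following contrasts a first-order step, which uses the analytic gradient, with a zero-order step, which estimates the gradient from objective evaluations alone; all names here are illustrative:

```python
# Illustrative sketch only: first-order vs. zero-order minimization step
# on a toy quadratic objective (an assumption for demonstration).
import numpy as np

def f(x):
    """Toy objective: a simple quadratic with minimum at the origin."""
    return 0.5 * np.dot(x, x)

def grad_f(x):
    """Analytic gradient of f -- what a first-order method computes."""
    return x

def first_order_step(x, lr=0.1):
    """Gradient descent: one gradient evaluation per step."""
    return x - lr * grad_f(x)

def zero_order_step(x, lr=0.1, h=1e-4):
    """Zero-order step: estimate the gradient by central finite
    differences, costing 2*n objective evaluations for n variables."""
    n = x.size
    g = np.empty(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return x - lr * g

x0 = x1 = np.full(1000, 1.0)   # 1000 variables; modern nets have far more
for _ in range(50):
    x0 = first_order_step(x0)  # cost per step: one gradient
    x1 = zero_order_step(x1)   # cost per step: 2000 function evaluations
print(f"first-order f = {f(x0):.3e}, zero-order f = {f(x1):.3e}")
```

For n variables the central-difference estimate needs 2n function evaluations per step, versus a single gradient evaluation for the first-order step; this per-step cost trade-off is the kind of efficiency comparison the title refers to.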
| Main Authors: | E. A. Gubareva, S. I. Khashin, E. S. Shemyakova |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Publishing House of the State University of Management, 2022-12-01 |
| Series: | Вестник университета |
| Online Access: | https://vestnik.guu.ru/jour/article/view/3952 |
Similar Items

- Convergence Rates of Gradient Methods for Convex Optimization in the Space of Measures
  by: Chizat, Lénaïc
  Published: (2023-01-01)
- Forest fire risk assessment model optimized by stochastic average gradient descent
  by: Zexin Fu, et al.
  Published: (2025-01-01)
- Tight analyses for subgradient descent I: Lower bounds
  by: Harvey, Nicholas J. A., et al.
  Published: (2024-07-01)
- Solving Spatial Optimization Problems via Lagrangian Relaxation and Automatic Gradient Computation
  by: Zhen Lei, et al.
  Published: (2025-01-01)
- Loss shaping enhances exact gradient learning with Eventprop in spiking neural networks
  by: Thomas Nowotny, et al.
  Published: (2025-01-01)