Convergence Rates of Gradient Methods for Convex Optimization in the Space of Measures
We study the convergence rate of Bregman gradient methods for convex optimization in the space of measures on a $d$-dimensional manifold. Under basic regularity assumptions, we show that the suboptimality gap at iteration $k$ is in $O(\log(k)\,k^{-1})$ for multiplicative updates, while it is in $O(k^...
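For a concrete sense of the "multiplicative updates" mentioned in the abstract, below is a minimal numerical sketch, not code from the paper: entropic mirror descent over probability measures discretized on a fixed grid, applied to a synthetic convex least-squares objective. The grid size `n`, the objective `F`, and the step size `eta` are illustrative assumptions, not quantities from the article.

```python
# A minimal sketch of multiplicative (entropic mirror descent) updates for a
# convex objective over probability measures, discretized on a fixed grid.
# The objective, grid size, and step size are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n = 200                           # grid points discretizing the domain
A = rng.standard_normal((20, n))  # synthetic linear measurement operator
b = rng.standard_normal(20)

def F(mu):
    # Convex objective: least-squares misfit of the discretized measure.
    r = A @ mu - b
    return 0.5 * r @ r

def grad_F(mu):
    # Gradient of F at mu.
    return A.T @ (A @ mu - b)

mu = np.full(n, 1.0 / n)  # start from the uniform probability measure
eta = 0.1                 # step size (illustrative choice)
for k in range(1000):
    g = grad_F(mu)
    # Multiplicative update: reweight by exp(-eta * gradient), then
    # renormalize back onto the simplex. Subtracting g.min() only rescales
    # all weights by a common factor (canceled by normalization) and keeps
    # the exponentials numerically stable.
    mu = mu * np.exp(-eta * (g - g.min()))
    mu /= mu.sum()

print(f"objective after 1000 iterations: {F(mu):.6f}")
```

An additive update would instead move mass by a gradient step in the weights (with a projection back onto the simplex); the abstract's point is that the two schemes exhibit different convergence rates.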
Main Author: | Chizat, Lénaïc |
---|---|
Format: | Article |
Language: | English |
Published: | Université de Montpellier, 2023-01-01 |
Series: | Open Journal of Mathematical Optimization |
Online Access: | https://ojmo.centre-mersenne.org/articles/10.5802/ojmo.20/ |
Similar Items
- Comparison of the efficiency of zero and first order minimization methods in neural networks
  by: E. A. Gubareva, et al.
  Published: (2022-12-01)
- Smoothing gradient descent algorithm for the composite sparse optimization
  by: Wei Yang, et al.
  Published: (2024-11-01)
- On complete convergence for Lp-mixingales
  by: Yijun Hu
  Published: (2000-01-01)
- On some fixed point theorems in Banach spaces
  by: D. V. Pai, et al.
  Published: (1982-01-01)
- Generalized Difference Lacunary Weak Convergence of Sequences
  by: Bibhajyoti Tamuli, et al.
  Published: (2024-03-01)