An Accelerated Successive Convex Approximation Scheme With Exact Step Sizes for L1-Regression
We consider the minimization of $\ell_1$-regularized least-squares problems. A recent optimization approach uses successive convex approximations with an exact line search, which is highly competitive, especially in sparse problem instances. This work proposes an acceleration scheme for the successive convex approximation technique with negligible additional computational cost. We demonstrate this scheme by devising three related accelerated algorithms with provable convergence. The first introduces an additional descent step along the past optimization trajectory in the variable update, inspired by Nesterov's accelerated gradient method, and uses a closed-form step size. The second performs a simultaneous descent step along both the best response and the past trajectory, with a two-dimensional step size that is also found in closed form. The third algorithm combines the previous two approaches. All algorithms are hyperparameter-free. Empirical results confirm that the acceleration approaches improve the convergence rate compared to benchmark algorithms, and that they retain the benefits of successive convex approximation even in non-sparse instances.
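The record contains no code, but as a rough, non-authoritative illustration of the kind of update the abstract describes, the sketch below implements a successive-convex-approximation step with a closed-form exact step size for the LASSO objective, in the style of the STELA-type exact-line-search template this line of work builds on (an assumption; the record does not name the prior method). All function and variable names are my own. `accelerated_step_sketch` only illustrates the idea of a simultaneous two-direction step by exactly minimizing the smooth part of the objective; it is not the authors' closed-form rule for the full $\ell_1$-regularized objective.

```python
import numpy as np

def soft_threshold(v, t):
    """Entrywise soft-thresholding: sign(v) * max(|v| - t, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stela_step(A, b, lam, x):
    """One SCA step with an exact step size for the LASSO objective
    f(x) = 0.5 * ||A x - b||^2 + lam * ||x||_1 (STELA-style template).
    Assumes A has no zero columns."""
    r = A @ x - b                 # residual of the smooth part
    g = A.T @ r                   # gradient of 0.5 * ||A x - b||^2
    c = np.sum(A * A, axis=0)     # diag(A^T A): per-coordinate curvature

    # Best response of the separable convex approximation (coordinatewise).
    Bx = soft_threshold(x - g / c, lam / c)
    d = Bx - x                    # best-response direction
    Ad = A @ d
    denom = Ad @ Ad
    if denom == 0.0:              # x already minimizes the surrogate
        return x

    # Closed-form exact step size: minimize over t in [0, 1]
    #   0.5 * ||r + t * A d||^2 + lam * ((1 - t) * ||x||_1 + t * ||Bx||_1),
    # where the l1 term along the segment is replaced by its convex upper bound.
    num = -(Ad @ r) - lam * (np.linalg.norm(Bx, 1) - np.linalg.norm(x, 1))
    t = np.clip(num / denom, 0.0, 1.0)
    return x + t * d

def accelerated_step_sketch(A, b, lam, x, x_prev):
    """Illustration only (NOT the paper's rule): step simultaneously along the
    best-response direction d and the past trajectory p = x - x_prev, choosing
    both step sizes by exactly minimizing the *smooth* part
    0.5 * ||r + t1 * A d + t2 * A p||^2 via a 2x2 linear solve."""
    r = A @ x - b
    g = A.T @ r
    c = np.sum(A * A, axis=0)
    d = soft_threshold(x - g / c, lam / c) - x
    p = x - x_prev
    Ad, Ap = A @ d, A @ p         # one extra matrix-vector product vs. stela_step
    G = np.array([[Ad @ Ad, Ad @ Ap],
                  [Ad @ Ap, Ap @ Ap]])
    rhs = -np.array([Ad @ r, Ap @ r])
    # Tiny regularizer guards against a singular system when d or p vanishes.
    t1, t2 = np.linalg.solve(G + 1e-12 * np.eye(2), rhs)
    return x + t1 * d + t2 * p
```

On the first iteration (no past iterate) `stela_step` applies; afterwards the only significant extra cost of the two-direction step is the single additional product `A @ p`, which is consistent with the abstract's claim that the acceleration comes at negligible additional computational cost.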
Main Authors: | Lukas Schynol, Moritz Hemsing, Marius Pesavento |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2025-01-01 |
Series: | IEEE Open Journal of Signal Processing |
Subjects: | Compressed sensing; FISTA; LASSO; L1-regularized least squares; Nesterov's accelerated gradient method; optimization |
Online Access: | https://ieeexplore.ieee.org/document/10840211/ |
author | Lukas Schynol; Moritz Hemsing; Marius Pesavento |
author_sort | Lukas Schynol |
collection | DOAJ |
description | We consider the minimization of $\ell_1$-regularized least-squares problems. A recent optimization approach uses successive convex approximations with an exact line search, which is highly competitive, especially in sparse problem instances. This work proposes an acceleration scheme for the successive convex approximation technique with negligible additional computational cost. We demonstrate this scheme by devising three related accelerated algorithms with provable convergence. The first introduces an additional descent step along the past optimization trajectory in the variable update, inspired by Nesterov's accelerated gradient method, and uses a closed-form step size. The second performs a simultaneous descent step along both the best response and the past trajectory, with a two-dimensional step size that is also found in closed form. The third algorithm combines the previous two approaches. All algorithms are hyperparameter-free. Empirical results confirm that the acceleration approaches improve the convergence rate compared to benchmark algorithms, and that they retain the benefits of successive convex approximation even in non-sparse instances. |
format | Article |
id | doaj-art-8725a660106e4a4085f7f3e3872e3334 |
institution | Kabale University |
issn | 2644-1322 |
language | English |
publishDate | 2025-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Open Journal of Signal Processing |
spelling | DOI: 10.1109/OJSP.2025.3528875; IEEE document ID 10840211; IEEE Open Journal of Signal Processing, vol. 6, pp. 184-193, 2025-01-01; indexed 2025-02-11T00:01:47Z. Authors: Lukas Schynol (https://orcid.org/0000-0001-8556-0311), Moritz Hemsing (https://orcid.org/0009-0002-3499-8035), Marius Pesavento (https://orcid.org/0000-0003-3395-2588), all with Technical University of Darmstadt, Darmstadt, Germany. |
title | An Accelerated Successive Convex Approximation Scheme With Exact Step Sizes for L1-Regression |
title_sort | accelerated successive convex approximation scheme with exact step sizes for l1 regression |
topic | Compressed sensing; FISTA; LASSO; L1-regularized least squares; Nesterov's accelerated gradient method; optimization |
url | https://ieeexplore.ieee.org/document/10840211/ |