A variable metric proximal stochastic gradient method: An application to classification problems

Bibliographic Details
Main Authors: Pasquale Cascarano, Giorgia Franchini, Erich Kobler, Federica Porta, Andrea Sebastiani
Format: Article
Language: English
Published: Elsevier 2024-01-01
Series: EURO Journal on Computational Optimization
Online Access: http://www.sciencedirect.com/science/article/pii/S2192440624000054
Description
Summary: Due to the continued success of machine learning, and of deep learning in particular, supervised classification problems are ubiquitous in numerous scientific fields. Training these models typically involves minimizing the empirical risk over large data sets, possibly together with a non-differentiable regularization term. In this paper, we introduce a stochastic gradient method for such classification problems. To control the variance of the objective's gradients, we use an automatic sample size selection together with a variable metric to precondition the stochastic gradient directions. Furthermore, we employ a non-monotone line search to automate the step size selection. Convergence results are provided for both convex and non-convex objective functions. Extensive numerical experiments verify that the suggested approach performs on par with state-of-the-art methods when training both statistical models for binary classification and artificial neural networks for multi-class image classification. The code is publicly available at https://github.com/koblererich/lisavm.
ISSN: 2192-4406
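
To make the method described in the summary concrete, the following is a minimal Python sketch of a single variable metric proximal stochastic gradient step, applied to l1-regularized logistic regression. It is an illustration only, not the authors' implementation from the linked repository: it assumes a diagonal metric D = diag(d), under which the scaled l1-proximal operator separates into coordinate-wise soft-thresholding, and it abstracts the paper's automatic sample size selection and non-monotone line search into the `X_batch` and `alpha` inputs. All function and variable names here are hypothetical.

```python
import numpy as np

def soft_threshold(v, tau):
    """Coordinate-wise proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def vm_prox_sgd_step(w, X_batch, y_batch, lam, alpha, d):
    """One variable metric proximal stochastic gradient step for
    l1-regularized logistic regression with labels in {0, 1}.

    w       : current iterate
    X_batch : mini-batch of features (sample size may be chosen adaptively)
    y_batch : mini-batch of labels
    lam     : l1 regularization weight
    alpha   : step size (e.g., produced by a non-monotone line search)
    d       : positive diagonal of the metric D = diag(d)
    """
    # stochastic gradient of the logistic loss on the mini-batch
    p = 1.0 / (1.0 + np.exp(-X_batch @ w))
    grad = X_batch.T @ (p - y_batch) / len(y_batch)
    # preconditioned gradient step: w - alpha * D^{-1} grad
    w_half = w - alpha * grad / d
    # scaled prox: for a diagonal metric the l1-prox separates per
    # coordinate, with coordinate-wise threshold alpha * lam / d_i
    return soft_threshold(w_half, alpha * lam / d)

# toy usage on random data
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 10))
y = (rng.random(64) > 0.5).astype(float)
w = np.zeros(10)
d = np.ones(10)  # identity metric recovers plain proximal SGD
w = vm_prox_sgd_step(w, X, y, lam=0.1, alpha=0.5, d=d)
```

Choosing `d` adaptively (for instance from curvature or gradient statistics) is what makes the metric "variable" across iterations; with `d` fixed to ones, the step reduces to ordinary proximal stochastic gradient descent.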