Machine Learning and Deep Learning Optimization Algorithms for Unconstrained Convex Optimization Problem

Bibliographic Details
Main Authors: Kainat Naeem, Amal Bukhari, Ali Daud, Tariq Alsahfi, Bader Alshemaimri, Mousa Alhajlah
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10815950/
Description
Summary: This paper conducts a thorough comparative analysis of optimization algorithms for an unconstrained convex optimization problem. It contrasts traditional methods like Gradient Descent (GD) and Nesterov Accelerated Gradient (NAG) with modern techniques such as Adaptive Moment Estimation (Adam), Long Short-Term Memory (LSTM), and Multilayer Perceptron (MLP). Through empirical experiments, convergence speed, solution accuracy, and robustness are evaluated, providing insights to aid algorithm selection. The convergence dynamics of convex optimization are explored through an analysis of classical algorithms and contemporary neural network (NN) methodologies. The study concludes with a comparative assessment of these algorithms' performance metrics and their respective strengths and weaknesses.
ISSN: 2169-3536