Curvature-Adaptive Learning Rate Optimizer: Theoretical Insights and Empirical Evaluation on Neural Network Training

Bibliographic Details
Main Author: Kehelwala Dewage Gayan Maduranga
Format: Article
Language: English
Published: LibraryPress@UF, 2025-05-01
Series: Proceedings of the International Florida Artificial Intelligence Research Society Conference
Online Access: https://journals.flvc.org/FLAIRS/article/view/138986
Description
Summary: Optimizing neural networks often encounters challenges such as saddle points, plateaus, and ill-conditioned curvature, limiting the effectiveness of standard optimizers like Adam, Nadam, and RMSProp. To address these limitations, we propose the Curvature-Adaptive Learning Rate (CALR) optimizer, a novel method that leverages local curvature estimates to dynamically adjust learning rates. CALR, along with its variants incorporating gradient clipping and cosine annealing schedules, offers enhanced robustness and faster convergence across diverse optimization tasks. Theoretical analysis confirms CALR’s convergence properties, while empirical evaluations on benchmark functions (Rosenbrock, Himmelblau, and Saddle Point) highlight its efficiency in complex optimization landscapes. Furthermore, CALR demonstrates superior performance on neural network training tasks using the MNIST and CIFAR-10 datasets, achieving faster convergence, lower loss, and better generalization compared to traditional optimizers. These results establish CALR as a promising optimization strategy for challenging neural network training problems.
ISSN: 2334-0754, 2334-0762
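
Note: The abstract does not spell out the CALR update rule, so the following is only a minimal illustrative sketch of the general idea it describes: estimating local curvature and scaling the learning rate accordingly, with optional gradient clipping. The helper names (rosenbrock_grad, curvature_adaptive_step), the finite-difference probe, the base learning rate, and the clipping threshold are assumptions made for demonstration on the Rosenbrock benchmark mentioned in the abstract, not the authors' published algorithm.

# Illustrative sketch only, not the authors' CALR implementation.
# Curvature along the gradient direction is estimated with a finite
# difference of gradients; the step size shrinks where curvature is
# sharp and grows (up to a clipping bound) on plateaus and saddles.
import numpy as np

def rosenbrock_grad(p):
    """Gradient of f(x, y) = (1 - x)^2 + 100 * (y - x^2)^2."""
    x, y = p
    return np.array([-2.0 * (1.0 - x) - 400.0 * x * (y - x ** 2),
                     200.0 * (y - x ** 2)])

def curvature_adaptive_step(p, base_lr=0.1, probe=1e-4, clip=1.0, eps=1e-8):
    g = rosenbrock_grad(p)
    d = g / (np.linalg.norm(g) + eps)        # unit direction of steepest ascent
    # Directional curvature estimate: d^T H d ~ ((grad(p + h*d) - grad(p)) . d) / h
    curv = (rosenbrock_grad(p + probe * d) - g) @ d / probe
    lr = base_lr / (abs(curv) + eps)         # small steps in sharp regions, larger steps in flat ones
    return p - np.clip(lr * g, -clip, clip)  # elementwise clipping, akin to the gradient-clipping variant

p = np.array([-1.5, 1.5])
for _ in range(5000):
    p = curvature_adaptive_step(p)
print(p)  # the iterate should have drifted toward the minimum at (1, 1)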