Scalable SHAP-Informed Neural Network

Bibliographic Details
Main Authors: Jarrod Graham, Victor S. Sheng
Format: Article
Language:English
Published: MDPI AG 2025-06-01
Series:Mathematics
Online Access:https://www.mdpi.com/2227-7390/13/13/2152
Description
Summary:In the pursuit of scalable optimization strategies for neural networks, this study addresses the computational challenges posed by SHAP-informed learning methods introduced in prior work. Specifically, we extend the SHAP-based optimization family by incorporating two existing approximation methods, C-SHAP and FastSHAP, to reduce training time while preserving the accuracy and generalization benefits of SHAP-based adjustments. C-SHAP leverages clustered SHAP values for efficient learning rate modulation, while FastSHAP provides rapid approximations of feature importance for gradient adjustment. Together, these methods significantly improve the practical usability of SHAP-informed neural network training by lowering computational overhead without major sacrifices in predictive performance. The experiments conducted across four datasets—Breast Cancer, Ames Housing, Adult Census, and California Housing—demonstrate that both C-SHAP and FastSHAP achieve substantial reductions in training time compared to original SHAP-based methods while maintaining competitive test losses, RMSE, and accuracy relative to baseline Adam optimization. Additionally, a hybrid approach combining C-SHAP and FastSHAP is explored as an avenue for further balancing performance and efficiency. These results highlight the feasibility of using feature-importance-based guidance to enhance optimization in neural networks at a reduced computational cost, paving the way for broader applicability of explainability-informed training strategies.
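The abstract describes using feature importance (e.g., SHAP attributions) to modulate how strongly each input feature's weights are updated during training. As a minimal illustrative sketch only — the paper's actual C-SHAP and FastSHAP procedures may differ, and the function name and normalization here are assumptions — the core idea of importance-scaled gradient adjustment for a network's first layer can be expressed as:

```python
import numpy as np

def importance_scaled_gradient(grad_W1, shap_values, eps=1e-8):
    """Scale first-layer weight gradients by normalized mean |SHAP| per feature.

    grad_W1:     (n_features, n_hidden) gradient of the first-layer weights
    shap_values: (n_samples, n_features) per-sample SHAP attributions
    """
    # Aggregate per-sample attributions into one importance score per feature.
    importance = np.abs(shap_values).mean(axis=0)      # (n_features,)
    # Normalize so the most important feature keeps (roughly) its full update.
    scale = importance / (importance.max() + eps)      # in [0, 1]
    # Down-weight updates to weights attached to less important features.
    return grad_W1 * scale[:, None]
```

Less informative features thus receive smaller weight updates, which is one plausible way "feature-importance-based guidance" can steer optimization; swapping exact SHAP values for clustered (C-SHAP) or amortized (FastSHAP) approximations changes only how `shap_values` is produced, which is where the reported training-time savings come from.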
ISSN:2227-7390