Scalable SHAP-Informed Neural Network

In the pursuit of scalable optimization strategies for neural networks, this study addresses the computational challenges posed by SHAP-informed learning methods introduced in prior work. Specifically, we extend the SHAP-based optimization family by incorporating two existing approximation methods, C-SHAP and FastSHAP, to reduce training time while preserving the accuracy and generalization benefits of SHAP-based adjustments. C-SHAP leverages clustered SHAP values for efficient learning rate modulation, while FastSHAP provides rapid approximations of feature importance for gradient adjustment. Together, these methods significantly improve the practical usability of SHAP-informed neural network training by lowering computational overhead without major sacrifices in predictive performance. Experiments conducted across four datasets (Breast Cancer, Ames Housing, Adult Census, and California Housing) demonstrate that both C-SHAP and FastSHAP achieve substantial reductions in training time compared to the original SHAP-based methods while maintaining competitive test losses, RMSE, and accuracy relative to baseline Adam optimization. Additionally, a hybrid approach combining C-SHAP and FastSHAP is explored as an avenue for further balancing performance and efficiency. These results highlight the feasibility of using feature-importance-based guidance to enhance optimization in neural networks at a reduced computational cost, paving the way for broader applicability of explainability-informed training strategies.
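
The core mechanism the abstract describes (scaling parameter updates and the learning rate by per-feature importance) can be sketched briefly. The snippet below is a minimal, illustrative sketch in PyTorch, not the authors' implementation: the feature_importance helper is a hypothetical placeholder that uses an input-gradient proxy rather than SHAP, C-SHAP, or FastSHAP values, and the learning-rate rule is only a stand-in for the clustered modulation described in the paper.

    import torch
    import torch.nn as nn

    def feature_importance(model, X_batch):
        """Hypothetical placeholder for per-feature importance scores.
        The paper uses SHAP values (clustered for C-SHAP, amortized for
        FastSHAP); here a cheap input-gradient proxy stands in."""
        X = X_batch.clone().requires_grad_(True)
        output_sum = model(X).sum()
        grads = torch.autograd.grad(output_sum, X)[0]  # does not touch parameter .grad
        return grads.abs().mean(dim=0)                 # one score per input feature

    # Toy regression setup for illustration only.
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    base_lr = 1e-3
    opt = torch.optim.Adam(model.parameters(), lr=base_lr)
    loss_fn = nn.MSELoss()
    X, y = torch.randn(64, 8), torch.randn(64, 1)

    for _ in range(5):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()

        imp = feature_importance(model, X)
        scale = imp / (imp.mean() + 1e-8)

        # Gradient adjustment (FastSHAP-style idea): emphasize first-layer weight
        # gradients that correspond to more important input features.
        model[0].weight.grad *= scale.unsqueeze(0)     # broadcast over hidden units

        # Learning-rate modulation (loosely in the spirit of C-SHAP): shrink the
        # step when importance concentrates in a few features. A stand-in rule,
        # not the clustered-SHAP schedule from the paper.
        concentration = float(imp.max() / (imp.mean() + 1e-8))
        for group in opt.param_groups:
            group["lr"] = base_lr / concentration

        opt.step()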

Bibliographic Details
Main Authors: Jarrod Graham, Victor S. Sheng (Department of Computer Science, College of Engineering, Texas Tech University, Lubbock, TX 79409, USA)
Format: Article
Language: English
Published: MDPI AG, 2025-06-01
Series: Mathematics
ISSN: 2227-7390
DOI: 10.3390/math13132152
Subjects: SHAP; Adam optimizer; learning rate adjustments; neural networks; grid search; performance evaluation
Online Access: https://www.mdpi.com/2227-7390/13/13/2152