C-SHAP: A Hybrid Method for Fast and Efficient Interpretability
Model interpretability is essential in machine learning, particularly for applications in critical fields like healthcare, where understanding model decisions is paramount. While SHAP (SHapley Additive exPlanations) has proven to be a robust tool for explaining machine learning predictions, its high...
| Main Authors: | Golshid Ranjbaran, Diego Reforgiato Recupero, Chanchal K. Roy, Kevin A. Schneider |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-01-01 |
| Series: | Applied Sciences |
| Online Access: | https://www.mdpi.com/2076-3417/15/2/672 |
Similar Items
- Predictive model of ulcerative colitis syndrome with ensemble learning and interpretability methods
  by: Ling Zhu, et al.
  Published: (2025-07-01)
- A data-centric and interpretable EEG framework for depression severity grading using SHAP-based insights
  by: Anruo Shen, et al.
  Published: (2025-05-01)
- Explainable AI for Forensic Analysis: A Comparative Study of SHAP and LIME in Intrusion Detection Models
  by: Pamela Hermosilla, et al.
  Published: (2025-06-01)
- An Interpretable Machine Learning Framework for Analyzing the Interaction Between Cardiorespiratory Diseases and Meteo-Pollutant Sensor Data
  by: Vito Telesca, et al.
  Published: (2025-08-01)
- Integrating SHAP analysis with machine learning to predict postpartum hemorrhage in vaginal births
  by: Zixuan Song, et al.
  Published: (2025-05-01)