Synthesizing Explainability Across Multiple ML Models for Structured Data
Explainable Machine Learning (XML) in high-stakes domains demands reproducible methods to aggregate feature importance across multiple models applied to the same structured dataset. We propose the Weighted Importance Score and Frequency Count (WISFC) framework, which combines importance magnitude an...
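The truncated abstract names the two ingredients of WISFC: an importance-magnitude score and a frequency count, aggregated across several models fit to the same structured dataset. The sketch below illustrates one way such an aggregation could work; the max-normalization, the equal 0.5/0.5 weighting, and the top-k rank cutoff are assumptions for illustration only, not the published WISFC definition.

```python
import pandas as pd

def aggregate_importance(importances: dict[str, pd.Series],
                         top_k: int = 10,
                         w_score: float = 0.5,
                         w_freq: float = 0.5) -> pd.DataFrame:
    """Combine per-model feature importances into a single ranking.

    importances: mapping of model name -> Series of importance values
                 indexed by feature name (e.g. Gini gain, permutation
                 importance, or mean |SHAP| values).
    The weighting and top_k cutoff here are illustrative assumptions.
    """
    # Normalize each model's importances to [0, 1] so heterogeneous
    # importance scales become comparable before averaging.
    norm = {m: s / s.max() for m, s in importances.items()}
    table = pd.DataFrame(norm).fillna(0.0)  # features missing from a model get 0

    # Weighted importance score: mean normalized magnitude across models.
    weighted_score = table.mean(axis=1)

    # Frequency count: fraction of models that rank the feature in their top_k.
    ranks = table.rank(ascending=False, axis=0)
    frequency = (ranks <= top_k).mean(axis=1)

    combined = w_score * weighted_score + w_freq * frequency
    return (pd.DataFrame({"weighted_score": weighted_score,
                          "frequency": frequency,
                          "combined": combined})
            .sort_values("combined", ascending=False))
```

Normalizing each model's scores before averaging keeps one model with large raw importance values from dominating the consensus ranking, which is the usual motivation for pairing a magnitude score with a rank-based frequency count.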
Saved in:
| Main Authors: | Emir Veledar, Lili Zhou, Omar Veledar, Hannah Gardener, Carolina M. Gutierrez, Jose G. Romano, Tatjana Rundek |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-06-01 |
| Series: | Algorithms |
| Subjects: | |
| Online Access: | https://www.mdpi.com/1999-4893/18/6/368 |
Similar Items
- A Study on the Application of Explainable AI on Ensemble Models for Predictive Analysis of Chronic Kidney Disease
  by: K. M. Tawsik Jawad, et al.
  Published: (2025-01-01)
- Explainability and Interpretability in Concept and Data Drift: A Systematic Literature Review
  by: Daniele Pelosi, et al.
  Published: (2025-07-01)
- XAI Unveiled: Revealing the Potential of Explainable AI in Medicine: A Systematic Review
  by: Noemi Scarpato, et al.
  Published: (2024-01-01)
- Rough Set Theory and Soft Computing Methods for Building Explainable and Interpretable AI/ML Models
  by: Sami Naouali, et al.
  Published: (2025-05-01)
- Explainable Artificial Intelligence in Paediatric: Challenges for the Future
  by: Ahmed M. Salih, et al.
  Published: (2024-12-01)