Synthesizing Explainability Across Multiple ML Models for Structured Data
Explainable Machine Learning (XML) in high-stakes domains demands reproducible methods to aggregate feature importance across multiple models applied to the same structured dataset. We propose the Weighted Importance Score and Frequency Count (WISFC) framework, which combines importance magnitude and consistency by aggregating ranked outputs from diverse explainers. WISFC assigns a weighted score to each feature based on its rank and frequency across model-explainer pairs, providing a robust ensemble feature-importance ranking. Unlike simple consensus voting or ranking heuristics, which are insufficient for capturing complex relationships among different explainer outputs, WISFC offers a more principled approach to reconciling and aggregating this information. By aggregating many “weak signals” from brute-force modeling runs, WISFC can surface a stronger consensus on which variables matter most. The framework is designed to be reproducible and generalizable, capable of taking importance outputs from any set of machine-learning models and producing an aggregated ranking that highlights consistently important features. This approach acknowledges that any single model is a simplification of complex, multidimensional phenomena; by using multiple diverse models, each optimized from a different perspective, WISFC systematically captures different facets of the problem space to create a more structured and comprehensive view. As a consequence, this study offers a useful strategy for researchers and practitioners who seek innovative ways of exploring complex systems, not by discovering entirely new variables but by introducing a novel mindset for systematically combining multiple modeling perspectives.
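The record describes WISFC only at this level: each feature receives a weighted score from its rank in each model-explainer pair's output, scaled by how often it appears. The exact weighting formula is not given here, so the following is a minimal hypothetical sketch of that rank-and-frequency aggregation; the function name `wisfc_rank`, the linear rank weights, and the example feature names are illustrative assumptions, not taken from the paper.

```python
from collections import defaultdict

def wisfc_rank(explainer_rankings, top_k=5):
    """Aggregate ranked feature lists from several model-explainer pairs.

    Each ranking lists feature names from most to least important.
    A feature earns (top_k - position) points per appearance; summed
    points are then scaled by the fraction of rankings that include
    the feature, so features that are both highly ranked and
    consistently selected rise to the top.
    """
    points = defaultdict(float)
    freq = defaultdict(int)
    for ranking in explainer_rankings:
        for pos, feature in enumerate(ranking[:top_k]):
            points[feature] += top_k - pos
            freq[feature] += 1
    n = len(explainer_rankings)
    scores = {f: points[f] * (freq[f] / n) for f in points}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical top-4 lists from three model-explainer pairs
# (e.g., SHAP on boosted trees, permutation importance on a
# random forest, LIME on a logistic regression).
rankings = [
    ["age", "bp", "bmi", "glucose"],
    ["bp", "age", "smoking", "bmi"],
    ["age", "glucose", "bp", "chol"],
]
# "age" tops the aggregate: ranked highly and present in all three lists.
print(wisfc_rank(rankings, top_k=4))
```

The point of the scaling step is the consistency term the abstract emphasizes: a feature that appears once with a high rank is discounted relative to one that appears, even slightly lower, in every model-explainer pair.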
Saved in:
| Main Authors: | Emir Veledar, Lili Zhou, Omar Veledar, Hannah Gardener, Carolina M. Gutierrez, Jose G. Romano, Tatjana Rundek |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-06-01 |
| Series: | Algorithms |
| Subjects: | explainable machine learning; feature-importance aggregation; ensemble interpretability; small-data settings; WISFC |
| Online Access: | https://www.mdpi.com/1999-4893/18/6/368 |
| _version_ | 1849472595683966976 |
|---|---|
| author | Emir Veledar; Lili Zhou; Omar Veledar; Hannah Gardener; Carolina M. Gutierrez; Jose G. Romano; Tatjana Rundek |
| author_facet | Emir Veledar; Lili Zhou; Omar Veledar; Hannah Gardener; Carolina M. Gutierrez; Jose G. Romano; Tatjana Rundek |
| author_sort | Emir Veledar |
| collection | DOAJ |
| description | Explainable Machine Learning (XML) in high-stakes domains demands reproducible methods to aggregate feature importance across multiple models applied to the same structured dataset. We propose the Weighted Importance Score and Frequency Count (WISFC) framework, which combines importance magnitude and consistency by aggregating ranked outputs from diverse explainers. WISFC assigns a weighted score to each feature based on its rank and frequency across model-explainer pairs, providing a robust ensemble feature-importance ranking. Unlike simple consensus voting or ranking heuristics, which are insufficient for capturing complex relationships among different explainer outputs, WISFC offers a more principled approach to reconciling and aggregating this information. By aggregating many “weak signals” from brute-force modeling runs, WISFC can surface a stronger consensus on which variables matter most. The framework is designed to be reproducible and generalizable, capable of taking importance outputs from any set of machine-learning models and producing an aggregated ranking that highlights consistently important features. This approach acknowledges that any single model is a simplification of complex, multidimensional phenomena; by using multiple diverse models, each optimized from a different perspective, WISFC systematically captures different facets of the problem space to create a more structured and comprehensive view. As a consequence, this study offers a useful strategy for researchers and practitioners who seek innovative ways of exploring complex systems, not by discovering entirely new variables but by introducing a novel mindset for systematically combining multiple modeling perspectives. |
| format | Article |
| id | doaj-art-baac3101d52b4bc59c86c76f8dd37d16 |
| institution | Kabale University |
| issn | 1999-4893 |
| language | English |
| publishDate | 2025-06-01 |
| publisher | MDPI AG |
| record_format | Article |
| series | Algorithms |
| spelling | doaj-art-baac3101d52b4bc59c86c76f8dd37d16; 2025-08-20T03:24:29Z; eng; MDPI AG; Algorithms; 1999-4893; 2025-06-01; vol. 18, no. 6, art. 368; doi:10.3390/a18060368; Synthesizing Explainability Across Multiple ML Models for Structured Data; Emir Veledar, Lili Zhou, Hannah Gardener, Carolina M. Gutierrez, Jose G. Romano, Tatjana Rundek (Department of Neurology, University of Miami Miller School of Medicine, 1120 NW 14th Street, Suite 1370, Miami, FL 33136, USA); Omar Veledar (Beevadoo e.U., Pfeifferhofweg 3b, 8045 Graz, Austria); https://www.mdpi.com/1999-4893/18/6/368; keywords: explainable machine learning; feature-importance aggregation; ensemble interpretability; small-data settings; WISFC |
| spellingShingle | Emir Veledar; Lili Zhou; Omar Veledar; Hannah Gardener; Carolina M. Gutierrez; Jose G. Romano; Tatjana Rundek; Synthesizing Explainability Across Multiple ML Models for Structured Data; Algorithms; explainable machine learning; feature-importance aggregation; ensemble interpretability; small-data settings; WISFC |
| title | Synthesizing Explainability Across Multiple ML Models for Structured Data |
| title_full | Synthesizing Explainability Across Multiple ML Models for Structured Data |
| title_fullStr | Synthesizing Explainability Across Multiple ML Models for Structured Data |
| title_full_unstemmed | Synthesizing Explainability Across Multiple ML Models for Structured Data |
| title_short | Synthesizing Explainability Across Multiple ML Models for Structured Data |
| title_sort | synthesizing explainability across multiple ml models for structured data |
| topic | explainable machine learning; feature-importance aggregation; ensemble interpretability; small-data settings; WISFC |
| url | https://www.mdpi.com/1999-4893/18/6/368 |
| work_keys_str_mv | AT emirveledar synthesizingexplainabilityacrossmultiplemlmodelsforstructureddata AT lilizhou synthesizingexplainabilityacrossmultiplemlmodelsforstructureddata AT omarveledar synthesizingexplainabilityacrossmultiplemlmodelsforstructureddata AT hannahgardener synthesizingexplainabilityacrossmultiplemlmodelsforstructureddata AT carolinamgutierrez synthesizingexplainabilityacrossmultiplemlmodelsforstructureddata AT josegromano synthesizingexplainabilityacrossmultiplemlmodelsforstructureddata AT tatjanarundek synthesizingexplainabilityacrossmultiplemlmodelsforstructureddata |