Showing 21 - 40 results of 43 for search 'shapley adaptive explanation algorithm', query time: 0.11s
  21.

    Machine learning for predicting neoadjuvant chemotherapy effectiveness using ultrasound radiomics features and routine clinical data of patients with breast cancer by Pu Zhou, Hongyan Qian, Pengfei Zhu, Jiangyuan Ben, Guifang Chen, Qiuyi Chen, Lingli Chen, Jia Chen, Ying He

    Published 2025-01-01
    “…Subsequently, clinical predictive models and Rad score joint clinical predictive models were constructed using ML algorithms for optimal diagnostic performance. The diagnostic process of the ML model was visualized and analyzed using SHapley Additive exPlanation (SHAP). Results: Out of 231 participants with BC, 98 (42.42%) achieved pCR, and 133 (57.58%) did not. …”
    Get full text
    Article
  22.

    Mapping and understanding the regional farmland SOC distribution in southern China using a Bayesian spatial model by Bifeng Hu, Yibo Geng, Hanjie Ni, Zhou Shi, Zheng Wang, Nan Wang, Jipeng Luo, Modian Xie, Qian Zou, Thomas Optiz, Hongyi Li

    Published 2025-08-01
    “…Finally, an interpretable machine learning model, the SHapley Additive exPlanation (SHAP), is used to quantify the environmental covariates’ contribution to mapping SOC, as well as mapping spatial varying primary covariates for predicting SOC in the study area. …”
    Get full text
    Article
  23.

    An integrated IKOA-CNN-BiGRU-Attention framework with SHAP explainability for high-precision debris flow hazard prediction in the Nujiang river basin, China by Hao Yang, Tianlong Wang, Nikita Igorevich Fomin, Shuoting Xiao, Liang Liu

    Published 2025-01-01
    “…Model explainability is enhanced using SHapley Additive exPlanations (SHAP), which quantify the influence of key factors. …”
    Get full text
    Article
  24.

    Improved CKD classification based on explainable artificial intelligence with extra trees and BBFS by Ahmed M. Elshewey, Enas Selem, Amira Hassan Abed

    Published 2025-05-01
    “…The model applies explainable artificial intelligence by utilizing extra trees and Shapley additive explanations (SHAP) values. Also, the binary breadth-first search (BBFS) algorithm is used to select the most important features for the proposed explainable artificial intelligence-chronic kidney disease model. …”
    Get full text
    Article
  25.

    Explainable Machine Learning Models for Colorectal Cancer Prediction Using Clinical Laboratory Data by Rui Li MS, Xiaoyan Hao MS, Yanjun Diao MD, Liu Yang MS, Jiayun Liu MD

    Published 2025-04-01
    “…Incorporating stool miR-92a detection into the model further improved diagnostic performance. Shapley additive explanations (SHAP) plots indicated that FOBT, CEA, lymphocyte percentage (LYMPH%), and hematocrit (HCT) were the most significant features contributing to CRC diagnosis. …”
    Get full text
    Article
  26.

    Interpretable Prediction of a Decentralized Smart Grid Based on Machine Learning and Explainable Artificial Intelligence by Ahmet Cifci

    Published 2025-01-01
    “…Models were evaluated using various metrics, and XAI methods, specifically SHapley Additive exPlanations (SHAP) and Individual Conditional Expectation (ICE) plots, were employed to enhance the interpretability of the models. …”
    Get full text
    Article
  27.

    Machine learning for detection of diffusion abnormalities-related respiratory changes among normal, overweight, and obese individuals based on BMI and pulmonary ventilation paramet... by Xin-Yue Song, Xin-Peng Xie, Wen-Jing Xu, Yu-Jia Cao, Bin-Miao Liang

    Published 2025-07-01
    “…Additionally, we performed feature importance analysis using shapley additive explanations (SHAP) and permutation importance to evaluate the contribution of individual parameters to the classification process. …”
    Get full text
    Article
  28.

    AI-driven data fusion modeling for enhanced prediction of mixed-mode I/III fracture toughness by Anantaya Timtong, Atthaphon Ariyarit, Wanwanut Boongsood, Prasert Aengchuan, Attasit Wiangkham

    Published 2024-12-01
    “…Additionally, a feature importance analysis using Shapley Additive exPlanations values reveals that the mode mixity parameter, specimen thickness, and radius are critical factors influencing fracture toughness. …”
    Get full text
    Article
  29.

    TPE-LCE-SHAP: A Hybrid Framework for Assessing Vehicle-Related PM2.5 Concentrations by Hamad Almujibah, Abdulrazak H. Almaliki, Caroline Mongina Matara, Adil Abdallah Mohammed Elhassan, Khalaf Alla Adam Mohamed, Mudthir Bakri, Afaq Khattak

    Published 2024-01-01
    “…The framework integrates the Local Cascade Ensemble (LCE) model, optimized using the Tree-structured Parzen Estimator (TPE) strategy, with SHapley Additive exPlanations (SHAP) to enhance interpretability. …”
    Get full text
    Article
  30.

    Knowledge Extraction via Machine Learning Guides a Topology‐Based Permeability Prediction Model by Jia Zhang, Gang Ma, Zhibing Yang, Jiangzhou Mei, Daren Zhang, Wei Zhou, Xiaolin Chang

    Published 2024-07-01
    “…Using the SHapley Additive exPlanations (SHAP) value, the influence of each feature on permeability prediction is quantified. …”
    Get full text
    Article
  31.

    An Interpretable Method for Asphalt Pavement Skid Resistance Performance Evaluation Under Sand-Accumulated Conditions Based on Multi-Scale Fractals by Yuhan Weng, Zhaoyun Sun, Huiying Liu, Yingbin Gu

    Published 2025-05-01
    “…The performance of mainstream machine learning models is compared, and the eXtreme Gradient Boosting (XGBoost) model is optimized using the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) algorithm. The SHapley Additive exPlanations (SHAP) method is used to analyze the optimal model’s interpretability. …”
    Get full text
    Article
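The entry above tunes an XGBoost model with the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) before explaining it with SHAP. Below is a minimal sketch of that tuning pattern, assuming the third-party `cma` and `xgboost` Python packages, a synthetic dataset, and illustrative hyperparameter bounds; it is not the authors' code, data, or search space.

```python
# Minimal sketch (not the authors' implementation): tuning two XGBoost
# hyperparameters with CMA-ES from the third-party `cma` package, scored by
# cross-validated mean squared error on a synthetic regression dataset.
import cma
import xgboost as xgb
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=400, n_features=10, noise=0.1, random_state=0)

def objective(params):
    """Map a CMA-ES candidate [learning_rate, max_depth] to a CV loss."""
    learning_rate, max_depth = params
    model = xgb.XGBRegressor(
        n_estimators=200,
        learning_rate=float(learning_rate),
        max_depth=int(round(max_depth)),
        random_state=0,
    )
    # sklearn returns negative MSE; CMA-ES minimizes, so flip the sign.
    return -cross_val_score(model, X, y, cv=3,
                            scoring="neg_mean_squared_error").mean()

# Start at learning_rate=0.1, max_depth=4; search within loose, illustrative bounds.
es = cma.CMAEvolutionStrategy(
    [0.1, 4.0], 0.3,
    {"bounds": [[0.01, 2.0], [0.5, 10.0]], "maxiter": 15, "verbose": -9},
)
es.optimize(objective)
print("best [learning_rate, max_depth]:", es.result.xbest)
```

CMA-ES treats the cross-validated error as a black-box objective, so the same loop works for any estimator whose hyperparameters can be encoded as a real-valued vector.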
  32.

    Developing a cost-effective tool for choke flow rate prediction in sub-critical oil wells using wellhead data by Zhiwei Xun, Farag M. A. Altalbawy, Prakash Kanjariya, R. Manjunatha, Debasish Shit, M. Nirmala, Ajay Sharma, Sarbeswara Hota, Shirin Shomurotova, Fadhil Faez Sead, Hojjat Abbasi, Mohammad Mahtab Alam

    Published 2025-07-01
    “…Gradient boosting machine (GBM) models were optimized using advanced algorithms like self-adaptive differential evolution (SADE), evolution strategy (ES), Bayesian probability improvement (BPI), and Batch Bayesian optimization (BBO). …”
    Get full text
    Article
  33.

    Predicting cognitive decline in cognitively impaired patients with ischemic stroke with high risk of cerebral hemorrhage: a machine learning approach by Eun Namgung, Young Sun Kim, Sun U. Kwon, Dong-Wha Kang

    Published 2025-07-01
    “…Four machine learning algorithms, Categorical Boosting (CatBoost), Adaptive Boosting (AdaBoost), eXtreme Gradient Boosting (XGBoost), and logistic regression, were trained to predict cognitive decliners, defined as a decline of ≥3 K-MMSE points over 9 months, and variable importance was ranked using the SHapley Additive exPlanations methodology. Results: CatBoost outperformed the other models in classifying cognitive decliners within 9 months. …”
    Get full text
    Article
  34.

    Development and validation of an explainable machine learning prediction model of hemorrhagic transformation after intravenous thrombolysis in stroke by Yanan Lin, Yan Li, Yayin Luo, Jie Han

    Published 2025-01-01
    “…We utilized the Random Forest (RF), Multilayer Perceptron (MLP), Adaptive Boosting (AdaBoost), and Gaussian Naive Bayes (GauNB) algorithms to develop ML-HT models. …”
    Get full text
    Article
  35.

    Machine learning-based academic performance prediction with explainability for enhanced decision-making in educational institutions by Wesam Ahmed, Mudasir Ahmad Wani, Pawel Plawiak, Souham Meshoul, Amena Mahmoud, Mohamed Hammad

    Published 2025-07-01
    “…The local interpretable model-agnostic explanations (LIME) and SHapley Additive exPlanations (SHAP) are then used to explain the predictions produced by the proposed ensemble VR model. …”
    Get full text
    Article
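The entry above explains an ensemble model with both LIME and SHAP. Below is a minimal sketch of the LIME side, assuming the third-party `lime` package and a generic scikit-learn classifier on synthetic tabular data; the paper's ensemble VR model itself is not reproduced here.

```python
# Minimal sketch (not the paper's ensemble): a local LIME explanation for one
# prediction of a scikit-learn classifier on a synthetic tabular dataset.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

feature_names = [f"f{i}" for i in range(X.shape[1])]
explainer = LimeTabularExplainer(X_train,
                                 feature_names=feature_names,
                                 class_names=["neg", "pos"],
                                 mode="classification")

# Explain a single test instance: LIME fits a sparse local surrogate around it.
exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(exp.as_list())  # [(feature condition, local weight), ...]
```

Each call to `explain_instance` fits a sparse linear surrogate around one prediction, so the returned weights are local to that sample rather than global feature importances; SHAP is typically used alongside it for the global view.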
  36.

    Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review by Yasmin Abdelaal, Michaël Aupetit, Abdelkader Baggag, Dena Al-Thani

    Published 2024-12-01
    “…Post hoc methods such as Shapley Additive Explanations have gained traction for their adaptability in explaining complex algorithms visually. …”
    Get full text
    Article
  37.

    Root-Zone Salinity in Irrigated Arid Farmland: Revealing Driving Mechanisms of Dynamic Changes in China’s Manas River Basin over 20 Years by Guang Yang, Xuejin Qiao, Qiang Zuo, Jianchu Shi, Xun Wu, Alon Ben-Gal

    Published 2024-11-01
    “…The driving mechanisms behind root-zone SSC distributions were analyzed using an approach combining two machine learning algorithms, eXtreme Gradient Boosting (XGBoost) and SHapley Additive exPlanation (SHAP), to identify influential factors and quantify their impacts. …”
    Get full text
    Article
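Several entries in this list, including the one above, pair a gradient-boosted tree model with SHAP to quantify how much each covariate contributes to the prediction. Below is a minimal sketch of that pattern, assuming the third-party `shap` and `xgboost` packages and a synthetic dataset; it is not the study's model, data, or covariates.

```python
# Minimal sketch (not the study's model): computing SHAP values for an
# XGBoost regressor and ranking features by mean absolute attribution.
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=500, n_features=8, random_state=0)
model = xgb.XGBRegressor(n_estimators=300, max_depth=4, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)    # exact Shapley values for tree ensembles
shap_values = explainer.shap_values(X)   # shape: (n_samples, n_features)

# Global influence of each feature = mean absolute SHAP value across samples.
global_importance = np.abs(shap_values).mean(axis=0)
for i in np.argsort(global_importance)[::-1]:
    print(f"feature {i}: mean |SHAP| = {global_importance[i]:.3f}")
```

Averaging the absolute SHAP values over samples gives a global ranking, while a single row of `shap_values` explains one individual prediction.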
  38.

    Interpretable prediction model for hand-foot-and-mouth disease incidence based on improved LSTM and XGBoost by Xiao LI, Shuyu HE, Yan PENG, Rongxin YANG, Lu TAO, Tingqi LOU, Wenqi HE

    Published 2025-07-01
    “…In order to address the issues of low accuracy and poor interpretability in existing HFMD incidence prediction models, in this paper, we propose an interpretable prediction model, namely, ARIMA–LSTM–XGBoost, which integrates multiple meteorological factors with Autoregressive integrated moving average model (ARIMA), Long short-term memory (LSTM), Extreme gradient boosting (XGBoost), Grey wolf optimizer (GWO), Genetic algorithm (GA) and Shapley additive explanations (SHAP). …”
    Get full text
    Article
  39.
  40.

    A Novel Ensemble of Deep Learning Approach for Cybersecurity Intrusion Detection with Explainable Artificial Intelligence by Abdullah Alabdulatif

    Published 2025-07-01
    “…Recursive Feature Elimination is utilized for optimal feature selection, while SHapley Additive exPlanations (SHAP) provide both global and local interpretability of the model’s decisions. …”
    Get full text
    Article
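The final entry combines Recursive Feature Elimination for feature selection with SHAP for global and local interpretability. Below is a minimal sketch of the RFE step, assuming scikit-learn, a synthetic dataset, and an illustrative random-forest base estimator; the paper's deep-learning ensemble is not reproduced here.

```python
# Minimal sketch (not the paper's model): Recursive Feature Elimination with
# scikit-learn, ranking features by repeatedly refitting an estimator and
# dropping the weakest ones, on a synthetic classification dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

X, y = make_classification(n_samples=500, n_features=20, n_informative=6,
                           random_state=0)

selector = RFE(estimator=RandomForestClassifier(n_estimators=100, random_state=0),
               n_features_to_select=6,   # keep the 6 strongest features
               step=2)                   # drop 2 features per elimination round
selector.fit(X, y)

print("selected feature indices:",
      [i for i, keep in enumerate(selector.support_) if keep])
print("feature ranking (1 = selected):", selector.ranking_)
```

RFE refits the estimator after every elimination round, so the `step` parameter trades selection granularity against runtime; the reduced feature set can then be passed to whatever downstream model is being explained.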