Effectiveness of Explainable Artificial Intelligence (XAI) Techniques for Improving Human Trust in Machine Learning Models: A Systematic Literature Review


Bibliographic Details
Main Authors: In-On Wiratsin, Chaiyong Ragkhitwetsagul
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11017606/
Description
Summary: Decision-making processes worldwide increasingly rely on artificial intelligence (AI) algorithms to enhance human welfare. Explainable Artificial Intelligence (XAI) techniques are pivotal in addressing key obstacles to the adoption of machine learning (ML) models, aiming to enhance human trust by providing transparency and interpretability. This paper conducts a systematic literature review (SLR) to evaluate the effectiveness of various XAI techniques in improving human trust in ML models. The methodology involves a comprehensive search and analysis of relevant literature from 2015 to 2024, drawing on well-known databases and adhering to PRISMA guidelines. The results indicate that XAI techniques significantly enhance user confidence by making ML models more transparent and understandable, facilitating error identification, and promoting better decision-making. However, gaps remain, including the need for standardized evaluation metrics, more user-centric evaluations, and studies on the long-term impact of XAI on user trust. Future research should focus on these areas to further improve the applicability and effectiveness of XAI techniques across diverse domains.
ISSN: 2169-3536