Effectiveness of Explainable Artificial Intelligence (XAI) Techniques for Improving Human Trust in Machine Learning Models: A Systematic Literature Review

Bibliographic Details
Main Authors: In-On Wiratsin, Chaiyong Ragkhitwetsagul
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects: Explainable AI; XAI assessment; interpretable AI; machine learning; human trust
Online Access: https://ieeexplore.ieee.org/document/11017606/
author In-On Wiratsin
Chaiyong Ragkhitwetsagul
collection DOAJ
description Decision-making processes worldwide increasingly rely on artificial intelligence (AI) algorithms to enhance human welfare. Explainable Artificial Intelligence (XAI) techniques are pivotal in addressing the obstacles to adopting machine learning (ML) models, aiming to enhance human trust by providing transparency and interpretability. This paper presents a systematic literature review (SLR) evaluating the effectiveness of various XAI techniques in improving human trust in ML models. Our methodology involves a comprehensive search and analysis of relevant literature from 2015 to 2024, using well-known databases and adhering to the PRISMA guidelines. The results indicate that XAI techniques significantly enhance user confidence by making ML models more transparent and understandable, facilitating error identification, and promoting better decision-making. However, gaps remain, including the need for standardized evaluation metrics, more user-centric evaluations, and studies on the long-term impact of XAI on user trust. Future research should focus on these areas to further improve the applicability and effectiveness of XAI techniques across diverse domains.
format Article
id doaj-art-8d69340786124ad18dee43022f72fec2
institution Kabale University
issn 2169-3536
language English
publishDate 2025-01-01
publisher IEEE
record_format Article
series IEEE Access
record_id doaj-art-8d69340786124ad18dee43022f72fec2 (updated 2025-08-20T03:30:02Z)
issn 2169-3536
volume 13
pages 121326-121350
doi 10.1109/ACCESS.2025.3575022
ieee_document 11017606
author In-On Wiratsin (https://orcid.org/0009-0000-4122-1617), Faculty of Information and Communication Technology (ICT), Mahidol University, Salaya, Phutthamonthon, Nakhon Pathom, Thailand
author Chaiyong Ragkhitwetsagul, Faculty of Information and Communication Technology (ICT), Mahidol University, Salaya, Phutthamonthon, Nakhon Pathom, Thailand
title Effectiveness of Explainable Artificial Intelligence (XAI) Techniques for Improving Human Trust in Machine Learning Models: A Systematic Literature Review
topic Explainable AI
XAI assessment
interpretable AI
machine learning
human trust
url https://ieeexplore.ieee.org/document/11017606/