Exploration and practice of human-machine trustworthy mechanism in XAI

Bibliographic Details
Main Authors: LUO Zhongyan, XIA Zhengxun, TANG Jianfei, YANG Yifan, YANG Hongshan, LI Haohua, ZHANG Yan
Format: Article
Language: Chinese (zho)
Published: China InfoCom Media Group 2025-01-01
Series: 大数据 (Big Data)
Online Access:http://www.j-bigdataresearch.com.cn/thesisDetails?columnId=109252966&Fpath=home&index=0
Description
Summary: Artificial Intelligence (AI) has made significant progress across industries, but the black-box problem, potential risks, and the resulting crisis of user trust have limited its wider adoption. This paper addresses trust issues in AI by proposing a general U-XAI (unified-trustworthy XAI) human-machine trust mechanism and technical framework, aiming to resolve the eight types of trust challenges arising from the "socio-technical gap" in the AI field. The framework comprises four modules: trust-chain governance, integrity governance, understandability governance, and acceptability governance. Combining theoretical models with technical practice, it provides a comprehensive solution for building trustworthy AI. The proposed U-XAI framework can effectively enhance the credibility of AI systems, promote their application and development across social domains, and provide a practical basis and reference for the trustworthy governance of AI.
ISSN:2096-0271