Effect of Explainable Artificial Intelligence on Trust of Mental Health Professionals in an AI-Based System for Suicide Prevention

Bibliographic Details
Main Authors: Adonias Caetano de Oliveira, João Pedro Cavalcanti Azevedo, Lívia Ruback, Rayele Moreira, Silmar Silva Teixeira, Ariel Soares Teles
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10945851/
Description
Summary: Artificial Intelligence (AI)-based systems have been proposed to aid Mental Health Professionals (MHPs) in various tasks, including the prevention of suicide by identifying Suicidal Ideation (SI). However, these systems may lack transparency and thereby create mistrust among MHPs. Explainable Artificial Intelligence (XAI) methods can elucidate how features influence system predictions, helping MHPs understand them. This exploratory study investigates how MHPs’ trust is influenced by AI explanations (an educational intervention and XAI methods) and other factors (professional background, knowledge of AI and computing, and reported system misclassification). We conducted an experiment using Boamente, an AI-powered clinical decision support system designed to assist MHPs in suicide prevention. Boamente identifies SI in Brazilian Portuguese texts typed on smartphones by leveraging a Large Language Model (LLM) for analysis. The results demonstrate that professional background, knowledge of AI and computing, and the educational intervention had no statistically significant effect on trust. In contrast, trust was affected by LLM prediction explanations, the quality of those explanations, and reported misclassification. Providing prediction explanations that reveal the inner workings of the AI model led MHPs to be more critical of its predictions, whereas MHPs tended to overtrust the system when no explanations were provided. Furthermore, disagreement with LLM classifications and perceptions of system vulnerabilities also affected trust.
ISSN: 2169-3536