Commentary: Trustworthy and ethical AI in digital mental healthcare – wishful thinking or tangible goal?

The use of AI in digital mental healthcare promises to make treatments more effective, accessible, and scalable than ever before. At the same time, the use of AI opens up a myriad of ethical concerns, including the lack of transparency, the risk of bias leading to increasing social inequalities, and the risk of responsibility gaps. This raises a crucial question: Can we rely on these systems to deliver care that is both ethical and effective? In attempts to regulate and ensure the safe usage of AI-powered tools, calls for trustworthy AI systems have become central. However, the use of terms such as “trust” and “trustworthiness” risks increasing anthropomorphization of AI systems, attaching human moral activities, such as trust, to artificial systems. In this article, we propose that terms such as “trustworthiness” be used with caution regarding AI and that, when used, they should reflect an AI system's ability to consistently demonstrate measurable adherence to ethical principles, such as respect for human autonomy, nonmaleficence, fairness, and transparency. On this approach, trustworthy and ethical AI has the possibility of becoming a tangible goal rather than wishful thinking.

Bibliographic Details
Main Authors: Ellen Svensson, Walter Osika, Per Carlbring
Format: Article
Language: English
Published: Elsevier, 2025-09-01
Series: Internet Interventions, vol. 41, article 100844
ISSN: 2214-7829
DOI: 10.1016/j.invent.2025.100844
Collection: DOAJ
Online Access: http://www.sciencedirect.com/science/article/pii/S2214782925000454

Author affiliations
Ellen Svensson: Department of Philosophy, Stockholm University, Stockholm, Sweden; TEA Lab (Trustworthy and Ethical AI Lab), Center for Social Sustainability, Department of Neurobiology, Care Science and Society, Karolinska Institutet, Stockholm, Sweden. Corresponding author at: Department of Philosophy, Stockholm University, Stockholm, Sweden.
Walter Osika: TEA Lab (Trustworthy and Ethical AI Lab), Center for Social Sustainability, Department of Neurobiology, Care Science and Society, Karolinska Institutet, Stockholm, Sweden; Stockholm Health Care Services, Southern Stockholm Psychiatry District, Region Stockholm, Stockholm, Sweden
Per Carlbring: TEA Lab (Trustworthy and Ethical AI Lab), Center for Social Sustainability, Department of Neurobiology, Care Science and Society, Karolinska Institutet, Stockholm, Sweden; Department of Psychology, Stockholm University, Stockholm, Sweden; School of Psychology, Korea University, Seoul, South Korea