Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review
| Main Authors: | Mehrdad Rahsepar Meadi, Tomas Sillekens, Suzanne Metselaar, Anton van Balkom, Justin Bernstein, Neeltje Batelaan |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | JMIR Publications, 2025-02-01 |
| Series: | JMIR Mental Health |
| Online Access: | https://mental.jmir.org/2025/1/e60432 |
| author | Mehrdad Rahsepar Meadi Tomas Sillekens Suzanne Metselaar Anton van Balkom Justin Bernstein Neeltje Batelaan |
| collection | DOAJ |
| description |
Background: Conversational artificial intelligence (CAI) is emerging as a promising digital technology for mental health care. CAI apps, such as psychotherapeutic chatbots, are available in app stores, but their use raises ethical concerns.
Objective: We aimed to provide a comprehensive overview of ethical considerations surrounding CAI as a therapist for individuals with mental health issues.
Methods: We conducted a systematic search across the PubMed, Embase, APA PsycINFO, Web of Science, Scopus, Philosopher’s Index, and ACM Digital Library databases. Our search comprised 3 elements: embodied artificial intelligence, ethics, and mental health. We defined CAI as a conversational agent that interacts with a person and uses artificial intelligence to formulate output. We included articles discussing the ethical challenges of CAI functioning in the role of a therapist for individuals with mental health issues, and added further articles through snowball searching. We included articles in English or Dutch; all types of articles were considered except abstracts of symposia. Screening for eligibility was done by 2 independent researchers (MRM and TS or AvB). An initial charting form was created based on the expected considerations and was revised and complemented during the charting process. The ethical challenges were divided into themes; when a concern occurred in more than 2 articles, we identified it as a distinct theme.
Results: We included 101 articles, of which 95% (n=96) were published in 2018 or later. Most were reviews (n=22, 21.8%), followed by commentaries (n=17, 16.8%). The following 10 themes were distinguished: (1) safety and harm (discussed in 52/101, 51.5% of articles); the most common topics within this theme were suicidality and crisis management, harmful or wrong suggestions, and the risk of dependency on CAI; (2) explicability, transparency, and trust (n=26, 25.7%), including topics such as the effects of “black box” algorithms on trust; (3) responsibility and accountability (n=31, 30.7%); (4) empathy and humanness (n=29, 28.7%); (5) justice (n=41, 40.6%), including themes such as health inequalities due to differences in digital literacy; (6) anthropomorphization and deception (n=24, 23.8%); (7) autonomy (n=12, 11.9%); (8) effectiveness (n=38, 37.6%); (9) privacy and confidentiality (n=62, 61.4%); and (10) concerns for health care workers’ jobs (n=16, 15.8%). Other themes were discussed in 9.9% (n=10) of the identified articles.
Conclusions: Our scoping review has comprehensively covered ethical aspects of CAI in mental health care. While certain themes remain underexplored and stakeholders’ perspectives are insufficiently represented, this study highlights critical areas for further research. These include evaluating the risks and benefits of CAI in comparison to human therapists, determining its appropriate roles in therapeutic contexts and its impact on care access, and addressing accountability. Addressing these gaps can inform normative analysis and guide the development of ethical guidelines for responsible CAI use in mental health care. |
| format | Article |
| id | doaj-art-8e4dc9d01e6346b99700e90aa1d84e1d |
| institution | DOAJ |
| issn | 2368-7959 |
| language | English |
| publishDate | 2025-02-01 |
| publisher | JMIR Publications |
| record_format | Article |
| series | JMIR Mental Health |
| doi | 10.2196/60432 |
| orcid | Mehrdad Rahsepar Meadi: 0000-0001-5637-3349; Tomas Sillekens: 0009-0005-5928-9333; Suzanne Metselaar: 0000-0002-8655-7082; Anton van Balkom: 0000-0001-9171-0208; Justin Bernstein: 0000-0003-4837-5832; Neeltje Batelaan: 0000-0001-6444-3781 |
| title | Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review |
| url | https://mental.jmir.org/2025/1/e60432 |