Artificial intelligence vs. human expert: Licensed mental health clinicians' blinded evaluation of AI-generated and expert psychological advice on quality, empathy, and perceived authorship

Background: The use of artificial intelligence for psychological advice shows promise for enhancing accessibility and reducing costs, but it remains unclear whether AI-generated advice can match the quality and empathy of experts. Method: In a blinded, comparative cross-sectional design, licensed psychologists and psychotherapists assessed the quality, empathy, and authorship of psychological advice that was either AI-generated or authored by experts. Results: AI-generated responses were rated significantly more favorably for emotional (OR = 1.79, 95 % CI [1.1, 2.93], p = .02) and motivational empathy (OR = 1.84, 95 % CI [1.12, 3.04], p = .02). Ratings for scientific quality (p = .10) and cognitive empathy (p = .08) were comparable to expert advice. Participants could not distinguish between AI- and expert-authored advice (p = .27), but perceived expert authorship was associated with more favorable ratings across these measures (ORs for perceived AI vs. perceived expert ranging from 0.03 to 0.15, all p < .001). For overall preference, AI-authored advice was favored when assessed blindly based on its actual source (β = 6.96, p = .002). Nevertheless, advice perceived as expert-authored was also strongly preferred (β = 6.26, p = .001), with 93.55 % of participants preferring the advice they believed came from an expert, irrespective of its true origin. Conclusions: AI demonstrates potential to match expert performance in asynchronous written psychological advice, but biases favoring perceived expert authorship may hinder its broader acceptance. Mitigating these biases and evaluating AI's trustworthiness and empathy are important next steps for the safe and effective integration of AI in clinical practice.

Bibliographic Details
Main Authors: Ludwig Franke Föyen, Emma Zapel, Mats Lekander, Erik Hedman-Lagerlöf, Elin Lindsäter
Format: Article
Language: English
Published: Elsevier 2025-09-01
Series: Internet Interventions
Subjects: Artificial intelligence; Digital health; Empathy; Mental health; Psychological advice; Therapeutic Alliance
Online Access: http://www.sciencedirect.com/science/article/pii/S2214782925000429
ISSN: 2214-7829
DOI: 10.1016/j.invent.2025.100841 (Internet Interventions, vol. 41, article 100841, published 2025-09-01)

Author affiliations:
Ludwig Franke Föyen: Division of Psychology, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Stress Research Institute, Department of Psychology, Stockholm University, Stockholm, Sweden; Gustavsberg University Primary Care Center, Stockholm Health Care Services, Region Stockholm, Sweden; Osher Center for Integrative Health, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
Emma Zapel (corresponding author at: Department of Clinical Neuroscience, Karolinska Institutet, Nobels Väg 9, Solna, Sweden): Division of Psychology, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Division of Clinical Psychology, Department of Psychology, Uppsala University, Uppsala, Sweden
Mats Lekander: Division of Psychology, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Stress Research Institute, Department of Psychology, Stockholm University, Stockholm, Sweden; Osher Center for Integrative Health, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
Erik Hedman-Lagerlöf: Division of Psychology, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Gustavsberg University Primary Care Center, Stockholm Health Care Services, Region Stockholm, Sweden; Osher Center for Integrative Health, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
Elin Lindsäter: Division of Psychology, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Gustavsberg University Primary Care Center, Stockholm Health Care Services, Region Stockholm, Sweden; Osher Center for Integrative Health, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Center for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm Health Care Services, Region Stockholm, Sweden