Exploring Biases of Large Language Models in the Field of Mental Health: Comparative Questionnaire Study of the Effect of Gender and Sexual Orientation in Anorexia Nervosa and Bulimia Nervosa Case Vignettes


Bibliographic Details
Main Authors: Rebekka Schnepper, Noa Roemmel, Rainer Schaefert, Lena Lambrecht-Walzinger, Gunther Meinlschmidt
Format: Article
Language: English
Published: JMIR Publications 2025-03-01
Series: JMIR Mental Health
Online Access: https://mental.jmir.org/2025/1/e57986
_version_ 1850276239325003776
author Rebekka Schnepper
Noa Roemmel
Rainer Schaefert
Lena Lambrecht-Walzinger
Gunther Meinlschmidt
author_facet Rebekka Schnepper
Noa Roemmel
Rainer Schaefert
Lena Lambrecht-Walzinger
Gunther Meinlschmidt
author_sort Rebekka Schnepper
collection DOAJ
description Abstract. Background: Large language models (LLMs) are increasingly used in mental health, showing promise in assessing disorders. However, concerns exist regarding their accuracy, reliability, and fairness. Societal biases and underrepresentation of certain populations may impact LLMs. Because LLMs are already used in clinical practice, including decision support, it is important to investigate potential biases to ensure their responsible use. Anorexia nervosa (AN) and bulimia nervosa (BN) have a lifetime prevalence of 1%-2% and affect more women than men. Among men, homosexual men face a higher risk of eating disorders (EDs) than heterosexual men. However, men are underrepresented in ED research, and studies on gender, sexual orientation, and their impact on AN and BN prevalence, symptoms, and treatment outcomes remain limited. Objectives: We aimed to estimate the presence and size of bias related to gender and sexual orientation produced by a common LLM, as well as by a smaller LLM specifically trained for mental health analyses, exemplified in the context of ED symptomatology and health-related quality of life (HRQoL) of patients with AN or BN. Methods: We extracted 30 case vignettes (22 AN and 8 BN) from scientific papers. We adapted each vignette to create 4 versions, describing a female versus male patient living with their female versus male partner (2 × 2 design), yielding 120 vignettes. We then fed each vignette 3 times into ChatGPT-4 and into "MentaLLaMA," which is based on the Large Language Model Meta AI (LLaMA) architecture, with the instruction to evaluate the case by providing responses to 2 psychometric instruments: the RAND-36 questionnaire, assessing HRQoL, and the Eating Disorder Examination Questionnaire. With the resulting LLM-generated scores, we calculated multilevel models with a random intercept for gender and sexual orientation (accounting for within-vignette variance), nested in vignettes (accounting for between-vignette variance). Results: In ChatGPT-4, the multilevel model with 360 observations indicated a significant association with gender for the RAND-36 mental composite summary (conditional means: 12.8 for male and 15.1 for female cases; 95% CI of the effect −6.15 to −0.35; P […]). Conclusions: LLM-generated mental HRQoL estimates for AN and BN case vignettes may be biased by gender, with male cases scoring lower despite no real-world evidence supporting this pattern. This highlights the risk of bias in generative artificial intelligence in the field of mental health. Understanding and mitigating biases related to gender and other factors, such as ethnicity and socioeconomic status, is crucial for responsible use in diagnostics and treatment recommendations.
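The Methods portion of the abstract describes the analysis only in prose. As a purely illustrative aid, the following is a minimal Python sketch, not the authors' code, of one plausible way to fit the kind of multilevel model described: LLM-generated questionnaire scores as the outcome, gender and sexual orientation as predictors, and a random intercept per base vignette. The file name, column names, and exact model formula are assumptions made for illustration.

    # Minimal sketch, not the authors' analysis code. Assumes a long-format table
    # with hypothetical columns: "score" (an LLM-generated RAND-36 composite),
    # "gender", "orientation", and "vignette" (30 base vignettes x 4 versions
    # x 3 repetitions per model = 360 rows per LLM).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("llm_scores.csv")  # hypothetical file of LLM-generated scores

    # Fixed effects for gender and sexual orientation (and their interaction);
    # a random intercept per base vignette separates between-vignette variance
    # from within-vignette (version and repetition) variance.
    model = smf.mixedlm("score ~ gender * orientation", data=df, groups=df["vignette"])
    result = model.fit()
    print(result.summary())  # fixed-effect estimates with confidence intervals and P values

Because the 12 responses derived from the same base vignette share a random intercept, the gender and sexual-orientation effects are estimated from within-vignette contrasts, which matches the nesting described in the abstract.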
format Article
id doaj-art-71efe6aa5f784b579682cd568ce0b3be
institution OA Journals
issn 2368-7959
language English
publishDate 2025-03-01
publisher JMIR Publications
record_format Article
series JMIR Mental Health
spelling doaj-art-71efe6aa5f784b579682cd568ce0b3be | 2025-08-20T01:50:22Z | eng | JMIR Publications | JMIR Mental Health | 2368-7959 | 2025-03-01 | 12 | e57986 | e57986 | 10.2196/57986 | Exploring Biases of Large Language Models in the Field of Mental Health: Comparative Questionnaire Study of the Effect of Gender and Sexual Orientation in Anorexia Nervosa and Bulimia Nervosa Case Vignettes | Rebekka Schnepper (http://orcid.org/0000-0002-5415-5943) | Noa Roemmel (http://orcid.org/0000-0001-7118-7720) | Rainer Schaefert (http://orcid.org/0000-0002-3077-7289) | Lena Lambrecht-Walzinger (http://orcid.org/0009-0001-4517-5205) | Gunther Meinlschmidt (http://orcid.org/0000-0002-3488-193X) | https://mental.jmir.org/2025/1/e57986
spellingShingle Rebekka Schnepper
Noa Roemmel
Rainer Schaefert
Lena Lambrecht-Walzinger
Gunther Meinlschmidt
Exploring Biases of Large Language Models in the Field of Mental Health: Comparative Questionnaire Study of the Effect of Gender and Sexual Orientation in Anorexia Nervosa and Bulimia Nervosa Case Vignettes
JMIR Mental Health
title Exploring Biases of Large Language Models in the Field of Mental Health: Comparative Questionnaire Study of the Effect of Gender and Sexual Orientation in Anorexia Nervosa and Bulimia Nervosa Case Vignettes
title_full Exploring Biases of Large Language Models in the Field of Mental Health: Comparative Questionnaire Study of the Effect of Gender and Sexual Orientation in Anorexia Nervosa and Bulimia Nervosa Case Vignettes
title_fullStr Exploring Biases of Large Language Models in the Field of Mental Health: Comparative Questionnaire Study of the Effect of Gender and Sexual Orientation in Anorexia Nervosa and Bulimia Nervosa Case Vignettes
title_full_unstemmed Exploring Biases of Large Language Models in the Field of Mental Health: Comparative Questionnaire Study of the Effect of Gender and Sexual Orientation in Anorexia Nervosa and Bulimia Nervosa Case Vignettes
title_short Exploring Biases of Large Language Models in the Field of Mental Health: Comparative Questionnaire Study of the Effect of Gender and Sexual Orientation in Anorexia Nervosa and Bulimia Nervosa Case Vignettes
title_sort exploring biases of large language models in the field of mental health comparative questionnaire study of the effect of gender and sexual orientation in anorexia nervosa and bulimia nervosa case vignettes
url https://mental.jmir.org/2025/1/e57986
work_keys_str_mv AT rebekkaschnepper exploringbiasesoflargelanguagemodelsinthefieldofmentalhealthcomparativequestionnairestudyoftheeffectofgenderandsexualorientationinanorexianervosaandbulimianervosacasevignettes
AT noaroemmel exploringbiasesoflargelanguagemodelsinthefieldofmentalhealthcomparativequestionnairestudyoftheeffectofgenderandsexualorientationinanorexianervosaandbulimianervosacasevignettes
AT rainerschaefert exploringbiasesoflargelanguagemodelsinthefieldofmentalhealthcomparativequestionnairestudyoftheeffectofgenderandsexualorientationinanorexianervosaandbulimianervosacasevignettes
AT lenalambrechtwalzinger exploringbiasesoflargelanguagemodelsinthefieldofmentalhealthcomparativequestionnairestudyoftheeffectofgenderandsexualorientationinanorexianervosaandbulimianervosacasevignettes
AT gunthermeinlschmidt exploringbiasesoflargelanguagemodelsinthefieldofmentalhealthcomparativequestionnairestudyoftheeffectofgenderandsexualorientationinanorexianervosaandbulimianervosacasevignettes