The influence of mental state attributions on trust in large language models
Abstract Rapid advances in artificial intelligence (AI) have led users to believe that systems such as large language models (LLMs) have mental states, including the capacity for ‘experience’ (e.g., emotions and consciousness). These folk-psychological attributions often diverge from expert opinion and are distinct from attributions of ‘intelligence’ (e.g., reasoning, planning), and yet may affect trust in AI systems. While past work provides some support for a link between anthropomorphism and trust, the impact of attributions of consciousness and other aspects of mentality on user trust remains unclear. We explored this in a preregistered experiment (N = 410) in which participants rated the capacity of an LLM to exhibit consciousness and a variety of other mental states. They then completed a decision-making task where they could revise their choices based on the advice of an LLM. Bayesian analyses revealed strong evidence against a positive correlation between attributions of consciousness and advice-taking; indeed, a dimension of mental states related to experience showed a negative relationship with advice-taking, while attributions of intelligence were strongly correlated with advice acceptance. These findings highlight how users’ attitudes and behaviours are shaped by sophisticated intuitions about the capacities of LLMs—with different aspects of mental state attribution predicting people’s trust in these systems.
| Main Authors: | Clara Colombatto, Jonathan Birch, Stephen M. Fleming |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-05-01 |
| Series: | Communications Psychology |
| Online Access: | https://doi.org/10.1038/s44271-025-00262-1 |
| _version_ | 1850124444942467072 |
|---|---|
| author | Clara Colombatto; Jonathan Birch; Stephen M. Fleming |
| author_sort | Clara Colombatto |
| collection | DOAJ |
| description | Abstract Rapid advances in artificial intelligence (AI) have led users to believe that systems such as large language models (LLMs) have mental states, including the capacity for ‘experience’ (e.g., emotions and consciousness). These folk-psychological attributions often diverge from expert opinion and are distinct from attributions of ‘intelligence’ (e.g., reasoning, planning), and yet may affect trust in AI systems. While past work provides some support for a link between anthropomorphism and trust, the impact of attributions of consciousness and other aspects of mentality on user trust remains unclear. We explored this in a preregistered experiment (N = 410) in which participants rated the capacity of an LLM to exhibit consciousness and a variety of other mental states. They then completed a decision-making task where they could revise their choices based on the advice of an LLM. Bayesian analyses revealed strong evidence against a positive correlation between attributions of consciousness and advice-taking; indeed, a dimension of mental states related to experience showed a negative relationship with advice-taking, while attributions of intelligence were strongly correlated with advice acceptance. These findings highlight how users’ attitudes and behaviours are shaped by sophisticated intuitions about the capacities of LLMs—with different aspects of mental state attribution predicting people’s trust in these systems. |
| format | Article |
| id | doaj-art-a7f33ca40b714447ad15131c0da7d7e8 |
| institution | OA Journals |
| issn | 2731-9121 |
| language | English |
| publishDate | 2025-05-01 |
| publisher | Nature Portfolio |
| record_format | Article |
| series | Communications Psychology |
| author affiliations | Clara Colombatto: Department of Psychology, University of Waterloo. Jonathan Birch: Department of Philosophy, Logic and Scientific Method, and Centre for Philosophy of Natural and Social Science, London School of Economics and Political Science. Stephen M. Fleming: Department of Experimental Psychology and Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London. |
| title | The influence of mental state attributions on trust in large language models |
| title_sort | influence of mental state attributions on trust in large language models |
| url | https://doi.org/10.1038/s44271-025-00262-1 |