The influence of mental state attributions on trust in large language models
Abstract: Rapid advances in artificial intelligence (AI) have led users to believe that systems such as large language models (LLMs) have mental states, including the capacity for ‘experience’ (e.g., emotions and consciousness). These folk-psychological attributions often diverge from expert opinion...
| Main Authors: | Clara Colombatto, Jonathan Birch, Stephen M. Fleming |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-05-01 |
| Series: | Communications Psychology |
| Online Access: | https://doi.org/10.1038/s44271-025-00262-1 |
Similar Items
- Real Estate Attribute Value Extraction Using Large Language Models
  by: Michal Kvet, et al.
  Published: (2025-01-01)
- The Applications of Large Language Models in Mental Health: Scoping Review
  by: Yu Jin, et al.
  Published: (2025-05-01)
- Scheme of Attribute-Based Disclosure Mobile Trust Negotiation
  by: Qiuyun Wang, et al.
  Published: (2013-10-01)
- Assessing and alleviating state anxiety in large language models
  by: Ziv Ben-Zion, et al.
  Published: (2025-03-01)
- The neural bases of directed and spontaneous mental state attributions to group agents
  by: Adrianna C Jenkins, et al.
  Published: (2014-01-01)