A publicly available benchmark for assessing large language models’ ability to predict how humans balance self-interest and the interest of others
Abstract: Large language models (LLMs) hold enormous potential to assist humans in decision-making processes, from everyday to high-stakes scenarios. However, because many human decisions carry social implications, a necessary prerequisite for LLMs to be reliable assistants is that they are able to capture...
| Main Authors: | Valerio Capraro, Roberto Di Paolo, Veronica Pizziol |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-07-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-025-01715-7 |
Similar Items

- Reasons for Doing Good: Behavioural Explanations of Prosociality in Economics
  by: Magdalena Adamus
  Published: (2017-06-01)
- Seemingly altruistic behavior and strategic ignorance in a dictator game with potential loss
  by: Keisuke Yamamoto, et al.
  Published: (2025-01-01)
- Sharing electricity over money: Third-person perspectives on human-robot dictator game outcomes
  by: Andreea E. Potinteu, et al.
  Published: (2025-03-01)
- Generous Attitudes and Online Participation
  by: Floor Fiers, et al.
  Published: (2021-04-01)
- The machine psychology of cooperation: can GPT models operationalize prompts for altruism, cooperation, competitiveness, and selfishness in economic games?
  by: Steve Phelps, et al.
  Published: (2025-01-01)