Static network structure cannot stabilize cooperation among large language model agents.
Large language models (LLMs) are increasingly used to model human social behavior, with recent research exploring their ability to simulate social dynamics. Here, we test whether LLMs mirror human behavior in social dilemmas, where individual and collective interests conflict. Humans generally coope...
| Main Authors: | Jin Han, Balaraju Battu, Ivan Romić, Talal Rahwan, Petter Holme |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Public Library of Science (PLoS), 2025-01-01 |
| Series: | PLoS ONE |
| Online Access: | https://doi.org/10.1371/journal.pone.0320094 |
Similar Items
- Rewards and punishments help humans overcome biases against cooperation partners assumed to be machines
  by: Kinga Makovi, et al.
  Published: (2025-07-01)
- Commentary: You cannot fix what you cannot see
  by: Chris C. Cook, MD, et al.
  Published: (2020-09-01)
- A Systematic Survey on Large Language Models for Static Code Analysis
  by: Hekar A. Mohammed Salih, et al.
  Published: (2025-06-01)
- LLM Hallucination: The Curse That Cannot Be Broken
  by: Hussein Al-Mahmood
  Published: (2025-08-01)
- Citations or Likes: this dilemma cannot exist
  by: Dov Goldenberg
  Published: (2022-03-01)