Static network structure cannot stabilize cooperation among large language model agents.

Bibliographic Details
Main Authors: Jin Han, Balaraju Battu, Ivan Romić, Talal Rahwan, Petter Holme
Format: Article
Language: English
Published: Public Library of Science (PLoS) 2025-01-01
Series: PLoS ONE
Online Access: https://doi.org/10.1371/journal.pone.0320094
_version_ 1849325348460691456
author Jin Han
Balaraju Battu
Ivan Romić
Talal Rahwan
Petter Holme
author_facet Jin Han
Balaraju Battu
Ivan Romić
Talal Rahwan
Petter Holme
author_sort Jin Han
collection DOAJ
description Large language models (LLMs) are increasingly used to model human social behavior, with recent research exploring their ability to simulate social dynamics. Here, we test whether LLMs mirror human behavior in social dilemmas, where individual and collective interests conflict. Humans generally cooperate more than expected in laboratory settings, showing less cooperation in well-mixed populations but more in fixed networks. In contrast, LLMs tend to exhibit greater cooperation in well-mixed settings. This raises a key question: Are LLMs able to emulate human behavior in cooperative dilemmas on networks? In this study, we examine networked interactions where agents repeatedly engage in the Prisoner's Dilemma within both well-mixed and structured network configurations, aiming to identify parallels in cooperative behavior between LLMs and humans. Our findings indicate critical distinctions: while humans tend to cooperate more within structured networks, LLMs display increased cooperation mainly in well-mixed environments, with limited adjustment to networked contexts. Notably, LLM cooperation also varies across model types, illustrating the complexities of replicating human-like social adaptability in artificial agents. These results highlight a crucial gap: LLMs struggle to emulate the nuanced, adaptive social strategies humans deploy in fixed networks. Unlike human participants, LLMs do not alter their cooperative behavior in response to network structures or evolving social contexts, missing the reciprocity norms that humans adaptively employ. This limitation points to a fundamental need in future LLM design: to integrate a deeper comprehension of social norms, enabling more authentic modeling of human-like cooperation and adaptability in networked environments.
format Article
id doaj-art-4e5911ad0c034978b57c1cb65eef2ff8
institution Kabale University
issn 1932-6203
language English
publishDate 2025-01-01
publisher Public Library of Science (PLoS)
record_format Article
series PLoS ONE
spelling doaj-art-4e5911ad0c034978b57c1cb65eef2ff82025-08-20T03:48:27ZengPublic Library of Science (PLoS)PLoS ONE1932-62032025-01-01205e032009410.1371/journal.pone.0320094Static network structure cannot stabilize cooperation among large language model agents.Jin HanBalaraju BattuIvan RomićTalal RahwanPetter HolmeLarge language models (LLMs) are increasingly used to model human social behavior, with recent research exploring their ability to simulate social dynamics. Here, we test whether LLMs mirror human behavior in social dilemmas, where individual and collective interests conflict. Humans generally cooperate more than expected in laboratory settings, showing less cooperation in well-mixed populations but more in fixed networks. In contrast, LLMs tend to exhibit greater cooperation in well-mixed settings. This raises a key question: Are LLMs able to emulate human behavior in cooperative dilemmas on networks? In this study, we examine networked interactions where agents repeatedly engage in the Prisoner's Dilemma within both well-mixed and structured network configurations, aiming to identify parallels in cooperative behavior between LLMs and humans. Our findings indicate critical distinctions: while humans tend to cooperate more within structured networks, LLMs display increased cooperation mainly in well-mixed environments, with limited adjustment to networked contexts. Notably, LLM cooperation also varies across model types, illustrating the complexities of replicating human-like social adaptability in artificial agents. These results highlight a crucial gap: LLMs struggle to emulate the nuanced, adaptive social strategies humans deploy in fixed networks. Unlike human participants, LLMs do not alter their cooperative behavior in response to network structures or evolving social contexts, missing the reciprocity norms that humans adaptively employ. This limitation points to a fundamental need in future LLM design: to integrate a deeper comprehension of social norms, enabling more authentic modeling of human-like cooperation and adaptability in networked environments.https://doi.org/10.1371/journal.pone.0320094
spellingShingle Jin Han
Balaraju Battu
Ivan Romić
Talal Rahwan
Petter Holme
Static network structure cannot stabilize cooperation among large language model agents.
PLoS ONE
title Static network structure cannot stabilize cooperation among large language model agents.
title_full Static network structure cannot stabilize cooperation among large language model agents.
title_fullStr Static network structure cannot stabilize cooperation among large language model agents.
title_full_unstemmed Static network structure cannot stabilize cooperation among large language model agents.
title_short Static network structure cannot stabilize cooperation among large language model agents.
title_sort static network structure cannot stabilize cooperation among large language model agents
url https://doi.org/10.1371/journal.pone.0320094
work_keys_str_mv AT jinhan staticnetworkstructurecannotstabilizecooperationamonglargelanguagemodelagents
AT balarajubattu staticnetworkstructurecannotstabilizecooperationamonglargelanguagemodelagents
AT ivanromic staticnetworkstructurecannotstabilizecooperationamonglargelanguagemodelagents
AT talalrahwan staticnetworkstructurecannotstabilizecooperationamonglargelanguagemodelagents
AT petterholme staticnetworkstructurecannotstabilizecooperationamonglargelanguagemodelagents