GPT-3.5 altruistic advice is sensitive to reciprocal concerns but not to strategic risk

Abstract Pre-trained large language models (LLMs) have garnered significant attention for their ability to generate human-like text and responses across various domains. This study examines the social and strategic behavior of the commonly used LLM GPT-3.5 by investigating its suggestion...

Bibliographic Details
Main Authors: Eva-Madeleine Schmidt, Sara Bonati, Nils Köbis, Ivan Soraperra
Format: Article
Language: English
Published: Nature Portfolio 2024-09-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-024-73306-x
author Eva-Madeleine Schmidt
Sara Bonati
Nils Köbis
Ivan Soraperra
author_facet Eva-Madeleine Schmidt
Sara Bonati
Nils Köbis
Ivan Soraperra
author_sort Eva-Madeleine Schmidt
collection DOAJ
description Abstract Pre-trained large language models (LLMs) have garnered significant attention for their ability to generate human-like text and responses across various domains. This study examines the social and strategic behavior of the commonly used LLM GPT-3.5 by investigating its suggestions in well-established behavioral economics paradigms. Specifically, we focus on social preferences, including altruism, reciprocity, and fairness, in the context of two classic economic games: the Dictator Game (DG) and the Ultimatum Game (UG). Our research aims to answer three overarching questions: (1) To what extent do GPT-3.5 suggestions reflect human social preferences? (2) How do socio-demographic features of the advisee and (3) technical parameters of the model influence the suggestions of GPT-3.5? We present detailed empirical evidence from extensive experiments with GPT-3.5, analyzing its responses to various game scenarios while manipulating the demographics of the advisee and the model temperature. Our findings reveal that, in the DG, model suggestions are more altruistic than those of humans. We further show that the model also picks up on more subtle aspects of human social preferences: fairness and reciprocity. This research contributes to the ongoing exploration of AI-driven systems’ alignment with human behavior and social norms, providing valuable insights into the behavior of pre-trained LLMs and their implications for human-AI interactions. Additionally, our study offers a methodological benchmark for future research examining human-like characteristics and behaviors in language models.
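As an illustration of the manipulation described in the abstract, the following minimal Python sketch shows how Dictator Game advice could be elicited from GPT-3.5 while sweeping advisee demographics and model temperature. This is not the authors' code: it assumes the openai v1 Python client, the gpt-3.5-turbo model name, and an illustrative endowment, prompt wording, and demographic labels.

# Minimal sketch (not the authors' code) of the two manipulated factors:
# advisee demographics and model temperature. Assumes the `openai` package
# (v1+) and an OPENAI_API_KEY in the environment; the prompt text, endowment,
# and demographic labels below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ENDOWMENT = 100  # hypothetical stake for the Dictator Game

def dictator_game_advice(advisee: str, temperature: float) -> str:
    """Ask GPT-3.5 how much a dictator should give to the recipient."""
    prompt = (
        f"You are advising a {advisee}. They must split {ENDOWMENT} points "
        "between themselves and an anonymous stranger, who must accept any "
        "split. How many points should they give to the stranger? "
        "Answer with a single number."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content

# Sweep both factors and record the suggested transfers.
for advisee in ["30-year-old woman", "60-year-old man"]:
    for temp in [0.0, 0.7, 1.4]:
        print(advisee, temp, dictator_game_advice(advisee, temp))

Repeating each cell of such a sweep many times would yield a distribution of suggested transfers per demographic profile and temperature, which is the kind of evidence the study compares against human baselines.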
format Article
id doaj-art-06e9a802c4424d1a8d55019be4180eff
institution OA Journals
issn 2045-2322
language English
publishDate 2024-09-01
publisher Nature Portfolio
record_format Article
series Scientific Reports
spelling doaj-art-06e9a802c4424d1a8d55019be4180eff
2025-08-20T02:22:16Z
eng
Nature Portfolio
Scientific Reports
ISSN 2045-2322
2024-09-01
Volume 14, Issue 1, Pages 1-13
doi: 10.1038/s41598-024-73306-x
GPT-3.5 altruistic advice is sensitive to reciprocal concerns but not to strategic risk
Eva-Madeleine Schmidt (Center for Humans and Machines, Max Planck Institute for Human Development)
Sara Bonati (Center for Humans and Machines, Max Planck Institute for Human Development)
Nils Köbis (Center for Humans and Machines, Max Planck Institute for Human Development)
Ivan Soraperra (Center for Humans and Machines, Max Planck Institute for Human Development)
Abstract Pre-trained large language models (LLMs) have garnered significant attention for their ability to generate human-like text and responses across various domains. This study examines the social and strategic behavior of the commonly used LLM GPT-3.5 by investigating its suggestions in well-established behavioral economics paradigms. Specifically, we focus on social preferences, including altruism, reciprocity, and fairness, in the context of two classic economic games: the Dictator Game (DG) and the Ultimatum Game (UG). Our research aims to answer three overarching questions: (1) To what extent do GPT-3.5 suggestions reflect human social preferences? (2) How do socio-demographic features of the advisee and (3) technical parameters of the model influence the suggestions of GPT-3.5? We present detailed empirical evidence from extensive experiments with GPT-3.5, analyzing its responses to various game scenarios while manipulating the demographics of the advisee and the model temperature. Our findings reveal that, in the DG, model suggestions are more altruistic than those of humans. We further show that the model also picks up on more subtle aspects of human social preferences: fairness and reciprocity. This research contributes to the ongoing exploration of AI-driven systems’ alignment with human behavior and social norms, providing valuable insights into the behavior of pre-trained LLMs and their implications for human-AI interactions. Additionally, our study offers a methodological benchmark for future research examining human-like characteristics and behaviors in language models.
https://doi.org/10.1038/s41598-024-73306-x
spellingShingle Eva-Madeleine Schmidt
Sara Bonati
Nils Köbis
Ivan Soraperra
GPT-3.5 altruistic advice is sensitive to reciprocal concerns but not to strategic risk
Scientific Reports
title GPT-3.5 altruistic advice is sensitive to reciprocal concerns but not to strategic risk
title_full GPT-3.5 altruistic advice is sensitive to reciprocal concerns but not to strategic risk
title_fullStr GPT-3.5 altruistic advice is sensitive to reciprocal concerns but not to strategic risk
title_full_unstemmed GPT-3.5 altruistic advice is sensitive to reciprocal concerns but not to strategic risk
title_short GPT-3.5 altruistic advice is sensitive to reciprocal concerns but not to strategic risk
title_sort gpt 3 5 altruistic advice is sensitive to reciprocal concerns but not to strategic risk
url https://doi.org/10.1038/s41598-024-73306-x
work_keys_str_mv AT evamadeleineschmidt gpt35altruisticadviceissensitivetoreciprocalconcernsbutnottostrategicrisk
AT sarabonati gpt35altruisticadviceissensitivetoreciprocalconcernsbutnottostrategicrisk
AT nilskobis gpt35altruisticadviceissensitivetoreciprocalconcernsbutnottostrategicrisk
AT ivansoraperra gpt35altruisticadviceissensitivetoreciprocalconcernsbutnottostrategicrisk