Selective agreement, not sycophancy: investigating opinion dynamics in LLM interactions

Bibliographic Details
Main Authors: Erica Cau, Valentina Pansanella, Dino Pedreschi, Giulio Rossetti
Format: Article
Language: English
Published: SpringerOpen 2025-08-01
Series: EPJ Data Science
Online Access: https://doi.org/10.1140/epjds/s13688-025-00579-1
Description
Summary: Abstract Understanding how opinions evolve is essential for addressing phenomena such as polarization, radicalization, and consensus formation. In this work, we investigate how language shapes opinion dynamics among Large Language Model (LLM) agents by simulating multi-round debates. Using our framework, we find that agent populations consistently converge toward agreement, not through sycophancy or blind conformity, but via a structured and asymmetric persuasion process. Agents are more likely to accept, and thus be persuaded by, opinions that are more agreeable relative to the discussion framing, revealing a directional bias in how opinions evolve. LLM agents selectively adopt peers’ views, showing neither bounded confidence nor indiscriminate agreement. Moreover, agents frequently produce fallacious arguments, and are significantly influenced by them: logical fallacies, especially those of relevance and credibility, play a measurable role in driving opinion change. These results not only uncover emergent behaviours in agents’ dynamics, but also highlight the dual role of LLMs as both generators and victims of flawed reasoning, raising important considerations for their deployment in socially sensitive contexts.
ISSN: 2193-1127