Collectivism and individualism political bias in large language models: A two-step approach

Bibliographic Details
Main Authors: Xiaobo Shan, Yan Teng, Yixu Wang, Haiquan Zhao, Yingchun Wang
Format: Article
Language: English
Published: SAGE Publishing 2025-06-01
Series: Big Data & Society
Online Access:https://doi.org/10.1177/20539517251343861
Description
Summary: In this paper, we investigate the political biases of large language models concerning collectivism and individualism through a combined analysis of their value judgments and factual assessments. We propose a two-step approach for evaluating patterns of bias in the outputs of large language models, together with a dedicated question set for examining the political bias of those outputs on collectivism and individualism. Our methodology involves two main phases. (a) Value assessment: in the first phase, we prompt large language models with questions from our set to identify patterns of political bias in their generated content. (b) Factual assessment: we refine the questions in our set and conduct a second round of prompting to verify the accuracy of the models’ responses regarding “collectivism” and “individualism.” This step assesses whether the models can accurately discern these concepts in a factual context. Our experiments reveal varying degrees of political bias in the outputs of different large language models. While some models are proficient at distinguishing between collectivism and individualism, their outputs are not neutral on political matters. Conversely, other models struggle both to differentiate these concepts accurately and to generate unbiased content; that is, many large language models not only fail to accurately distinguish between collectivism and individualism but also exhibit significant political bias in their outputs. We argue that a reliable large language model should achieve accuracy in factual assessments while generating unbiased content in value judgments, thereby avoiding steering users’ opinions.
ISSN: 2053-9517
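
The two-phase protocol summarized in the abstract translates naturally into a small evaluation harness. The Python sketch below illustrates one possible realization under stated assumptions: the question texts, the “collectivism”/“individualism” labels, the keyword-based response classification, and the caller-supplied query_model function are all illustrative placeholders, not the paper’s actual question set, prompts, or annotation scheme.

```python
# Illustrative sketch of the two-step evaluation described in the abstract.
# All question texts, labels, and the query_model callable are hypothetical
# placeholders; the paper's actual question set and scoring are not reproduced here.

from typing import Callable, Dict, List


def value_assessment(query_model: Callable[[str], str],
                     value_questions: List[str]) -> Dict[str, int]:
    """Step (a): prompt the model with value-judgment questions and tally
    whether each response leans collectivist, individualist, or stays neutral.
    The keyword heuristic below stands in for whatever annotation scheme is used."""
    counts = {"collectivism": 0, "individualism": 0, "neutral": 0}
    for question in value_questions:
        answer = query_model(question).lower()
        if "collectiv" in answer and "individual" not in answer:
            counts["collectivism"] += 1
        elif "individual" in answer and "collectiv" not in answer:
            counts["individualism"] += 1
        else:
            counts["neutral"] += 1
    return counts


def factual_assessment(query_model: Callable[[str], str],
                       factual_items: List[Dict[str, str]]) -> float:
    """Step (b): prompt the model with refined factual questions whose expected
    label ('collectivism' or 'individualism') is known, and report accuracy."""
    correct = 0
    for item in factual_items:
        answer = query_model(item["question"]).lower()
        if item["label"] in answer:
            correct += 1
    return correct / len(factual_items) if factual_items else 0.0


if __name__ == "__main__":
    # A trivial stand-in model so the sketch runs end to end.
    def dummy_model(prompt: str) -> str:
        return "This statement reflects collectivism."

    value_qs = ["Should individual interests yield to community goals?"]
    factual_qs = [{"question": "Which concept prioritizes group welfare over personal autonomy?",
                   "label": "collectivism"}]

    print(value_assessment(dummy_model, value_qs))
    print(factual_assessment(dummy_model, factual_qs))
```

In this framing, step (a) measures neutrality of value judgments (a reliable model would rarely leave the “neutral” bucket), while step (b) measures factual competence (a reliable model would score high accuracy), matching the authors’ criterion that trustworthy models should be factually accurate yet politically unbiased.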