How do people react to political bias in generative artificial intelligence (AI)?
Generative Artificial Intelligence (GAI), such as Large Language Models (LLMs), has a concerning tendency to generate politically biased content. This is a challenge, as the emergence of GAI meets politically polarized societies. This research therefore investigates how people react to biased GAI content based on their pre-existing political beliefs, and how this influences the acceptance of GAI. In three experiments (N = 513), it was found that perceived alignment between a user's political orientation and the bias in generated content (in text and images) increases acceptance of and reliance on GAI. Participants who perceived alignment were more likely to grant GAI access to sensitive smartphone functions and to endorse its use in critical domains (e.g., loan approval; social media moderation). Because users see GAI as a social actor, they consider perceived alignment a sign of greater objectivity, thus granting aligned GAI access to more sensitive areas.
| Main Author: | Uwe Messer |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2025-03-01 |
| Series: | Computers in Human Behavior: Artificial Humans |
| Subjects: | Artificial intelligence; Alignment; Political orientation; Bias; Acceptance; Large language model |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S2949882124000689 |
| _version_ | 1850180916012384256 |
|---|---|
| author | Uwe Messer |
| author_facet | Uwe Messer |
| author_sort | Uwe Messer |
| collection | DOAJ |
| description | Generative Artificial Intelligence (GAI), such as Large Language Models (LLMs), has a concerning tendency to generate politically biased content. This is a challenge, as the emergence of GAI meets politically polarized societies. This research therefore investigates how people react to biased GAI content based on their pre-existing political beliefs, and how this influences the acceptance of GAI. In three experiments (N = 513), it was found that perceived alignment between a user's political orientation and the bias in generated content (in text and images) increases acceptance of and reliance on GAI. Participants who perceived alignment were more likely to grant GAI access to sensitive smartphone functions and to endorse its use in critical domains (e.g., loan approval; social media moderation). Because users see GAI as a social actor, they consider perceived alignment a sign of greater objectivity, thus granting aligned GAI access to more sensitive areas. |
| format | Article |
| id | doaj-art-c8da76e576cf4714b7fcfdc6334d01c6 |
| institution | OA Journals |
| issn | 2949-8821 |
| language | English |
| publishDate | 2025-03-01 |
| publisher | Elsevier |
| record_format | Article |
| series | Computers in Human Behavior: Artificial Humans |
| spelling | doaj-art-c8da76e576cf4714b7fcfdc6334d01c6; 2025-08-20T02:18:00Z; eng; Elsevier; Computers in Human Behavior: Artificial Humans; ISSN 2949-8821; 2025-03-01; Vol. 3, Art. 100108; doi:10.1016/j.chbah.2024.100108; How do people react to political bias in generative artificial intelligence (AI)?; Uwe Messer (Universität der Bundeswehr München, Werner-Heisenberg-Weg 39, 85577 Neubiberg, Germany); http://www.sciencedirect.com/science/article/pii/S2949882124000689; Artificial intelligence; Alignment; Political orientation; Bias; Acceptance; Large language model |
| spellingShingle | Uwe Messer How do people react to political bias in generative artificial intelligence (AI)? Computers in Human Behavior: Artificial Humans Artificial intelligence Alignment Political orientation Bias Acceptance Large language model |
| title | How do people react to political bias in generative artificial intelligence (AI)? |
| title_full | How do people react to political bias in generative artificial intelligence (AI)? |
| title_fullStr | How do people react to political bias in generative artificial intelligence (AI)? |
| title_full_unstemmed | How do people react to political bias in generative artificial intelligence (AI)? |
| title_short | How do people react to political bias in generative artificial intelligence (AI)? |
| title_sort | how do people react to political bias in generative artificial intelligence ai |
| topic | Artificial intelligence Alignment Political orientation Bias Acceptance Large language model |
| url | http://www.sciencedirect.com/science/article/pii/S2949882124000689 |
| work_keys_str_mv | AT uwemesser howdopeoplereacttopoliticalbiasingenerativeartificialintelligenceai |