What social stratifications in bias blind spot can tell us about implicit social bias in both LLMs and humans
Abstract: Large language models (LLMs) are the engines behind generative Artificial Intelligence (AI) applications, the most well-known being chatbots. As conversational agents, they—much like the humans on whose data they are trained—exhibit social bias. The nature of social bias is that it unfairly...
| Main Authors: | Sarah V. Bentley, David Evans, Claire K. Naughtin |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-08-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-025-14875-3 |
Similar Items
- Detecting Human Bias in Emergency Triage Using LLMs
  by: Marta Avalos, et al.
  Published: (2024-05-01)
- Psychological and Social Factors in Jury Decision-Making: An Analysis of the Influence of Implicit Bias and Prejudice
  by: João Miguel Alves Ferreira, et al.
  Published: (2025-06-01)
- Editorial: Exploring implicit biases in the educational landscape
  by: Nishtha Lamba, et al.
  Published: (2024-10-01)
- Implicit prosody and contextual bias in silent reading
  by: Kate McCurdy, et al.
  Published: (2013-07-01)
- Ethical blind spots in leadership: addressing unconscious bias in post-COVID workforce management
  by: Stephanie Bilderback
  Published: (2025-06-01)