Generalization bias in large language model summarization of scientific research
Artificial intelligence chatbots driven by large language models (LLMs) have the potential to increase public science literacy and support scientific research, as they can quickly summarize complex scientific information in accessible terms. However, when summarizing scientific texts, LLMs may omit...
| Main Authors: | Uwe Peters, Benjamin Chin-Yee |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | The Royal Society, 2025-04-01 |
| Series: | Royal Society Open Science |
| Online Access: | https://royalsocietypublishing.org/doi/10.1098/rsos.241776 |
Similar Items
- Detecting implicit biases of large language models with Bayesian hypothesis testing
  by: Shijing Si, et al.
  Published: (2025-04-01)
- Exploring the occupational biases and stereotypes of Chinese large language models
  by: Leilei Jiang, et al.
  Published: (2025-05-01)
- Mirroring Cultural Dominance: Disclosing Large Language Models Social Values, Attitudes and Stereotypes
  by: Kristian Dokic, et al.
  Published: (2025-05-01)
- La struttura interna dei segmenti: riflessioni sulla Teoria degli Elementi [The internal structure of segments: reflections on Element Theory]
  by: Laura Bafile
  Published: (2015-12-01)
- Risk of Bias Assessment of Diagnostic Accuracy Studies Using QUADAS 2 by Large Language Models
  by: Daniel-Corneliu Leucuța, et al.
  Published: (2025-06-01)