Potential to perpetuate social biases in health care by Chinese large language models: a model evaluation study
Abstract Background Large language models (LLMs) may perpetuate or amplify social biases toward patients. We systematically assessed potential biases of three popular Chinese LLMs in clinical application scenarios. Methods We tested whether Qwen, Ernie, and Baichuan encode social biases for patients...
| Main Authors: | Chenxi Liu, Jianing Zheng, Yushu Liu, Xi Wang, Yuting Zhang, Qiang Fu, Wenwen Yu, Ting Yu, Wang Jiang, Dan Wang, Chaojie Liu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | BMC, 2025-07-01 |
| Series: | International Journal for Equity in Health |
| Online Access: | https://doi.org/10.1186/s12939-025-02581-5 |
Similar Items

- Exploring the occupational biases and stereotypes of Chinese large language models
  by: Leilei Jiang, et al.
  Published: (2025-05-01)
- Generalization bias in large language model summarization of scientific research
  by: Uwe Peters, et al.
  Published: (2025-04-01)
- Detecting implicit biases of large language models with Bayesian hypothesis testing
  by: Shijing Si, et al.
  Published: (2025-04-01)
- IterDBR: Iterative Generative Dataset Bias Reduction Framework for NLU Models
  by: Xiaoyue Wang, et al.
  Published: (2025-01-01)
- More is more: Addition bias in large language models
  by: Luca Santagata, et al.
  Published: (2025-03-01)