Benchmarking bias in embeddings of healthcare AI models: using SD-WEAT for detection and measurement across sensitive populations
Abstract
Background: Artificial intelligence (AI) has been shown to exhibit and perpetuate human biases; recent research efforts have focused on measuring bias within the input embeddings of AI language models, especially with non-binary classifications that are common in medicine and healthcare scen...
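The record's title indicates the article builds on the Word Embedding Association Test (WEAT) to measure bias in embeddings. As a point of reference only (not the authors' SD-WEAT method, whose details are not in this record), here is a minimal sketch of the classic WEAT effect size: the differential cosine-similarity association of two target word sets (X, Y) with two attribute word sets (A, B), normalized by the pooled standard deviation. The toy random embeddings are assumptions for illustration.

```python
import numpy as np

def cos(u, v):
    # cosine similarity between two embedding vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def assoc(w, A, B):
    # s(w, A, B): mean similarity of w to attribute set A minus to attribute set B
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # Cohen's-d-style effect size over the two target sets X and Y
    sx = [assoc(x, A, B) for x in X]
    sy = [assoc(y, A, B) for y in Y]
    pooled = np.std(sx + sy, ddof=1)
    return float((np.mean(sx) - np.mean(sy)) / pooled)

# Toy data: two target sets and two attribute sets of 8-dim "embeddings"
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8)); Y = rng.normal(size=(4, 8))
A = rng.normal(size=(4, 8)); B = rng.normal(size=(4, 8))
d = weat_effect_size(X, Y, A, B)
print(round(d, 3))
```

An effect size near 0 suggests no differential association; values approaching ±2 indicate strong bias. Methods such as SD-WEAT are, per the title, aimed at more robust detection and measurement across sensitive populations.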
| Main Authors: | Magnus Gray, Leihong Wu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | BMC, 2025-07-01 |
| Series: | BMC Medical Informatics and Decision Making |
| Online Access: | https://doi.org/10.1186/s12911-025-03102-8 |
Similar Items
- Whose voice matters? Word embeddings reveal identity bias in news quotes
  by: Nnaemeka Ohamadike, et al.
  Published: (2025-04-01)
- Slovene and Croatian word embeddings in terms of gender occupational analogies
  by: Matej Ulčar, et al.
  Published: (2021-07-01)
- AI bias in lung cancer radiotherapy
  by: Kai Ding, et al.
  Published: (2024-11-01)
- Biased and Biasing: The Hidden Bias Cascade and Bias Snowball Effects
  by: Itiel E. Dror
  Published: (2025-04-01)
- IterDBR: Iterative Generative Dataset Bias Reduction Framework for NLU Models
  by: Xiaoyue Wang, et al.
  Published: (2025-01-01)