Who Is to Blame for the Bias in Visualizations, ChatGPT or DALL-E?
Due to a range of factors in the development stage, generative artificial intelligence (AI) models cannot be completely free from bias. Some biases are introduced by the quality of the training data and by developer influence during both the design and training of the large language models (LLMs), while others...
| Main Author: | Dirk H. R. Spennemann |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-04-01 |
| Series: | AI |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2673-2688/6/5/92 |
Similar Items
- Surprising gender biases in GPT
  by: Raluca Alexandra Fulgu, et al.
  Published: (2024-12-01)
- Gender biases within Artificial Intelligence and ChatGPT: Evidence, Sources of Biases and Solutions
  by: Jerlyn Q.H. Ho, et al.
  Published: (2025-05-01)
- Is ChatGPT a friend or foe in the war on misinformation?
  by: Burgert Senekal, et al.
  Published: (2023-12-01)
- Investigating Potential Gender Differences in ChatGPT-Diagnosed Clinical Vignettes
  by: Anjali Mediboina, et al.
  Published: (2025-01-01)
- Measuring biases in AI-generated co-authorship networks
  by: Ghazal Kalhor, et al.
  Published: (2025-05-01)