Who Is to Blame for the Bias in Visualizations, ChatGPT or DALL-E?
Due to a range of factors in the development stage, generative artificial intelligence (AI) models cannot be completely free from bias. Some biases are introduced by the quality of the training data and by developer influence during both the design and training of large language models (LLMs), while others...
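The attribution logic described in the abstract — bias is charged to the text-to-image model when the LLM's prompt left a demographic attribute unspecified, and to the LLM when the prompt itself specified it — can be sketched as a simple tally. This is an illustrative reconstruction, not the paper's actual analysis code; the record format and field names are hypothetical.

```python
from collections import Counter

def attribute_bias(records):
    """Attribute the source of each demographic outcome.

    Each record is a (prompt_gender, image_gender) pair, where
    prompt_gender is None when the ChatGPT-4o prompt left gender
    unspecified. When the prompt is non-specific, any skew in the
    rendered image was introduced by the T2I model (DALL-E); when
    the prompt specifies a gender, the LLM (ChatGPT) made the choice.
    """
    source = Counter()
    for prompt_gender, image_gender in records:
        if prompt_gender is None:
            source[("DALL-E", image_gender)] += 1
        else:
            source[("ChatGPT", prompt_gender)] += 1
    return source

# Hypothetical sample: three non-specific prompts all rendered as
# women, and one prompt where ChatGPT itself specified "male".
sample = [(None, "female"), (None, "female"), (None, "female"), ("male", "male")]
print(attribute_bias(sample))
# Counter({('DALL-E', 'female'): 3, ('ChatGPT', 'male'): 1})
```

On data like this, a skew concentrated in the non-specific rows points at the T2I stage, which mirrors the study's finding that DALL-E primarily introduces bias when ChatGPT-4o provides non-specific prompts.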
Saved in:
| Main Author: | Dirk H. R. Spennemann |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-04-01 |
| Series: | AI |
| Subjects: | artificial intelligence; ethnic bias; gender bias; large language models; text-to-image; professions |
| Online Access: | https://www.mdpi.com/2673-2688/6/5/92 |
| _version_ | 1849327676222865408 |
|---|---|
| author | Dirk H. R. Spennemann |
| author_facet | Dirk H. R. Spennemann |
| author_sort | Dirk H. R. Spennemann |
| collection | DOAJ |
| description | Due to a range of factors in the development stage, generative artificial intelligence (AI) models cannot be completely free from bias. Some biases are introduced by the quality of the training data and by developer influence during both the design and training of large language models (LLMs), while others are introduced in text-to-image (T2I) visualization programs. The bias and initialization at the interface between LLMs and T2I applications have not been examined to date. This study analyzes 770 images of librarians and curators generated by DALL-E from ChatGPT-4o prompts to investigate the source of gender, ethnicity, and age biases in these visualizations. Comparing prompts generated by ChatGPT-4o with DALL-E’s visual interpretations, the research demonstrates that DALL-E primarily introduces biases when ChatGPT-4o provides non-specific prompts. This highlights the potential for generative AI to perpetuate and amplify harmful stereotypes related to gender, age, and ethnicity in professional roles. |
| format | Article |
| id | doaj-art-e70882bf12ba4041aece6dbec7e078e3 |
| institution | Kabale University |
| issn | 2673-2688 |
| language | English |
| publishDate | 2025-04-01 |
| publisher | MDPI AG |
| record_format | Article |
| series | AI |
| spelling | doaj-art-e70882bf12ba4041aece6dbec7e078e3; 2025-08-20T03:47:48Z; eng; MDPI AG; AI; ISSN 2673-2688; 2025-04-01; vol. 6, iss. 5, art. 92; doi: 10.3390/ai6050092; Who Is to Blame for the Bias in Visualizations, ChatGPT or DALL-E?; Dirk H. R. Spennemann (School of Agricultural, Environmental and Veterinary Sciences, Charles Sturt University, Albury, NSW 2640, Australia); https://www.mdpi.com/2673-2688/6/5/92; keywords: artificial intelligence; ethnic bias; gender bias; large language models; text-to-image; professions |
| spellingShingle | Dirk H. R. Spennemann Who Is to Blame for the Bias in Visualizations, ChatGPT or DALL-E? AI artificial intelligence ethnic bias gender bias large language models text-to-image professions |
| title | Who Is to Blame for the Bias in Visualizations, ChatGPT or DALL-E? |
| title_full | Who Is to Blame for the Bias in Visualizations, ChatGPT or DALL-E? |
| title_fullStr | Who Is to Blame for the Bias in Visualizations, ChatGPT or DALL-E? |
| title_full_unstemmed | Who Is to Blame for the Bias in Visualizations, ChatGPT or DALL-E? |
| title_short | Who Is to Blame for the Bias in Visualizations, ChatGPT or DALL-E? |
| title_sort | who is to blame for the bias in visualizations chatgpt or dall e |
| topic | artificial intelligence ethnic bias gender bias large language models text-to-image professions |
| url | https://www.mdpi.com/2673-2688/6/5/92 |
| work_keys_str_mv | AT dirkhrspennemann whoistoblameforthebiasinvisualizationschatgptordalle |