Understanding Social Biases in Large Language Models
**Background/Objectives**: Large Language Models (LLMs) such as ChatGPT, LLaMA, and Mistral are widely used to automate tasks such as content creation and data analysis. However, because they are trained on publicly available internet data, they may inherit social biases. We aimed to inv...
| Main Authors: | Ojasvi Gupta, Stefano Marrone, Francesco Gargiulo, Rajesh Jaiswal, Lidia Marassi |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-05-01 |
| Series: | AI |
| Online Access: | https://www.mdpi.com/2673-2688/6/5/106 |
Similar Items
- A Systematic Survey on Large Language Models for Code Generation
  by: Sardar K. Jabrw, et al.
  Published: (2025-08-01)
- Metrics and Algorithms for Identifying and Mitigating Bias in AI Design: A Counterfactual Fairness Approach
  by: Dongsoo Moon, et al.
  Published: (2025-01-01)
- Detecting implicit biases of large language models with Bayesian hypothesis testing
  by: Shijing Si, et al.
  Published: (2025-04-01)
- Mining experimental data from materials science literature with large language models: an evaluation study
  by: Luca Foppiano, et al.
  Published: (2024-12-01)
- Cultural Bias in Text-to-Image Models: A Systematic Review of Bias Identification, Evaluation, and Mitigation Strategies
  by: Wala Elsharif, et al.
  Published: (2025-01-01)