Do LLMs Exhibit Human-like Response Biases? A Case Study in Survey Design
| Main Authors: | Lindia Tjuatja, Valerie Chen, Tongshuang Wu, Ameet Talwalkar, Graham Neubig |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | The MIT Press, 2024-09-01 |
| Series: | Transactions of the Association for Computational Linguistics |
| Online Access: | http://dx.doi.org/10.1162/tacl_a_00685 |
Similar Items
- Detecting Human Bias in Emergency Triage Using LLMs
  by: Marta Avalos, et al.
  Published: (2024-05-01)
- What social stratifications in bias blind spot can tell us about implicit social bias in both LLMs and humans
  by: Sarah V. Bentley, et al.
  Published: (2025-08-01)
- Designing Social Robots with LLMs for Engaging Human Interaction
  by: Maria Pinto-Bernal, et al.
  Published: (2025-06-01)
- A framework for evaluating cultural bias and historical misconceptions in LLMs outputs
  by: Moon-Kuen Mak, et al.
  Published: (2025-09-01)
- Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation
  by: Patrick Fernandes, et al.
  Published: (2023-12-01)