Inter-Annotator Agreement and Its Reflection in LLMs and Responsible AI.
Research on Responsible AI, particularly on addressing algorithmic bias, has recently gained significant attention. Natural Language Processing (NLP) algorithms, which rely on human-generated and human-labeled data, often reflect these challenges. In this paper, we analyze inter-annotator agreement...
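The abstract centers on inter-annotator agreement. A standard chance-corrected measure for two annotators is Cohen's kappa; the sketch below is a minimal stdlib-only illustration of that metric (not the paper's own method), and the label sequences are hypothetical data made up for the example:

```python
from collections import Counter

def cohen_kappa(ann_a, ann_b):
    """Cohen's kappa: agreement between two annotators, corrected for chance."""
    n = len(ann_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    observed = sum(x == y for x, y in zip(ann_a, ann_b)) / n
    # Expected chance agreement from each annotator's label marginals.
    ca, cb = Counter(ann_a), Counter(ann_b)
    expected = sum((ca[l] / n) * (cb[l] / n) for l in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)

# Hypothetical binary labels from two annotators (illustrative only).
a = [1, 1, 0, 1, 0, 1, 1, 0]
b = [1, 0, 0, 1, 0, 1, 1, 1]
print(round(cohen_kappa(a, b), 4))  # → 0.4667 (moderate agreement)
```

A kappa of 0 means agreement no better than chance and 1 means perfect agreement, which is why studies like this one report it instead of raw percent agreement.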
| Main Authors: | Amir Toliyat, Elena Filatova, Ronak Etemadpour |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | LibraryPress@UF, 2025-05-01 |
| Series: | Proceedings of the International Florida Artificial Intelligence Research Society Conference |
| Online Access: | https://journals.flvc.org/FLAIRS/article/view/139049 |
Similar Items
- The Unified and Holistic Method Gamma (γ) for Inter-Annotator Agreement Measure and Alignment
  by: Yann Mathet, et al.
  Published: (2021-03-01)
- What Determines Inter-Coder Agreement in Manual Annotations? A Meta-Analytic Investigation
  by: Petra Saskia Bayerl, et al.
  Published: (2021-03-01)
- LLMs in Action: Robust Metrics for Evaluating Automated Ontology Annotation Systems
  by: Ali Noori, et al.
  Published: (2025-03-01)
- From Annotator Agreement to Noise Models
  by: Beata Beigman Klebanov, et al.
  Published: (2021-03-01)
- AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap
  by: Q. Vera Liao, et al.
  Published: (2024-05-01)