Is GPT-4 fair? An empirical analysis in automatic short answer grading
Short open-ended questions represent a central resource in formative and summative assessments, in both face-to-face and online settings, ranging from elementary to higher education. However, grading these questions remains challenging for instructors, raising attention to the field of Automatic Short A...
| Main Authors: | Luiz Rodrigues, Cleon Xavier, Newarney Costa, Dragan Gasevic, Rafael Ferreira Mello |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2025-06-01 |
| Series: | Computers and Education: Artificial Intelligence |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S2666920X25000682 |
Similar Items
- Assessing the quality of automatic-generated short answers using GPT-4
  by: Luiz Rodrigues, et al.
  Published: (2024-12-01)
- Beyond Scores: A Modular RAG-Based System for Automatic Short Answer Scoring With Feedback
  by: Menna Fateen, et al.
  Published: (2024-01-01)
- Japanese Short Answer Grading for Japanese Language Learners Using the Contextual Representation of BERT
  by: Dyah Lalita Luhurkinanti, et al.
  Published: (2025-01-01)
- GPT-4 generated answer rationales to multiple choice assessment questions in undergraduate medical education
  by: Peter Y. Ch’en, et al.
  Published: (2025-03-01)
- Automatic question-answering modeling in English by integrating TF-IDF and segmentation algorithms
  by: Hainan Wang
  Published: (2024-12-01)