Assessing the quality of automatic-generated short answers using GPT-4
Open-ended assessments play a pivotal role in enabling instructors to evaluate student knowledge acquisition and provide constructive feedback. Integrating large language models (LLMs) such as GPT-4 into educational settings presents a transformative opportunity for assessment methodologies. However,...
| Main Authors: | Luiz Rodrigues, Filipe Dwan Pereira, Luciano Cabral, Dragan Gašević, Geber Ramalho, Rafael Ferreira Mello |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2024-12-01 |
| Series: | Computers and Education: Artificial Intelligence |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S2666920X24000511 |
Similar Items
- Is GPT-4 fair? An empirical analysis in automatic short answer grading
  by: Luiz Rodrigues, et al.
  Published: (2025-06-01)
- Cross-Encoder-Based Semantic Evaluation of Extractive and Generative Question Answering in Low-Resourced African Languages
  by: Funebi Francis Ijebu, et al.
  Published: (2025-03-01)
- Beyond Scores: A Modular RAG-Based System for Automatic Short Answer Scoring With Feedback
  by: Menna Fateen, et al.
  Published: (2024-01-01)
- A Region-based Approach to the Automated Marking of Short Textual Answers
  by: Raheel Siddiqi
  Published: (2011-12-01)
- Automatic question-answering modeling in English by integrating TF-IDF and segmentation algorithms
  by: Hainan Wang
  Published: (2024-12-01)