Promises and breakages of automated grading systems: a qualitative study in computer science education
| Main Authors: | , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Taylor & Francis Group, 2025-02-01 |
| Series: | Education Inquiry |
| Subjects: | |
| Online Access: | https://www.tandfonline.com/doi/10.1080/20004508.2025.2464996 |
| Summary: | Automated grading systems (AGSs) have gained attention for their potential to streamline assessment in higher education. However, their integration into university assessment practice poses challenges, particularly for teachers in computer science seeking to balance their workload while ensuring an adequate and fair assessment of students’ programming skills and knowledge. The present study focuses on individuals with expertise in developing, using, and researching AGSs in higher education, whom we refer to as “AGS experts”. Through semi-structured interviews, we examine how the AGSs they engage with impact their work and assessment practices in computer science education. Drawing on the concept of breakages, we argue that while AGS experts invest time and effort in developing these systems, enticed by the promises of more efficient workload management and improved assessment practices, actual use may introduce tensions that lead to breakages disrupting assessment practices. Our findings illustrate the complexities and the potential impact that deploying AGSs brings to assessment practices within a public university setting, and we discuss the implications for future research. |
| ISSN: | 2000-4508 |