Employing large language models to enhance K-12 students’ programming debugging skills, computational thinking, and self-efficacy

The introduction of programming education in K-12 schools to promote computational thinking has attracted a great deal of attention from scholars and educators. Debugging code is a central skill for students, but is also a considerable challenge when learning to program. Learners at the K-12 level often lack confidence in programming debugging due to a lack of effective learning feedback and programming fundamentals (e.g., correct syntax usage). With the development of technology, large language models (LLMs) provide new opportunities for novice programming debugging training. We proposed a method for incorporating an LLM into programming debugging training, and 80 K-12 students were selected to participate in a two-group quasi-experiment to test its effectiveness. The results showed that through dialogic interaction with the model, students were able to solve programming problems more effectively and improve their ability to solve problems in real-world applications. Importantly, this dialogic interaction increased students' confidence in their programming abilities, thus allowing them to maintain motivation for programming learning.

Bibliographic Details
Main Authors: Shu-Jie Chen, Xiaofen Shan, Ze-Min Liu, Chuang-Qi Chen
Format: Article
Language: English
Published: International Forum of Educational Technology & Society, 2025-04-01
Series: Educational Technology & Society
Subjects: large language models; generative artificial intelligence; debugging skills; computational thinking; self-efficacy; programming education
Online Access: https://www.j-ets.net/collection/published-issues/28_2#h.njwqi1ffqtu2
ISSN: 1176-3647, 1436-4522
DOI: https://doi.org/10.30191/ETS.202504_28(2).TP01
Citation: Educational Technology & Society, 28(2), 259-278