LLM-CDM: A Large Language Model Enhanced Cognitive Diagnosis for Intelligent Education

Bibliographic Details
Main Authors: Xin Chen, Jin Zhang, Tong Zhou, Feng Zhang
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10916617/
Description
Summary: Cognitive diagnosis is a key component of intelligent education that assesses students’ comprehension of specific knowledge concepts. Current methodologies rely predominantly on students’ historical performance records and manually annotated knowledge concepts, so the rich semantic information embedded in exercise texts, including latent knowledge concepts, has not been fully exploited. This paper presents a novel cognitive diagnosis model built on the LLAMA3-70B framework (referred to as LLM-CDM), which combines prompt engineering with the semantic information in exercise texts to uncover latent knowledge concepts and improve diagnostic accuracy. Specifically, exercise texts are first fed to a large language model, and an innovative prompting method is developed to guide the model in mining the implicit knowledge concepts within these texts. After the newly extracted knowledge concepts are integrated into the existing Q matrix, a neural network diagnoses students’ understanding of each knowledge concept, with the monotonicity assumption applied to keep the model factors interpretable. Experimental results on an examination dataset for course-completion assessments demonstrate that LLM-CDM achieves superior performance in both accuracy and explainability.
ISSN: 2169-3536
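
The summary above outlines a two-stage pipeline: an LLM is prompted to mine latent knowledge concepts from exercise texts, the Q matrix is augmented with those concepts, and a neural network then diagnoses concept mastery under a monotonicity assumption. The sketch below is one plausible reading of that pipeline, not the authors' implementation; the function names, prompt wording, network sizes, NCD-style interaction layer, and the use of PyTorch are all illustrative assumptions.

```python
# A minimal sketch (NOT the authors' implementation) of the pipeline the
# summary describes. All names, shapes, and prompt text are assumptions.
import torch
import torch.nn as nn


def build_concept_prompt(exercise_text, known_concepts):
    """Hypothetical prompt asking an LLM (e.g. LLAMA3-70B) to surface latent
    knowledge concepts that the manual annotation missed."""
    return (
        "You are an experienced teacher. For the exercise below, list any "
        "knowledge concepts it requires that are NOT already annotated.\n"
        f"Annotated concepts: {', '.join(known_concepts)}\n"
        f"Exercise: {exercise_text}\n"
        "Answer with a comma-separated list of missing concepts."
    )


def augment_q_matrix(q, exercise_idx, n_new_concepts):
    """Append columns for newly mined concepts and mark the source exercise."""
    extra = torch.zeros(q.shape[0], n_new_concepts)
    extra[exercise_idx] = 1.0
    return torch.cat([q, extra], dim=1)


class DiagnosisNet(nn.Module):
    """NCD-style diagnosis layers; clamping the linear weights to be
    non-negative is one common way to realize the monotonicity assumption
    (higher mastery never lowers the predicted probability of answering
    correctly)."""

    def __init__(self, n_students, n_exercises, n_concepts):
        super().__init__()
        self.student_emb = nn.Embedding(n_students, n_concepts)  # mastery
        self.diff_emb = nn.Embedding(n_exercises, n_concepts)    # difficulty
        self.disc_emb = nn.Embedding(n_exercises, 1)              # discrimination
        self.layers = nn.Sequential(
            nn.Linear(n_concepts, 64), nn.Sigmoid(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, student_ids, exercise_ids, q_rows):
        mastery = torch.sigmoid(self.student_emb(student_ids))
        difficulty = torch.sigmoid(self.diff_emb(exercise_ids))
        disc = torch.sigmoid(self.disc_emb(exercise_ids))
        x = disc * (mastery - difficulty) * q_rows
        return self.layers(x).squeeze(-1)

    def clamp_monotone(self):
        # Call after every optimizer step to keep the interaction monotone.
        for m in self.layers:
            if isinstance(m, nn.Linear):
                m.weight.data.clamp_(min=0.0)


# Toy usage: 4 students, 3 exercises, 2 annotated concepts + 1 mined concept.
q = torch.tensor([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
q = augment_q_matrix(q, exercise_idx=2, n_new_concepts=1)
model = DiagnosisNet(n_students=4, n_exercises=3, n_concepts=q.shape[1])
p_correct = model(torch.tensor([0]), torch.tensor([2]), q[[2]])
```

Clamping the interaction-layer weights to be non-negative after each optimizer step is the usual way NeuralCD-style models enforce monotonicity, which is what makes the learned mastery vector interpretable as a per-concept proficiency estimate.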