Japanese Short Answer Grading for Japanese Language Learners Using the Contextual Representation of BERT
The automatization of grading short answers in examinations aims to help teachers grade more efficiently and fairly. The Japanese SIMPLE-O attempts to grade Japanese language learners’ short answers using a dataset from a real examination. Bidirectional encoder representations from transf...
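As a rough illustration of the approach the abstract describes (a frozen BERT encoder used as a sentence embedder, with grading driven by the similarity between a student answer and a reference answer), here is a minimal Python sketch. The checkpoint name `cl-tohoku/bert-base-japanese`, the mean pooling, and the use of cosine similarity are illustrative assumptions, not details taken from the record; the paper's exact pipeline (including the hiragana-kanji conversion step) is not reproduced here.

```python
# Minimal sketch (not the authors' code): score a student answer against a
# reference answer with frozen BERT embeddings and cosine similarity.
# Assumed for illustration: the Tohoku Japanese BERT checkpoint id, mean
# pooling over tokens, and cosine similarity as the grading signal.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "cl-tohoku/bert-base-japanese"  # assumed checkpoint id;
                                             # its tokenizer needs fugashi + ipadic installed

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()  # no fine-tuning: the encoder is used as a fixed embedder

def embed(sentence: str) -> torch.Tensor:
    """Mean-pool the last hidden states into one sentence vector."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden)
    mask = inputs["attention_mask"].unsqueeze(-1)   # ignore padding tokens
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

def similarity_score(student_answer: str, reference_answer: str) -> float:
    """Cosine similarity between the two answer embeddings."""
    s, r = embed(student_answer), embed(reference_answer)
    return torch.nn.functional.cosine_similarity(s, r).item()

if __name__ == "__main__":
    print(f"similarity = {similarity_score('学生の解答文', '模範解答文'):.3f}")
```

In the study, such similarity scores are compared with human grades (Pearson's correlation) over the full answer set rather than for a single pair. A companion sketch for the binary-classification experiment appears after the record fields below.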
Main Authors: | Dyah Lalita Luhurkinanti, Prima Dewi Purnamasari, Takashi Tsunakawa, Anak Agung Putri Ratna |
Format: | Article |
Language: | English |
Published: | IEEE, 2025-01-01 |
Series: | IEEE Access |
Subjects: | Automated short answer grading; BERT; SBERT; deep learning; contextual embeddings |
Online Access: | https://ieeexplore.ieee.org/document/10849551/ |
_version_ | 1832576790225223680 |
author | Dyah Lalita Luhurkinanti; Prima Dewi Purnamasari; Takashi Tsunakawa; Anak Agung Putri Ratna |
author_facet | Dyah Lalita Luhurkinanti; Prima Dewi Purnamasari; Takashi Tsunakawa; Anak Agung Putri Ratna |
author_sort | Dyah Lalita Luhurkinanti |
collection | DOAJ |
description | The automatization of grading short answers in examinations aims to help teachers grade more efficiently and fairly. The Japanese SIMPLE-O attempts to grade Japanese language learners’ short answers using a dataset from a real examination. Bidirectional encoder representations from transformers (BERT), which has shown potential for natural language processing (NLP) tasks, is implemented to grade answers without fine-tuning due to the small amount of data. Two experiments are conducted in this study. The first experiment attempts to grade based on similarities, while the second classifies the answers as either correct or incorrect. Five BERT models are tested in the system, and two additional sentence BERT (SBERT) and RoBERTa models are tested for the similarity problem. The best Pearson’s correlation for grading with similarities is obtained with the Tohoku BERT Base. The use of hiragana-kanji conversion improves the correlation to 0.615 for BERT and 0.593 for SBERT but does not show much improvement for RoBERTa. In the binary classification experiments, all models have an accuracy above 90%, with Tohoku BERT Large having the best performance. Even without fine-tuning, BERT can be used as an embedding method to perform binary classification with high accuracy. |
format | Article |
id | doaj-art-df3e04cf39464a0da6bb2202e1b888ef |
institution | Kabale University |
issn | 2169-3536 |
language | English |
publishDate | 2025-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj-art-df3e04cf39464a0da6bb2202e1b888ef; 2025-01-31T00:02:00Z; eng; IEEE; IEEE Access; ISSN 2169-3536; 2025-01-01; vol. 13, pp. 17195-17207; DOI 10.1109/ACCESS.2025.3532659; article no. 10849551; Japanese Short Answer Grading for Japanese Language Learners Using the Contextual Representation of BERT; Dyah Lalita Luhurkinanti (https://orcid.org/0000-0001-5669-6914), Department of Electrical Engineering, Faculty of Engineering, Universitas Indonesia, Depok, Indonesia; Prima Dewi Purnamasari (https://orcid.org/0000-0002-5851-1984), Department of Electrical Engineering, Faculty of Engineering, Universitas Indonesia, Depok, Indonesia; Takashi Tsunakawa (https://orcid.org/0000-0002-3880-6099), Faculty of Informatics, Shizuoka University, Shizuoka, Japan; Anak Agung Putri Ratna, Department of Electrical Engineering, Faculty of Engineering, Universitas Indonesia, Depok, Indonesia; The automatization of grading short answers in examinations aims to help teachers grade more efficiently and fairly. The Japanese SIMPLE-O attempts to grade Japanese language learners’ short answers using a dataset from a real examination. Bidirectional encoder representations from transformers (BERT), which has shown potential for natural language processing (NLP) tasks, is implemented to grade answers without fine-tuning due to the small amount of data. Two experiments are conducted in this study. The first experiment attempts to grade based on similarities, while the second classifies the answers as either correct or incorrect. Five BERT models are tested in the system, and two additional sentence BERT (SBERT) and RoBERTa models are tested for the similarity problem. The best Pearson’s correlation for grading with similarities is obtained with the Tohoku BERT Base. The use of hiragana-kanji conversion improves the correlation to 0.615 for BERT and 0.593 for SBERT but does not show much improvement for RoBERTa. In the binary classification experiments, all models have an accuracy above 90%, with Tohoku BERT Large having the best performance. Even without fine-tuning, BERT can be used as an embedding method to perform binary classification with high accuracy.; https://ieeexplore.ieee.org/document/10849551/; Automated short answer grading; BERT; SBERT; deep learning; contextual embeddings |
spellingShingle | Dyah Lalita Luhurkinanti; Prima Dewi Purnamasari; Takashi Tsunakawa; Anak Agung Putri Ratna; Japanese Short Answer Grading for Japanese Language Learners Using the Contextual Representation of BERT; IEEE Access; Automated short answer grading; BERT; SBERT; deep learning; contextual embeddings |
title | Japanese Short Answer Grading for Japanese Language Learners Using the Contextual Representation of BERT |
title_full | Japanese Short Answer Grading for Japanese Language Learners Using the Contextual Representation of BERT |
title_fullStr | Japanese Short Answer Grading for Japanese Language Learners Using the Contextual Representation of BERT |
title_full_unstemmed | Japanese Short Answer Grading for Japanese Language Learners Using the Contextual Representation of BERT |
title_short | Japanese Short Answer Grading for Japanese Language Learners Using the Contextual Representation of BERT |
title_sort | japanese short answer grading for japanese language learners using the contextual representation of bert |
topic | Automated short answer grading; BERT; SBERT; deep learning; contextual embeddings |
url | https://ieeexplore.ieee.org/document/10849551/ |
work_keys_str_mv | AT dyahlalitaluhurkinanti japaneseshortanswergradingforjapaneselanguagelearnersusingthecontextualrepresentationofbert AT primadewipurnamasari japaneseshortanswergradingforjapaneselanguagelearnersusingthecontextualrepresentationofbert AT takashitsunakawa japaneseshortanswergradingforjapaneselanguagelearnersusingthecontextualrepresentationofbert AT anakagungputriratna japaneseshortanswergradingforjapaneselanguagelearnersusingthecontextualrepresentationofbert |
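The abstract also reports a second experiment in which the frozen BERT embeddings drive a binary correct/incorrect classification with over 90% accuracy. The sketch below pairs the same kind of mean-pooled embeddings with a scikit-learn logistic-regression classifier; the classifier choice, the checkpoint id, and the toy training pairs are assumptions for illustration, not details taken from the record (the abstract does not name the classifier used).

```python
# Minimal sketch of binary correct/incorrect classification over frozen BERT
# embeddings. Assumed for illustration: checkpoint id, mean pooling, logistic
# regression as the classifier, and the toy labelled answers.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "cl-tohoku/bert-base-japanese"  # assumed checkpoint id; tokenizer needs fugashi + ipadic
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()  # frozen encoder: no fine-tuning

def embed(text: str) -> list[float]:
    """Mean-pooled sentence vector from the frozen encoder."""
    batch = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    return ((hidden * mask).sum(dim=1) / mask.sum(dim=1)).squeeze(0).tolist()

# Hypothetical labelled answers: 1 = correct, 0 = incorrect.
train_answers = [
    ("先生に本を貸していただきました", 1),
    ("先生は本を貸します", 0),
]
X = [embed(answer) for answer, _ in train_answers]
y = [label for _, label in train_answers]

classifier = LogisticRegression(max_iter=1000).fit(X, y)
prediction = classifier.predict([embed("先生に本を貸してもらいました")])
print("correct" if prediction[0] == 1 else "incorrect")
```

The design point the abstract makes is that the encoder itself is never updated: only a lightweight classifier is trained on top of the contextual embeddings, which is why the approach remains usable with a small examination dataset.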