Toward Generating Quality Test Questions and Answers Using Quantized Low-Rank Adapters in LLMs
| Main Authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/11005578/ |
| Summary: | Traditional approaches to question-and-answer generation are resource-intensive, motivating automated alternatives. To this end, we propose fine-tuning strategies based on Quantized Low-Rank Adaptation (QLoRA), using a carefully curated domain-specific dataset. As a case study, we focus on the Korea College Scholastic Ability Test (KCSAT), introducing the KCSAT-ENG dataset, which comprises questions and answers from real and mock tests. We fine-tuned the LLaMA-3-8B-Instruct model to generate questions and answers for 22 distinct task types. A key innovation of our work is the deployment of separate models for question generation and answer generation, with a cross-verification process to enhance accuracy. To evaluate the QLoRA technique, we conducted extensive experiments varying the quantization, rank, and alpha values. The results identified optimal configurations: the question generation model performed best with rank = 32 and α = 8 without quantization, while the answer generation model achieved optimal results with rank = 64 and α = 16. Compared with a non-fine-tuned LLaMA-3-8B-Instruct model, our question generation model demonstrated a 41.5% improvement, and the answer generation model achieved a 16.1% improvement. These findings underscore the potential of QLoRA-based fine-tuning in creating accurate, cost-effective, and scalable automated educational tools. |
| ISSN: | 2169-3536 |
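The rank and α values quoted in the summary are the standard LoRA hyperparameters: a frozen weight matrix W is adapted as W' = W + (α/r)·B·A, where A is r × d_in and B is d_out × r, so only r·(d_in + d_out) parameters are trained. The sketch below is illustrative only (not the authors' code) and assumes a hidden size of 4096 for LLaMA-3-8B's square attention projections; it shows what the paper's two reported configurations imply for the update scale and the trainable-parameter budget.

```python
# Illustrative sketch of the low-rank update behind LoRA/QLoRA:
#   W' = W + (alpha / r) * B @ A
# Only the factors A (r x d_in) and B (d_out x r) are trained.

def lora_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters added by a rank-r adapter on a d_out x d_in layer."""
    return r * (d_in + d_out)

def lora_scale(alpha: int, r: int) -> float:
    """Scaling factor alpha/r applied to the low-rank update."""
    return alpha / r

# Configurations reported in the abstract:
configs = {
    "question generation": {"r": 32, "alpha": 8},
    "answer generation":   {"r": 64, "alpha": 16},
}

d = 4096  # assumed hidden size of a square LLaMA-3-8B projection
full = d * d
for name, cfg in configs.items():
    added = lora_params(d, d, cfg["r"])
    print(f"{name}: scale={lora_scale(cfg['alpha'], cfg['r'])}, "
          f"adapter params={added} ({100 * added / full:.2f}% of the full matrix)")
```

Note that both reported settings keep the same α/r ratio of 0.25, so they differ only in adapter capacity (rank), not in how strongly the update is scaled.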