Non‐Autoregressive Translation Algorithm Based on LLM Knowledge Distillation in English Corpus
ABSTRACT Although large‐scale language models have significantly improved machine translation quality, their high computational costs and resource consumption have hindered their widespread adoption in practical applications. This research therefore introduces an English corpus‐based machine translation algorithm that leverages knowledge distillation from a large language model, with the goal of enhancing translation quality while reducing the model's computational demands. We first conducted a thorough analysis of the English corpus to identify prevalent language patterns and structures. We then developed a knowledge distillation approach that transfers the translation expertise of a large teacher model to a smaller student model, improving both translation accuracy and efficiency, and designed a dynamic temperature hyperparameter distillation strategy that further enhances translation precision. In the experimental phase, we trained and evaluated the algorithm on several standard English corpora. The findings indicate that, compared with current machine translation systems, our method significantly reduces computational resource requirements while preserving translation quality.
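The abstract mentions a teacher–student knowledge distillation objective with a dynamic temperature hyperparameter but gives no implementation details. The following PyTorch sketch illustrates one plausible reading: token-level distillation whose temperature is linearly annealed over training. The linear schedule (`dynamic_temperature`), the `alpha` loss weighting, and all function names here are illustrative assumptions, not the authors' published method.

```python
# Hypothetical sketch of knowledge distillation with a dynamic temperature.
# The linear annealing schedule and the alpha weighting are illustrative
# assumptions; the paper's actual strategy is not specified in this record.
import torch
import torch.nn.functional as F


def dynamic_temperature(step: int, total_steps: int,
                        t_start: float = 4.0, t_end: float = 1.0) -> float:
    """Linearly anneal the distillation temperature as training progresses."""
    frac = min(step / max(total_steps, 1), 1.0)
    return t_start + frac * (t_end - t_start)


def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      step: int, total_steps: int,
                      alpha: float = 0.5) -> torch.Tensor:
    """Combine soft-target KL loss with hard-label cross-entropy.

    student_logits, teacher_logits: (batch * seq_len, vocab)
    labels: (batch * seq_len,) gold token ids
    """
    t = dynamic_temperature(step, total_steps)
    soft_student = F.log_softmax(student_logits / t, dim=-1)
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    # Scaling by T^2 keeps the soft-loss gradients on the same
    # magnitude as the hard cross-entropy term.
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (t * t)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```

Annealing from a high temperature to a low one would let the student first match the teacher's full output distribution and then gradually sharpen toward the hard targets; whether the paper uses this particular schedule is an open assumption.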
Main Authors: | Fang Ju, Weihui Wang |
---|---|
Format: | Article |
Language: | English |
Published: | Wiley, 2025-01-01 |
Series: | Engineering Reports |
Subjects: | English corpus; knowledge distillation; large language model; machine translation |
Online Access: | https://doi.org/10.1002/eng2.13077 |
author | Fang Ju; Weihui Wang |
---|---|
author_sort | Fang Ju |
collection | DOAJ |
format | Article |
id | doaj-art-60b90cba29454a9eafaf25829f5e8730 |
institution | Kabale University |
issn | 2577-8196 |
language | English |
publishDate | 2025-01-01 |
publisher | Wiley |
record_format | Article |
series | Engineering Reports |
affiliations | Fang Ju: Department of Foreign Languages, Jinzhong University, Jinzhong, China; Weihui Wang: School of Computer Science, Sichuan University Jinjiang College, Meishan, China |
title | Non‐Autoregressive Translation Algorithm Based on LLM Knowledge Distillation in English Corpus |
topic | English corpus; knowledge distillation; large language model; machine translation |
url | https://doi.org/10.1002/eng2.13077 |