Toward Low-Resource Languages Machine Translation: A Language-Specific Fine-Tuning With LoRA for Specialized Large Language Models
In the field of computational linguistics, addressing machine translation (MT) challenges for low-resource languages remains crucial, as these languages often lack extensive data compared to high-resource languages. General large language models (LLMs), such as GPT-4 and Llama, primarily trained on...
| Main Authors: | Xiao Liang, Yen-Min Jasmina Khaw, Soung-Yue Liew, Tien-Ping Tan, Donghong Qin |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/10918960/ |
Similar Items
- LoRA fine-tuning of Llama3 large model for intelligent fishery field
  by: Yao Song, et al.
  Published: (2025-07-01)
- GLR: Graph Chain-of-Thought with LoRA Fine-Tuning and Confidence Ranking for Knowledge Graph Completion
  by: Yifei Chen, et al.
  Published: (2025-06-01)
- A Comprehensive Overview and Analysis of Large Language Models: Trends and Challenges
  by: Ammar Mohammed, et al.
  Published: (2025-01-01)
- LoRaBB: An Algorithm for Parameter Selection in LoRa-Based Communication for the Amazon Rainforest
  by: Diogo Soares Moreira, et al.
  Published: (2025-02-01)
- LoRA Fusion: Enhancing Image Generation
  by: Dooho Choi, et al.
  Published: (2024-11-01)