The fine art of fine-tuning: A structured review of advanced LLM fine-tuning techniques
Transformer-based models have consistently achieved higher accuracy than traditional models across a range of downstream tasks. However, because of their scale, training or fine-tuning them for specific tasks imposes heavy computational and memory demands. This causes the creatio...
| Main Authors: | Samar Pratap, Alston Richard Aranha, Divyanshu Kumar, Gautam Malhotra, Anantharaman Palacode Narayana Iyer, Shylaja S.S. |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2025-06-01 |
| Series: | Natural Language Processing Journal |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S2949719125000202 |
Similar Items
- A Comprehensive Overview and Analysis of Large Language Models: Trends and Challenges
  by: Ammar Mohammed, et al.
  Published: (2025-01-01)
- LoRA fine-tuning of Llama3 large model for intelligent fishery field
  by: Yao Song, et al.
  Published: (2025-07-01)
- Fine-Tuning BiomedBERT with LoRA and Pseudo-Labeling for Accurate Drug–Drug Interactions Classification
  by: Ioan-Flaviu Gheorghita, et al.
  Published: (2025-08-01)
- GLR: Graph Chain-of-Thought with LoRA Fine-Tuning and Confidence Ranking for Knowledge Graph Completion
  by: Yifei Chen, et al.
  Published: (2025-06-01)
- Enhancing Diffusion-Based Music Generation Performance with LoRA
  by: Seonpyo Kim, et al.
  Published: (2025-08-01)