The fine art of fine-tuning: A structured review of advanced LLM fine-tuning techniques


Bibliographic Details
Main Authors: Samar Pratap, Alston Richard Aranha, Divyanshu Kumar, Gautam Malhotra, Anantharaman Palacode Narayana Iyer, Shylaja S.S.
Format: Article
Language: English
Published: Elsevier 2025-06-01
Series: Natural Language Processing Journal
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S2949719125000202
Description
Summary: Transformer-based models have consistently demonstrated superior accuracy compared to traditional models across a range of downstream tasks. However, their sheer size makes training or fine-tuning them for specific tasks computationally and memory intensive, rendering the creation of specialized transformer-based models nearly impossible in commonly encountered resource-constrained settings. To address this issue and make these large models more accessible, a plethora of techniques have been developed. In this study, we review the types of techniques developed, their impact and benefits in terms of performance and resource usage, along with the latest developments in the domain. We broadly categorize these techniques into six key areas: Changes in Training Method, Changes in Adapter, Quantization, Parameter Selection, Mixture of Experts, and Application-based methods. We collate the results of various techniques on common benchmarks and also evaluate their performance on different datasets and base models.
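To give a flavor of the "Changes in Adapter" category the abstract mentions, the sketch below illustrates a LoRA-style low-rank adapter in plain NumPy. All names, shapes, and hyperparameters here are illustrative assumptions, not taken from the article; the point is only to show how a small number of trainable parameters can augment a frozen weight matrix.

```python
import numpy as np

# Illustrative low-rank adapter sketch (assumed shapes, not from the article).
d, r = 512, 8                            # hidden size, adapter rank (r << d)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen pretrained weight (not trained)
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-initialized
                                         # so the adapted model starts identical
                                         # to the pretrained one

def adapted_forward(x, alpha=16):
    # Effective weight is W + (alpha / r) * B @ A; only A and B receive gradients.
    return x @ (W + (alpha / r) * (B @ A)).T

full_params = W.size                     # parameters touched by full fine-tuning
adapter_params = A.size + B.size         # parameters the adapter actually trains
print(adapter_params / full_params)      # trainable fraction vs. full fine-tuning
```

With these assumed shapes the adapter trains roughly 3% of the parameters that full fine-tuning would update, which is the kind of resource saving the reviewed techniques target.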
ISSN: 2949-7191