Activation-Guided Low-Rank Parameter Adaptation for Efficient Model Fine-Tuning

Bibliographic Details
Main Authors: Qingchen Wang, Shengyu Shen
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10852296/
Description
Summary: Fine-tuning large language models is computationally expensive, and while existing parameter-efficient methods such as Low-Rank Adaptation (LoRA) reduce computational costs, they are limited by suboptimal initialization strategies. We introduce Activation-Guided LoRA (AG-LoRA), a novel approach that initializes LoRA modules using Singular Value Decomposition (SVD) guided by activation patterns. Our method combines the pre-trained weights with activation-based weighting factors and implements a new global rank assignment strategy that accounts for activation outliers. Experimental evaluations on LLaMA and CLIP models show that AG-LoRA achieves superior performance while reducing GPU memory usage compared with existing methods. In tests with LLaMA 7B models, AG-LoRA reached 75.9% accuracy across various tasks, surpassing both LoRA and DoRA baselines. AG-LoRA thus delivers significant improvements in parameter-efficient fine-tuning of large language models, combining stronger performance with lower computational cost, and is a promising solution for efficient model adaptation across diverse applications.
ISSN: 2169-3536
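
Illustrative sketch: the summary describes initializing LoRA modules from an SVD of the pre-trained weights guided by activation patterns. The Python snippet below is a minimal sketch of that general idea only, not the authors' exact AG-LoRA algorithm; the per-channel activation-RMS weighting, the rank value, and the names ag_lora_init, act_scale, B, and A are assumptions introduced for illustration, and the paper's precise weighting and global rank assignment are described in the article itself.

# Illustrative sketch (assumption), not AG-LoRA's exact procedure:
# initialize LoRA factors from an activation-weighted SVD of the
# pre-trained weight matrix.
import numpy as np

def ag_lora_init(W: np.ndarray, act_scale: np.ndarray, rank: int):
    """Return LoRA factors (B, A) such that B @ A approximates the
    activation-weighted dominant rank-`rank` subspace of W."""
    # Emphasize input channels that carry large activations.
    S = np.diag(act_scale)                                  # (in, in)
    U, sigma, Vt = np.linalg.svd(W @ S, full_matrices=False)
    U_r, s_r, Vt_r = U[:, :rank], sigma[:rank], Vt[:rank, :]
    # Split the singular values across the two factors and undo the
    # activation scaling on the input-side factor A.
    B = U_r * np.sqrt(s_r)                                  # (out, rank)
    A = (np.sqrt(s_r)[:, None] * Vt_r) / act_scale[None, :] # (rank, in)
    return B, A

# Toy usage with random data standing in for a pre-trained layer and
# calibration activations.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))        # pre-trained weight (out, in)
X = rng.standard_normal((128, 32))       # calibration activations
act_scale = np.sqrt((X ** 2).mean(axis=0)) + 1e-6   # per-channel RMS
B, A = ag_lora_init(W, act_scale, rank=8)
print(B.shape, A.shape)                  # (64, 8) (8, 32)

In this sketch, B @ A recovers the dominant directions of W as seen through the activation statistics, so the adapter starts in the subspace where the layer does most of its work rather than at zero; how the ranks are distributed globally across layers is a separate design question addressed by the article.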