Activation-Guided Low-Rank Parameter Adaptation for Efficient Model Fine-Tuning
Fine-tuning large language models is computationally expensive, and while existing parameter-efficient methods like Low-Rank Adaptation (LoRA) reduce computational costs, they are limited by suboptimal initialization strategies. We introduce Activation-Guided LoRA (AG-LoRA), a novel approach that initializes LoRA modules using Singular Value Decomposition (SVD) guided by activation patterns. Our method employs pre-trained weights combined with activation-based weighting factors and implements a new global rank assignment strategy that accounts for activation outliers. Experimental evaluations on LLaMA and CLIP models show that AG-LoRA achieves superior performance while reducing GPU memory usage compared to existing methods. In tests with LLaMA 7B models, AG-LoRA reached 75.9% accuracy across various tasks, surpassing both LoRA and DoRA baselines. AG-LoRA demonstrates significant improvements in parameter-efficient fine-tuning of large language models, offering enhanced performance and reduced computational requirements. These advances make it a promising solution for efficient model adaptation across diverse applications.
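The record gives no implementation details beyond the abstract, so the following is a minimal sketch of what SVD-based LoRA initialization weighted by activation statistics could look like in PyTorch. The function name, the per-channel weighting by mean absolute activation, and the residual-split bookkeeping are illustrative assumptions, not the published AG-LoRA algorithm.

```python
import torch

def activation_guided_lora_init(W: torch.Tensor, X: torch.Tensor, r: int):
    """Sketch of activation-weighted SVD initialization for LoRA.

    W: (d_out, d_in) pre-trained weight of a linear layer.
    X: (n, d_in) calibration activations (inputs to that layer).
    r: LoRA rank.
    Returns a residual base weight plus LoRA factors A (r, d_in), B (d_out, r).
    """
    # Per-input-channel importance from activation magnitudes; outlier
    # channels receive larger weights (an assumed heuristic, not the
    # paper's exact weighting).
    s = X.abs().mean(dim=0)
    s = s / s.mean()  # normalize so the overall scale of W is preserved

    # Factor the activation-weighted weight matrix.
    U, S, Vh = torch.linalg.svd(W * s, full_matrices=False)

    # Top-r components become the LoRA factors; the column weighting is
    # undone on the V side so that B @ A lives in the original space.
    B = U[:, :r] * S[:r].sqrt()             # (d_out, r)
    A = S[:r].sqrt()[:, None] * Vh[:r] / s  # (r, d_in)

    # Keep the base weight as a residual so the initial forward pass is
    # identical to the pre-trained layer: W x = W_res x + B (A x).
    W_res = W - B @ A
    return W_res, A, B

# Toy usage: shapes and data are arbitrary.
W = torch.randn(512, 768)
X = torch.randn(1024, 768)
W_res, A, B = activation_guided_lora_init(W, X, r=16)
assert torch.allclose(W_res + B @ A, W, atol=1e-3)
```

The abstract's global rank assignment could then, for instance, allocate per-layer ranks in proportion to each layer's weighted singular-value energy, but the record does not specify the actual rule.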
| Main Authors: | Qingchen Wang, Shengyu Shen |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | Deep learning; low-rank adaptation; parameter-efficient fine-tuning |
| Online Access: | https://ieeexplore.ieee.org/document/10852296/ |
| _version_ | 1850141358464958464 |
|---|---|
| author | Qingchen Wang; Shengyu Shen |
| author_facet | Qingchen Wang; Shengyu Shen |
| author_sort | Qingchen Wang |
| collection | DOAJ |
| description | Fine-tuning large language models is computationally expensive, and while existing parameter-efficient methods like Low-Rank Adaptation (LoRA) reduce computational costs, they are limited by suboptimal initialization strategies. We introduce Activation-Guided LoRA (AG-LoRA), a novel approach that initializes LoRA modules using Singular Value Decomposition (SVD) guided by activation patterns. Our method employs pre-trained weights combined with activation-based weighting factors and implements a new global rank assignment strategy that accounts for activation outliers. Experimental evaluations on LLaMA and CLIP models show that AG-LoRA achieves superior performance while reducing GPU memory usage compared to existing methods. In tests with LLaMA 7B models, AG-LoRA reached 75.9% accuracy across various tasks, surpassing both LoRA and DoRA baselines. AG-LoRA demonstrates significant improvements in parameter-efficient fine-tuning of large language models, offering enhanced performance and reduced computational requirements. These advances make it a promising solution for efficient model adaptation across diverse applications. |
| format | Article |
| id | doaj-art-e5927d68407e4329b0854c0fac2bf388 |
| institution | OA Journals |
| issn | 2169-3536 |
| language | English |
| publishDate | 2025-01-01 |
| publisher | IEEE |
| record_format | Article |
| series | IEEE Access |
| spelling | doaj-art-e5927d68407e4329b0854c0fac2bf388; 2025-08-20T02:29:27Z; eng; IEEE; IEEE Access; 2169-3536; 2025-01-01; vol. 13, pp. 70909-70918; doi:10.1109/ACCESS.2025.3533701; article 10852296; Activation-Guided Low-Rank Parameter Adaptation for Efficient Model Fine-Tuning; Qingchen Wang (https://orcid.org/0009-0002-9175-1944); Shengyu Shen (https://orcid.org/0009-0009-8079-5176); both authors: Zijin Research and Innovation Center, Nanjing Yunwen Network Technology Company Ltd., Nanjing, Jiangsu, China; abstract as in the description field above; https://ieeexplore.ieee.org/document/10852296/; Deep learning; low-rank adaptation; parameter-efficient fine-tuning |
| spellingShingle | Qingchen Wang; Shengyu Shen; Activation-Guided Low-Rank Parameter Adaptation for Efficient Model Fine-Tuning; IEEE Access; Deep learning; low-rank adaptation; parameter-efficient fine-tuning |
| title | Activation-Guided Low-Rank Parameter Adaptation for Efficient Model Fine-Tuning |
| title_full | Activation-Guided Low-Rank Parameter Adaptation for Efficient Model Fine-Tuning |
| title_fullStr | Activation-Guided Low-Rank Parameter Adaptation for Efficient Model Fine-Tuning |
| title_full_unstemmed | Activation-Guided Low-Rank Parameter Adaptation for Efficient Model Fine-Tuning |
| title_short | Activation-Guided Low-Rank Parameter Adaptation for Efficient Model Fine-Tuning |
| title_sort | activation guided low rank parameter adaptation for efficient model fine tuning |
| topic | Deep learning; low-rank adaptation; parameter-efficient fine-tuning |
| url | https://ieeexplore.ieee.org/document/10852296/ |
| work_keys_str_mv | AT qingchenwang activationguidedlowrankparameteradaptationforefficientmodelfinetuning AT shengyushen activationguidedlowrankparameteradaptationforefficientmodelfinetuning |