Activation-Guided Low-Rank Parameter Adaptation for Efficient Model Fine-Tuning
Fine-tuning large language models is computationally expensive, and while existing parameter-efficient methods like Low-Rank Adaptation (LoRA) reduce computational costs, they are limited by suboptimal initialization strategies. We introduce Activation-Guided LoRA (AG-LoRA), a novel approach that in...
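The abstract describes AG-LoRA as improving on LoRA's initialization; the article is the source for those details. As background, a minimal sketch of the standard low-rank update LoRA applies (plain random/zero initialization, which is what activation-guided methods aim to improve; all names and shapes here are illustrative assumptions, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 8, 8, 2            # layer dimensions and low-rank bottleneck
W = rng.normal(size=(d_out, d_in))  # frozen pretrained weight

# Standard LoRA initialization: A small random, B zero, so the adapted
# layer starts out identical to the pretrained one.
A = rng.normal(scale=0.01, size=(r, d_in))
B = np.zeros((d_out, r))

def adapted_forward(x):
    # y = (W + B A) x ; only A and B would be trained.
    return (W + B @ A) @ x

x = rng.normal(size=d_in)
assert np.allclose(adapted_forward(x), W @ x)  # identity at initialization
print("trainable params:", A.size + B.size, "vs full:", W.size)
```

Only the `r * (d_in + d_out)` adapter parameters are trained, which is the source of LoRA's cost savings; AG-LoRA replaces the random/zero starting point above with an activation-guided one.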
| Main Authors: | Qingchen Wang, Shengyu Shen |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10852296/ |
Similar Items
- Leveraging Low-Rank Adaptation for Parameter-Efficient Fine-Tuning in Multi-Speaker Adaptive Text-to-Speech Synthesis
  by: Changi Hong, et al. Published: (2024-01-01)
- Cross-domain subcortical brain structure segmentation algorithm based on low-rank adaptation fine-tuning SAM
  by: Yuan Sui, et al. Published: (2025-07-01)
- Deepfake Detection Method Integrating Multiple Parameter-Efficient Fine-Tuning Techniques
  by: ZHANG Yiwen, CAI Manchun, CHEN Yonghao, ZHU Yi, YAO Lifeng. Published: (2024-12-01)
- Low-Rank Adaptation of Pre-Trained Large Vision Models for Improved Lung Nodule Malignancy Classification
  by: Benjamin P. Veasey, et al. Published: (2025-01-01)
- A new low-rank adaptation method for brain structure and metastasis segmentation via decoupled principal weight direction and magnitude
  by: Hancan Zhu, et al. Published: (2025-07-01)