Enhancing LoRA Model Serving Capacity via Adaptive Operator Scheduling for Multi-Tenancy on GPU
Low-Rank Adaptation (LoRA) has garnered increasing attention as a way to fine-tune large language models (LLMs) effectively with limited resources. Nonetheless, conventional approaches that serve multiple LoRA models independently lead to redundant computation and suboptimal GPU utilization. This study...
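The redundancy the abstract points to can be made concrete: when each request in a batch carries its own LoRA adapter, the expensive base-weight GEMM can be computed once and shared across all tenants, while only the cheap low-rank update is gathered per request. The PyTorch sketch below illustrates this general multi-tenant batching idea (used by serving systems such as Punica and S-LoRA); it is a minimal illustration under assumed shapes, not the adaptive operator scheduling method proposed in the article, and all names (`base_w`, `lora_a`, `adapter_ids`, etc.) are hypothetical.

```python
import torch

# Hypothetical shapes: a batch of requests, each routed to one of several adapters.
hidden, rank, n_adapters, batch = 512, 8, 4, 6

base_w = torch.randn(hidden, hidden)            # shared base weight W
lora_a = torch.randn(n_adapters, hidden, rank)  # per-adapter A matrices
lora_b = torch.randn(n_adapters, rank, hidden)  # per-adapter B matrices

x = torch.randn(batch, hidden)                  # one token per request
adapter_ids = torch.tensor([0, 1, 1, 2, 3, 0])  # adapter assigned to each request

# Naive multi-tenant serving: process each adapter group separately,
# re-running the full base GEMM x @ W for every group.
naive = torch.empty_like(x)
for a in adapter_ids.unique():
    mask = adapter_ids == a
    naive[mask] = x[mask] @ base_w + (x[mask] @ lora_a[a]) @ lora_b[a]

# Batched serving: the costly base GEMM is shared across all tenants;
# only the rank-8 update is gathered and applied per request.
a_gathered = lora_a[adapter_ids]                # (batch, hidden, rank)
b_gathered = lora_b[adapter_ids]                # (batch, rank, hidden)
delta = torch.bmm(torch.bmm(x.unsqueeze(1), a_gathered), b_gathered).squeeze(1)
batched = x @ base_w + delta

assert torch.allclose(naive, batched, rtol=1e-4, atol=1e-4)
```

Because the low-rank update costs O(hidden × rank) per token versus O(hidden²) for the base GEMM, sharing the base computation is where the capacity gain comes from; the article's contribution, adaptively scheduling these operators on the GPU, builds on top of this batching structure.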
| Main Authors: | Lingnan Xia, Hua Ma |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/10721583/ |
Similar Items
- Enhancing Diffusion-Based Music Generation Performance with LoRA
  by: Seonpyo Kim, et al.
  Published: (2025-08-01)
- LoRA fine-tuning of Llama3 large model for intelligent fishery field
  by: Yao Song, et al.
  Published: (2025-07-01)
- Investigating translation for Indic languages with BLOOMZ-3b through prompting and LoRA fine-tuning
  by: Aarathi Rajagopalan Nair, et al.
  Published: (2024-10-01)
- Toward Low-Resource Languages Machine Translation: A Language-Specific Fine-Tuning With LoRA for Specialized Large Language Models
  by: Xiao Liang, et al.
  Published: (2025-01-01)
- Specific features of succession of a share in the right to joint tenancy
  by: O. Ye. Kukhariev
  Published: (2024-12-01)