Probing the Pitfalls: Understanding SVD’s Shortcomings in Language Model Compression
Background: Modern computational linguistics heavily relies on large language models that demonstrate strong performance in various Natural Language Inference (NLI) tasks. These models, however, require substantial computational resources for both training and deployment. To address this challenge,...
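The abstract gestures at SVD-based compression of language model weights. As context for the record, here is a minimal sketch of the underlying technique: factoring a linear layer's weight matrix with a truncated SVD so that one large matrix is replaced by two small factors. The function name `svd_compress`, the layer shape, and the rank choice are illustrative assumptions, not details taken from the article itself:

```python
import numpy as np

def svd_compress(weight: np.ndarray, rank: int):
    """Approximate a weight matrix with a rank-`rank` truncated SVD.

    Returns factors (A, B) with A @ B ~= weight, replacing one
    (m x n) matrix with (m x rank) and (rank x n) factors.
    """
    # Thin SVD: weight = U @ diag(s) @ Vt
    U, s, Vt = np.linalg.svd(weight, full_matrices=False)
    # Keep only the top-`rank` singular triplets
    A = U[:, :rank] * s[:rank]   # shape (m, rank)
    B = Vt[:rank, :]             # shape (rank, n)
    return A, B

# Illustrative usage: compress a 768 x 3072 layer to rank 64
rng = np.random.default_rng(0)
W = rng.standard_normal((768, 3072))
A, B = svd_compress(W, rank=64)
rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"params: {W.size} -> {A.size + B.size}, rel. error: {rel_err:.3f}")
```

On a random matrix like this the relative error stays high, which hints at the kind of shortcoming the article's title refers to: how much accuracy plain SVD truncation preserves depends heavily on the spectrum of the actual weight matrices.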
| Main Author: | Сергей Александрович Плетенев |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | National Research University Higher School of Economics, 2024-12-01 |
| Series: | Journal of Language and Education |
| Subjects: | |
| Online Access: | https://jle.hse.ru/article/view/22368 |
Similar Items

- DCT and SVD Sparsity-Based Compressive Learning on Lettuces Classification
  by: Lutvi Murdiansyah Murdiansyah, et al.
  Published: (2024-12-01)
- Analysis of argument structure constructions in the large language model BERT
  by: Pegah Ramezani, et al.
  Published: (2025-01-01)
- Correlation of Periodontal Phenotype with Periodontal Probing Depth in Maxillary Anterior Teeth: A Cross-Sectional Study Using Probe Transparency Method
  by: Maha Maqool, et al.
  Published: (2024-12-01)
- BERTugues: A Novel BERT Transformer Model Pre-trained for Brazilian Portuguese
  by: Ricardo Mazza Zago, et al.
  Published: (2024-12-01)
- DESIGN OF THE CONTACT POTENTIALS DIFFERENCE PROBES
  by: K. U. Pantsialeyeu, et al.
  Published: (2016-06-01)