Mitigating Quantization Errors Due to Activation Spikes in Gated Linear Unit-Based Large Language Models
Modern large language models (LLMs) achieve state-of-the-art performance through architectural advancements but require high computational costs for inference. Post-training quantization is a widely adopted approach to reduce these costs by quantizing weights and activations to lower precision, such...
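To make the abstract's notion of "quantizing weights and activations to lower precision" concrete, below is a minimal sketch of symmetric per-tensor INT8 quantization in NumPy. This is an illustrative example only, not the authors' method; the function names and the toy weight matrix are assumptions for the demo.

```python
# Minimal sketch (not the paper's method): symmetric per-tensor INT8 quantization.
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map float values to INT8 codes using a single per-tensor scale."""
    max_abs = float(np.abs(x).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values from the INT8 codes."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 8)).astype(np.float32)  # toy weight matrix
    q, s = quantize_int8(w)
    w_hat = dequantize(q, s)
    # Rare large values (e.g., activation spikes) stretch the scale and
    # inflate the rounding error for the remaining values.
    print("max abs error:", np.abs(w - w_hat).max())
```

The last comment hints at the problem the article targets: a single outlier enlarges the quantization scale, which degrades precision for all other values in the tensor.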
| Main Authors: | Jaewoo Yang, Hayun Kim, Junyung Ji, Younghoon Kim |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-04-01 |
| Series: | Future Internet |
| Online Access: | https://www.mdpi.com/1999-5903/17/4/185 |
Similar Items
- Quantization for a Condensation System
  by: Shivam Dubey, et al.
  Published: (2025-04-01)
- Enhanced Vector Quantization for Embedded Machine Learning: A Post-Training Approach With Incremental Clustering
  by: Thommas K. S. Flores, et al.
  Published: (2025-01-01)
- Conditional Quantization for Uniform Distributions on Line Segments and Regular Polygons
  by: Pigar Biteng, et al.
  Published: (2025-03-01)
- Addressing Activation Outliers in LLMs: A Systematic Review of Post-Training Quantization Techniques
  by: Patrik Czako, et al.
  Published: (2025-01-01)
- An interpolated quantized guard band algorithm for physical layer key generation
  by: Yongli An, et al.
  Published: (2025-03-01)