Lightweight Pre-Trained Korean Language Model Based on Knowledge Distillation and Low-Rank Factorization
Natural Language Processing (NLP) stands at the forefront of artificial intelligence research, empowering computational systems to comprehend and process human language as used in everyday contexts. Language models (LMs) underpin this field, striving to capture the intricacies of linguistic structure...
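The abstract is truncated above, but the two compression techniques named in the title, knowledge distillation and low-rank factorization, are standard. The following is a minimal sketch of one common formulation of each, assuming PyTorch; it is an illustration, not the authors' implementation, and the function names, temperature value, and weight shapes are assumptions made here.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-target knowledge distillation loss (Hinton-style):
    the student is trained to match the teacher's softened output
    distribution; the T^2 factor keeps the gradient scale
    comparable across temperatures."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)

def low_rank_factorize(weight, rank):
    """Approximate a weight matrix W (out x in) as A @ B via truncated
    SVD, shrinking W's out*in parameters to rank*(out + in)."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # (out, rank); singular values folded into A
    B = Vh[:rank, :]             # (rank, in)
    return A, B

# Illustrative usage: factorize a hypothetical 768x3072 feed-forward
# weight at rank 128, cutting ~2.36M parameters down to ~0.49M.
W = torch.randn(768, 3072)
A, B = low_rank_factorize(W, rank=128)
reconstruction = A @ B  # same shape as W, lower-rank approximation
```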
| Main Authors: | Jin-Hwan Kim, Young-Seok Choi |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-04-01 |
| Series: | Entropy |
| Online Access: | https://www.mdpi.com/1099-4300/27/4/379 |
Similar Items
- Language Policy and Regional Varieties of the Korean Language
  by: Vladislava Mazana
  Published: (2025-06-01)
- SINO-KOREAN VOCABULARY LEARNING STRATEGIES AMONG INDONESIAN KOREAN LANGUAGE LEARNERS
  by: Alfiani Rahmi Chandraswara, et al.
  Published: (2025-06-01)
- A critical analysis of Korean culture represented in the Korean language textbooks developed in Thailand
  by: Ki Young Choi, et al.
  Published: (2024-12-01)
- M3AE-Distill: An Efficient Distilled Model for Medical Vision–Language Downstream Tasks
  by: Xudong Liang, et al.
  Published: (2025-07-01)
- A Comparative Study on Syntactic Morphological Similarities and Differences between Korean and Persian Languages
  by: Farhad Khabazian
  Published: (2023-06-01)