A General Technique to Train Language Models on Language Models
| Main Author: | Mark-Jan Nederhof |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | The MIT Press, 2021-03-01 |
| Series: | Computational Linguistics |
| Online Access: | http://dx.doi.org/10.1162/0891201054223986 |
Similar Items
- Practical Experiments with Regular Approximation of Context-Free Languages
  by: Mark-Jan Nederhof
  Published: (2021-03-01)
- Application of distributed techniques in large language model training and inference
  by: ZHENG Weimin
  Published: (2024-09-01)
- Adapting vs. Pre-training Language Models for Historical Languages
  by: Enrique Manjavacas, et al.
  Published: (2022-06-01)
- Explainable Pre-Trained Language Models for Sentiment Analysis in Low-Resourced Languages
  by: Koena Ronny Mabokela, et al.
  Published: (2024-11-01)
- Detection avoidance techniques for large language models
  by: Sinclair Schneider, et al.
  Published: (2025-01-01)