Do larger models yield better results? Streamlined severity classification of ADHD-related concerns using BERT-based knowledge distillation.
This work focuses on the efficiency of the knowledge distillation approach for generating a lightweight yet powerful BERT-based model for natural language processing (NLP) applications. After creating the model, we applied the resulting model, LastBERT, to a real-world task: classifying the severity level...
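The record only names the technique; for concreteness, below is a minimal, illustrative sketch of a standard soft-target knowledge-distillation setup for BERT sequence classification, assuming a temperature-scaled KL term blended with hard-label cross-entropy. The student configuration, class count, example input, and hyperparameters are assumptions for illustration, not LastBERT's actual settings.

```python
# Sketch of BERT knowledge distillation with a soft-target (KL) + hard-label loss.
# All sizes, labels, and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          BertConfig, BertForSequenceClassification)

NUM_LABELS = 3  # hypothetical number of severity classes

# Teacher: a full-size BERT classifier (assumed fine-tuned on the task).
teacher = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=NUM_LABELS)

# Student: a much smaller BERT; these dimensions are purely illustrative.
student_cfg = BertConfig(num_hidden_layers=4, hidden_size=312,
                         num_attention_heads=12, intermediate_size=1200,
                         num_labels=NUM_LABELS)
student = BertForSequenceClassification(student_cfg)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend temperature-scaled KL on soft targets with hard-label cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# One illustrative training step on a hypothetical input.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["I can never sit still and lose focus constantly."],
                  return_tensors="pt", padding=True, truncation=True)
labels = torch.tensor([2])                      # hypothetical severity label
with torch.no_grad():
    t_logits = teacher(**batch).logits          # teacher provides soft targets only
s_logits = student(**batch).logits
loss = distillation_loss(s_logits, t_logits, labels)
loss.backward()                                 # gradients flow through the student only
```

In a full training loop, a student optimizer step would follow each backward pass; the actual LastBERT distillation pipeline, datasets, and hyperparameters are described in the linked article.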
| Main Authors: | Ahmed Akib Jawad Karim, Kazi Hafiz Md Asad, Md Golam Rabiul Alam |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Public Library of Science (PLoS), 2025-01-01 |
| Series: | PLoS ONE |
| Online Access: | https://doi.org/10.1371/journal.pone.0315829 |
Similar Items
- Leveraging BERT, DistilBERT, and TinyBERT for Rumor Detection
  by: Aijazahamed Qazi, et al.
  Published: (2025-01-01)
- Autocorrelation Matrix Knowledge Distillation: A Task-Specific Distillation Method for BERT Models
  by: Kai Zhang, et al.
  Published: (2024-10-01)
- Optimised knowledge distillation for efficient social media emotion recognition using DistilBERT and ALBERT
  by: Muhammad Hussain, et al.
  Published: (2025-08-01)
- Design of Quality Gain-Loss Function with the Cubic Term Consideration for Larger-the-Better Characteristic and Smaller-the-Better Characteristic
  by: Bo Wang, et al.
  Published: (2025-02-01)
- Leveraging logit uncertainty for better knowledge distillation
  by: Zhen Guo, et al.
  Published: (2024-12-01)