MDCNN: Multi-Teacher Distillation-Based CNN for News Text Classification

Bibliographic Details
Main Authors: Xiaolei Guo, Qingyang Liu, Yanrong Hu, Hongjiu Liu
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10943179/
Description
Summary: News text classification is crucial for efficient information acquisition and dissemination. While deep learning models, such as BERT and BiGRU, excel in accuracy for text classification, their high complexity and resource demands hinder practical deployment. To address these challenges, we propose MDCNN (Multi-teacher Distillation-based CNN), which leverages knowledge distillation with BERT and BiGRU as teacher models and TextCNN as the student model. Experiments on three benchmark news datasets demonstrate that MDCNN improves classification accuracy by nearly 2% while significantly reducing computational overhead, offering a practical solution for real-world applications.
ISSN:2169-3536
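
Note: the abstract names the teacher models (BERT, BiGRU) and the student (TextCNN) but does not specify the training objective. A minimal sketch of what a multi-teacher distillation loss of this kind typically looks like is given below; the uniform averaging of teacher outputs, the temperature, and the mixing coefficient alpha are assumptions for illustration, not details taken from the paper.

import torch
import torch.nn.functional as F

def multi_teacher_distillation_loss(student_logits, teacher_logits_list, labels,
                                    temperature=2.0, alpha=0.5):
    """Blend soft-target distillation from several teachers with hard-label
    cross-entropy. Hyperparameters here are illustrative, not from the paper."""
    # Average the teachers' softened distributions (e.g. BERT and BiGRU outputs).
    soft_targets = torch.stack(
        [F.softmax(t / temperature, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)
    # Student's softened log-probabilities for the KL term.
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence scaled by T^2 to keep gradient magnitudes comparable,
    # following Hinton et al.'s standard distillation formulation.
    kd_loss = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature**2
    # Hard-label supervision on the student's raw logits.
    ce_loss = F.cross_entropy(student_logits, labels)
    return alpha * kd_loss + (1 - alpha) * ce_loss

# Toy usage: batch of 4 examples, 10 news classes, two teachers.
student = torch.randn(4, 10)
teachers = [torch.randn(4, 10), torch.randn(4, 10)]
labels = torch.randint(0, 10, (4,))
loss = multi_teacher_distillation_loss(student, teachers, labels)

In practice the teacher logits would be precomputed with the fine-tuned BERT and BiGRU models and only the TextCNN student updated, which is what makes the distilled model cheap at inference time.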