Hybrid sequence learning with interpretability for multi-class quality prediction in injection molding

Bibliographic Details
Main Authors: Varathorn Punyangarm, Supatchaya Chotayakul
Format: Article
Language: English
Published: Elsevier 2025-09-01
Series: Results in Engineering
Online Access: http://www.sciencedirect.com/science/article/pii/S2590123025024788
Description
Summary: Ensuring consistent quality in injection molding remains a critical challenge due to dynamic process variations and the limitations of traditional rule-based inspection methods. This study proposes a novel hybrid deep learning framework that integrates a Transformer encoder with a TabNet classifier to enable interpretable, multi-class defect prediction using time-series part weight data. The Transformer module captures long-range temporal dependencies, while TabNet provides feature-level interpretability through sparse attention masks. The model was trained and validated on real-world data from over 30,000 injection cycles, covering five classes: acceptable part, short shot, flash, sink mark, and warpage. Evaluation results demonstrate that the proposed model significantly outperforms conventional machine learning methods such as Random Forest, XGBoost, CatBoost, and a hybrid deep learning baseline (CNN–TabNet), achieving a macro F1-score of 0.964 and a macro-averaged area under the receiver operating characteristic curve (AUROC) of 0.992. It also maintains high robustness under signal noise and supports inference within 100 milliseconds, enabling near real-time deployment (i.e., high-speed analysis of recent production windows). Importantly, the model offers actionable insights through built-in explainability mechanisms, helping operators understand and trace the root causes of predicted defects. This research contributes a scalable, low-cost, and interpretable solution for proactive quality monitoring, paving the way for practical adoption of explainable AI in smart manufacturing environments.
ISSN: 2590-1230
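The core mechanism described in the abstract, self-attention over a window of part-weight measurements followed by a five-class prediction, can be sketched in a few lines of NumPy. This is an illustrative reconstruction only, not the authors' implementation: the window length (32 cycles), embedding size, random projection matrices, and all function names here are assumptions, and the real model uses learned Transformer and TabNet parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, d_k):
    """Scaled dot-product self-attention over an embedded weight window.

    X: (seq_len, d_model) array. Random projections stand in for the
    learned query/key/value weights of a trained Transformer encoder.
    """
    d_model = X.shape[1]
    W_q = rng.normal(size=(d_model, d_k))
    W_k = rng.normal(size=(d_model, d_k))
    W_v = rng.normal(size=(d_model, d_k))
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(d_k)      # (seq_len, seq_len) pairwise scores
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # (seq_len, d_k) context vectors

# Assumed setup: 32 consecutive part weights (grams, synthetic), embedded
# to an 8-dimensional representation via a toy outer-product "embedding".
window = rng.normal(loc=12.5, scale=0.05, size=32)
X = np.outer(window, rng.normal(size=8))
context = self_attention(X, d_k=8)

# Mean-pooled context -> logits for the five classes named in the abstract.
classes = ["acceptable", "short_shot", "flash", "sink_mark", "warpage"]
W_out = rng.normal(size=(8, len(classes)))
probs = softmax(context.mean(axis=0) @ W_out)
print(dict(zip(classes, probs.round(3))))
```

In the actual framework the attention weights are learned rather than random, and TabNet's sparse attention masks (not shown here) provide the per-feature interpretability the abstract highlights.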