A Remaining Useful Life Prediction Method for Rolling Bearings Based on Hierarchical Clustering and Transformer–GRU

Bibliographic Details
Main Authors: Wenping Lei, Xing Dong, Fuyuan Cui, Guangzhong Huang
Format: Article
Language: English
Published: MDPI AG 2025-05-01
Series: Applied Sciences
Subjects:
Online Access: https://www.mdpi.com/2076-3417/15/10/5369
Description
Summary: In the prediction of the remaining useful life (RUL) of rolling bearings, feature extraction and selection are critical prerequisites for accurate prediction, while the construction of the prediction model is the core. However, existing RUL prediction methods face two main challenges: (1) feature construction methods based on predefined indicators often ignore the correlation among features; and (2) single models typically yield limited prediction accuracy. To address these issues, this study proposes a feature selection method based on hierarchical clustering combined with the elbow method and a hybrid Transformer–GRU (Gated Recurrent Unit) model for RUL prediction. Specifically, the initially filtered feature set is further clustered using hierarchical clustering, and the optimal number of clusters is determined by the elbow method to construct a compact and representative feature set. This feature set is then input into a Transformer–GRU model, where the Transformer encoder captures temporal dependencies across time steps to generate rich feature representations, and the GRU network models their dynamic evolution over time to predict the bearing RUL. The proposed method is validated on the PHM2012 dataset. The experimental results show that after removing redundant features, the model’s training time is reduced by 8.61% and the number of parameters decreases by 23.26%. Compared with other benchmark models, the proposed Transformer–GRU model achieves a lower mean absolute error (MAE) of 0.0836 and a root mean square error (RMSE) of 0.1137, demonstrating superior predictive performance. These results confirm that the proposed feature selection method effectively eliminates feature redundancy, enhances training efficiency, and reduces model complexity, while the hybrid model significantly improves prediction accuracy.
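The feature-selection step described in the abstract, hierarchical clustering of candidate features with the cluster count chosen by the elbow method, can be sketched as follows. This is a minimal illustration only: the synthetic feature matrix, Ward linkage, the SSE-based elbow heuristic, and the first-member cluster representative are all assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Hypothetical feature matrix: rows = candidate degradation features,
# columns = time steps. Features 6-11 duplicate 0-5 with small noise,
# creating the kind of redundancy the clustering step is meant to remove.
features = rng.standard_normal((12, 100))
features[6:] = features[:6] + 0.05 * rng.standard_normal((6, 100))

# Agglomerative hierarchical clustering of the features (Ward linkage).
Z = linkage(features, method="ward")

def within_cluster_sse(X, labels):
    """Total within-cluster sum of squared errors for one labeling."""
    sse = 0.0
    for k in np.unique(labels):
        members = X[labels == k]
        sse += ((members - members.mean(axis=0)) ** 2).sum()
    return sse

# Elbow method: track SSE as the number of clusters grows.
sse_curve = []
for n_clusters in range(1, 9):
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    sse_curve.append(within_cluster_sse(features, labels))

# Crude elbow pick: the step with the largest drop in SSE.
drops = np.diff(sse_curve)
elbow = int(np.argmin(drops)) + 2  # diff index 0 = going from 1 to 2 clusters

# Keep one representative feature per cluster to form the compact set.
labels = fcluster(Z, t=elbow, criterion="maxclust")
representatives = [int(np.where(labels == k)[0][0]) for k in np.unique(labels)]
print("elbow cluster count:", elbow)
print("representative feature indices:", representatives)
```

In practice the elbow is often judged from the SSE curve by inspection; the largest-drop rule above is just one automatable stand-in.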
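The hybrid prediction model, a Transformer encoder producing contextual feature representations that a GRU then rolls forward in time to regress RUL, can be sketched in PyTorch. All layer sizes, the single regression head, and the last-step readout are illustrative assumptions; the paper's actual hyperparameters and training setup are not reproduced here.

```python
import torch
import torch.nn as nn

class TransformerGRU(nn.Module):
    """Sketch of a Transformer-GRU hybrid: the encoder captures
    dependencies across time steps, the GRU models their evolution,
    and a linear head outputs a normalized RUL estimate."""

    def __init__(self, n_features=6, d_model=32, n_heads=4,
                 n_layers=2, gru_hidden=32):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=64, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer,
                                             num_layers=n_layers)
        self.gru = nn.GRU(d_model, gru_hidden, batch_first=True)
        self.head = nn.Linear(gru_hidden, 1)

    def forward(self, x):             # x: (batch, time, n_features)
        z = self.encoder(self.embed(x))
        out, _ = self.gru(z)
        return self.head(out[:, -1])  # predict RUL from the last step

model = TransformerGRU()
x = torch.randn(8, 30, 6)             # 8 sequences, 30 steps, 6 features
rul = model(x)
print(rul.shape)                      # torch.Size([8, 1])
```

Training such a model would pair each window with a normalized RUL target and minimize MSE, which is consistent with the MAE/RMSE metrics reported in the abstract, though the exact loss used in the paper is not stated here.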
ISSN:2076-3417