Noise Reduction in CWRU Data Using DAE and Classification with ViT

Bibliographic Details
Main Authors: Jun-gyo Jang, Soon-sup Lee, Se-yun Hwang, Jae-chul Lee
Format: Article
Language: English
Published: MDPI AG 2024-12-01
Series: Applied Sciences
Subjects:
Online Access: https://www.mdpi.com/2076-3417/14/24/11771
Description
Summary: With the Fourth Industrial Revolution unfolding worldwide, technologies including the Internet of Things, sensors, and artificial intelligence are developing rapidly. These advances have driven the dramatic growth of the predictive maintenance market for mechanical equipment, prompting active research on noise removal techniques and classification algorithms for accurately determining the causes of equipment failure. In this study, time series data were preprocessed using the denoising autoencoder (DAE) technique, a deep learning-based noise removal method, to improve the accuracy of failure classification from mechanical equipment data. The preprocessed time series data were converted into frequency components using the short-time Fourier transform (STFT). The fault types of mechanical equipment were then classified using the vision transformer (ViT), a deep learning technique that has been widely adopted in recent image analysis research. Additionally, the classification performance of the ViT-based technique on vibration time series data was validated against existing classification algorithms. Classification accuracy was highest when data preprocessed with the DAE were classified by the ViT.
ISSN: 2076-3417
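The pipeline the summary describes (denoise the time series with a DAE, convert the result to a time-frequency image with the STFT, then classify the image) can be sketched in outline. The following is a minimal illustration only, not the authors' implementation: it uses synthetic sinusoidal signals as a stand-in for the CWRU bearing data, a tiny *linear* denoising autoencoder solved in closed form (PCA encoder, least-squares decoder) in place of the paper's deep DAE, and stops at the spectrogram that a ViT classifier would consume.

```python
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(0)

# Synthetic stand-in for CWRU vibration windows (the dataset itself is not
# bundled here): clean sinusoids of random frequency, plus Gaussian noise.
n_samples, win = 256, 64
t = np.arange(win) / win
freqs = rng.uniform(2.0, 8.0, n_samples)
clean = np.sin(2 * np.pi * freqs[:, None] * t[None, :])
noisy = clean + 0.3 * rng.standard_normal((n_samples, win))

# Minimal linear denoising autoencoder, solved in closed form:
# encoder = projection onto the top-k principal components of the noisy data,
# decoder = least-squares map from the latent code back to the clean targets.
k = 32
_, _, Vt = np.linalg.svd(noisy, full_matrices=False)
encode = Vt[:k].T                         # (win, k) encoder weights
code = noisy @ encode                     # latent representation
decode, *_ = np.linalg.lstsq(code, clean, rcond=None)
denoised = code @ decode                  # reconstruction from the code

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)

# STFT turns each denoised window into a time-frequency image; in the paper,
# such spectrogram images are what the ViT classifier is trained on.
f, seg_t, Z = stft(denoised[0], fs=win, nperseg=16)
spectrogram = np.abs(Z)

print(f"noisy MSE {mse_noisy:.4f} -> denoised MSE {mse_denoised:.4f}")
print("spectrogram shape:", spectrogram.shape)
```

The closed-form linear DAE is chosen purely so the sketch runs without a training loop; a deep nonlinear DAE, trained by gradient descent to reconstruct clean signals from corrupted inputs, plays the same role in the study.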