Digital Twin-driven Remaining Useful Life Prediction for Gear Performance Degradation: A Review

Author(s):  
Bin He ◽  
Long Liu ◽  
Dong Zhang

Abstract As a transmission component, the gear has received widespread attention. The remaining useful life (RUL) prediction of gears is critical to the prognostics and health management (PHM) of gear transmission systems. The digital twin (DT) supports gear RUL prediction with the advantages of rich health-information data and accurate health indicators (HI). This paper reviews digital twin-driven RUL prediction methods for gear performance degradation from two perspectives: physical model-based and virtual model-based prediction. The physical model-based methods include prediction models based on gear crack, gear fatigue, gear surface scratch, gear tooth breakage, and gear permanent deformation. The virtual model-based methods include non-deep-learning and deep-learning methods. Non-deep-learning methods include the Wiener process, the gamma process, hidden Markov models (HMM), regression-based models, and proportional hazard models. Deep-learning methods include deep neural networks (DNN), deep belief networks (DBN), convolutional neural networks (CNN), and recurrent neural networks (RNN), among others. The paper summarizes the performance-degradation and life tests of the various models on gears, evaluates the advantages and disadvantages of each method, and encourages future work.
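The Wiener process listed among the non-deep-learning methods can be sketched as a drifting random walk whose first passage over a failure threshold gives the RUL. The following is a minimal illustrative simulation; the parameter values and function name are arbitrary, not taken from any reviewed study:

```python
import numpy as np

def simulate_wiener_degradation(mu, sigma, threshold, dt=0.1, t_max=500.0, seed=0):
    """Simulate a Wiener degradation path X(t) = mu*t + sigma*B(t) and
    return the first-passage time over the failure threshold (the RUL
    measured from t = 0), or t_max if the threshold is never reached."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    path = np.cumsum(increments)
    crossed = np.nonzero(path >= threshold)[0]
    return (crossed[0] + 1) * dt if crossed.size else t_max

# The mean first-passage time of a drifting Wiener process is
# threshold / mu, so a Monte Carlo average should land near that value.
rul_samples = [simulate_wiener_degradation(0.5, 0.2, 10.0, seed=s) for s in range(200)]
mean_rul = sum(rul_samples) / len(rul_samples)
```

For these parameters the theoretical mean RUL is 10.0 / 0.5 = 20 time units, which the simulated average approximates.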

2021 ◽  
Vol 7 ◽  
pp. 5562-5574 ◽  
Author(s):  
Shunli Wang ◽  
Siyu Jin ◽  
Dekui Bai ◽  
Yongcun Fan ◽  
Haotian Shi ◽  
...  

Electronics ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 39
Author(s):  
Zhiyuan Xie ◽  
Shichang Du ◽  
Jun Lv ◽  
Yafei Deng ◽  
Shiyao Jia

Remaining Useful Life (RUL) prediction is significant in indicating the health status of sophisticated equipment, and its complexity requires historical data. The number and complexity of environmental parameters such as vibration and temperature make the data highly non-linear, which makes prediction tremendously difficult. Conventional machine learning models such as the support vector machine (SVM), random forest, and back-propagation neural network (BPNN), however, have limited capacity to predict accurately. In this paper, a two-phase deep learning model, the attention-convolutional forget-gate recurrent network (AM-ConvFGRNET), is proposed for RUL prediction. The first phase, the forget-gate convolutional recurrent network (ConvFGRNET), is based on a one-dimensional analog of the long short-term memory (LSTM) network that removes all gates except the forget gate and uses chrono-initialized biases. The second phase is an attention mechanism, which enables the model to extract more specific features for generating an output, compensating for the black-box nature of the FGRNET and improving interpretability. The performance and effectiveness of AM-ConvFGRNET for RUL prediction are validated by comparison with other machine learning and deep learning methods on the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) dataset and on a dataset from a ball-screw experiment.
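The forget-gate-only recurrent cell with chrono-initialized biases described above can be sketched as follows. This is an illustrative NumPy reconstruction of the general idea, not the paper's ConvFGRNET implementation; the convolutional and attention parts are omitted, and all names and shapes are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ForgetGateCell:
    """Minimal forget-gate-only recurrent cell: all LSTM gates are removed
    except the forget gate, which also drives the input path as (1 - f)."""
    def __init__(self, input_size, hidden_size, t_max=100, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(hidden_size)
        self.Wf = rng.uniform(-scale, scale, (hidden_size, input_size))
        self.Uf = rng.uniform(-scale, scale, (hidden_size, hidden_size))
        # Chrono initialization: b_f = log(U(1, t_max - 1)) biases the
        # forget gate toward remembering over long horizons.
        self.bf = np.log(rng.uniform(1.0, t_max - 1.0, hidden_size))
        self.Wh = rng.uniform(-scale, scale, (hidden_size, input_size))
        self.Uh = rng.uniform(-scale, scale, (hidden_size, hidden_size))
        self.bh = np.zeros(hidden_size)

    def step(self, x, h):
        f = sigmoid(self.Wf @ x + self.Uf @ h + self.bf)   # forget gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ h + self.bh)
        return f * h + (1.0 - f) * h_tilde                  # leaky update

cell = ForgetGateCell(input_size=4, hidden_size=8)
h = np.zeros(8)
for t in range(20):
    h = cell.step(np.ones(4) * 0.1, h)
```

Because the update is a convex combination of the previous state and a tanh-bounded candidate, the hidden state stays in (-1, 1), and the chrono-initialized bias keeps the forget gate close to 1 early in training.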


Author(s):  
Andrés Ruiz-Tagle Palazuelos ◽  
Enrique López Droguett ◽  
Rodrigo Pascual

With the availability of cheaper multi-sensor systems, one has access to massive, multi-dimensional sensor data for fault diagnostics and prognostics. However, from a time, engineering, and computational perspective, it is often cost-prohibitive to manually extract useful features and to label all the data. To address these challenges, deep learning techniques have been used in recent years. Among these, convolutional neural networks have shown remarkable performance in fault diagnostics and prognostics. However, this model presents limitations from a prognostics and health management perspective: to improve its feature-extraction generalization capabilities and reduce computation time, pooling operations are employed that sub-sample the data, thus losing potentially valuable information about an asset's degradation process. Capsule neural networks have recently been proposed to address these problems, with strong results in computer vision-related classification tasks. This has motivated us to extend capsule neural networks to fault prognostics and, in particular, remaining useful life estimation. The proposed model, architecture, and algorithm are tested and compared against other state-of-the-art deep learning models on the benchmark Commercial Modular Aero-Propulsion System Simulation turbofan dataset. The results indicate that the proposed capsule neural networks are a promising approach for remaining useful life prognostics from multi-dimensional sensor data.
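Capsule networks replace scalar activations with vector-valued capsules whose length encodes probability, which is how they avoid the information loss of pooling. The standard "squash" non-linearity makes this concrete; the sketch below follows the commonly published formulation, which may differ in detail from the variant used in the paper:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule 'squash' non-linearity: rescales vector s so its norm lies
    in (0, 1) while preserving its orientation.
    The output norm is ||s||^2 / (1 + ||s||^2)."""
    sq_norm = np.sum(s * s, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

# A capsule output of length 5 is squashed to length 25/26 ~ 0.96,
# i.e. a near-certain detection, with the direction unchanged.
v = squash(np.array([3.0, 4.0]))
```

Long capsule vectors saturate toward norm 1 (entity almost certainly present), while short ones are shrunk toward 0.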


Author(s):  
Ikram Remadna ◽  
Labib Sadek Terrissa ◽  
Soheyb Ayad ◽  
Nourddine Zerhouni

The turbofan engine is one of the most critical aircraft components. Its failure may introduce unwanted downtime, expensive repairs, and safety risks. It is therefore essential to detect upcoming failures accurately by predicting the future health state of turbofan engines as well as their Remaining Useful Life (RUL). The use of deep learning techniques to estimate RUL has seen growing interest over the last decade; however, hybrid deep learning methods have not yet been sufficiently explored by researchers. In this paper, we propose two hybrid methods combining a Convolutional Auto-encoder (CAE), a Bi-directional Gated Recurrent Unit (BDGRU), a Bi-directional Long Short-Term Memory (BDLSTM) network, and a Convolutional Neural Network (CNN) to enhance RUL estimation. The results indicate that the hybrid methods achieve the most reliable RUL prediction accuracy and significantly outperform the strongest predictions in the literature.


Author(s):  
Marco Star ◽  
Kristoffer McKee

Data-driven machinery prognostics has seen increasing popularity recently, especially as the effectiveness of deep learning methods has grown. However, deep learning methods lack useful properties such as uncertainty quantification of their outputs, and they have a black-box nature. Neural ordinary differential equations (NODEs) use neural networks to define differential equations that propagate data from the inputs to the outputs. They can be seen as a continuous generalization of a popular image-recognition architecture, the Residual Network (ResNet). This paper compares the performance of the two networks on machinery prognostics tasks to assess the validity of Neural ODEs for machinery prognostics. The comparison uses NASA's Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) dataset, which simulates the sensor readings of degrading turbofan engines. To compare the two architectures, both are set up as convolutional neural networks, and the sensor signals are transformed to the time-frequency domain through the short-time Fourier transform (STFT). The spectrograms from the STFT are the input images to the networks, and the output is the estimated RUL; the task is thus turned into an image-recognition task. The results show that NODEs can compete with state-of-the-art machinery prognostics methods. While the NODE does not beat the state-of-the-art method, it comes close enough to warrant further research into using NODEs. The potential benefits of using NODEs instead of other network architectures are also discussed in this work.
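The ResNet/Neural-ODE relationship mentioned above can be made concrete: a residual block computes h + f(h), which is one Euler step of size 1 for the ODE dh/dt = f(h), while a Neural ODE integrates the same dynamics with arbitrarily small steps. The sketch below uses a toy tanh dynamics function, not the paper's convolutional architecture; all names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)) * 0.1
b = np.zeros(8)

def f(h):
    """Shared dynamics f_theta(h); a single tanh layer for illustration."""
    return np.tanh(W @ h + b)

def resnet_forward(h, n_blocks=10):
    # Discrete residual blocks: h_{k+1} = h_k + f(h_k), i.e. Euler steps
    # of size 1 through the dynamics f.
    for _ in range(n_blocks):
        h = h + f(h)
    return h

def node_forward(h, t1=10.0, n_steps=1000):
    # Euler integration of dh/dt = f(h) over [0, t1]: the
    # continuous-depth limit of the residual network above.
    dt = t1 / n_steps
    for _ in range(n_steps):
        h = h + dt * f(h)
    return h

h0 = rng.standard_normal(8)
h_resnet = resnet_forward(h0.copy())
h_node = node_forward(h0.copy())
```

Shrinking the step size trades the fixed depth of the ResNet for an adaptive-accuracy ODE solve, which is the continuous generalization the abstract refers to.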


Author(s):  
Junchuan Shi ◽  
Tianyu Yu ◽  
Kai Goebel ◽  
Dazhong Wu

Abstract Prognostics and health management (PHM) of bearings is crucial for reducing the risk of failure and the cost of maintenance for rotating machinery. Model-based prognostic methods develop closed-form mathematical models based on underlying physics. However, the physics of complex bearing failures under varying operating conditions is not well understood yet. To complement model-based prognostics, data-driven methods have been increasingly used to predict the remaining useful life (RUL) of bearings. As opposed to other machine learning methods, ensemble learning methods can achieve higher prediction accuracy by combining multiple learning algorithms of different types. The rationale behind ensemble learning is that higher performance can be achieved by combining base learners that overestimate and underestimate the RUL of bearings. However, building an effective ensemble remains a challenge. To address this issue, the impact of diversity in base learners and extracted features in different degradation stages on the performance of ensemble learning is investigated. The degradation process of bearings is classified into three stages, including normal wear, smooth wear, and severe wear, based on the root-mean-square (RMS) of vibration signals. To evaluate the impact of diversity on prediction performance, vibration data collected from rolling element bearings was used to train predictive models. Experimental results have shown that the performance of the proposed ensemble learning method is significantly improved by selecting diverse features and base learners in different degradation stages.
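The RMS-based staging described above can be sketched as a simple thresholding rule on windows of the vibration signal. The threshold values and stage labels below are hypothetical placeholders; in practice they would be fitted to run-to-failure data:

```python
import numpy as np

def rms(x):
    """Root-mean-square amplitude of a vibration window."""
    return float(np.sqrt(np.mean(np.square(x))))

def degradation_stage(window, t1=1.5, t2=4.0):
    """Classify a vibration window into the three wear stages named in
    the abstract; t1 and t2 are illustrative thresholds, not values
    from the study."""
    r = rms(window)
    if r < t1:
        return "normal wear"
    if r < t2:
        return "smooth wear"
    return "severe wear"

rng = np.random.default_rng(1)
early = rng.normal(0.0, 1.0, 2048)   # low-amplitude vibration, healthy bearing
late = rng.normal(0.0, 5.0, 2048)    # amplified vibration near failure
```

Each stage can then be paired with its own feature set and base learners, which is the diversity the ensemble exploits.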


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Jian Ma ◽  
Hua Su ◽  
Wan-lin Zhao ◽  
Bin Liu

Because engines are key components of aircraft, improving their safety, reliability, and economy is crucial. To ensure flight safety and reduce maintenance costs during aircraft engine operation, a prognostics and health management system focusing on fault diagnosis, health assessment, and life prediction is introduced. Predicting the remaining useful life (RUL) provides the most important information for decisions about aircraft engine operation and maintenance, and it relies largely on the selection of performance-degradation features. The choice of such features is highly significant, but current RUL prediction algorithms have weaknesses, notably an inability to capture tendencies in the data. Especially for aircraft engines, extracting useful degradation features from multi-sensor data with complex correlations is a key technical problem that has hindered the implementation of degradation assessment. To solve these problems, deep learning has been proposed in recent years to exploit multiple layers of nonlinear information processing for unsupervised self-learning of features. This paper presents a deep learning approach to predict the RUL of an aircraft engine based on a stacked sparse autoencoder and logistic regression. The stacked sparse autoencoder automatically extracts performance-degradation features from multiple sensors on the aircraft engine and fuses them through multilayer self-learning, and logistic regression then predicts the remaining useful life. However, the hyperparameters of the deep learning model, which significantly affect feature extraction and prediction performance, are in most cases determined from expert experience. This paper therefore introduces the grid search method to optimize the hyperparameters of the proposed aircraft engine RUL prediction model. An application predicting the RUL of an aircraft engine on a benchmark dataset demonstrates the effectiveness of the proposed approach.
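Grid search over the model's hyperparameters amounts to an exhaustive sweep of a small Cartesian product, keeping the configuration with the best held-out score. The parameter names and the stand-in validation loss below are purely illustrative, not the authors' settings:

```python
from itertools import product

# Hypothetical hyperparameter grid for an autoencoder + regression model;
# the names (hidden_units, sparsity, learning_rate) are assumptions.
grid = {
    "hidden_units": [50, 100, 200],
    "sparsity": [0.05, 0.1, 0.2],
    "learning_rate": [1e-3, 1e-2],
}

def validation_loss(hidden_units, sparsity, learning_rate):
    """Stand-in for training the model and scoring it on held-out data;
    a real grid search would train and evaluate here."""
    return ((hidden_units - 100) ** 2 * 1e-4
            + (sparsity - 0.1) ** 2
            + abs(learning_rate - 1e-2))

best_params, best_loss = None, float("inf")
for values in product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    loss = validation_loss(**params)
    if loss < best_loss:
        best_params, best_loss = params, loss
```

The sweep evaluates all 3 x 3 x 2 = 18 combinations; for this toy loss the optimum sits at the grid point (100, 0.1, 1e-2).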


Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 223
Author(s):  
Yen-Ling Tai ◽  
Shin-Jhe Huang ◽  
Chien-Chang Chen ◽  
Henry Horng-Shing Lu

Nowadays, deep learning methods with high structural complexity and flexibility inevitably lean on the computational capability of the hardware. A platform with high-performance GPUs and large amounts of memory can support neural networks with large numbers of layers and kernels. However, naively pursuing high-cost hardware would likely hold back the technical development of deep learning methods. In this article, we therefore establish a new preprocessing method to reduce the computational complexity of neural networks. Inspired by the band theory of solids in physics, we map the image space isomorphically onto a non-interacting physical system and treat image voxels as particle-like clusters. We then use the Fermi–Dirac distribution as a correction function for normalizing the voxel intensity and as a filter of insignificant cluster components. The filtered clusters can then delineate the morphological heterogeneity of the image voxels. We used the BraTS 2019 datasets and the dimensional-fusion U-net for algorithmic validation, and the proposed Fermi–Dirac correction function performed comparably to the other preprocessing methods employed. Compared with the conventional z-score normalization function and the Gamma correction function, the proposed algorithm saves at least 38% of the computational time on a low-cost hardware architecture. Although global histogram equalization has the lowest computational time among the correction functions employed, the proposed Fermi–Dirac correction function exhibits better image augmentation and segmentation capabilities.
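A Fermi–Dirac-style correction can be sketched as a logistic mapping of voxel intensity into (0, 1) with a soft cutoff. The orientation chosen here (rising with intensity, so dim voxels are suppressed as "insignificant" components) and the parameter names are assumptions for illustration, not the authors' exact formulation:

```python
import numpy as np

def fermi_dirac_correction(intensity, mu, T):
    """Fermi-Dirac-style correction: maps voxel intensities into (0, 1)
    with a soft threshold at mu; T controls the transition width.
    Voxels well below mu are pushed toward 0 (filtered out), voxels
    well above mu toward 1, and intensity == mu maps to exactly 0.5."""
    return 1.0 / (np.exp((mu - intensity) / T) + 1.0)

vox = np.array([0.0, 0.4, 0.5, 0.6, 1.0])
out = fermi_dirac_correction(vox, mu=0.5, T=0.05)
```

Shrinking T sharpens the cutoff toward a hard binary filter, while a large T approaches an almost linear rescaling, which is how one function can serve as both a normalizer and a filter.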

