Remaining Useful Life Based Maintenance Policy for Deteriorating Systems Subject to Continuous Degradation and Shock

Procedia CIRP ◽  
2018 ◽  
Vol 72 ◽  
pp. 1311-1315 ◽  
Author(s):  
Beikun Zhang ◽  
Liyun Xu ◽  
Yiping Chen ◽  
Aiping Li

Author(s):  
Youssef Maher ◽  
Boujemaa Danouj

Prognosis Health Monitoring (PHM) plays an increasingly important role in the management of machines and manufactured products in today’s industry, and deep learning plays an important part in establishing the optimal predictive maintenance policy. However, traditional learning methods such as unsupervised and supervised learning with standard architectures face numerous problems when exploiting existing data. Therefore, in this paper, we review the significant improvements in deep learning made by researchers over the last three years to address these difficulties. We note that researchers are striving to achieve optimal performance in estimating the remaining useful life (RUL) of machines by optimizing each step from data to predictive diagnostics. Specifically, we outline the challenges at each level together with the type of improvement that has been made, and we see this as an opportunity to select a state-of-the-art architecture that incorporates these changes, against which each researcher can compare his or her own model. In addition, post-RUL reasoning and the use of distributed computing with cloud technology are presented, which will potentially improve the classification accuracy in maintenance activities. Deep learning will undoubtedly prove to have a major impact in upgrading companies at the lowest cost in the new industrial revolution, Industry 4.0.
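The abstract above treats deep-learning RUL estimation as a supervised regression from a window of sensor readings to a remaining-life value. As a rough illustration of that general setup only (none of the surveyed architectures are reproduced here), a minimal PyTorch sketch might look like the following; the LSTM size, window length, sensor count, and synthetic data are all assumptions:

```python
# Minimal sketch, not an architecture from the paper: an LSTM regressor that
# maps a window of multivariate sensor readings to an RUL estimate.
import torch
import torch.nn as nn

class RULRegressor(nn.Module):
    def __init__(self, n_sensors: int, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_sensors, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # one RUL value per window

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window_length, n_sensors)
        _, (h_n, _) = self.lstm(x)              # h_n: (1, batch, hidden_size)
        return self.head(h_n[-1]).squeeze(-1)   # (batch,) predicted RUL

# Tiny training loop on random stand-in data, just to show the shapes involved.
model = RULRegressor(n_sensors=14)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

windows = torch.randn(32, 30, 14)   # 32 windows of 30 time steps x 14 sensors
true_rul = torch.rand(32) * 125.0   # placeholder RUL targets (cycles)

for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(windows), true_rul)
    loss.backward()
    optimizer.step()
```

In practice, sensor channels are usually normalized and RUL targets are often clipped to a maximum value before training, but those preprocessing choices are outside the scope of this sketch.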


2012 ◽  
Vol 45 (31) ◽  
pp. 66-72
Author(s):  
Phuc Do Van ◽  
Eric Levrat ◽  
Alexandre Voisin ◽  
Benoit Iung

2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Hamed Khorasgani ◽  
Ahmed Farhat ◽  
Haiyan Wang ◽  
Chetan Gupta

Several machine learning and deep learning frameworks have been proposed in recent years to solve remaining useful life estimation and failure prediction problems. Having access to a remaining useful life estimate or the likelihood of failure in the near future helps operators assess operating conditions and, therefore, make better repair and maintenance decisions. However, many operators believe remaining useful life estimation and failure prediction solutions are incomplete answers to the maintenance challenge. They would argue that knowing the likelihood of failure in a given time interval, or having access to an estimate of the remaining useful life, is not enough to make maintenance decisions that minimize cost while keeping them safe. In this paper, we present a maintenance framework based on off-line deep reinforcement learning which, instead of providing information such as the likelihood of failure, suggests actions such as “continue the operation” or “visit a repair shop” to operators in order to maximize the overall profit. Using off-line reinforcement learning makes it possible to learn the optimum maintenance policy from historical data without relying on expensive simulators. We demonstrate the application of our solution in a case study using the NASA C-MAPSS dataset.
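To make the offline value-learning idea concrete (this is not the authors' implementation), a minimal fitted-Q sketch over logged maintenance transitions could look like the following; the two-action set matches the abstract, while the state dimension, reward values, and synthetic transitions are all assumptions:

```python
# Minimal sketch, not the authors' framework: offline Q-learning on logged
# (state, action, reward, next_state, done) transitions for a two-action
# maintenance problem. The data below is random stand-in, not C-MAPSS.
import torch
import torch.nn as nn

ACTIONS = ["continue the operation", "visit a repair shop"]

class QNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),   # one Q-value per maintenance action
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)

def offline_q_update(q, q_target, batch, optimizer, gamma=0.99):
    """One fitted-Q step on a batch of logged transitions."""
    s, a, r, s_next, done = batch
    with torch.no_grad():
        target = r + gamma * (1 - done) * q_target(s_next).max(dim=1).values
    q_sa = q(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Stand-in logged data: 256 transitions with 14-dimensional condition states.
s = torch.randn(256, 14)
a = torch.randint(0, 2, (256,))      # 0 = continue, 1 = repair
r = 10.0 - 60.0 * a.float()          # assumed operating profit vs. repair cost
s_next, done = torch.randn(256, 14), torch.zeros(256)

q, q_target = QNet(14), QNet(14)
q_target.load_state_dict(q.state_dict())  # target net; resync periodically in practice
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
for _ in range(10):
    offline_q_update(q, q_target, (s, a, r, s_next, done), opt)

# Greedy policy for a new machine state: recommend the higher-value action.
print(ACTIONS[q(torch.randn(1, 14)).argmax(dim=1).item()])
```

A production-grade offline RL approach would also add some form of conservatism (for example, CQL-style penalties on actions unseen in the logged data) to avoid overestimating out-of-distribution actions; the sketch above omits this for brevity.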


2005 ◽  
Vol 48 (2) ◽  
pp. 208-217 ◽  
Author(s):  
Matthew Watson ◽  
Carl Byington ◽  
Douglas Edwards ◽  
Sanket Amin
