Explainable Artificial Intelligence for Anomaly Detection and Prognostic of Gas Turbines using Uncertainty Quantification with Sensor-Related Data

Author(s): Ahmad Kamal Mohd Nor, Srinivasa Rao Pedapati, Masdi Muhammad, Víctor Leiva

Explainable artificial intelligence (XAI) is in its assimilation phase in prognostics and health management (PHM). The literature on PHM-XAI is deficient with respect to metrics of uncertainty quantification and explanation evaluation. This paper proposes a new method of anomaly detection and prognostics for gas turbines using Bayesian deep learning and Shapley additive explanations (SHAP). The method explains both the anomaly detection and the prognostics and improves the prognostic performance, aspects that have not been considered in the PHM-XAI literature. The uncertainty measures considered serve to broaden the explanation scope and can also be exploited as anomaly indicators. Real-world gas turbine sensor data were tested for anomaly detection, while NASA commercial modular aero-propulsion system simulation (CMAPSS) data, related to turbofan sensors, were used for prognostics. The generated explanation is evaluated using two metrics: consistency and local accuracy. All anomalies were successfully detected using the uncertainty indicators. Meanwhile, the turbofan prognostic results showed up to a 9% improvement in root mean square error and a 43% enhancement in early prognostics due to SHAP, making the method comparable to the best existing methods. The XAI and uncertainty quantification offer a comprehensive explanation for assisting decision-making. Additionally, the ability of SHAP to increase PHM performance confirms its value in AI-based reliability research.
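The idea of using predictive uncertainty as an anomaly indicator, as in the abstract above, can be illustrated with a minimal numpy sketch. The array of stochastic forward passes here is a synthetic stand-in for a Bayesian deep learning model (e.g. Monte Carlo dropout); all values and the 3-sigma threshold are hypothetical, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def predictive_uncertainty(forward_passes):
    """Mean and std over T stochastic forward passes (MC-dropout style)."""
    preds = np.asarray(forward_passes)          # shape (T, n_timestamps)
    return preds.mean(axis=0), preds.std(axis=0)

# Synthetic stand-in: the "model" is confident (tight spread) on healthy
# data and epistemically uncertain (wide spread) on the anomalous segment.
healthy = rng.normal(0.0, 0.05, size=(50, 80))     # T=50 passes, 80 healthy steps
anomalous = rng.normal(0.0, 0.40, size=(50, 20))   # 20 anomalous steps
passes = np.concatenate([healthy, anomalous], axis=1)

mean, std = predictive_uncertainty(passes)

# Threshold calibrated on a healthy reference window (mean + 3 sigma rule).
threshold = std[:80].mean() + 3 * std[:80].std()
flags = std > threshold
print(f"flagged {flags[80:].sum()} of 20 anomalous steps")
```

The same thresholding could be applied per sensor channel; the key point is that no label for "anomaly" is needed, only a healthy reference window.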


Author(s): Ningbo Zhao, Xueyou Wen, Shuying Li

With the rapid improvement of equipment manufacturing technology and the ever-increasing cost of fuel, engine health management has become one of the most important aspects of aeroengines and of industrial and marine gas turbines. As an effective technology for improving engine availability and reducing maintenance costs, anomaly detection has attracted great attention. In the past decades, different methods have been developed, including gas path analysis, on-line monitoring or off-line analysis of vibration signals, and oil and electrostatic monitoring. However, considering the complexity of engine structures and the variability of working environments, many important problems still need to be solved urgently, such as accurate modeling of gas turbines in different environments, the selection of sensors, the optimization of various data-driven approaches, and the fusion strategy for multi-source information. Besides, although a large number of investigations in this area are reported every year in various journals and conference proceedings, most of them concern aeroengines or industrial gas turbines, and limited literature has been published on marine gas turbines. Against this background, this paper attempts to summarize recent developments in the health management of gas turbines. In view of the increasing requirement for predict-and-prevent maintenance, the typical anomaly detection technologies are analyzed in detail. In addition, according to the application characteristics of marine gas turbines, this paper offers a brief outlook on the possible challenges of anomaly detection, which may provide beneficial references for the implementation and development of marine gas turbine health management.


2020
Author(s): Maria Moreno de Castro

The presence of automated decision making continuously increases in today's society. Algorithms based on machine and deep learning decide how much we pay for insurance, translate our thoughts to speech, and shape our consumption of goods (via e-marketing) and knowledge (via search engines). Machine and deep learning models are ubiquitous in science too; in particular, many promising examples are being developed to prove their feasibility for earth science applications, like finding temporal trends or spatial patterns in data or improving parameterization schemes for climate simulations.

However, most machine and deep learning applications aim to optimize performance metrics (for instance, accuracy, which stands for how often the model prediction was right), which are rarely good indicators of trust (i.e., why were these predictions right?). In fact, with the increase of data volume and model complexity, machine learning and deep learning predictions can be very accurate but also prone to rely on spurious correlations, encode and magnify bias, and draw conclusions that do not incorporate the underlying dynamics governing the system. Because of that, the uncertainty of the predictions and our confidence in the model are difficult to estimate, and the relation between inputs and outputs becomes hard to interpret.

Since it is challenging to shift a community from "black" to "glass" boxes, it is more useful to implement Explainable Artificial Intelligence (XAI) techniques right at the beginning of machine learning and deep learning adoption rather than trying to fix fundamental problems later. The good news is that most of the popular XAI techniques are essentially sensitivity analyses, because they consist of a systematic perturbation of some model components in order to observe how it affects the model predictions. The techniques comprise random sampling, Monte Carlo simulations, and ensemble runs, which are common methods in the geosciences. Moreover, many XAI techniques are reusable because they are model-agnostic and are applied after the model has been fitted. In addition, interpretability provides robust arguments when communicating machine and deep learning predictions to scientists and decision-makers.

In order to assist not only practitioners but also end-users in the evaluation of machine and deep learning results, we will explain the intuition behind some popular techniques of XAI and of aleatory and epistemic uncertainty quantification: (1) Permutation Importance and Gaussian processes on the inputs (i.e., perturbation of the model inputs); (2) Monte Carlo Dropout, Deep Ensembles, Quantile Regression, and Gaussian processes on the weights (i.e., perturbation of the model architecture); (3) Conformal Predictors (useful to estimate the confidence interval on the outputs); and (4) Layerwise Relevance Propagation (LRP), Shapley values, and Local Interpretable Model-Agnostic Explanations (LIME) (designed to visualize how each feature in the data affected a particular prediction). We will also introduce some best practices, like the detection of anomalies in the training data before training, the implementation of fallbacks when the prediction is not reliable, and physics-guided learning by including constraints in the loss function to avoid physical inconsistencies, like the violation of conservation laws.
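Of the input-perturbation techniques listed in the abstract above, Permutation Importance is the simplest to sketch: shuffle one feature column and measure how much the error grows. The toy data, linear "fitted model", and repeat count below are illustrative assumptions, not part of the original work.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: y depends strongly on x0, weakly on x1, not at all on x2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Stand-in for any fitted model: ordinary least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda A: A @ w

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def permutation_importance(X, y, n_repeats=10):
    """Average increase in MSE when one feature column is shuffled."""
    base = mse(y, predict(X))
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])   # break feature-target link
            scores[j] += mse(y, predict(Xp)) - base
    return scores / n_repeats

imp = permutation_importance(X, y)
print(imp)  # x0 should dominate, x2 should be near zero
```

Because the procedure only calls `predict`, it is model-agnostic in exactly the sense the abstract describes: the same code works for any fitted regressor.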


Author(s): Het Naik, Priyanka Goradia, Vomini Desai, Yukta Desai, Muralikrishna Iyyanki

This study explores Explainable Artificial Intelligence (XAI) in general and then discusses its potential use for the Indian healthcare system. It also demonstrates some XAI techniques on a diabetes dataset, with the aim of showing a practical implementation and encouraging readers to think about further application areas. Certain limitations of the technology are highlighted in the discussion, along with the future scope.
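The kind of XAI demonstration described above can be sketched without any external library for the special case of a linear model, where Shapley values have a closed form: phi_i = w_i * (x_i - E[x_i]). The feature matrix and weights below are synthetic stand-ins (not the study's diabetes data or model); the check at the end is the "local accuracy" property that Shapley-based explanations satisfy.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for a diabetes-style tabular dataset (3 features).
X = rng.normal(size=(200, 3))
w = np.array([2.0, -1.0, 0.5])        # assumed fitted linear-model weights
b = 0.3
f = lambda A: A @ w + b               # the "fitted model"

def linear_shap(x, X_background):
    """Exact Shapley values for a linear model with independent features:
    phi_i = w_i * (x_i - E[x_i])."""
    return w * (x - X_background.mean(axis=0))

x = X[0]                              # one patient record to explain
phi = linear_shap(x, X)
base = f(X).mean()                    # expected model output over the data

# Local accuracy: base value plus contributions recovers the prediction.
print(base + phi.sum(), f(x[None, :])[0])
```

For non-linear models one would need an approximation such as KernelSHAP, but the local-accuracy check shown here is the same property used to evaluate those explanations.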


Sensors, 2021, Vol. 21 (23), pp. 8020
Author(s): Ahmad Kamal Mohd Nor, Srinivasa Rao Pedapati, Masdi Muhammad, Víctor Leiva

Surveys on explainable artificial intelligence (XAI) are related to biology, clinical trials, fintech management, medicine, neurorobotics, and psychology, among others. Prognostics and health management (PHM) is the discipline that links the studies of failure mechanisms to system lifecycle management. There is a need, still unmet, to produce an analytical compilation of PHM-XAI works. In this paper, we use preferred reporting items for systematic reviews and meta-analyses (PRISMA) to present a state-of-the-art review of XAI applied to the PHM of industrial assets. This work provides an overview of the trend of XAI in PHM and answers the question of accuracy versus explainability, considering the extent of human involvement, explanation assessment, and uncertainty quantification in this topic. Research articles associated with the subject, from 2015 to 2021, were selected from five databases following the PRISMA methodology, several of them related to sensors. The data were extracted from the selected articles and examined, and the diverse findings were synthesized as follows. First, while the discipline is still young, the analysis indicates a growing acceptance of XAI in PHM. Second, XAI offers dual advantages: it is assimilated both as a tool to execute PHM tasks and as a means to explain diagnostic and anomaly detection activities, implying a real need for XAI in PHM. Third, the review shows that PHM-XAI papers provide interesting results, suggesting that PHM performance is unaffected by the XAI. Fourth, human role, evaluation metrics, and uncertainty management are areas requiring further attention by the PHM community; adequate assessment metrics catering to PHM needs are required. Finally, most case studies featured in the considered articles are based on real industrial data, and some of them are related to sensors, showing that the available PHM-XAI blends solve real-world challenges, increasing confidence in the adoption of artificial intelligence models in industry.


Author(s): Xiaomo Jiang, Craig Foster

Gas turbine simple- or combined-cycle plants are built and operated with high availability, reliability, and performance in order to provide the customer with sufficient operating revenues and reduced fuel costs while enhancing dispatch competitiveness. A tremendous amount of operational data is usually collected from the everyday operation of a power plant. How to turn these data into knowledge and, further, into solutions by developing advanced state-of-the-art analytics has become an increasingly important but challenging issue. This paper presents an integrated system and methodology to pursue this purpose by automating multi-level, multi-paradigm, multi-facet performance monitoring and anomaly detection for heavy-duty gas turbines. The system provides an intelligent platform to drive site-specific performance improvements, mitigate outage risk, rationalize operational patterns, and enhance maintenance schedules and service offerings by taking appropriate proactive actions. In addition, the paper presents the components of the system, including data sensing, hardware and operational anomaly detection, proactive expert actions, site-specific degradation assessment, and water-wash effectiveness monitoring and analytics. As demonstrated in two examples, this remote performance monitoring aims to improve equipment efficiency by converting data into knowledge and solutions in order to drive value for customers, including lowering operating fuel costs and increasing power sales and life-cycle value.

