An improved approach for fault detection by simultaneous overcoming of high-dimensionality, autocorrelation, and time-variability

PLoS ONE, 2020, Vol 15 (12), pp. e0243146
Author(s): Nastaran Hajarian, Farzad Movahedi Sobhani, Seyed Jafar Sadjadi

Control charts based on Principal Component Analysis (PCA) and its extensions are among the data-driven methods for process monitoring and fault detection. Industrial process data involves complexities such as high dimensionality, autocorrelation, and non-stationarity, which may occur simultaneously. An efficient fault detection technique should be robust to the training data, sensitive to all feasible process faults, and quick to detect them. To date, the recursive PCA (RPCA) and moving-window PCA (MWPCA) models have been proposed for high-dimensional, non-stationary data, while the dynamic PCA (DPCA) model and its extensions have been suggested for autocorrelated data. However, applying these techniques without considering all aspects of the process data degrades fault detection indicators such as the false alarm rate (FAR) and delay time of detection (DTD), confusing the operator or causing adverse consequences. A new PCA monitoring method is proposed in this study that simultaneously reduces the impact of high dimensionality, non-stationarity, and autocorrelation. The technique exploits the DPCA formulation to decrease the effect of autocorrelation and the adaptive behavior of MWPCA to handle non-stationary characteristics. The proposed approach has been tested on the Tennessee Eastman Process (TEP). The findings show that it detects various forms of faults and improves the fault detection indicators compared with other approaches. The approach has also been applied to real turbine exit temperature (TET) data, where it successfully detected an actual fault.
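
As a rough illustration of how DPCA's lag augmentation can be combined with MWPCA's moving-window adaptation, the Python sketch below lag-augments the data, refits PCA on a sliding window, and monitors Hotelling's T2 and SPE. The window size, lag order, simulated data, and empirical 99% control limits are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.decomposition import PCA

def lag_augment(X, lags=2):
    """Stack each sample with its `lags` predecessors (DPCA-style)."""
    n = len(X)
    return np.hstack([X[lags - k : n - k] for k in range(lags + 1)])

def t2_spe(model, X):
    """Hotelling's T2 and SPE (Q) statistics for each row of X."""
    scores = model.transform(X)
    t2 = np.sum(scores**2 / model.explained_variance_, axis=1)
    spe = np.sum((X - model.inverse_transform(scores))**2, axis=1)
    return t2, spe

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))               # stand-in for process measurements
Xa = lag_augment(X, lags=2)                  # DPCA: time-lagged data matrix

window, n_comp, alarms = 200, 5, []
for t in range(window, len(Xa)):
    pca = PCA(n_components=n_comp).fit(Xa[t - window : t])   # MWPCA: window refit
    t2_lim, spe_lim = (np.percentile(s, 99) for s in t2_spe(pca, Xa[t - window : t]))
    t2, spe = t2_spe(pca, Xa[t : t + 1])
    alarms.append(bool(t2[0] > t2_lim or spe[0] > spe_lim))
```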

2019, Vol 42 (6), pp. 1225-1238
Author(s): Wahiba Bounoua, Amina B Benkara, Abdelmalek Kouadri, Azzeddine Bakdi

Principal component analysis (PCA) is a common tool in the literature and widely used for process monitoring and fault detection. Traditional PCA is associated with two well-known control charts, Hotelling's T2 and the squared prediction error (SPE), as monitoring statistics. This paper develops new measures based on a distribution dissimilarity technique, the Kullback-Leibler divergence (KLD), applied through PCA by measuring the difference between an online estimated density function and an offline reference one. For processes whose PCA scores follow a multivariate Gaussian distribution, the KLD is computed on both the principal and residual subspaces defined by PCA in a moving window to extract the local disparity information. The potential of the proposed algorithm is then demonstrated through an application to two well-known processes in the chemical industries: the Tennessee Eastman process as a reference benchmark and a three-tank system as an experimental validation. The monitoring performance was compared to recent results from other multivariate statistical process monitoring (MSPM) techniques. The proposed method showed superior robustness and effectiveness, recording the lowest average missed detection and false alarm rates in process fault detection.
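
The core computation can be sketched as follows: fit PCA offline, estimate the Gaussian parameters of the reference scores, and compare them in a moving window against the online scores using the closed-form KLD between multivariate Gaussians. The window length, score dimension, and simulated data are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

def gaussian_kld(mu0, cov0, mu1, cov1):
    """Closed-form KL( N(mu0, cov0) || N(mu1, cov1) ) for multivariate Gaussians."""
    k = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

rng = np.random.default_rng(1)
X_ref = rng.normal(size=(1000, 8))                    # offline reference data
pca = PCA(n_components=3).fit(X_ref)
T_ref = pca.transform(X_ref)
mu_ref, cov_ref = T_ref.mean(axis=0), np.cov(T_ref.T)

X_new = rng.normal(loc=0.5, size=(300, 8))            # online data with a mean shift
window = 100
for t in range(window, len(X_new)):
    T_win = pca.transform(X_new[t - window : t])      # moving-window scores
    kld = gaussian_kld(T_win.mean(axis=0), np.cov(T_win.T), mu_ref, cov_ref)
    # flag a fault when kld exceeds a limit estimated from fault-free data;
    # the paper applies the same idea on the residual subspace as well
```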


Processes, 2021, Vol 9 (1), pp. 166
Author(s): Majed Aljunaid, Yang Tao, Hongbo Shi

Partial least squares (PLS) and linear regression methods are widely utilized for quality-related fault detection in industrial processes. Standard PLS decomposes the process variables into principal and residual parts. However, the principal part still contains many components unrelated to quality, and retaining them can cause many false alarms. Moreover, although these components do not affect product quality, they have a great impact on process safety and carry information about other faults, so removing and discarding them reduces the detection rate of faults unrelated to quality. To overcome the drawbacks of standard PLS, a novel method, MI-PLS (mutual information PLS), is proposed in this paper. The MI-PLS algorithm uses mutual information to divide the process variables into selected and residual components, then applies singular value decomposition (SVD) to further decompose the selected part into quality-related and quality-unrelated components, from which quality-related monitoring statistics are constructed. To ensure that no information is lost and that MI-PLS can be used for both quality-related and quality-unrelated fault detection, a principal component analysis (PCA) model is built on the residual component to obtain its score matrix, which is combined with the quality-unrelated part to form the total quality-unrelated monitoring statistics. Finally, the proposed method is applied to a numerical example and the Tennessee Eastman process. MI-PLS has a lower computational load and more robust performance compared with T-PLS and PCR.
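
A minimal sketch of the variable-splitting step behind MI-PLS, using scikit-learn's mutual information estimator: variables are divided by their MI with the quality variable, PLS models the selected part, and PCA retains the residual part so no information is discarded. The MI threshold, component counts, and simulated data are assumptions; the paper's further SVD split of the PLS model is only indicated in a comment.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 12))                                   # process variables
y = X[:, :4] @ rng.normal(size=4) + 0.1 * rng.normal(size=500)   # quality variable

# 1. Split the variables by mutual information with the quality variable.
mi = mutual_info_regression(X, y)
selected = mi > np.median(mi)                 # assumed threshold
X_sel, X_res = X[:, selected], X[:, ~selected]

# 2. PLS on the selected part; the paper further splits this model by SVD
#    into quality-related and quality-unrelated directions.
pls = PLSRegression(n_components=3).fit(X_sel, y)

# 3. PCA on the residual part so no information is lost; its score matrix
#    feeds the total quality-unrelated monitoring statistics.
pca_res = PCA(n_components=2).fit(X_res)
scores_res = pca_res.transform(X_res)
```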


2014, Vol 2014, pp. 1-9
Author(s): Fang Wu, Shen Yin, Hamid Reza Karimi

For complex industrial processes, it has become increasingly challenging to effectively diagnose complicated faults. In this paper, a combined scheme based on the original Support Vector Machine (SVM) and Principal Component Analysis (PCA) is presented to carry out fault classification, and its result is compared with that of the SVM-RFE (Recursive Feature Elimination) method. RFE is used for feature selection, and PCA is utilized to project the original data onto a lower-dimensional space. The PCA T2 and SPE statistics, together with the original SVM, are used to detect the faults. Some common faults of the Tennessee Eastman Process (TEP) are analyzed in terms of the physical system and their reflections in the dataset. PCA-SVM and SVM-RFE can effectively detect and diagnose these common faults. In the RFE algorithm, all variables are ranked in decreasing order of their contributions, and the classification accuracy is improved by choosing a reasonable number of features.
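
A minimal sketch of this kind of pipeline: PCA provides the T2 and SPE detection statistics, and an SVM trained on the PCA scores performs the fault classification. The simulated data, component count, and 99% limits are assumptions, and the RFE branch is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X_normal = rng.normal(size=(500, 20))             # fault-free training data
X_faulty = rng.normal(loc=1.0, size=(200, 20))    # samples from two faults
y_faulty = np.repeat([1, 2], 100)                 # fault class labels

pca = PCA(n_components=6).fit(X_normal)

def t2_spe(X):
    scores = pca.transform(X)
    t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)
    spe = np.sum((X - pca.inverse_transform(scores))**2, axis=1)
    return t2, spe

t2_lim, spe_lim = (np.percentile(s, 99) for s in t2_spe(X_normal))

# Detection: alarm when either statistic crosses its empirical limit.
t2, spe = t2_spe(X_faulty)
detected = (t2 > t2_lim) | (spe > spe_lim)

# Classification: an SVM on the PCA scores of the detected samples.
clf = SVC(kernel="rbf").fit(pca.transform(X_faulty), y_faulty)
```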


2021
Author(s): Merim Dzaferagic, Nicola Marchetti, Irene Macaluso

This paper addresses the issue of reliability in the Industrial Internet of Things (IIoT) in the case of missing sensor measurements due to network or hardware problems. We propose to support the fault detection and classification modules, the two critical components of a monitoring system for IIoT, with a generative model. The latter is responsible for imputing missing sensor measurements so that the monitoring system's performance is robust to missing data. In particular, we adopt Generative Adversarial Networks (GANs) to generate missing sensor measurements, and we propose to fine-tune the training of the GAN based on the impact that the generated data have on the fault detection and classification modules. We conduct a thorough evaluation of the proposed approach using the extended Tennessee Eastman Process dataset. Results show that the GAN-imputed data mitigate the impact on fault detection and classification even in the case of persistently missing measurements from sensors that are critical for the correct functioning of the monitoring system.
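
A minimal PyTorch sketch of the fine-tuning idea: the generator imputes the missing entries, and its loss combines the usual adversarial term with the loss the imputed data induce on the fault classifier. The architectures, the mask convention, and the weighting factor `lam` are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

n_sensors, n_faults = 20, 5
G = nn.Sequential(nn.Linear(2 * n_sensors, 64), nn.ReLU(), nn.Linear(64, n_sensors))
D = nn.Sequential(nn.Linear(n_sensors, 64), nn.ReLU(), nn.Linear(64, 1))
clf = nn.Sequential(nn.Linear(n_sensors, 64), nn.ReLU(), nn.Linear(64, n_faults))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
lam = 0.5   # weight of the downstream-classification term (assumed)

def generator_step(x, mask, labels):
    """One fine-tuning step: adversarial loss plus the loss that the imputed
    data induce on the fault classifier (D and clf are not updated here)."""
    x_hat = G(torch.cat([x * mask, mask], dim=1))   # impute from the observed part
    x_imp = x * mask + x_hat * (1 - mask)           # keep observed values as-is
    adv = bce(D(x_imp), torch.ones(len(x), 1))      # try to fool the discriminator
    task = ce(clf(x_imp), labels)                   # keep the classifier accurate
    loss = adv + lam * task
    opt_g.zero_grad(); loss.backward(); opt_g.step()
    return loss.item()

x = torch.randn(32, n_sensors)                      # a batch of sensor readings
mask = (torch.rand(32, n_sensors) > 0.2).float()    # 1 = observed, 0 = missing
labels = torch.randint(0, n_faults, (32,))          # fault labels for the batch
generator_step(x, mask, labels)
```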


Water, 2018, Vol 10 (7), pp. 938
Author(s): Jesús Mercado, Pablo León, Soluna Salles, Dolores Cortés, Lidia Yebra, ...

In the Bay of Algeciras (BA), intensive urban and industrial activity is underway, which is potentially responsible for the release of significant quantities of nutrients. However, assessing the impact of these discharges is complex. Nutrient concentration in the surface layer is per se strongly variable due to the variability associated with the upwelling of nutrient-enriched deep Mediterranean water (MW), which in turn is regulated by atmospheric forcing. The aim of this study is to determine the effects of changes in upwelling intensity on the load of nitrate and phosphate in the BA and to appraise their impact on chlorophyll a variability. Based on this analysis, the possible influence of nutrients released from land-based sources is indirectly inferred. Data and samples collected during nine research cruises carried out in different periods of the seasonal cycle between 2010 and 2015 in the BA were analysed. The vertical variation of temperature and salinity indicates that MW upwelling was favoured in spring, as occurs in other coastal areas of the northern Alboran Sea. However, principal component analysis conducted on the physical and chemical data reveals that shifts in nutrients and chlorophyll a in the euphotic layer are poorly explained by changes in upwelling intensity. Furthermore, during some of these surveys (particularly in summer), chlorophyll a concentrations were higher in the BA than in a nearby coastal area also affected by MW upwelling. Scarce information about land-based pollution sources precludes a quantitative analysis of the impact of nutrient loads on water quality; however, the available data suggest that the main source of allochthonous inorganic nitrogen in the BA over the period 2010–2015 was nitrate. It is therefore reasonable to hypothesize that the high concentrations of nitrate and chlorophyll a in the BA in summer are a consequence of those discharges. Our study highlights the need for more exhaustive inventories of sewage and river discharges to adequately assess their impact on the BA.


2019, Vol 52 (5-6), pp. 387-398
Author(s): Lei Tan, Peng Li, Aimin Miao, Yong Chen

This study aims to solve the problem of the high false alarm rate experienced during detection when using traditional multivariate statistical process monitoring methods, whose models, moreover, cannot be updated according to the actual situation. This article proposes a novel adaptive neighborhood preserving embedding algorithm and an online fault-detection approach based on it. The approach combines the approximate linear dependence condition with neighborhood preserving embedding. With the newly proposed update strategy, the algorithm achieves an adaptively updated model that realizes online fault detection. The effectiveness and feasibility of the proposed approach are verified by experiments on the Tennessee Eastman process. Theoretical analysis and the application experiment demonstrate that the proposed fault-detection method based on adaptive neighborhood preserving embedding can effectively reduce the false alarm rate and improve fault-detection performance.
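
For reference, a plain (non-adaptive) neighborhood preserving embedding can be sketched as below: local reconstruction weights are computed as in LLE, and the linear projection comes from a generalized eigenproblem. The paper's adaptive update based on the approximate linear dependence condition is omitted; the neighbor count, regularization, and simulated data are assumptions.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import NearestNeighbors

def npe(X, k=10, d=2, reg=1e-3):
    """Neighborhood preserving embedding: LLE-style reconstruction weights,
    then a linear projection from a generalized eigenproblem."""
    n, m = X.shape
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)
    idx = nbrs.kneighbors(X, return_distance=False)[:, 1:]   # drop the point itself
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[idx[i]] - X[i]                    # neighbors in local coordinates
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(k)      # regularize the local Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        W[i, idx[i]] = w / w.sum()              # normalized reconstruction weights
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    # smallest eigenvectors of  X^T M X a = lambda X^T X a  span the embedding
    _, vecs = eigh(X.T @ M @ X, X.T @ X + reg * np.eye(m))
    return vecs[:, :d]                          # projection matrix (m x d)

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 10))
A = npe(X)            # monitoring statistics would be built on the scores X @ A
```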


2017, Vol 2017, pp. 1-8
Author(s): Lingling Ma, Xiangshun Li

The model-based fault detection technique, which needs to identify the system models, has been well established. The objective of this paper is to develop an alternative procedure that avoids identifying the system models. In this paper, subspace-method-aided data-driven fault detection based on principal component analysis (PCA) is proposed. The basic idea is to use PCA to identify the system observability matrices from input and output data and to construct residual generators. The advantage of the proposed method is that only the parameterized matrices related to the residuals need to be identified, rather than the full system models, which reduces the computational burden. The proposed approach is illustrated by a simulation study on the Tennessee Eastman process.
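
A minimal sketch of the data-driven parity-space idea: stack inputs and outputs over a time horizon, let PCA's minor (residual) directions on fault-free data play the role of the parity space, and generate residuals by projecting new stacked samples onto them. The horizon, the principal/residual cut, and the toy process are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def stack_io(u, y, s=5):
    """Stack s consecutive inputs and outputs into one row per time step."""
    n = len(u) - s + 1
    return np.array([np.concatenate([u[t : t + s].ravel(), y[t : t + s].ravel()])
                     for t in range(n)])

rng = np.random.default_rng(5)
u = rng.normal(size=(600, 2))                               # process inputs
y = 0.8 * u[:, :1] + rng.normal(scale=0.1, size=(600, 1))   # outputs (toy process)

Z = stack_io(u, y, s=5)                  # stacked I/O vectors from fault-free data
pca = PCA().fit(Z)
P_res = pca.components_[10:]             # minor directions ~ parity space (assumed cut)

def residual(z):
    """Residual generator: project a stacked I/O sample onto the parity space."""
    return P_res @ (z - pca.mean_)

spe = np.array([residual(z) @ residual(z) for z in Z])
limit = np.percentile(spe, 99)           # empirical control limit for new data
```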


2019, Vol 15 (11), pp. 155014771988549
Author(s): Xuanyue Wang, Xu Yang, Jian Huang, Xianzhong Chen

Large-scale process monitoring has become a challenging issue due to the integration of sub-systems or sub-processes, leading to numerous variables with complex relationships and potentially missing information in modern industrial processes. To address this, a distributed expectation maximization-principal component analysis scheme is proposed in this paper, where the process variables are first divided into several sub-blocks using a two-layer process decomposition method based on process knowledge and the generalized Dice's coefficient. The missing information in the variables is then estimated by the expectation maximization algorithm within the principal component analysis framework, and the expectation maximization-principal component analysis method is applied for fault detection in each sub-block. Finally, the process monitoring and fault detection results are fused by Bayesian inference. A case study on the Tennessee Eastman process shows the effectiveness and performance of the proposed approach.
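
A minimal sketch of the block-wise monitoring and Bayesian fusion step: each sub-block gets its own PCA model and T2 limit, the per-block statistics are converted to fault probabilities, and the probabilities are fused into a single index. The block assignment, the likelihood model, and the prior are simplified assumptions; the EM-based estimation of missing values is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 12))                      # fault-free training data
blocks = [slice(0, 4), slice(4, 8), slice(8, 12)]   # assumed sub-block decomposition
alpha = 0.01                                        # significance level / fault prior

models, limits = [], []
for b in blocks:
    pca = PCA(n_components=2).fit(X[:, b])
    t2 = np.sum(pca.transform(X[:, b])**2 / pca.explained_variance_, axis=1)
    models.append(pca)
    limits.append(np.percentile(t2, 99))            # per-block T2 limit

def fused_fault_probability(x):
    """Fuse per-block T2 statistics into one fault probability (Bayesian style)."""
    probs, weights = [], []
    for pca, lim, b in zip(models, limits, blocks):
        t2 = float(np.sum(pca.transform(x[None, b])**2 / pca.explained_variance_))
        p_normal = np.exp(-t2 / lim)                # likelihood under normal operation
        p_fault = np.exp(-lim / max(t2, 1e-12))     # likelihood under fault
        post = p_fault * alpha / (p_normal * (1 - alpha) + p_fault * alpha)
        probs.append(post)
        weights.append(p_fault)
    return np.average(probs, weights=weights)       # alarm when this exceeds alpha

print(fused_fault_probability(rng.normal(size=12)))
```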

