Data Segmentation Criteria Assessment for Fault Detection Techniques Based on Principal Component Analysis for Natural Gas Transmission System

Author(s):
Cinthia Audivet, Horacio Pinzón, Jesus García, Marlon Consuegra, Javier Alexander, ...

Statistical analytics, as a data-extraction and fault-detection strategy, may incorporate segmentation techniques to overcome its underlying limitations and drawbacks. Merging both techniques should provide a more robust monitoring structure that properly identifies normal and abnormal conditions, improves the extraction of fundamental correlations among variables, and better separates the main-variation subspace from the natural-variation (noise) subspace. This additional feature is key to limiting the false alarm rate and optimizing the fault detection time in industrial applications. This paper analyzes whether a segmentation step prior to detection enhances fault detection strategies, specifically the performance of principal component analysis. Two data segmentation criteria are assessed: a) the sources (wells) of the transmitted natural gas, and b) Promigas' natural gas pipeline divisions defined by the Energy and Gas Regulation Commission (CREG in Spanish). The segmentation criteria were assessed by evaluating the false alarm rate and detection time when the natural gas transmission network presents faults of different magnitudes. The results show that a segmentation criterion provides an advantage in detection time, but the gain depends on the fault magnitude and the number of clusters. Detection time improves by 25% in case scenario I, where transition zones are considered, but by less than 10% in case scenario II, where the segmentation is geographical.
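The segmented monitoring scheme described above can be sketched as fitting one PCA model per data segment and scoring new samples with Hotelling's T² statistic against the model of their own segment. The segment names, component count, and synthetic data below are illustrative assumptions, not details from the paper:

```python
# Sketch: per-segment PCA monitoring with Hotelling's T^2 statistic.
# Segment labels, component count, and data are illustrative assumptions.
import numpy as np

def fit_pca(X, n_components):
    """Fit PCA on normal-operation training data; return model parameters."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    Xs = (X - mu) / sigma
    cov = np.cov(Xs, rowvar=False)            # correlation matrix of scaled data
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]            # sort eigenpairs, largest first
    vals, vecs = vals[order], vecs[:, order]
    return {"mu": mu, "sigma": sigma,
            "P": vecs[:, :n_components], "lam": vals[:n_components]}

def t2_statistic(model, x):
    """Hotelling's T^2 of one sample under the fitted PCA model."""
    xs = (x - model["mu"]) / model["sigma"]
    t = model["P"].T @ xs                     # scores in the retained subspace
    return float(np.sum(t ** 2 / model["lam"]))

# One PCA model per data segment (e.g. per gas source) -- hypothetical names.
rng = np.random.default_rng(0)
segments = {"source_A": rng.normal(size=(200, 5)),
            "source_B": rng.normal(size=(200, 5))}
models = {k: fit_pca(X, n_components=2) for k, X in segments.items()}

# Score a new sample against its segment's model; an alarm would be raised
# when the statistic exceeds a control limit calibrated on normal data.
x_new = rng.normal(size=5)
score = t2_statistic(models["source_A"], x_new)
```

In practice each segment's control limit would be calibrated separately, which is precisely where the segmentation criterion influences false alarm rate and detection time.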

TAPPI Journal, 2014, Vol 13 (1), pp. 33-41
Author(s):
Yvon Tharrault, Mouloud Amazouz

Recovery boilers play a key role in chemical pulp mills. Early detection of defects such as water leaks in a recovery boiler is critical to preventing explosions, which can occur when water reaches the boiler's molten smelt bed. Early detection is difficult to achieve because of the complexity and multitude of recovery boiler operating parameters. Multiple faults can occur in multiple components of the boiler simultaneously, so an efficient and robust fault isolation method is needed. In this paper, we present a new fault detection and isolation scheme for multiple faults. The proposed approach is based on principal component analysis (PCA), a popular fault detection technique. For fault detection, the Mahalanobis distance is used together with an exponentially weighted moving average filter that reduces the false alarm rate; the filter tunes the sensitivity of the detection scheme against the false alarm rate. For fault isolation, the reconstruction-based contribution is used, and an iterative approach avoids the combinatorial explosion of faulty scenarios that multiple faults would otherwise entail. This new method was validated using real data from a pulp and paper mill in Canada. The results demonstrate that the proposed method can effectively detect sensor faults and water leakage.
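The detection statistic described above, a Mahalanobis distance smoothed by an exponentially weighted moving average (EWMA) filter, can be sketched as follows. The smoothing constant and the synthetic mean-shift fault are illustrative assumptions, not the paper's tuning:

```python
# Sketch: EWMA-filtered squared Mahalanobis distance for fault detection.
# The smoothing constant lam and the synthetic fault are assumptions.
import numpy as np

def ewma_mahalanobis(X_train, X_stream, lam=0.2):
    """EWMA-smoothed squared Mahalanobis distance for each streamed sample."""
    mu = X_train.mean(axis=0)
    Sinv = np.linalg.inv(np.cov(X_train, rowvar=False))
    filtered, z = [], 0.0
    for x in X_stream:
        d = x - mu
        d2 = float(d @ Sinv @ d)            # squared Mahalanobis distance
        z = lam * d2 + (1 - lam) * z        # EWMA filter damps isolated spikes,
        filtered.append(z)                  # lowering the false alarm rate
    return np.array(filtered)

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 4))          # normal-operation data
normal = rng.normal(size=(50, 4))
faulty = rng.normal(loc=4.0, size=(50, 4))   # mean shift mimics a fault
scores = ewma_mahalanobis(X_train, np.vstack([normal, faulty]))
```

A smaller `lam` filters more aggressively (fewer false alarms, slower detection); a larger `lam` reacts faster, which is exactly the sensitivity/false-alarm trade-off the filter is used to adjust.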


Author(s):
Horacio Pinzón, Cinthia Audivet, Melitsa Torres, Javier Alexander, Marco Sanjuán

Sustainability of natural gas transmission infrastructure is closely tied to the system's ability to decrease emissions caused by ruptures or leaks. Although such detection traditionally relies on alarm management systems and operator expertise, the system's large-scale, complex nature and the vast amount of information available make alarm generation better suited to a fault detection system based on data-driven techniques, giving operators and engineers a better framework for handling the online data being gathered. This paper presents an assessment of multiple fault-case scenarios in critical infrastructure using two data-driven fault detection algorithms: principal component analysis (PCA) and its dynamic variant (DPCA). Both strategies are assessed under fault scenarios relevant to natural gas transmission systems, including pipeline leakage due to structural failure and flow interruption due to emergency valve shutdown. The fault detection algorithms are evaluated on false alarm rate, detection time, and misdetection rate. The development of modern alarm management frameworks would contribute significantly to the safety, reliability, and sustainability of natural gas transmission systems.
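The difference between PCA and its dynamic variant lies in the input matrix: DPCA augments each observation with lagged copies of itself so the model also captures autocorrelation. A minimal sketch of that lagged-matrix construction, with an illustrative lag order of 1:

```python
# Sketch: lagged data matrix for DPCA. Row t of the output holds
# [x_t, x_{t-1}, ..., x_{t-lags}]; the lag order is an illustrative choice.
import numpy as np

def lagged_matrix(X, lags):
    """Stack X with `lags` past copies of itself, losing the first `lags` rows."""
    n, _ = X.shape
    rows = n - lags
    # Block k holds the samples shifted back by k time steps.
    return np.hstack([X[lags - k : lags - k + rows] for k in range(lags + 1)])

# Tiny example: 5 samples of 2 variables, lag order 1.
X = np.arange(10.0).reshape(5, 2)
Xl = lagged_matrix(X, lags=1)   # shape (4, 4): current and previous sample
```

Ordinary PCA applied to `Xl` is DPCA; everything downstream (scores, control limits, detection statistics) is unchanged, which is why the two methods can be compared on the same fault scenarios.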


2021, Vol 11 (14), pp. 6370
Author(s):
Elena Quatrini, Francesco Costantino, David Mba, Xiaochuan Li, Tat-Hean Gan

The water purification process is increasingly important for ensuring the continuity and quality of subsequent production processes, and it is particularly relevant in pharmaceutical contexts. In this context, however, the difficulties arising during monitoring are manifold. On the one hand, the process exhibits various discontinuities due to the differing characteristics of the input water. On the other hand, the monitoring itself is discontinuous and random, so it does not guarantee continuity of the parameters and hinders straightforward analysis. Consequently, further research on water purification processes is needed to identify the techniques best able to guarantee good performance. Against this background, this paper applies kernel principal component analysis to fault detection in a process with the above-mentioned characteristics. Given the temporal variability of the process, the paper suggests using past and future matrices as the input for fault detection, as an alternative to the original dataset; in this manner, the temporal correlation between process parameters and machine health is accounted for. The proposed approach confirms that very good monitoring results can be obtained in the analyzed context.
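Kernel PCA as used above can be sketched in a few lines of NumPy: compute an RBF kernel matrix on normal-operation data, double-center it, and project new samples onto its leading eigenvectors. The kernel width, component count, and data are illustrative assumptions, and for brevity the sketch takes a plain data matrix as input; the paper's past and future (time-lagged) matrices would simply replace it:

```python
# Sketch: RBF kernel PCA scores for process monitoring.
# gamma, n_components, and the data are illustrative assumptions.
import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_kpca(X, n_components, gamma=0.5):
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H                              # double-centered kernel matrix
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[order], vecs[:, order]
    return {"X": X, "K": K, "gamma": gamma,
            "alphas": vecs / np.sqrt(vals)}     # normalized dual coefficients

def kpca_scores(model, Xnew):
    X, K, gamma = model["X"], model["K"], model["gamma"]
    n = len(X)
    Kx = rbf_kernel(Xnew, X, gamma)
    In = np.ones((n, n)) / n
    Im = np.ones((len(Xnew), n)) / n
    Kxc = Kx - Im @ K - Kx @ In + Im @ K @ In   # center test kernel consistently
    return Kxc @ model["alphas"]

rng = np.random.default_rng(2)
X_train = rng.normal(size=(100, 3))             # normal-operation data
model = fit_kpca(X_train, n_components=3)
scores = kpca_scores(model, X_train)
```

Monitoring statistics such as T² are then computed on `scores` exactly as in linear PCA; only the feature extraction step changes.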


Author(s):
Hongjuan Yao, Xiaoqiang Zhao, Wei Li, Yongyong Hui

Batch processes generally have time-varying dynamic characteristics that cause low fault detection rates and high false alarm rates, which makes monitoring them necessary and urgent. This paper proposes a global enhanced multiple-neighborhoods preserving embedding based fault detection strategy for dynamic batch processes. Firstly, the angle neighbor is defined and selected to compensate for the insufficient expression of the spatial similarity of samples when only distance neighbors are used, and the time neighbor is introduced to describe the temporal correlations between samples. Together, these three types of neighbors fully characterize the similarity of the samples in time and space. Secondly, considering the minimum reconstruction error and the order information of the three types of neighbors, an enhanced objective function is constructed to prevent the loss of order information when neighborhood preserving embedding (NPE) calculates the reconstruction weights. The enhanced objective function is then combined with a global objective function to extract both global and local features, describing the process dynamics and visualizing the process data in a low-dimensional space. Finally, a monitoring index based on support vector data description is constructed to eliminate the adverse effects of non-Gaussian data on monitoring performance. The advantages of the proposed method over principal component analysis, NPE, dynamic principal component analysis, and time NPE are demonstrated on a numerical example and a simulation of the penicillin fermentation process.
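The three neighbor types described above can be illustrated with simple selection rules: Euclidean distance for distance neighbors, cosine similarity for angle neighbors, and proximity of sampling index for time neighbors. These rules are a plain reading of the neighbor definitions, not the paper's exact construction:

```python
# Sketch: the three neighbor types (distance, angle, time) via simple rules.
# Selection criteria are plausible readings of the definitions, not the
# paper's exact construction.
import numpy as np

def distance_neighbors(X, i, k):
    """k nearest samples to X[i] by Euclidean distance (excluding i)."""
    d = np.linalg.norm(X - X[i], axis=1)
    d[i] = np.inf
    return np.argsort(d)[:k]

def angle_neighbors(X, i, k):
    """k most similar samples to X[i] by cosine similarity (excluding i)."""
    norms = np.linalg.norm(X, axis=1)
    cos = (X @ X[i]) / (norms * norms[i] + 1e-12)
    cos[i] = -np.inf
    return np.argsort(cos)[::-1][:k]

def time_neighbors(n, i, k):
    """k samples closest to i in sampling order (either side of i)."""
    idx = np.argsort(np.abs(np.arange(n) - i), kind="stable")
    return idx[1 : k + 1]

# Why angle neighbors help: sample 3 points in the same direction as sample 0
# but is far away, so distance and angle criteria pick different neighbors.
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [10.0, 0.0]])
dn = distance_neighbors(X, 0, 1)   # nearest in distance: sample 1
an = angle_neighbors(X, 0, 1)      # nearest in direction: sample 3
```

The method then computes NPE-style reconstruction weights over the union of all three neighbor sets, preserving both spatial and temporal similarity in the embedding.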

