Intelligent Control and Learning Systems - Data-Driven Fault Detection and Reasoning for Industrial Monitoring
Latest Publications


TOTAL DOCUMENTS: 14 (five years: 14)

H-INDEX: 0 (five years: 0)

Published by Springer Singapore

ISBN: 9789811680434, 9789811680441

Author(s):  
Jing Wang ◽  
Jinglin Zhou ◽  
Xiaolu Chen

Abstract: The previous chapters described the mathematical principles and algorithms of multivariate statistical methods, as well as the monitoring procedures used for fault diagnosis. To validate the effectiveness of data-driven multivariate statistical analysis in the field of fault diagnosis, the corresponding fault monitoring experiments must be conducted. This chapter therefore introduces two simulation platforms, the Tennessee Eastman (TE) process simulation system and the fed-batch penicillin fermentation process simulation system, which are widely used as test platforms for process monitoring, fault classification, and fault identification in industrial processes. The related experiments based on PCA, CCA, PLS, and FDA are completed on the TE simulation platform.
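Experiments on such platforms follow a common pattern: fit a statistical model on normal-operation data, then classify or flag fault data. As a hedged illustration (synthetic data standing in for TE-style measurements, not the book's experiments), a minimal Fisher discriminant analysis (FDA) classifier separating normal samples from one fault class might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Normal class and fault class: same covariance, mean shift on two variables.
normal = rng.normal(size=(n, 4))
fault = rng.normal(size=(n, 4)) + np.array([3.0, 0.0, 1.5, 0.0])

m1, m2 = normal.mean(0), fault.mean(0)
Sw = np.cov(normal.T) + np.cov(fault.T)    # within-class scatter
w = np.linalg.solve(Sw, m1 - m2)           # FDA discriminant direction

# Classify by the midpoint between the projected class means;
# normal samples project higher because w points from m2 toward m1.
mid = (normal @ w).mean() / 2 + (fault @ w).mean() / 2
detection_rate = ((fault @ w) < mid).mean()
print(detection_rate > 0.9)
```

The same train-on-normal, test-on-fault protocol underlies the PCA-, CCA-, and PLS-based experiments as well; only the fitted model changes.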


Author(s):  
Jing Wang ◽  
Jinglin Zhou ◽  
Xiaolu Chen

Abstract: Industrial data variables exhibit high dimensionality and strong nonlinear correlation. Traditional multivariate statistical monitoring methods, such as PCA, PLS, CCA, and FDA, are suitable only for high-dimensional data with linear correlations. The kernel mapping method is the most common technique for handling nonlinearity: it projects the original data from the low-dimensional space into a high-dimensional space through an appropriate kernel function, so that the data become linearly separable in the new space. However, projecting from a low dimension to a high dimension contradicts the practical requirement of dimensionality reduction, so kernel-based methods inevitably increase the complexity of data processing.
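The idea behind kernel mapping can be illustrated without any kernel machinery: an explicit low-to-high-dimensional feature map already makes nonlinearly structured data linearly separable. A minimal sketch (the two-ring data and the map φ are illustrative, not from the book):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# Two concentric rings: no straight line in 2-D separates them.
theta = rng.uniform(0.0, 2.0 * np.pi, 2 * n)
radius = np.r_[np.ones(n), 3.0 * np.ones(n)]
X = np.c_[radius * np.cos(theta), radius * np.sin(theta)]
X += 0.05 * rng.normal(size=X.shape)

# Lift to 3-D with the explicit feature map φ(x1, x2) = (x1, x2, x1^2 + x2^2).
Z = np.c_[X, (X ** 2).sum(axis=1)]

# In the lifted space, the plane z3 = 4 separates the two rings.
inner, outer = Z[:n, 2], Z[n:, 2]
print(bool(inner.max() < 4.0 < outer.min()))
```

A quadratic kernel such as k(x, y) = (x · y + 1)² computes inner products of exactly such monomial features implicitly, without ever forming φ(x); the cost noted in the abstract is that the implicit feature space is much higher-dimensional than the original data.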


Author(s):  
Jing Wang ◽  
Jinglin Zhou ◽  
Xiaolu Chen

Abstract: Batch processes are more difficult to monitor than continuous processes because of their complex features, such as nonlinearity, non-stable operation, unequal production cycles, and the fact that many variables are measured only at the end of a batch. Traditional methods for batch processes, such as multiway FDA (Chen 2004) and multi-model FDA (He et al. 2005), cannot solve these issues well: they require complete batch data, which are available only at the end of a batch. Therefore, the complete batch trajectory must be estimated in real time, or alternatively only the values measured at the current moment are used for online diagnosis. Moreover, the above approaches do not consider the problem of inconsistent production cycles.
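The requirement for complete, equal-length batches comes from the unfolding step used by multiway methods: the batch × time × variable array must be flattened before two-way tools such as FDA or PCA can be applied. A minimal sketch with illustrative shapes:

```python
import numpy as np

rng = np.random.default_rng(2)
n_batch, n_time, n_var = 20, 100, 8
batches = rng.normal(size=(n_batch, n_time, n_var))   # equal-length batches

# Batch-wise unfolding: each batch becomes one row of length time * variable,
# so two-way tools (FDA, PCA, ...) can work on the unfolded matrix.
unfolded = batches.reshape(n_batch, n_time * n_var)
print(unfolded.shape)   # -> (20, 800)

# If batch lengths differ, this reshape is impossible without first
# estimating or aligning trajectories, which is exactly the difficulty
# with inconsistent production cycles noted above.
```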


Author(s):  
Jing Wang ◽  
Jinglin Zhou ◽  
Xiaolu Chen

Abstract: As mentioned in the previous chapter, industrial data are usually divided into two categories, process data and quality data, belonging to different measurement spaces. The vast majority of smart manufacturing problems, such as soft measurement, control, monitoring, and optimization, inevitably require modeling the relationships between these two kinds of measurement variables. This chapter's subject is discovering the correlation between variable sets in different observation spaces.
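A standard tool for this goal, correlating variable sets that live in different observation spaces, is canonical correlation analysis. A hedged sketch on synthetic process/quality data (dimensions, noise levels, and the shared latent driver are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
t = rng.normal(size=n)    # latent factor shared by both spaces

# Process variables (5-D) and quality variables (3-D) both depend on t.
X = np.stack([t + 0.5 * rng.normal(size=n) for _ in range(5)], axis=1)
Y = np.stack([2.0 * t + 0.5 * rng.normal(size=n) for _ in range(3)], axis=1)

Xc, Yc = X - X.mean(0), Y - Y.mean(0)
Sxx, Syy, Sxy = Xc.T @ Xc / n, Yc.T @ Yc / n, Xc.T @ Yc / n

def inv_sqrt(S):
    """Inverse matrix square root via eigendecomposition (S must be SPD)."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** -0.5) @ V.T

# First canonical correlation = top singular value of the whitened cross-cov.
rho = np.linalg.svd(inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy), compute_uv=False)[0]
print(rho > 0.9)   # close to 1: the two spaces share a strong latent factor
```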


Author(s):  
Jing Wang ◽  
Jinglin Zhou ◽  
Xiaolu Chen

Abstract: Quality variables are measured much less frequently, and usually with a significant time delay, compared with process variables. Monitoring process variables together with their associated quality variables is an essential undertaking, since undetected abnormalities can create hazards that may cause system shutdowns and thus huge economic losses. Partial least squares (PLS) has been used to extract the maximum correlation between quality variables and process variables (Kruger et al. 2001; Song et al. 2004; Li et al. 2010; Hu et al. 2013; Zhang et al. 2015).
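The cited PLS idea can be sketched in a few lines: the first pair of PLS weight vectors is given by the dominant singular vectors of the cross-covariance between X and Y, which maximizes the covariance of the projected scores. A minimal illustration on synthetic data (dimensions and the shared latent factor are illustrative, not from the cited studies):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
t = rng.normal(size=n)    # common latent driver

# Process data X (6-D) and quality data Y (2-D).
X = np.stack([t + 0.3 * rng.normal(size=n) for _ in range(6)], axis=1)
Y = np.stack([t + 0.3 * rng.normal(size=n) for _ in range(2)], axis=1)
Xc, Yc = X - X.mean(0), Y - Y.mean(0)

# First PLS weight pair: dominant singular vectors of the cross-covariance.
U, S, Vt = np.linalg.svd(Xc.T @ Yc / n)
w, c = U[:, 0], Vt[0]
score_x, score_y = Xc @ w, Yc @ c

corr = np.corrcoef(score_x, score_y)[0, 1]
print(corr > 0.9)   # the score pair tracks the shared latent factor
```

Strictly, this first component maximizes the covariance (not the correlation) of the score pair; subsequent components would be extracted after deflating X and Y.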


Author(s):  
Jing Wang ◽  
Jinglin Zhou ◽  
Xiaolu Chen

Abstract: Fault detection and diagnosis (FDD) is a scientific field that emerged in the middle of the twentieth century with the rapid development of science and data technology. It manifests itself as the accurate sensing of abnormalities in a manufacturing process, or the health monitoring of equipment, sites, or machinery at a specific operating site. FDD comprises abnormality monitoring, abnormal cause identification, and root cause location.


Author(s):  
Jing Wang ◽  
Jinglin Zhou ◽  
Xiaolu Chen

Abstract: In many actual nonlinear systems, especially near the equilibrium point, linearity is the primary feature and nonlinearity is the secondary one. For a system that deviates from the equilibrium point, the secondary nonlinearity or local structural feature can be regarded as a small uncertainty, just as nonlinearity can be used to represent the uncertainty of a system (Wang et al. 2019). This chapter therefore also focuses on handling nonlinearity in the PLS family of methods, but from a different view, i.e., robust PLS. Here the system nonlinearity is treated as uncertainty, and a new robust $$\mathrm{L}_1$$-PLS is proposed.


Author(s):  
Jing Wang ◽  
Jinglin Zhou ◽  
Xiaolu Chen

Abstract: This chapter proposes another nonlinear PLS method, named locality-preserving partial least squares (LPPLS), which embeds the nonlinear degenerative and structure-preserving properties of LPP into the PLS model. The core of LPPLS is to replace the role of PCA in PLS with LPP. When extracting the principal components $$\boldsymbol{t}_i$$ and $$\boldsymbol{u}_i$$, two conditions must be satisfied: (1) $$\boldsymbol{t}_i$$ and $$\boldsymbol{u}_i$$ retain the most information about the local nonlinear structure of their respective data sets; (2) the correlation between $$\boldsymbol{t}_i$$ and $$\boldsymbol{u}_i$$ is the largest. Finally, a quality-related monitoring strategy is established based on LPPLS.


Author(s):  
Jing Wang ◽  
Jinglin Zhou ◽  
Xiaolu Chen

Abstract: Traditional process monitoring methods first project the measured process data into the principal component subspace (PCS) and the residual subspace (RS), then calculate the $$\mathrm{T}^2$$ and $$\mathrm{SPE}$$ statistics to detect abnormality. However, these two statistics detect abnormality through the principal components of the process. Principal components have no specific physical meaning, and they do not directly help to identify the faulty variable and its root cause. Researchers have proposed many methods to identify the faulty variable accurately based on the projection space; the most popular is the contribution plot, which measures the contribution of each process variable to the principal components (Wang et al. 2017; Luo et al. 2017; Liu and Chen 2014). Moreover, to determine the control limits of the two statistics, their probability distributions must be estimated or assumed to follow specific distributions. Fault identification by statistics is not intuitive enough to directly reflect the role and trend of each variable when the process changes.
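The scheme described above can be sketched concretely: project a sample onto the PCS for $$\mathrm{T}^2$$, measure the residual in the RS for $$\mathrm{SPE}$$, and decompose the SPE by variable for a contribution plot. A hedged illustration on synthetic data (the PCA model, empirical limit, and injected fault are all illustrative, not the book's case study):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000

# Training data: two dominant latent directions plus small noise, 6 variables.
X = np.zeros((n, 6))
X[:, 0] = 3.0 * rng.normal(size=n)
X[:, 1] = 2.0 * rng.normal(size=n)
X += 0.1 * rng.normal(size=(n, 6))

mu = X.mean(0)
Xc = X - mu
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
P = Vt[:k].T                  # loadings spanning the PCS
lam = S[:k] ** 2 / (n - 1)    # score variances used in T^2

def monitor(x):
    """Return (T^2, SPE, per-variable SPE contributions) for one sample."""
    xc = x - mu
    scores = P.T @ xc
    t2 = float(np.sum(scores ** 2 / lam))   # distance within the PCS
    resid = xc - P @ scores                 # projection onto the RS
    return t2, float(resid @ resid), resid ** 2

# Empirical 99% SPE control limit from the training data.
spe_limit = np.quantile([monitor(x)[1] for x in X], 0.99)

# Fault: a step bias of 4 on variable 3; the contribution plot (here just
# the argmax of the contribution vector) points at the faulty variable.
x_fault = X[0].copy()
x_fault[3] += 4.0
t2, spe, contrib = monitor(x_fault)
print(spe > spe_limit, int(np.argmax(contrib)))   # -> True 3
```

This also shows the abstract's point about interpretability: the detection itself happens in the abstract PCS/RS coordinates, and only the per-variable contribution step maps the alarm back to a physical variable.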


Author(s):  
Jing Wang ◽  
Jinglin Zhou ◽  
Xiaolu Chen

Abstract: Owing to rising demands on process operation and product quality, modern industrial processes have become more complicated and produce a large number of process and quality variables. Quality-related fault detection and diagnosis are therefore extremely necessary for complex industrial processes. Data-driven statistical process monitoring plays an important role in this topic by extracting useful information from these highly correlated process and quality variables, because the quality variables are measured at a much lower frequency and usually with a significant time delay (Ding 2014; Aumi et al. 2013; Peng et al. 2015; Zhang et al. 2016; Yin et al. 2014). Monitoring the process variables related to the quality variables is significant for finding potential harm that may lead to system shutdowns with enormous economic losses.

