Data-Driven Adaptive Observer for Fault Diagnosis

2012 ◽  
Vol 2012 ◽  
pp. 1-21 ◽  
Author(s):  
Shen Yin ◽  
Xuebo Yang ◽  
Hamid Reza Karimi

This paper presents an approach for the data-driven design of a fault diagnosis system. The proposed fault diagnosis scheme consists of an adaptive residual generator and a bank of isolation observers, whose parameters are identified directly from process data without identifying a complete process model. To deal with normal variations in the process, the parameters of the residual generator are updated online by a standard adaptive technique to achieve reliable fault detection performance. Once a fault is successfully detected, the isolation scheme is activated, in which each isolation observer serves as an indicator of the occurrence of a particular type of fault in the process. The thresholds can be determined analytically or by estimating the probability density function of the related variables. The performance of the proposed fault diagnosis approach is finally illustrated on a laboratory-scale three-tank system. The results show that the proposed data-driven scheme is effective for applications whose analytical process models are unavailable. In particular, for large-scale plants, whose physical models are generally difficult to establish, the proposed approach may offer an effective alternative for process monitoring.
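The paper's identification and adaptation algorithms are not reproduced here; as a rough illustration of the idea (an assumption-laden sketch, not the authors' method), the following fits an ARX-style residual generator from process data, updates it online with recursive least squares to track normal variations, and raises an alarm when the residual exceeds a threshold estimated from fault-free data.

```python
import numpy as np

# Illustrative sketch only, not the authors' algorithm: an ARX-style
# residual generator identified from data and updated online with
# recursive least squares (RLS). Model orders, the forgetting factor,
# and the 4-sigma threshold are assumptions for the example.

def regressor(y, u, k, na=2, nb=2):
    """Stack the most recent outputs and inputs into a regression vector."""
    return np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])

def detect_faults(y, u, na=2, nb=2, lam=0.99, n_train=200):
    theta = np.zeros(na + nb)       # parameter estimate
    P = np.eye(na + nb) * 1e3       # estimate covariance
    residuals, alarms = [], []
    for k in range(max(na, nb), len(y)):
        phi = regressor(y, u, k, na, nb)
        r = y[k] - phi @ theta      # residual: measured minus predicted
        residuals.append(r)
        # RLS update with forgetting factor lam tracks normal variations.
        K = P @ phi / (lam + phi @ P @ phi)
        theta = theta + K * r
        P = (P - np.outer(K, phi @ P)) / lam
        # Threshold from fault-free residual spread; then flag large residuals.
        if len(residuals) > n_train:
            thr = 4.0 * np.std(residuals[:n_train])
            alarms.append(abs(r) > thr)
    return np.array(residuals), np.array(alarms)
```

In the paper the threshold can instead be derived analytically or from an estimated probability density of the residual; the empirical 4-sigma rule above is only a placeholder.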

Author(s):  
Mouhib Alnoukari ◽  
Asim El Sheikh

The Knowledge Discovery (KD) process model was first discussed in 1989, and different models have been suggested since, starting with Fayyad et al.'s (1996) process model. The common factor of all data-driven discovery processes is that knowledge is the final outcome. In this chapter, the authors analyze most of the KD process models suggested in the literature, with a detailed discussion of the KD process models that have innovative life cycle steps, and propose a categorization of the existing KD models. The chapter analyzes in depth the strengths and weaknesses of the leading KD process models, together with the commercial systems that support them, their reported applications, and a matrix of their characteristics.


2020 ◽  
Vol 10 (4) ◽  
pp. 1493 ◽  
Author(s):  
Kwanghoon Pio Kim

In this paper, we propose an integrated approach for seamlessly and effectively providing mining and analysis functionality for the redesign of very large-scale and massively parallel process models discovered from their enactment event logs. The integrated approach aims at analyzing not only their structural complexity and correctness but also their animation-based behavioral properness, and is concretized as a sophisticated analyzer. The core function of the analyzer is to discover a very large-scale and massively parallel process model from a process log dataset and to validate the structural complexity and the syntactical and behavioral properness of the discovered model. Finally, this paper gives a detailed description of the system architecture and its functional integration of process mining and process analysis. More precisely, we devise a series of functional algorithms for extracting the structural constructs and for visualizing the behavioral properness of the discovered very large-scale and massively parallel process models. As experimental validation, we apply the proposed approach and analyzer to a couple of process enactment event log datasets available on the website of the 4TU.Centre for Research Data.
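The analyzer itself is not available here; as a rough stand-in for the discovery-and-validation pipeline (a sketch assuming the open-source pm4py library, not the authors' tool), the following discovers a Petri-net model from an enactment event log, prints simple structural statistics, and replays the log on the model as a basic behavioral check. The file name is a placeholder for an XES log such as those hosted by the 4TU repository.

```python
import pm4py

# Illustrative stand-in, not the authors' analyzer: discover a Petri-net
# process model from an enactment event log with pm4py's inductive miner.
# "enactment_log.xes" is a placeholder file name.
log = pm4py.read_xes("enactment_log.xes")
net, im, fm = pm4py.discover_petri_net_inductive(log)

# Simple structural statistics, in the spirit of the paper's
# structural-complexity analysis.
print(f"places: {len(net.places)}, transitions: {len(net.transitions)}, "
      f"arcs: {len(net.arcs)}")

# Basic behavioral check: replay the log on the discovered model.
fitness = pm4py.fitness_token_based_replay(log, net, im, fm)
print(fitness)
```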


Author(s):  
Hongyi Xu ◽  
Zhen Jiang ◽  
Daniel W. Apley ◽  
Wei Chen

Data-driven random process models have become increasingly important for uncertainty quantification (UQ) in science and engineering applications, due to their merit of capturing both the marginal distributions and the correlations of high-dimensional responses. However, the choice of a random process model is neither unique nor straightforward. To quantitatively validate the accuracy of random process UQ models, new metrics are needed to measure their capability in capturing the statistical information of high-dimensional data collected from simulations or experimental tests. In this work, two goodness-of-fit (GOF) metrics, namely, a statistical moment-based metric (SMM) and an M-margin U-pooling metric (MUPM), are proposed for comparing different stochastic models, taking into account their capabilities of capturing the marginal distributions and the correlations in spatial/temporal domains. This work demonstrates the effectiveness of the two proposed metrics by comparing the accuracies of four random process models (Gaussian process (GP), Gaussian copula, Hermite polynomial chaos expansion (PCE), and Karhunen–Loève (K–L) expansion) in multiple numerical examples and an engineering example of stochastic analysis of microstructural materials properties. In addition to the new metrics, this paper provides insights into the pros and cons of various data-driven random process models in UQ.
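The SMM and MUPM definitions are given in the paper itself; purely as a loose illustration of the moment-based idea (an assumption-laden sketch, not the paper's metrics), the following compares pointwise marginal moments and the correlation structure of observed random-field samples against realizations from a fitted model.

```python
import numpy as np
from scipy import stats

# Loose illustration of a moment-based goodness-of-fit comparison (not the
# paper's SMM or MUPM): compare pointwise marginal moments and correlation
# matrices of data against samples from a candidate random process model.

rng = np.random.default_rng(0)

def moment_discrepancy(data, model_samples):
    """Both arguments have shape (n_samples, n_points)."""
    out = {}
    for name, fn in [("mean", np.mean), ("std", np.std),
                     ("skew", stats.skew), ("kurtosis", stats.kurtosis)]:
        out[name] = np.sqrt(np.mean(
            (fn(data, axis=0) - fn(model_samples, axis=0)) ** 2))
    # Correlation structure: RMS difference of the correlation matrices.
    out["correlation"] = np.sqrt(np.mean(
        (np.corrcoef(data.T) - np.corrcoef(model_samples.T)) ** 2))
    return out

# Toy example: skewed "data" versus a Gaussian surrogate, which matches
# mean and covariance but not the higher marginal moments.
x = np.linspace(0, 1, 50)
cov = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.02)
data = np.exp(0.5 * rng.multivariate_normal(np.zeros(50), cov, size=500))
model = rng.multivariate_normal(data.mean(axis=0), np.cov(data.T), size=500)
print(moment_discrepancy(data, model))
```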


Author(s):  
Kwanghoon Kim

Process (or business process) management systems support the definition, execution, monitoring, and management of process models deployed in process-aware enterprises. Accordingly, such systems comprise three subsystems: a modeling subsystem, an enacting subsystem, and a mining subsystem. In recent times, the mining subsystem has become essential. Many enterprises have successfully completed the introduction and application of process automation technology through the modeling and enacting subsystems. As deployed process models now enter the phase of redesign and reengineering, it becomes important for the mining subsystem to cooperate with the analyzing subsystem; the essential cooperation capability is to provide seamless integration between the design work in the modeling subsystem and the redesign work in the mining subsystem. In other words, we need to seamlessly integrate the discovery functionality of the mining subsystem with the analysis functionality of the modeling subsystem. This integrated approach is particularly suitable when the deployed process models discovered by the mining subsystem are complex and very large-scale. In this paper, we propose an integrated approach for seamlessly and effectively providing mining and analysis functionality for the redesign of very large-scale and massively parallel process models discovered from their enactment event logs. The integrated approach aims at analyzing not only their structural complexity and correctness but also their animation-based behavioral properness, and is concretized as a sophisticated analyzer. The core function of the analyzer is to discover a very large-scale and massively parallel process model from a process log dataset and to validate the structural complexity and the syntactical and behavioral properness of the discovered model. Finally, this paper gives a detailed description of the system architecture and its functional integration of process mining and process analysis. More precisely, we devise a series of functional algorithms for extracting the structural constructs and for visualizing the behavioral properness of the discovered very large-scale and massively parallel process models. As experimental validation, we apply the proposed approach and analyzer to a couple of process enactment event log datasets available on the website of the 4TU.Centre for Research Data.
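As a rough counterpart to the structural-construct extraction step (again a sketch assuming pm4py, not the authors' analyzer), the following discovers a process tree from a log and tallies its control-flow operators; the count of parallel (AND) blocks is a simple proxy for how massively parallel the discovered model is. The file name is a placeholder.

```python
import pm4py
from pm4py.objects.process_tree.obj import Operator

# Sketch of structural-construct extraction (not the authors' analyzer):
# discover a process tree and count its control-flow operator blocks.
# "enactment_log.xes" is a placeholder; API names follow recent pm4py.

def count_operators(tree, counts=None):
    """Recursively tally the control-flow operators in a process tree."""
    if counts is None:
        counts = {}
    if tree.operator is not None:
        counts[tree.operator] = counts.get(tree.operator, 0) + 1
    for child in tree.children:
        count_operators(child, counts)
    return counts

log = pm4py.read_xes("enactment_log.xes")
tree = pm4py.discover_process_tree_inductive(log)
counts = count_operators(tree)
print(f"AND-blocks: {counts.get(Operator.PARALLEL, 0)}, "
      f"XOR-blocks: {counts.get(Operator.XOR, 0)}, "
      f"sequence-blocks: {counts.get(Operator.SEQUENCE, 0)}")
```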


Author(s):  
Nicole Zero ◽  
Joshua D. Summers

Current research and literature lack discussion of how production automation is introduced to existing lines from a change management perspective. This paper presents a case study conducted to understand the change management process for a large-scale automation implementation in a manufacturing environment producing highly complex products. Through a series of fifteen semi-structured interviews with eight engineers from three functional backgrounds, a process model was created to understand how the company of study introduced a new automation system into its existing production line, while also noting obstacles identified along the way. This process model illustrates the duration, sequencing, teaming, and complexity of the project. The model is compared to other change process models found in the literature to understand the critical elements of change management. The process revealed in the case study appeared to contain elements of a design process, in contrast to the traditional change management processes found in the literature. Finally, a collaborative resistance model is applied to the process model to identify and estimate the resistance for each task in the process. Based on the objective analysis of the collaborative situations, the areas of highest resistance are identified. Comparing the resistance model to the interview data shows that the resistance model does identify the challenges found in the interviews. This means the resistance model has the potential to identify obstacles within the process and opens the opportunity to mitigate those challenges before they are encountered.
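The paper's collaborative resistance model is not specified here; purely as a hypothetical illustration of the idea of scoring resistance per task, the toy scorer below weights factors that plausibly drive collaborative resistance. The tasks, factors, and weights are all invented for the example.

```python
# Hypothetical illustration only; the paper's collaborative resistance
# model is not reproduced. Each task is scored on invented factors
# (teams involved, novelty, upstream dependencies) with invented weights.

TASKS = [
    # (task name, teams involved, novelty 0-1, upstream dependencies)
    ("requirements gathering", 3, 0.2, 0),
    ("automation cell design", 2, 0.8, 1),
    ("line integration", 4, 0.9, 2),
]

def resistance_score(teams, novelty, deps, w=(0.5, 0.3, 0.2)):
    """Weighted sum of normalized factors; higher means more resistance."""
    return (w[0] * min(teams / 5, 1.0) + w[1] * novelty
            + w[2] * min(deps / 3, 1.0))

for name, teams, novelty, deps in TASKS:
    print(f"{name}: {resistance_score(teams, novelty, deps):.2f}")
```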


2014 ◽  
Vol 556-562 ◽  
pp. 6089-6093
Author(s):  
Li Shao ◽  
Shu Sheng Zhang ◽  
Chao Zhou ◽  
Xiao Liang Bai

To reduce the workload of marking the dimensions of in-process models, a method of dimension marking based on manufacturing features is proposed for in-process models. First, the dimension information is extracted from model-based definition (MBD) models; second, when the process data and the design data do not coincide, the feature dimension chains are constructed from the feature location dimensions and the process data; third, the process dimension tolerances are calculated using the increasing and decreasing links of the dimension chains. In the experimental section, an example is given to illustrate the validity and effectiveness of the proposed method.
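The chain construction itself depends on the MBD feature data, but the final step follows standard worst-case dimension-chain arithmetic. The sketch below illustrates that step with invented numbers: the closing link's nominal value is the sum of the increasing links minus the sum of the decreasing links, and its deviations stack accordingly.

```python
# Worked sketch of worst-case dimension-chain arithmetic, illustrating the
# increasing/decreasing-link step; the link values are invented.

def closing_link(increasing, decreasing):
    """Each link is (nominal, upper_deviation, lower_deviation) in mm."""
    nominal = sum(n for n, _, _ in increasing) - sum(n for n, _, _ in decreasing)
    upper = sum(u for _, u, _ in increasing) - sum(l for _, _, l in decreasing)
    lower = sum(l for _, _, l in increasing) - sum(u for _, u, _ in decreasing)
    return nominal, upper, lower

inc = [(40.0, +0.05, -0.05), (25.0, +0.02, 0.0)]   # increasing links
dec = [(30.0, +0.03, -0.01)]                        # decreasing link
n, es, ei = closing_link(inc, dec)
print(f"closing link: {n:.2f} mm, +{es:.2f}/{ei:.2f}")
# -> closing link: 35.00 mm, +0.08/-0.08
```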


2020 ◽  
Vol 26 (6) ◽  
pp. 1599-1617 ◽  
Author(s):  
Michael Leyer ◽  
Deniz Iren ◽  
Banu Aysolmaz

Purpose: Identifying handovers is an important but difficult-to-achieve goal for companies, as handovers have advantages, allowing for specialisation in processes, as well as disadvantages, creating error-prone interfaces.
Design/methodology/approach: Conceptualisation of a method based on theory and evaluation with company data using a process model repository.
Findings: The method allows handovers to be evaluated from the perspective of roles in processes and the grouping of employees into organisational units. It uses existing process model repositories, connected with organisational chart information in companies, to determine the density of handovers. The method is successfully evaluated using the example of a major telecommunications company with 1,010 process models in its repository.
Practical implications: Companies can determine, at various levels up to the overall organisational level, in which parts of the company effort is best spent to manage handovers optimally.
Originality/value: This paper is the first to show how handovers can be conceptualised and identified with a large-scale method.
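The paper's method operates on full process model repositories; as a hypothetical miniature (all role names, mappings, and processes below are invented), the sketch counts cross-unit handovers along each process's role sequence and reports a simple handover density.

```python
from collections import Counter
from itertools import pairwise  # Python 3.10+

# Hypothetical miniature of handover counting (not the paper's method):
# each process is the ordered sequence of roles performing its activities,
# and a role -> organisational unit mapping comes from the org chart.

ROLE_TO_UNIT = {"sales rep": "Sales", "planner": "Operations",
                "technician": "Operations", "billing clerk": "Finance"}

PROCESSES = [
    ["sales rep", "planner", "technician", "billing clerk"],
    ["sales rep", "billing clerk"],
]

handovers = Counter()
for roles in PROCESSES:
    for a, b in pairwise(roles):
        ua, ub = ROLE_TO_UNIT[a], ROLE_TO_UNIT[b]
        if ua != ub:                      # count only cross-unit handovers
            handovers[(ua, ub)] += 1

total_steps = sum(len(p) - 1 for p in PROCESSES)
density = sum(handovers.values()) / total_steps
print(dict(handovers), f"handover density: {density:.2f}")
```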


2020 ◽  
Vol 4 (Supplement_1) ◽  
pp. 263-263
Author(s):  
Konstantin Arbeev ◽  
Olivia Bagley ◽  
Arseniy Yashkin ◽  
Hongzhe Duan ◽  
Igor Akushevich ◽  
...  

Large-scale population-based data that collect repeated measures of biomarkers, follow-up data on events (incidence of diseases and mortality), and extensive genetic data provide excellent opportunities for applying statistical models to the joint analysis of longitudinal biomarker dynamics and time-to-event outcomes. Such models allow investigating the dynamics of biomarkers and other relevant factors (including genetic ones) in relation to the risks of disease and death, and how these dynamics may propagate into the future. Here we applied one such model, the stochastic process model (SPM), to data on longitudinal trajectories of different variables (comorbidity index, body mass index, cognitive scores), other relevant covariates (including genetic factors such as APOE polymorphisms and polygenic scores, PGS), and the onset of Alzheimer's disease (AD) in the Health and Retirement Study. We observed that different aging-related characteristics estimated in the SPM from the trajectories of the respective variables are strongly associated with the risk of AD onset, and we found that these associations differ by sex, APOE status (carriers vs. non-carriers of APOE e4), and PGS group. The approach allows modeling and estimating time trends (e.g., by birth cohort) in the relevant dynamic characteristics in relation to disease onset. These results provide building blocks for constructing models that forecast future trends and the burden of AD while taking into account the dynamic relationships between individual trajectories of relevant repeatedly measured characteristics and the risk of the disease. Such models also provide an analytic framework for understanding AD in the context of aging and for finding genetic underpinnings of the links between AD and aging.
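The study's estimated model is not reproduced here; as a stylized one-dimensional sketch of the quadratic-hazard stochastic process model used in this line of work, the simulation below lets a biomarker revert toward an age trajectory while the onset hazard grows with deviation from a physiological optimum. All functional forms and parameter values are illustrative assumptions, not estimates from the study.

```python
import numpy as np

# Stylized 1-D sketch of a quadratic-hazard stochastic process model (SPM):
# a biomarker Y(t) reverts toward an age trajectory f1(t), and the hazard
# is a baseline plus a quadratic penalty for deviating from an optimum f(t).
# All functions and parameters below are illustrative assumptions.

rng = np.random.default_rng(1)
dt, ages = 0.1, np.arange(50, 100, 0.1)

def f1(t):  return 25 + 0.05 * (t - 50)            # mean BMI-like trajectory
def f(t):   return 24 + 0.02 * (t - 50)            # hypothetical optimum
def mu0(t): return 1e-4 * np.exp(0.08 * (t - 50))  # baseline hazard
a, b, Q = -0.15, 0.5, 2e-4    # feedback, diffusion, hazard curvature

def simulate_onset():
    """Simulate one trajectory; return (onset_age, path), None if censored."""
    y, path = f1(ages[0]) + rng.normal(0, 1), []
    for t in ages:
        path.append(y)
        hazard = mu0(t) + Q * (y - f(t)) ** 2
        if rng.random() < hazard * dt:             # onset in this interval
            return t, path
        y += a * (y - f1(t)) * dt + b * np.sqrt(dt) * rng.normal()
    return None, path                              # censored at age 100

onsets = [simulate_onset()[0] for _ in range(2000)]
observed = [t for t in onsets if t is not None]
print(f"onset observed for {len(observed)} of 2000; "
      f"mean onset age {np.mean(observed):.1f}")
```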


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5009
Author(s):  
Farzin Piltan ◽  
Jong-Myon Kim

In this research, the aim is to investigate an adaptive digital twin algorithm for fault diagnosis and crack size identification in bearings. The main contribution is the design of an adaptive digital twin (ADT), which rests on two principles: modeling of the normal signal and estimation of signals. A combination of mathematical and data-driven techniques is used to model the normal vibration signal. In the first step, the normal vibration signal is therefore modeled to increase the reliability of the modeling algorithm in the ADT. Then, to cope with complexity and uncertainty, the data-driven method compensates for the limitations of the mathematically based algorithm: Gaussian process regression is selected first, and its robustness and accuracy are then improved in two steps by a Laguerre filter and a fuzzy logic algorithm. After modeling the vibration signal, the second step is to design the data estimation for the ADT. These signals are estimated by an adaptive observer: a proportional-integral observer is combined with the proposed signal-modeling technique, and its robustness and reliability are then strengthened in two stages using a Lyapunov-based algorithm and an adaptive technique, respectively. After designing the ADT, the residual signals, i.e., the differences between the original and estimated signals, are obtained. The residual signals are then resampled, and root mean square (RMS) features are extracted from them. A support vector machine (SVM) is recommended for fault classification and crack size identification. The strength of the proposed technique is tested on the Case Western Reserve University Bearing Dataset (CWRUBD) under diverse torque loads, various motor speeds, and different crack sizes. In terms of fault diagnosis, the average detection accuracy of the proposed scheme is 95.75%. In terms of crack size identification for the roller, inner, and outer faults, the proposed scheme achieves average detection accuracies of 97.33%, 98.33%, and 98.33%, respectively.
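A minimal sketch of the final classification stage only, assuming the residual signals have already been produced by the ADT: window the residuals, extract RMS features, and classify fault type with an SVM. The window length, kernel, labels, and train/test split are placeholders, not the paper's exact settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Sketch of the RMS-plus-SVM stage (placeholder settings, not the paper's):
# residual signals are assumed equal-length 1-D arrays, one per recording,
# with an integer fault label per signal (e.g., 0 = normal, 1 = inner race,
# 2 = outer race, 3 = roller).

def rms_features(residual, window=1024):
    """RMS of consecutive non-overlapping windows of a residual signal."""
    n = len(residual) // window
    segments = residual[:n * window].reshape(n, window)
    return np.sqrt(np.mean(segments ** 2, axis=1))

def classify(residuals, labels):
    X = np.array([rms_features(r) for r in residuals])
    y = np.array(labels)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    clf.fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)   # classifier and test accuracy
```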


Energies ◽  
2020 ◽  
Vol 13 (21) ◽  
pp. 5619
Author(s):  
Waqar Muhammad Ashraf ◽  
Ghulam Moeen Uddin ◽  
Ahmad Hassan Kamal ◽  
Muhammad Haider Khan ◽  
Awais Ahmad Khan ◽  
...  

Modern data analytics techniques and computationally inexpensive software tools are fueling the commercial application of data-driven decision making and process optimization strategies for complex industrial operations. In this paper, modern and reliable process modeling techniques, i.e., multiple linear regression (MLR), artificial neural network (ANN), and least squares support vector machine (LSSVM), are employed and comprehensively compared as reliable and robust process models for the generator power of a 660 MWe supercritical coal combustion power plant. Based on an external validation test conducted with unseen operation data, LSSVM outperformed the MLR and ANN models in predicting the power plant's generator power. The LSSVM model is then used for failure-mode recovery and as a highly effective operation control tool. Moreover, by adjusting the thermo-electric operating parameters, the generator power is increased on average by 1.74%, 1.80%, and 1.00% at 50%, 75%, and 100% of the plant's generation capacity, respectively. Process modeling based on process data, and the building of a data-driven process optimization strategy for improved process control, is a concrete realization of Industry 4.0 in industrial applications.
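An illustrative sketch of the model comparison with an external validation split: scikit-learn has no LSSVM, so an RBF-kernel SVR stands in for it here, and the ANN is omitted for brevity; X (thermo-electric operating parameters) and y (generator power) are placeholders for the plant's operation data.

```python
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Sketch only: compare MLR against a kernel SVM regressor (an LSSVM
# stand-in) on unseen operation data, mirroring the paper's external
# validation test. Hyperparameters are placeholders.

def compare_models(X_train, y_train, X_unseen, y_unseen):
    models = {
        "MLR": LinearRegression(),
        "SVR (LSSVM stand-in)": make_pipeline(StandardScaler(),
                                              SVR(kernel="rbf", C=100.0)),
    }
    scores = {}
    for name, model in models.items():
        model.fit(X_train, y_train)
        # Score each model on the unseen operation data.
        scores[name] = r2_score(y_unseen, model.predict(X_unseen))
    return scores
```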

