Optimization of a 660 MWe Supercritical Power Plant Performance—A Case of Industry 4.0 in the Data-Driven Operational Management. Part 2. Power Generation

Energies ◽  
2020 ◽  
Vol 13 (21) ◽  
pp. 5619
Author(s):  
Waqar Muhammad Ashraf ◽  
Ghulam Moeen Uddin ◽  
Ahmad Hassan Kamal ◽  
Muhammad Haider Khan ◽  
Awais Ahmad Khan ◽  
...  

Modern data analytics techniques and computationally inexpensive software tools are fueling the commercial application of data-driven decision making and process optimization strategies for complex industrial operations. In this paper, modern and reliable process modeling techniques, i.e., multiple linear regression (MLR), artificial neural network (ANN), and least squares support vector machine (LSSVM), are employed and comprehensively compared as reliable and robust process models for the generator power of a 660 MWe supercritical coal-fired power plant. In an external validation test on unseen operating data, LSSVM outperformed the MLR and ANN models in predicting the plant's generator power. The LSSVM model is then used for failure-mode recovery and as a highly effective operation-control tool. Moreover, by adjusting the thermo-electric operating parameters, the generator power is increased on average by 1.74%, 1.80%, and 1.0% at 50%, 75%, and 100% generation capacity of the power plant, respectively. Process modeling based on operational data, together with data-driven optimization strategies for improved process control, is a concrete realization of Industry 4.0 in industrial applications.
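As a rough illustration of the model comparison described above, the sketch below fits MLR, an ANN, and an LSSVM-style model on held-out plant data with scikit-learn. This is not the authors' code: the file and column names are hypothetical, and KernelRidge with an RBF kernel stands in for LSSVM, since least squares SVM regression solves a closely related problem.

```python
# Not the authors' code: a rough comparison of MLR, ANN, and an LSSVM-style
# model. File/column names are hypothetical; KernelRidge (RBF) stands in
# for LSSVM.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import r2_score

df = pd.read_csv("unit_operation_data.csv")        # hypothetical data file
X = df.drop(columns=["generator_power_mw"])        # thermo-electric inputs
y = df["generator_power_mw"]

# Hold out unseen operating data for external validation.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "MLR": make_pipeline(StandardScaler(), LinearRegression()),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32, 32),
                                      max_iter=2000, random_state=0)),
    "LSSVM (KRR stand-in)": make_pipeline(StandardScaler(),
                                          KernelRidge(kernel="rbf", alpha=0.01)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: R^2 on unseen data = {r2_score(y_te, model.predict(X_te)):.3f}")
```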

Energies ◽  
2020 ◽  
Vol 13 (21) ◽  
pp. 5592
Author(s):  
Waqar Muhammad Ashraf ◽  
Ghulam Moeen Uddin ◽  
Syed Muhammad Arafat ◽  
Sher Afghan ◽  
Ahmad Hassan Kamal ◽  
...  

This paper presents a comprehensive step-wise methodology for implementing Industry 4.0 in a functional coal power plant. The overall efficiency of a 660 MWe supercritical coal-fired plant is studied using real operational data. Conventional and advanced AI-based techniques are used to present comprehensive data visualization. Monte Carlo experimentation on artificial neural network (ANN) and least squares support vector machine (LSSVM) process models and interval adjoint significance analysis (IASA) are performed to eliminate insignificant control variables. Effective and validated ANN and LSSVM process models are developed and comprehensively compared. The ANN process model proved significantly more effective, especially in terms of its capacity to be deployed as a robust and reliable AI model for industrial data analysis and decision making. A detailed investigation of efficient power generation is presented at 50%, 75%, and 100% power plant unit load. Savings in heat input values of up to 7.20%, 6.85%, and 8.60% are identified at 50%, 75%, and 100% unit load, respectively, without compromising the power plant's overall thermal efficiency.
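The Monte Carlo experimentation mentioned above can be pictured as repeated random train/test splits per model, with the resulting error distributions compared for robustness. The sketch below assumes scikit-learn, an illustrative split ratio, and an illustrative metric; the paper's exact protocol is not reproduced.

```python
# Monte Carlo model comparison: repeated random splits, one error per run.
# Split ratio, metric, and model settings are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_absolute_error

def monte_carlo_errors(make_model, X, y, n_runs=50):
    errors = []
    for seed in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, random_state=seed)
        model = make_model().fit(X_tr, y_tr)
        errors.append(mean_absolute_error(y_te, model.predict(X_te)))
    return np.array(errors)

# ann = monte_carlo_errors(lambda: MLPRegressor((32, 32), max_iter=2000), X, y)
# lssvm = monte_carlo_errors(lambda: KernelRidge(kernel="rbf"), X, y)
# Comparing mean and spread of `ann` vs `lssvm` indicates which model is
# more robust across resampled training sets.
```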


2014 ◽  
Vol 20 (6) ◽  
pp. 794-815 ◽  
Author(s):  
Xinwei Zhu ◽  
Jan Recker ◽  
Guobin Zhu ◽  
Flávia Maria Santoro

Purpose – Context-awareness has emerged as an important principle in the design of flexible business processes. The goal of the research is to extend context-aware business process modeling toward location-awareness. The purpose of this paper is to identify and conceptualize location-dependencies in process modeling. Design/methodology/approach – This paper uses a pattern-based approach to identify location-dependency in process models. The authors design specifications for these patterns, present illustrative examples, and evaluate the identified patterns through a literature review of published process cases. Findings – This paper introduces location-awareness as a new perspective to extend context-awareness in BPM research, introducing relevant location concepts such as location-awareness and location-dependencies. The authors identify five basic location-dependent control-flow patterns that can be captured in process models, and they identify these location-dependencies in several existing case studies of business processes. Research limitations/implications – The authors focus exclusively on the control-flow perspective of process models. Further work is needed to extend the research to location-dependencies in process data or resources, and further empirical work is needed to explore the determinants and consequences of modeling location-dependencies. Originality/value – As the existing literature mostly focuses on the broad context of business processes, location is still treated as a "second-class citizen" in process modeling, in theory and in practice. This paper discusses the vital role of location-dependencies within business processes. The proposed five basic location-dependent control-flow patterns are novel and useful for explaining location-dependency in business process models. They provide a conceptual basis for further exploration of location-awareness in the management of business processes.
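Purely for illustration, one location-dependent control-flow idea, a branch whose outcome depends on where the work item is located, could be encoded in an executable process as follows. The pattern, zone test, and task names are invented for this sketch and are not taken from the paper.

```python
# Hypothetical illustration of a location-dependent XOR split: the branch
# taken depends on where the work item is located. Names and the zone test
# are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Location:
    lat: float
    lon: float

def within_zone(loc: Location, center: Location, radius_km: float) -> bool:
    # Crude planar approximation; a real system would use geodesic distance.
    km_per_deg = 111.0
    d_lat = (loc.lat - center.lat) * km_per_deg
    d_lon = (loc.lon - center.lon) * km_per_deg
    return (d_lat ** 2 + d_lon ** 2) ** 0.5 <= radius_km

def route_inspection(task_loc: Location, depot: Location) -> str:
    # Location-dependent gateway: choose the next activity by location.
    if within_zone(task_loc, depot, radius_km=25.0):
        return "on_site_inspection"
    return "remote_inspection"
```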


2012 ◽  
Vol 2012 ◽  
pp. 1-21 ◽  
Author(s):  
Shen Yin ◽  
Xuebo Yang ◽  
Hamid Reza Karimi

This paper presents an approach for the data-driven design of a fault diagnosis system. The proposed fault diagnosis scheme consists of an adaptive residual generator and a bank of isolation observers, whose parameters are identified directly from the process data without identifying a complete process model. To deal with normal variations in the process, the parameters of the residual generator are updated online by a standard adaptive technique to achieve reliable fault detection performance. After a fault is successfully detected, the isolation scheme is activated, in which each isolation observer serves as an indicator of the occurrence of a particular type of fault in the process. The thresholds can be determined analytically or by estimating the probability density function of the related variables. To illustrate the performance of the proposed fault diagnosis approach, a laboratory-scale three-tank system is used. The results show that the proposed data-driven scheme is effective for applications whose analytical process models are unavailable. In particular, for large-scale plants, whose physical models are generally difficult to establish, the proposed approach may offer an effective alternative for process monitoring.
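The residual-generation idea can be sketched as follows: a model identified from fault-free data predicts an output, and the residual is compared against a threshold estimated from the normal-state residual distribution. The linear model, the 3-sigma threshold, and the omission of the online adaptive update are simplifying assumptions of this sketch.

```python
# Sketch of residual-based detection from normal operating data only.
# A linear model and a 3-sigma threshold are simplifying assumptions;
# the paper's online adaptive update of the residual generator is omitted.
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_residual_generator(X_norm, y_norm):
    """X_norm, y_norm: process inputs/output recorded under fault-free
    operation."""
    model = LinearRegression().fit(X_norm, y_norm)
    residuals = y_norm - model.predict(X_norm)
    threshold = 3.0 * residuals.std()      # analytically set 3-sigma threshold
    return model, threshold

def detect_fault(model, threshold, x_new, y_new):
    residual = y_new - model.predict(np.atleast_2d(x_new))[0]
    return abs(residual) > threshold       # True => raise a fault alarm
```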


1998 ◽  
Vol 120 (1) ◽  
pp. 109-119 ◽  
Author(s):  
T. Warren Liao ◽  
L. J. Chen

It has been shown that a manufacturing process can be modeled (learned) using a Multi-Layer Perceptron (MLP) neural network and then optimized directly using the learned network. This paper extends that work by examining several MLP training algorithms for manufacturing process modeling and three methods for process optimization. The transformation method is used to convert a constrained objective function into an unconstrained one, which is then used as the error function in the process optimization stage. The simulation results indicate that: (i) the conjugate gradient algorithms with backtracking line search outperform the standard BP algorithm in convergence speed; (ii) the neural network approaches can yield more accurate process models than the regression method; (iii) the BP with simulated annealing method is the most reliable optimization method for generating the best optimal solution; and (iv) process optimization performed directly on the neural network is possible but cannot be totally automated, especially when the process concerned is a mixed-integer problem.
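The transformation method described above can be sketched as a penalty formulation: the constraint violation is added to the learned process response so that a standard unconstrained (or bound-constrained) solver can search over the network inputs. The penalty weight and the interface names below are illustrative assumptions, not the paper's implementation.

```python
# Penalty-based transformation: constraint violation is folded into the
# objective evaluated on the trained network. `net`, `g`, and the penalty
# weight are illustrative.
import numpy as np
from scipy.optimize import minimize

def make_penalized_objective(net, g, rho=1e3):
    """net.predict(x) is the learned process response to minimize;
    g(x) <= 0 is the process constraint."""
    def f(x):
        x2d = np.atleast_2d(x)
        violation = max(0.0, g(x))             # 0 when the constraint holds
        return float(net.predict(x2d)[0]) + rho * violation ** 2
    return f

# x_opt = minimize(make_penalized_objective(trained_mlp, g), x0,
#                  bounds=[(lo, hi)] * n_inputs).x
```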


Author(s):  
Artur M. Schweidtmann ◽  
Jana M. Weber ◽  
Christian Wende ◽  
Linus Netze ◽  
Alexander Mitsos

Abstract Data-driven models are becoming increasingly popular in engineering, on their own or in combination with mechanistic models. Commonly, the trained models are subsequently used in model-based optimization of the design and/or operation of processes. Thus, it is critical to ensure that data-driven models are not evaluated outside their validity domain during process optimization. We propose a method to learn this validity domain and encode it as constraints in process optimization. We first perform a topological data analysis using persistent homology to identify potential holes or separated clusters in the training data. In case clusters or holes are identified, we train a one-class classifier, i.e., a one-class support vector machine, on the training data domain and encode it as constraints in the subsequent process optimization. Otherwise, we construct the convex hull of the data and encode it as constraints. We finally perform deterministic global process optimization with the data-driven models subject to their respective validity constraints. To ensure computational tractability, we develop a reduced-space formulation for trained one-class support vector machines and show that our formulation outperforms common full-space formulations by a factor of over 3000, making it a viable tool for engineering applications. The method is ready to use and available open-source as part of our MeLOn toolbox (https://git.rwth-aachen.de/avt.svt/public/MeLOn).
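A minimal sketch of the one-class-classifier branch, not the MeLOn implementation: train a one-class SVM on the model's training inputs and pass its decision function to the optimizer as a validity constraint. A local scipy solver stands in for the paper's deterministic global optimizer, and the nu/gamma settings and variable names are illustrative.

```python
# One-class SVM validity constraint (not the MeLOn reduced-space code).
# Training data is a placeholder; a local scipy solver stands in for the
# paper's deterministic global optimizer.
import numpy as np
from sklearn.svm import OneClassSVM
from scipy.optimize import NonlinearConstraint, minimize

X_train = np.random.rand(200, 4)   # placeholder for the model's training inputs
oc_svm = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(X_train)

def validity(x):
    # >= 0 inside the learned training-data domain, < 0 outside it.
    return float(oc_svm.decision_function(np.atleast_2d(x))[0])

# With a trained data-driven model `surrogate`, the validity constraint can
# be attached to the optimization:
# res = minimize(lambda x: float(surrogate.predict(np.atleast_2d(x))[0]),
#                x0, bounds=bounds,
#                constraints=[NonlinearConstraint(validity, 0.0, np.inf)])
```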


Author(s):  
Susumu Naito ◽  
Yasunori Taguchi ◽  
Yuichi Kato ◽  
Kouta Nakata ◽  
Ryota Miyake ◽  
...  

Abstract In a large-scale plant such as a nuclear power plant, thousands of process values are measured to monitor the plant performance and the health of various systems. It is difficult for plant operators to constantly monitor all of the process values. We present a new data-driven method to monitor many process values and to enable early detection of anomaly signs, including unknown events, with few false detections. In order to accurately predict the process values in the normal state, we created a two-stage model composed of a time window autoencoder and a deviation autoencoder. The two-stage model handles a large number of process values, rapid changes of the process values such as an operation mode change, changes of the process values in both the steady and the transient states, and external disturbances such as exogenous noise, atmospheric temperature, etc. The time window autoencoder examines time correlations of the time series process values, while the deviation autoencoder treats correlations of variation due to external factors. We evaluated the ability to predict the rapid changes, the detection performance in the transient state, and the detection performance under noisy conditions with simulated process values of a nuclear power plant, a 1,100 MW boiling water reactor with 3,100 analog process values. The two-stage model clearly showed good anomaly detection performance with zero or few false detections. The two-stage model would be an effective solution for plant monitoring and early detection of anomaly signs.
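A toy version of the first stage, a time window autoencoder scored by reconstruction error, might look as follows. The window length, layer sizes, and training settings are assumptions, and the second-stage deviation autoencoder is not reproduced.

```python
# Toy time window autoencoder; window length and layer sizes are assumptions,
# and the second-stage deviation autoencoder is not reproduced.
import numpy as np
import tensorflow as tf

window, n_signals = 16, 3100                 # 3,100 analog process values
dim = window * n_signals

inputs = tf.keras.Input(shape=(dim,))        # flattened sliding window
h = tf.keras.layers.Dense(256, activation="relu")(inputs)
code = tf.keras.layers.Dense(64, activation="relu")(h)
h = tf.keras.layers.Dense(256, activation="relu")(code)
outputs = tf.keras.layers.Dense(dim)(h)
ae = tf.keras.Model(inputs, outputs)
ae.compile(optimizer="adam", loss="mse")

# X_windows: sliding windows built from normal-state process values.
# ae.fit(X_windows, X_windows, epochs=20, batch_size=64)
# Anomaly score on new windows = reconstruction error:
# score = np.mean((ae.predict(X_new) - X_new) ** 2, axis=1)
```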


2019 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Cristina Orsolin Klingenberg ◽  
Marco Antônio Viana Borges ◽  
José Antônio Valle Antunes Jr

Purpose The purpose of this paper is to identify current technologies related to Industry 4.0 and to develop a rationale to enhance the understanding of their functions within a data-driven paradigm. Design/methodology/approach A systematic literature review of 119 papers published in journals included in the Journal Citation Report (JCR) was conducted to identify Industry 4.0 technologies. A descriptive analysis characterizes the corpus, and a content analysis identifies the technologies. Findings The content analysis identified 111 technologies. These technologies perform four functions related to data: data generation and capture; data transmission; data conditioning, storage, and processing; and data application. The first three groups consist of enabling technologies, and the fourth group of value-creating technologies. Results show that Industry 4.0 publications focus on enabling technologies that transmit and process data. Value-creating technologies, which apply data in order to develop new solutions, are still rare in the literature. Research limitations/implications The proposed framework serves as a structure for analysing the focus of publications over time, and enables the classification of new technologies as the paradigm evolves. Practical implications Because the technical side of the new production paradigm is complex and represents an evolving field, managers benefit from a simplified and data-driven approach. The proposed framework suggests that Industry 4.0 should be approached by looking at how data can create value and at what role each technology plays in this task. Originality/value The study makes a direct link between Industry 4.0 technologies and the key resource of this revolution, i.e. data. It provides a rationale that not only establishes relationships between technologies and data, but also highlights their roles as enablers or creators of value. Beyond showing the current focus of Industry 4.0 publications, this paper proposes a framework that is useful for tracking the evolution of the paradigm.


2021 ◽  
Author(s):  
Mariano Nicolas Cruz-Bournazou ◽  
Harini Narayanan ◽  
Alessandro Fagnani ◽  
Alessandro Butte

Hybrid modeling, meaning the integration of data-driven and knowledge-based methods, is quickly gaining popularity in many research fields, including bioprocess engineering and development. Recently, the data-driven part of hybrid methods has been largely extended with machine learning algorithms (e.g., artificial neural networks, support vector regression), while the mechanistic part typically uses differential equations to describe the dynamics of the process based on its current state. In this work we present an alternative hybrid model formulation that merges the advantages of Gaussian process state space models and the numerical approximation of differential equation systems through full discretization. The use of Gaussian process models to describe complex bioprocesses in batch and fed-batch operation has been reported in several applications. Nevertheless, handling the dynamics of the states of the system, known to have a continuous time-dependent evolution governed by implicit dynamics, has proven to be a major challenge. Discretization of the process at the sampling steps is a source of several complications: (1) multi-rate data sets cannot be handled, (2) the step size of the derivative approximation is fixed by the sampling frequency, and (3) the approach is highly sensitive to sampling and addition errors. We present a coupling of polynomial regression with Gaussian process models as a representation of the right-hand side of the ordinary differential equation system and demonstrate its advantages in a typical fed-batch cultivation for monoclonal antibody production.
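The core idea can be sketched as learning the right-hand side dx/dt = f(x) from finite differences of the measured states with a Gaussian process, then rolling the model forward by full discretization. In the sketch below, the explicit Euler scheme, the kernel choice, and the finite-difference targets are assumptions; the paper's polynomial-regression coupling is not reproduced.

```python
# Learn dx/dt = f(x) with a GP from finite differences, then integrate by
# explicit Euler (full discretization). Kernel and scheme are assumptions;
# the paper's polynomial-regression coupling is not reproduced.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_rhs_gp(t, X):
    """t: (n,) sample times; X: (n, d) measured states of the cultivation."""
    dXdt = np.gradient(X, t, axis=0)           # finite-difference targets
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
    return GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, dXdt)

def simulate(gp, x0, t_grid):
    xs = [np.asarray(x0, dtype=float)]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        dx = gp.predict(xs[-1].reshape(1, -1))[0]
        xs.append(xs[-1] + (t1 - t0) * dx)     # explicit Euler step
    return np.array(xs)
```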


Author(s):  
Sandip Kumar Lahiri ◽  
Nadeem Khalfe

This paper presents an artificial intelligence-based process modeling and optimization strategy, namely support vector regression–differential evolution (SVR-DE), for the modeling and optimization of an industrial catalytic ethylene oxide (EO) reactor. In the SVR-DE approach, a support vector regression model is constructed to correlate process data comprising values of operating and performance variables. Next, the model inputs describing the process operating variables are optimized using differential evolution (DE) with a view to maximizing process performance. DE possesses certain unique advantages over the commonly used gradient-based deterministic optimization algorithms. SVR-DE is a new strategy for chemical process modeling and optimization. Its major advantage is that modeling and optimization can be conducted exclusively from historic process data, so detailed knowledge of the process phenomenology (reaction mechanism, kinetics, etc.) is not required. Using the SVR-DE strategy, a number of sets of optimized operating conditions leading to maximized EO production and catalyst selectivity were obtained. The optimized solutions, when verified in the actual plant, resulted in a significant improvement in the EO production rate and catalyst selectivity.
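A minimal sketch of the SVR-DE loop under stated assumptions: fit an SVR model to historic process data, then let differential evolution search within the observed operating window for inputs that maximize the predicted performance. The placeholder data, bounds, and hyperparameters are illustrative, not the paper's settings.

```python
# SVR-DE sketch: SVR fit on historic data, DE search within the observed
# operating window. Data, bounds, and hyperparameters are illustrative.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from scipy.optimize import differential_evolution

# Placeholders for historic operating variables and the performance variable.
X_hist = np.random.rand(500, 5)
y_hist = X_hist @ np.array([1.0, -0.5, 0.3, 0.8, -0.2])

svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
svr.fit(X_hist, y_hist)

# Keep the search inside the range of the historic data.
bounds = list(zip(X_hist.min(axis=0), X_hist.max(axis=0)))

def neg_performance(x):
    return -float(svr.predict(x.reshape(1, -1))[0])        # DE minimizes

result = differential_evolution(neg_performance, bounds, seed=0)
print("optimized operating point:", result.x)
```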

