Hybrid Models for Efficient Control, Optimization, and Monitoring of Thermo-Chemical Processes and Plants

Processes ◽  
2021 ◽  
Vol 9 (3) ◽  
pp. 515
Author(s):  
Thomas Freudenmann ◽  
Hans-Joachim Gehrmann ◽  
Krasimir Aleksandrov ◽  
Mohanad El-Haji ◽  
Dieter Stapf

This paper describes a procedure and an IT product that combine numerical models, expert knowledge, and data-based models into artificial intelligence (AI)-based hybrid models for the integrated control, optimization, and monitoring of processes and plants. The working principle of the hybrid model is demonstrated by NOx reduction through guided oscillating combustion at the pulverized fuel boiler pilot incineration plant at the Institute for Technical Chemistry, Karlsruhe Institute of Technology. The presented example refers to coal firing, but the approach can readily be applied to any other nitrogen-containing solid fuel. The need to reduce operation and maintenance costs for biomass-fired plants is substantial, especially in the context of emission reductions and, in the case of Germany, the potential loss of funding under the Renewable Energy Law (Erneuerbare-Energien-Gesetz) for plants older than 20 years. Societal factors, such as the departure of experienced personnel, further increase the demand for data mining and the use of AI.
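As an illustration of the hybrid-modelling principle described above (a minimal sketch, not the authors' implementation; the functional forms, variable names, and data are all assumed for illustration), a physics-based baseline can be combined with a data-driven residual correction:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical first-principles model: predicted NOx emission as a function
# of operating conditions (air ratio, oscillation frequency). Illustrative only.
def physics_model(X):
    air_ratio, osc_freq = X[:, 0], X[:, 1]
    return 200.0 * air_ratio - 15.0 * np.log1p(osc_freq)

# Synthetic "measurements" at known operating points, standing in for plant data.
rng = np.random.default_rng(0)
X_train = rng.uniform([1.0, 0.0], [1.6, 10.0], size=(200, 2))
y_measured = physics_model(X_train) + 5.0 * np.sin(X_train[:, 1]) + rng.normal(0, 2, 200)

# The data-based component learns the residual the physics model cannot capture.
residual_model = RandomForestRegressor(n_estimators=100, random_state=0)
residual_model.fit(X_train, y_measured - physics_model(X_train))

def hybrid_predict(X):
    """Hybrid prediction = physics baseline + learned residual correction."""
    return physics_model(X) + residual_model.predict(X)

X_new = np.array([[1.2, 5.0]])
print(f"predicted NOx: {hybrid_predict(X_new)[0]:.1f} (illustrative units)")
```

The division of labour is the point of the design: the numerical model carries the known process behaviour, while the data-based part absorbs only what the equations leave unexplained.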

Author(s):  
Stanislaw Stanek ◽  
Maciej Gawinecki ◽  
Malgorzata Pankowska ◽  
Shahram Rahimi

The origins of the software agent concept are often traced back to the pioneers of artificial intelligence: John McCarthy, the creator of the LISP programming language, and Carl Hewitt, the father of distributed artificial intelligence (DAI). Kay (1984, p. 84) states that: …the idea of an agent originated with John McCarthy in the mid-1950s, and the term was coined by Oliver G. Selfridge a few years later, when they were both at the Massachusetts Institute of Technology. They had in view a system that, when given a goal, could carry out the details of the appropriate computer operations and could ask for and receive advice, offered in human terms, when it was stuck. An agent would be a ‘soft robot’ living and doing its business within the computer’s world. Nwana (1996, p. 205), on the other hand, claims that: …software agents have evolved from multi-agent systems (MAS), which in turn form one of three broad areas which fall under DAI, the other two being Distributed Problem Solving (DPS) and Parallel Artificial Intelligence (PAI). (…) The concept of an agent (…) can be traced back to the early days of research into DAI in the 1970s – indeed, to Carl Hewitt’s concurrent Actor model. In this model, Hewitt proposed the concept of a self-contained, interactive and concurrently-executing object which he termed ‘Actor’. This object had some encapsulated internal state and could respond to messages from other similar objects. The software agent concept meant, in the first place, replacing the idea of an expert, which was at the core of earlier support systems, with the metaphor of an assistant. Until the 1990s, decision support systems (DSS) were typically built around databases, models, expert systems, rules, simulators, and so forth. Although they could offer considerable support to the rational manager, whose decision-making style relied on quantitative terms, they had little to offer to managers who were guided by intuition. Software agents promised a new paradigm in which DSS designers would aim to augment the capabilities of individuals and organizations by deploying intelligent tools and autonomous assistants. The concept thus heralded a pivotal change in the way computer support is devised. For one thing, it called for a certain degree of intelligence on the part of the computerized tool; for another, it shifted emphasis from the delivery of expert advice toward providing support for the user’s creativity (King, 1993).
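Hewitt's Actor abstraction, as characterized in the quotation above, maps naturally onto a message-driven object with private state; the following minimal sketch (with hypothetical names, not drawn from the original sources) illustrates the idea:

```python
import queue
import threading

class Actor:
    """Minimal actor: encapsulated state plus a mailbox; reacts only to messages."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0  # encapsulated internal state, invisible from outside
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def send(self, message):
        self.mailbox.put(message)  # messages are the only interaction channel

    def _run(self):
        while True:
            message = self.mailbox.get()
            if message is None:  # poison pill terminates the actor
                break
            self.count += 1  # state changes only in response to messages
            print(f"handled {message!r} (message #{self.count})")

actor = Actor()
actor.send("hello")
actor.send("world")
actor.send(None)
actor._thread.join()
```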


Water ◽  
2020 ◽  
Vol 12 (11) ◽  
pp. 3221
Author(s):  
Lucie Dal Soglio ◽  
Charles Danquigny ◽  
Naomi Mazzilli ◽  
Christophe Emblanch ◽  
Gérard Massonnat

The main outlets of karst systems are springs, the hydrographs of which are largely affected by flow processes in the unsaturated zone. These processes differ between the epikarst and transmission zone on the one hand and the matrix and conduit on the other hand. However, numerical models rarely consider the unsaturated zone, let alone distinguish its subsystems. Likewise, few models represent conduits through a second medium, and even fewer do so explicitly with discrete features. This paper focuses on the value of hybrid models that take into account both unsaturated subsystems and discrete conduits to simulate the reservoir-scale response, especially the outlet hydrograph. In a synthetic karst aquifer model, we performed simulations for several parameter sets and showed the ability of hybrid models to simulate the overall response of complex karst aquifers. Varying the parameters affects the pathway distribution and transit times, which results in a large variety of hydrograph shapes. We propose a classification of hydrographs, together with selected characteristics, which proves useful for analysing the results. The relationships between model parameters and hydrograph characteristics are not all linear; some of them have local extrema or threshold limits. The numerous simulations help to assess the sensitivity of hydrograph characteristics to the different parameters and, conversely, to identify the key parameters which can be manipulated to enhance the modelling of field cases.
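As a conceptual aside (not the authors' model), the contrasting responses of matrix and conduit can be caricatured with two linear reservoirs whose outflows sum to the spring hydrograph; all parameter values and names below are illustrative:

```python
import numpy as np

def linear_reservoir(recharge, k, dt=1.0):
    """Discharge Q from a linear reservoir dS/dt = R - Q, with Q = S / k."""
    storage, discharge = 0.0, []
    for r in recharge:
        storage += r * dt
        q = storage / k
        storage -= q * dt
        discharge.append(q)
    return np.array(discharge)

# Illustrative recharge pulse (e.g., a storm event), in arbitrary units.
recharge = np.zeros(100)
recharge[5:10] = 10.0

# Fast conduit (small k) and slow matrix (large k), summed at the spring.
q_conduit = linear_reservoir(0.6 * recharge, k=2.0)   # quick, peaked response
q_matrix = linear_reservoir(0.4 * recharge, k=25.0)   # damped, delayed recession
hydrograph = q_conduit + q_matrix
print(f"peak discharge: {hydrograph.max():.2f} at t = {hydrograph.argmax()}")
```

Shifting the split between the two reservoirs, or their time constants, already reproduces the kind of variety in hydrograph shape that the abstract attributes to varying pathway distributions and transit times.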


Entropy ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. 989
Author(s):  
Rui Ying Goh ◽  
Lai Soon Lee ◽  
Hsin-Vonn Seow ◽  
Kathiresan Gopal

Credit scoring is an important tool used by financial institutions to correctly identify defaulters and non-defaulters. Support Vector Machines (SVM) and Random Forest (RF) are artificial intelligence techniques that have attracted interest due to their flexibility in accounting for various data patterns. Both are black-box models that are sensitive to hyperparameter settings. Feature selection can be performed on SVM to enable explanation with the reduced features, whereas the feature importance computed by RF can be used for model explanation. The combined benefits of accuracy and interpretability allow for significant improvement in the area of credit risk and credit scoring. This paper proposes the use of Harmony Search (HS) to form a hybrid HS-SVM that performs feature selection and hyperparameter tuning simultaneously, and a hybrid HS-RF that tunes the hyperparameters. A Modified HS (MHS) is also proposed, with the main objective of achieving results comparable to the standard HS in a shorter computational time. MHS introduces four main modifications to the standard HS: (i) elitism selection during memory consideration instead of random selection, (ii) dynamic exploration and exploitation operators in place of the original static operators, (iii) a self-adjusted bandwidth operator, and (iv) additional termination criteria to reach faster convergence. Along with parallel computing, MHS effectively reduces the computational time of the proposed hybrid models. The proposed hybrid models are compared with standard statistical models across three different datasets commonly used in credit scoring studies. The computational results show that MHS-RF is the most robust in terms of model performance, model explainability, and computational time.
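A compact sketch of such a Modified Harmony Search is given below. It follows the four listed modifications but is a generic illustration on a stand-in objective function, not the authors' implementation; every parameter value and name is assumed:

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):  # stand-in objective; in credit scoring this would be model error
    return float(np.sum(x**2))

def modified_harmony_search(f, dim=5, hm_size=20, max_iter=500,
                            lower=-5.0, upper=5.0, tol=1e-6, patience=50):
    """Sketch of a Modified Harmony Search with the four changes listed above."""
    hm = rng.uniform(lower, upper, (hm_size, dim))      # harmony memory
    fitness = np.array([f(x) for x in hm])
    best, stall = fitness.min(), 0
    for it in range(max_iter):
        t = it / max_iter
        hmcr = 0.7 + 0.25 * t                   # (ii) dynamic operators
        par = 0.5 * (1.0 - t)
        bw = (upper - lower) * 0.1 * (1.0 - t)  # (iii) self-adjusted bandwidth
        elite = hm[np.argsort(fitness)[: max(2, hm_size // 4)]]
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:
                new[j] = elite[rng.integers(len(elite)), j]  # (i) elitism selection
                if rng.random() < par:
                    new[j] += rng.uniform(-bw, bw)           # pitch adjustment
            else:
                new[j] = rng.uniform(lower, upper)           # random consideration
        new = np.clip(new, lower, upper)
        f_new = f(new)
        worst = fitness.argmax()
        if f_new < fitness[worst]:
            hm[worst], fitness[worst] = new, f_new
        # (iv) extra termination criterion: stop when improvement stalls
        if fitness.min() < best - tol:
            best, stall = fitness.min(), 0
        else:
            stall += 1
            if stall >= patience:
                break
    return hm[fitness.argmin()], fitness.min()

x_best, f_best = modified_harmony_search(sphere)
print(f"best objective: {f_best:.4g}")
```

In the hybrid setting, the objective function would wrap model training and return a validation error for a candidate hyperparameter (and, for HS-SVM, feature-subset) encoding.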


1990 ◽  
Vol 20 (4) ◽  
pp. 428-437 ◽  
Author(s):  
Peter Kourtz

Artificial intelligence is a new science that deals with the representation, automatic acquisition, and use of knowledge. Artificial intelligence programs attempt to emulate human thought processes such as deduction, inference, language, and visual recognition. The goal of artificial intelligence is to make computers more useful for reasoning, planning, acting, and communicating with humans. Development of artificial intelligence applications involves the integration of advanced computer science, psychology, and sometimes robotics. Of the subfields into which artificial intelligence can be broken, the one of most immediate interest to forest management is expert systems. Expert systems encode knowledge, usually derived from an expert in a narrow subject area, and use this knowledge to mimic that expert's decision making. The knowledge is usually represented in the form of facts and rules involving symbols such as English words. At the core of these systems is a mechanism that automatically searches for and pieces together the facts and rules necessary to solve a specific problem. Small expert systems can be developed on common microcomputers using existing low-cost commercial expert system shells. Shells are general expert systems empty of knowledge: the user merely defines the solution structure and adds the desired knowledge. Larger systems usually require integration with existing forestry databases and models. Their development requires either relatively expensive expert system development tool kits or the use of an artificial intelligence development language such as LISP or PROLOG. Large systems are expensive to develop, require a high degree of skill in knowledge engineering and computer science, and can require years of testing and modification before they become operational. Expert systems have a major role in all aspects of Canadian forestry. They can be used in conjunction with conventional process models to add currently lacking expert knowledge, or as pure knowledge-based systems to solve problems never before tackled. They can preserve and accumulate forestry knowledge by encoding it. Expert systems allow us to package our forestry knowledge into a transportable and saleable product. They are a means to ensure consistent application of policies and operational procedures. There is a sense of urgency associated with the integration of artificial intelligence tools into Canadian forestry. Canada must awaken to the potential of this technology. Such systems are essential to improve industrial efficiency. A possible spin-off will be a resource knowledge business that can market our forestry knowledge worldwide. If we act decisively, we can easily compete with other countries such as Japan to fill this niche. A consortium of resource companies, provincial resource agencies, universities, and federal government laboratories is required to advance this goal.
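The facts-and-rules mechanism described above can be illustrated with a minimal forward-chaining loop; the domain content below is entirely hypothetical and not drawn from any specific forestry system:

```python
# Minimal forward-chaining inference engine: facts are strings, and each rule
# maps a set of premises to a conclusion. All domain content is hypothetical.
facts = {"dry season", "high wind", "dense undergrowth"}

rules = [
    ({"dry season", "high wind"}, "elevated fire risk"),
    ({"elevated fire risk", "dense undergrowth"}, "restrict site access"),
]

# Repeatedly fire any rule whose premises are all satisfied, until the
# fact base stops growing (a fixed point is reached).
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now includes the two inferred conclusions
```

An expert system shell supplies exactly this kind of inference engine empty of knowledge; the user's contribution is the rule base itself.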


2019 ◽  
Vol 28 (05) ◽  
pp. 1950017 ◽  
Author(s):  
Guotai Chi ◽  
Mohammad Shamsu Uddin ◽  
Mohammad Zoynul Abedin ◽  
Kunpeng Yuan

Credit risk prediction is essential for banks and financial institutions, as it helps them avoid inappropriate assessments that can lead to wasted opportunities or monetary losses. In recent times, the hybrid prediction model, a combination of traditional and modern artificial intelligence (AI) methods that provides better predictive capacity than single techniques, has been introduced. Using conventional and current AI technologies, researchers have recommended hybrid models that combine logistic regression (LR) with a multilayer perceptron (MLP). To investigate the efficiency and viability of the proposed hybrid models, we compared 16 hybrid models created by combining logistic regression (LR), discriminant analysis (DA), and decision trees (DT) with four types of neural network (NN): adaptive neuro-fuzzy inference systems (ANFISs), deep neural networks (DNNs), radial basis function networks (RBFs), and multilayer perceptrons (MLPs). The experimental results and statistical analysis demonstrate the capacity of the proposed hybrid model to deliver a credit risk prediction technique distinct from all other approaches, as indicated by ten different performance measures. The classifier was validated on five real-world credit scoring data sets.
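The abstract does not specify how the classical model and the neural network are coupled; one common scheme, shown below purely as an assumed illustration on synthetic data, feeds the LR model's predicted probability to the MLP as an additional input feature:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic, imbalanced stand-in for a credit scoring data set.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: the classical model (LR) produces a score.
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Stage 2: the LR score is appended as an extra input feature for the MLP.
X_tr_h = np.column_stack([X_tr, lr.predict_proba(X_tr)[:, 1]])
X_te_h = np.column_stack([X_te, lr.predict_proba(X_te)[:, 1]])
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X_tr_h, y_tr)

auc = roc_auc_score(y_te, mlp.predict_proba(X_te_h)[:, 1])
print(f"hybrid LR+MLP AUC: {auc:.3f}")
```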


2020 ◽  
Vol 35 (1) ◽  
pp. 299-308 ◽  
Author(s):  
Xinhua Liu ◽  
Kanghui Zhou ◽  
Yu Lan ◽  
Xu Mao ◽  
Robert J. Trapp

It is argued here that even with the development of objective algorithms, convection-allowing numerical models, and artificial intelligence/machine learning, conceptual models will remain useful for forecasters until these methods can fully satisfy forecast requirements. Conceptual models can help forecasters form forecast ideas quickly. They can also compensate for the deficiencies of numerical models and other objective methods. Furthermore, they can help forecasters understand the weather and then lock in on the key features affecting the forecast as soon as possible. Ultimately, conceptual models can help the forecaster serve end users faster and better understand the forecast results during the service process. Based on these considerations, the construction of new conceptual models should have the following characteristics: 1) be guided by purpose, 2) focus on improving the ability of forecasters, 3) consider multiple angles, 4) fuse multiple scales, and 5) be tested and corrected continuously. The traditional conceptual models used for forecasts of severe convective weather should gradually be replaced by new models that incorporate these principles.

