Adaptive Surrogate-Model Fitting Using Error Monotonicity

Author(s):  
John Steuben ◽  
Cameron Turner

Surrogate models are useful in a wide variety of engineering applications. Employing these computationally efficient surrogates in place of complex physical models offers a dramatic reduction in the computational effort required to conduct analyses for engineering design. To realize this advantage, it is necessary to “fit” the surrogate model to the underlying physical model. This is a considerable challenge, as the physical model may involve many design variables and performance indices, exhibit nonlinear and/or mixed-discrete behaviors, and is typically expensive to evaluate. As a result, adaptive sequential sampling techniques, in which previous evaluations of the physical model dictate subsequent sample locations, are widely used. In this work, we develop and demonstrate a novel adaptive sequential sampling algorithm for fitting surrogate models of any type, with a focus on large data sets. By examining the monotonicity of an error function, the design space is repeatedly partitioned in order to compute a set of “key points.” The key points reduce the fitting problem to one of precise interpolation, which can be accomplished using well-known methods. We demonstrate the use of this technique to fit several surrogate model types, including blended Hermitian polynomials and Non-Uniform Rational B-Splines (NURBS), to nonlinear noisy data. We conclude with our observations on the effectiveness of this fitting technique, its strengths and limitations, as well as a discussion of further work in this vein.
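As a loose 1D illustration of the key-point idea (not the authors' algorithm; the split rule here uses the error peak rather than error monotonicity, and every name and tolerance is invented), the sketch below recursively partitions the domain, retains interval endpoints as key points once a chord through them tracks the sampled function closely, and then interpolates through the key points:

```python
import numpy as np

def key_points_by_partition(f, a, b, tol=0.05, max_depth=10):
    """Recursively partition [a, b]; an interval's endpoints become key
    points once the chord through them tracks the sampled function closely,
    otherwise the interval is split where the error peaks."""
    xs = np.linspace(a, b, 33)
    ys = f(xs)

    def recurse(lo, hi, depth):
        chord = ys[lo] + (ys[hi] - ys[lo]) * (xs[lo:hi + 1] - xs[lo]) / (xs[hi] - xs[lo])
        err = np.abs(ys[lo:hi + 1] - chord)
        if err.max() < tol or depth >= max_depth or hi - lo < 2:
            return [lo, hi]
        split = lo + int(np.argmax(err))
        split = min(max(split, lo + 1), hi - 1)   # keep the split interior
        return recurse(lo, split, depth + 1)[:-1] + recurse(split, hi, depth + 1)

    idx = sorted(set(recurse(0, len(xs) - 1, 0)))
    return xs[idx], ys[idx]

xk, yk = key_points_by_partition(np.sin, 0.0, 2 * np.pi)
surrogate = lambda x: np.interp(x, xk, yk)   # precise interpolation through key points
```

Once the key points are fixed, any standard interpolation scheme (here `np.interp`, in the paper Hermitian polynomials or NURBS) can serve as the surrogate.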

2020 ◽  
Author(s):  
Daniel Erdal ◽  
Sinan Xiao ◽  
Wolfgang Nowak ◽  
Olaf A. Cirpka

Global sensitivity analysis and uncertainty quantification of nonlinear models may be performed using ensembles of model runs. However, already in moderately complex models, many parameter combinations that appear reasonable by prior knowledge can lead to unrealistic model outcomes, such as perennial rivers that fall dry in the model or simulated severe floods that have not been observed in the real system. We denote these parameter combinations with implausible outcomes as “non-behavioral”. Creating a sufficiently large ensemble of behavioral model realizations can be computationally prohibitive if the individual model runs are expensive and only a small fraction of the parameter space is behavioral. In this work, we design a stochastic, sequential sampling engine that utilizes fast and simple surrogate models trained on past realizations of the original, complex model. Our engine uses the surrogate model to estimate whether a candidate realization will turn out to be behavioral or not. Only parameter sets with a reasonable certainty of being behavioral (as predicted by the surrogate model) are simulated using the original, complex model. For a subsurface flow model of a small south-western German catchment, we show high accuracy in the surrogate model predictions regarding the behavioral status of parameter sets. This increases the fraction of behavioral model runs (actually computed with the original, complex model) over total complex-model runs to 20–90%, compared to 0.1% without our method (e.g., using brute-force Monte Carlo sampling). This notable performance increase depends on the choice of surrogate modeling technique; we consider both Gaussian Process Emulation (GPE) and models based on polynomials of active variables determined by Active Subspace decomposition.
For the GPE-based surrogate model, we also compare random search and active learning strategies for training the surrogate model.
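The gating idea can be caricatured in a few lines: a cheap stand-in surrogate (here a 3-nearest-neighbour vote, not the GPE or active-subspace models of the paper) predicts whether a candidate parameter set is behavioral, and the expensive model is run only on predicted-behavioral candidates. The model, behavioral criterion, and all numbers are invented for illustration:

```python
import numpy as np
rng = np.random.default_rng(0)

def expensive_model(theta):
    """Stand-in for the complex model (pretend each call takes hours)."""
    return np.sum(theta**2)

def is_behavioral(y):
    return y < 1.0                      # plausibility criterion on the outcome

# warm-up: a small random set run through the full model
X, labels = [], []
for theta in rng.uniform(-2, 2, size=(20, 2)):
    X.append(theta); labels.append(is_behavioral(expensive_model(theta)))
X, labels = np.array(X), np.array(labels)

accepted = tried = 0
for theta in rng.uniform(-2, 2, size=(500, 2)):
    # surrogate prediction: majority label of the 3 nearest past runs
    d = np.linalg.norm(X - theta, axis=1)
    votes = labels[np.argsort(d)[:3]]
    if votes.mean() < 0.5:
        continue                        # predicted non-behavioral: skip the full model
    tried += 1
    ok = is_behavioral(expensive_model(theta))
    accepted += ok
    X = np.vstack([X, theta]); labels = np.append(labels, ok)
```

Every accepted run also extends the training set, so the gate sharpens as sampling proceeds, mirroring the sequential character of the engine.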


Water ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 458
Author(s):  
Drew C. Baird ◽  
Benjamin Abban ◽  
S. Michael Scurlock ◽  
Steven B. Abt ◽  
Christopher I. Thornton

While there are a wide range of design recommendations for using rock vanes and bendway weirs as streambank protection measures, no comprehensive, standard approach is currently available for design engineers to evaluate their hydraulic performance before construction. This study investigates using 2D numerical modeling as an option for predicting the hydraulic performance of rock vane and bendway weir structure designs for streambank protection. We used the Sedimentation and River Hydraulics (SRH)-2D depth-averaged numerical model to simulate flows around rock vane and bendway weir installations that were previously examined as part of a physical model study and that had water surface elevation and velocity observations. Overall, SRH-2D predicted the same general flow patterns as the physical model, but over- and underpredicted the flow velocity in some areas. These over- and underpredictions could be primarily attributed to the assumption of negligible vertical velocities. Nonetheless, the point differences between the predicted and observed velocities generally ranged from 15 to 25%, with some exceptions. The results showed that 2D numerical models could provide adequate insight into the hydraulic performance of rock vanes and bendway weirs. Accordingly, design guidance and implications of the study results are presented for design engineers.


Author(s):  
Kevin Cremanns ◽  
Dirk Roos ◽  
Simon Hecker ◽  
Peter Dumstorff ◽  
Henning Almstedt ◽  
...  

The demand for energy is increasingly covered by renewable energy sources. As a consequence, conventional power plants need to respond to power fluctuations in the grid much more frequently than in the past. Additionally, steam turbine components are expected to deal with high loads due to this new kind of energy management. Changes in steam temperature caused by rapid load changes or fast starts lead to high levels of thermal stress in the turbine components. Therefore, today's energy market requires highly efficient power plants that can be operated under flexible conditions. In order to meet current and future market requirements, turbine components are optimized with respect to multi-dimensional target functions. The development of steam turbine components is a complex process involving different engineering disciplines and time-consuming calculations. Currently, optimization is most frequently used for subtasks within the individual disciplines. A holistic approach requires highly efficient calculation methods that are able to deal with high-dimensional and multidisciplinary systems. One approach to this problem is the use of surrogate models based on mathematical methods, e.g., polynomial regression or the more sophisticated Kriging. With proper training, these methods can deliver results that are nearly as accurate as the full model calculations themselves, in a fraction of the time. Surrogate models must cope with several challenges: the underlying outputs can be, for example, highly non-linear, noisy, or discontinuous. In addition, the surrogate models often need to be constructed from a large number of variables, of which only a few parameters are important. To achieve good prognosis quality, only the most important parameters should be used to create the surrogate models; unimportant parameters do not improve the prognosis quality but add noise to the approximation result.
Another challenge is to achieve good results with as little design information as possible. This is important because, in practice, the necessary information is usually obtained only by very time-consuming simulations. This paper presents an efficient optimization procedure using a self-developed hybrid surrogate model consisting of moving least squares and anisotropic Kriging. Its high prognosis quality makes it capable of handling the challenges mentioned above and enables time-efficient optimization. Additionally, a preceding sensitivity analysis identifies the most important parameters with respect to the objectives, leading to fast convergence of the optimization and a more accurate surrogate model. The method is demonstrated on the optimization of a labyrinth shaft seal used in steam turbines, considering the opposing objectives of minimizing leakage mass flow and minimizing the total enthalpy increase due to friction.
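A minimal moving-least-squares sketch, covering one ingredient of the hybrid model (the anisotropic Kriging part is omitted, and the test function, bandwidth, and noise level are all invented): each prediction is a weighted linear fit centred on the query point.

```python
import numpy as np

def mls_predict(x0, X, y, h=0.2):
    """Moving least squares in 1D: a Gaussian-weighted linear fit
    centred on the query point x0, evaluated at x0."""
    w = np.exp(-((X - x0) / h) ** 2)     # weights decay with distance from x0
    sw = np.sqrt(w)                      # scale rows so lstsq solves the WLS problem
    A = np.column_stack([np.ones_like(X), X - x0])
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[0]                       # intercept = local fit value at x0

X = np.linspace(0, 3, 40)
y = np.sin(2 * X) + 0.05 * np.random.default_rng(1).standard_normal(40)
yhat = np.array([mls_predict(x0, X, y) for x0 in X])
```

Because the fit is re-solved at every query point, MLS tracks local trends in noisy data without committing to a global basis, which is one reason it pairs well with Kriging in a hybrid model.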


2011 ◽  
Vol 90-93 ◽  
pp. 2363-2371
Author(s):  
Bin Wei Xia ◽  
Ke Hu ◽  
Yi Yu Lu ◽  
Dan Li ◽  
Zu Yong Zhou

Physical models of layered rock masses with different dip angles were built through physical model tests, in accordance with the bias failure characteristics of the surrounding rock of the layered rock mass in Gonghe Tunnel. The bias failure characteristics of the surrounding rock in thin-layered rock masses and the influence of the dip angle of the layered rock mass on tunnel stability are studied. The results show that the failure characteristics of the physical models generally coincide with those of the surrounding rock monitored at the tunnel site. The failure regions of the surrounding rock perpendicular to the stratification planes are obviously larger than those parallel to them. The stress distributions and failure characteristics in the surrounding rock are similar across the physical models with different dip angles. The stress distributions and failure regions are all elliptic in shape, with the major axis perpendicular to the stratification planes and the minor axis parallel to them. As a result, obvious bias failure of the surrounding rock gradually forms. The physical model tests provide a reliable basis for theoretical analysis of the failure mechanism of deep-buried layered rock masses.


2021 ◽  
Author(s):  
Maha Mdini ◽  
Takemasa Miyoshi ◽  
Shigenori Otsuka

In the era of modern science, scientists have developed numerical models to predict and understand weather and ocean phenomena based on fluid dynamics. While these models have shown high accuracy at kilometer scales, they require massive computing resources because of their computational complexity. In recent years, new machine-learning-based approaches to solving these models have been put forward. The results suggest that it is possible to reduce the computational cost by using Neural Networks (NNs) instead of classical numerical simulations. In this project, we aim to shed light on different ways of accelerating physical models using NNs. We test two approaches, the Data-Driven Statistical Model (DDSM) and the Hybrid Physical-Statistical Model (HPSM), and compare their performance to the classical Process-Driven Physical Model (PDPM). The DDSM emulates the physical model with an NN. The HPSM, also known as super-resolution, uses a low-resolution version of the physical model and maps its outputs to the original high-resolution domain via an NN. To evaluate these two methods, we measured their accuracy and their computation time. Our results from idealized experiments with a quasi-geostrophic model [SO3] show that the HPSM reduces the computation time by a factor of 3 and is capable of predicting the output of the physical model with high accuracy up to 9.25 days ahead. The DDSM, however, reduces the computation time by a factor of 4 and can predict the physical model output with acceptable accuracy only within 2 days. These first results are promising and imply the possibility of bringing complex physical models into real-time systems with lower-cost computing resources in the future.
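The HPSM idea can be mimicked with a toy stand-in: learn a fixed map from low-resolution model output to the high-resolution field, from past paired runs. A linear least-squares map replaces the paper's NN here, and the "physical model", grid sizes, and parameter range are all invented:

```python
import numpy as np
rng = np.random.default_rng(0)

def fine_model(k):
    """Stand-in 'physical model': a smooth 1D field on a 64-point grid."""
    x = np.linspace(0, 1, 64)
    return np.sin(2 * np.pi * k * x) + 0.3 * np.cos(2 * np.pi * x)

def coarse(field):
    return field[::4]                    # cheap low-resolution output (16 points)

# training pairs: coarse output -> fine output, from past model runs
ks = rng.uniform(0.5, 2.0, 200)
C = np.array([coarse(fine_model(k)) for k in ks])   # (200, 16)
F = np.array([fine_model(k) for k in ks])           # (200, 64)

# learn a linear "super-resolution" map W so that fine ≈ coarse @ W
W, *_ = np.linalg.lstsq(C, F, rcond=None)

k_test = 1.3
upscaled = coarse(fine_model(k_test)) @ W
truth = fine_model(k_test)
```

At inference time only the cheap low-resolution run and one matrix multiply are needed, which is the source of the HPSM speed-up; an NN replaces `W` when the coarse-to-fine relationship is nonlinear.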


2015 ◽  
Vol 27 (6) ◽  
pp. 1186-1222 ◽  
Author(s):  
Bryan P. Tripp

Because different parts of the brain have rich interconnections, it is not possible to model small parts realistically in isolation. However, it is also impractical to simulate large neural systems in detail. This article outlines a new approach to multiscale modeling of neural systems that involves constructing efficient surrogate models of populations. Given a population of neuron models with correlated activity and with specific, nonrandom connections, a surrogate model is constructed in order to approximate the aggregate outputs of the population. The surrogate model requires less computation than the neural model, but it has a clear and specific relationship with the neural model. For example, approximate spike rasters for specific neurons can be derived from a simulation of the surrogate model. This article deals specifically with neural engineering framework (NEF) circuits of leaky-integrate-and-fire point neurons. Weighted sums of spikes are modeled by interpolating over latent variables in the population activity, and linear filters operate on Gaussian random variables to approximate spike-related fluctuations. It is found that the surrogate models can often closely approximate network behavior with orders-of-magnitude reduction in computational demands, although there are certain systematic differences between the spiking and surrogate models. Since individual spikes are not modeled, some simulations can be performed with much longer step sizes (e.g., 20 ms). Possible extensions to non-NEF networks and to more complex neuron models are discussed.
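A toy version of the rate-plus-filtered-noise idea (not the NEF construction itself; all rates, counts, and time constants are invented): replace a population of Poisson spikers by its deterministic rate plus low-pass-filtered Gaussian fluctuations.

```python
import numpy as np
rng = np.random.default_rng(0)

dt, T = 0.001, 1.0
t = np.arange(0, T, dt)
n_neurons = 100
rate = 50 * (1 + 0.5 * np.sin(2 * np.pi * 2 * t))    # population rate (Hz)

# "detailed" model: sum of independent Poisson spike trains
spikes = rng.random((n_neurons, t.size)) < rate * dt
detailed = spikes.sum(axis=0) / (n_neurons * dt)      # instantaneous rate estimate

# surrogate: deterministic rate plus low-pass-filtered Gaussian noise
sigma = np.sqrt(rate / (n_neurons * dt))              # Poisson fluctuation scale
noise = rng.standard_normal(t.size) * sigma
tau = 0.005                                           # 5 ms one-pole low-pass filter
filtered = np.empty_like(noise)
filtered[0] = noise[0]
alpha = dt / tau
for i in range(1, t.size):
    filtered[i] = filtered[i - 1] + alpha * (noise[i] - filtered[i - 1])
surrogate = rate + filtered                           # no spikes simulated at all
```

Note the filter attenuates the white-noise variance, so a production surrogate would rescale the filtered noise to match the target spectrum; the point here is only that no individual spikes need to be simulated, which is what permits the long step sizes mentioned above.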


2014 ◽  
Vol 905 ◽  
pp. 348-352 ◽  
Author(s):  
Nuryazmeen Farhan Haron ◽  
Wardah Tahir

This paper reviews the physical models that have been used to conduct experiments on estuarine salinity intrusion into rivers. Several studies used physical models to gain a better understanding of the estuarine salinity mixing process and the characteristics of salt-wedge estuaries along the flume. In addition, laboratory investigations using physical models are also useful for verification purposes, as discussed by previous researchers.


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5332
Author(s):  
Carlos A. Duchanoy ◽  
Hiram Calvo ◽  
Marco A. Moreno-Armendáriz

Surrogate Modeling (SM) is often used to reduce the computational burden of time-consuming system simulations. Continuous advances in Artificial Intelligence (AI) and the spread of embedded sensors have led to the creation of Digital Twins (DT), Design Mining (DM), and Soft Sensors (SS). These methodologies pose a new challenge for the generation of surrogate models, since they require the implementation of elaborate artificial intelligence algorithms while minimizing the number of physical experiments required. To reduce the number of evaluations of the physical system, several adaptive sequential sampling methodologies have been developed; however, they are for the most part limited to Kriging models and Kriging-model-based Monte Carlo simulation. In this paper, we integrate a distinct adaptive sampling methodology with an automated machine learning (AutoML) methodology to assist model selection while minimizing the number of system evaluations and maximizing performance for surrogate models based on artificial intelligence algorithms. In each iteration, the framework uses a grid search algorithm to determine the best candidate models and performs leave-one-out cross-validation to calculate the performance at each sampled point. A Voronoi diagram is applied to partition the sampling region into local cells, and the Voronoi vertexes are considered as new candidate points. The performance at the sampled points is used to estimate the accuracy of the model at the candidate points, in order to select those that will improve the model's accuracy the most. The number of candidate models is then reduced. Finally, the performance of the framework is tested on two examples that demonstrate the applicability of the proposed method.
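A stripped-down sketch of the accuracy-driven sampling loop, with invented stand-ins throughout: an inverse-distance-weighting surrogate instead of the AutoML candidate models, leave-one-out error at the sampled points, and midpoints toward the nearest neighbour instead of true Voronoi vertexes.

```python
import numpy as np
rng = np.random.default_rng(0)

def f(X):
    return np.sin(3 * X[:, 0]) * np.cos(3 * X[:, 1])    # stand-in expensive model

def idw(Xq, X, y, eps=1e-9):
    """Inverse-distance-weighting surrogate (a simple stand-in model)."""
    d = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=2) + eps
    w = 1.0 / d**2
    return (w * y).sum(axis=1) / w.sum(axis=1)

X = rng.uniform(0, 1, (10, 2)); y = f(X)
for _ in range(20):
    # leave-one-out error at each sampled point
    loo = np.array([abs(idw(X[i:i + 1], np.delete(X, i, 0), np.delete(y, i)) - y[i])[0]
                    for i in range(len(X))])
    worst = int(np.argmax(loo))
    # cheap stand-in for a Voronoi vertex: midpoint toward the nearest other sample
    d = np.linalg.norm(X - X[worst], axis=1); d[worst] = np.inf
    candidate = (X[worst] + X[np.argmin(d)]) / 2
    X = np.vstack([X, candidate]); y = np.append(y, f(candidate[None, :]))
```

Each iteration spends exactly one expensive evaluation where the leave-one-out error says the surrogate is least trustworthy, which is the core of the adaptive sampling idea.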


2017 ◽  
Vol 20 (1) ◽  
pp. 164-176 ◽  
Author(s):  
Vasileios Christelis ◽  
Rommel G. Regis ◽  
Aristotelis Mantoglou

Abstract The computationally expensive variable-density and salt-transport numerical models hinder the implementation of simulation-optimization routines for coastal aquifer management. To reduce the computational cost, surrogate models have been utilized in pumping optimization of coastal aquifers. However, it has not previously been addressed whether surrogate modelling is effective given a limited number of numerical simulations with the seawater intrusion model. To that end, two surrogate-based optimization (SBO) frameworks are employed and compared against the direct optimization approach under restricted computational budgets. The first, a surrogate-assisted algorithm, employs a strategy that aims at fast local improvement of the surrogate model around optimal values. The other balances global and local improvement of the surrogate model and is applied for the first time in coastal aquifer management. The performance of the algorithms is investigated for optimization problems of moderate and large dimensionalities. The statistical analysis indicates that, for the specified computational budgets, the sample means of the SBO methods are statistically significantly better than those of direct optimization. Additionally, the selection of cubic radial basis functions as surrogate models enables the construction of very fast approximations for problems with up to 40 decision variables and 40 constraint functions.
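A cubic RBF surrogate of the kind mentioned can be sketched directly; the objective function and problem sizes below are invented, and production RBF codes typically also add a low-order polynomial tail for guaranteed solvability of the interpolation system:

```python
import numpy as np
rng = np.random.default_rng(0)

def cubic_rbf_fit(X, y):
    """Fit a cubic RBF interpolant s(x) = sum_i w_i * ||x - x_i||^3.
    (No polynomial tail here; generic point sets keep the system solvable.)"""
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    return np.linalg.solve(r**3, y)

def cubic_rbf_eval(Xq, X, w):
    r = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=2)
    return r**3 @ w

# stand-in for the expensive seawater-intrusion objective
def objective(X):
    return np.sum((X - 0.3)**2, axis=1)

X = rng.uniform(0, 1, (60, 5))         # 60 sampled designs, 5 decision variables
w = cubic_rbf_fit(X, objective(X))
approx = cubic_rbf_eval(X[:5], X, w)   # interpolant reproduces training points
```

Fitting costs one dense linear solve and each evaluation is a distance computation plus a dot product, which is why RBF surrogates stay fast even with dozens of decision variables and constraints.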


Author(s):  
Jefferson Silva Barbosa ◽  
Leonardo Campanine Sicchieri ◽  
Arinan Dourado ◽  
Aldemir Ap. Cavalini Jr. ◽  
Valder Steffen Jr

Abstract The mathematical modeling of journal bearings has advanced significantly since the Reynolds equation was first proposed. Advances in the processing capacity of computers and in numerical techniques have led to multi-physical models that are able to describe the behavior of hydrodynamic bearings. However, many researchers prefer to apply simple models of these components in rotor-bearing analyses due to the computational effort that complex models require. Surrogate modeling techniques are statistical procedures that can be applied to represent complex models. In the present work, Kriging models are formulated to substitute for the thermohydrodynamic (THD) models of three different bearings found in a Francis hydropower unit, namely a cylindrical journal (CJ) bearing, a tilting-pad journal (TPJ) bearing, and a tilting-pad thrust (TPT) bearing. The results obtained with the proposed approach reveal that Kriging models can satisfactorily serve as surrogates for the THD models of hydrodynamic bearings.
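A bare-bones Kriging sketch (zero-mean, fixed Gaussian correlation model; the paper's THD surrogates are of course richer, and the response function and hyperparameters here are invented):

```python
import numpy as np
rng = np.random.default_rng(0)

def kriging_fit(X, y, ell=0.3, nugget=1e-6):
    """Simple zero-mean Kriging: solve (K + nugget*I) alpha = y with a
    Gaussian correlation model of length scale ell."""
    d2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=2)
    K = np.exp(-d2 / (2 * ell**2)) + nugget * np.eye(len(X))
    return np.linalg.solve(K, y)

def kriging_predict(Xq, X, alpha, ell=0.3):
    d2 = np.sum((Xq[:, None, :] - X[None, :, :])**2, axis=2)
    return np.exp(-d2 / (2 * ell**2)) @ alpha

# hypothetical stand-in for a THD response, e.g. a temperature vs. load and speed
def thd_response(X):
    return 60 + 30 * X[:, 0] * np.sqrt(X[:, 1] + 0.1)

X = rng.uniform(0, 1, (40, 2))          # 40 THD runs over normalized inputs
alpha = kriging_fit(X, thd_response(X))
pred = kriging_predict(X[:5], X, alpha)
```

Once fitted, each prediction is a single kernel-vector dot product, so the surrogate can replace the THD solver inside a rotor-bearing analysis loop at negligible cost.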

