Sampling strategy for fuzzy numbers in the context of surrogate models

2021
Vol 3 (11)
Author(s):  
Thomas Oberleiter ◽  
Kai Willner

Abstract. The paper presents an investigation of the accuracy of surrogate models for systems with uncertainties, where the uncertain parameters are represented by fuzzy numbers. Since the underlying fuzzy arithmetic using α-level optimisation requires a large number of system evaluations, the use of numerically expensive systems becomes prohibitive with a higher number of fuzzy parameters. However, this problem can be overcome by employing less expensive surrogate models, where the accuracy of the surrogate depends strongly on the choice of the sampling points. In order to find a sufficiently accurate surrogate model with as few sampling points as possible, the influence of various sampling strategies on the accuracy of the fuzzy evaluation is investigated. The newly developed Fuzzy Oriented Sampling Shift method, which is well suited for fuzzy systems, is presented and compared with established sampling strategies. Radial basis functions and a Kriging model are employed as surrogate models. As test cases, the Branin and the Camelback functions with fuzzy parameters are used, which demonstrate the varying accuracy of the different sampling strategies. A more application-oriented example, a finite element simulation of a deep drawing process, is given at the end.
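The α-level machinery this abstract builds on can be sketched in a few lines: each fuzzy parameter is cut at a membership level α to give an interval, and the fuzzy output envelope comes from minimising and maximising the model over the resulting box. The sketch below uses triangular fuzzy numbers and a brute-force grid search as a stand-in for both the surrogate and the α-level optimiser; it is illustrative only, not the paper's implementation.

```python
import numpy as np

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (a, m, b) at membership level alpha."""
    a, m, b = tri
    return a + alpha * (m - a), b - alpha * (b - m)

def fuzzy_eval(f, fuzzy_params, alphas, n_grid=21):
    """Approximate the fuzzy output of f: on each alpha-cut box, find the
    min and max of f (grid search stands in for the surrogate optimiser)."""
    envelopes = []
    for alpha in alphas:
        bounds = [alpha_cut(p, alpha) for p in fuzzy_params]
        grids = np.meshgrid(*[np.linspace(lo, hi, n_grid) for lo, hi in bounds])
        vals = f(*[g.ravel() for g in grids])
        envelopes.append((alpha, vals.min(), vals.max()))
    return envelopes

# Example: f(x, y) = x^2 + y with two triangular fuzzy parameters.
f = lambda x, y: x ** 2 + y
env = fuzzy_eval(f, [(1.0, 2.0, 3.0), (0.0, 1.0, 2.0)], alphas=[0.0, 0.5, 1.0])
```

At α = 1 the cuts collapse to the modal values, so the output interval collapses to a single number, while α = 0 gives the widest output interval; the cost of each α-level is a full optimisation over the box, which is why a cheap surrogate pays off.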

Mathematics
2021
Vol 9 (2)
pp. 149
Author(s):  
Yaohui Li ◽  
Jingfang Shen ◽  
Ziliang Cai ◽  
Yizhong Wu ◽  
Shuting Wang

The kriging optimization method, which can obtain only one sampling point per cycle, has encountered a bottleneck in practical engineering applications. How to find a suitable optimization method that generates multiple sampling points at a time while improving the accuracy of convergence and reducing the number of expensive evaluations has attracted wide attention. For this reason, a kriging-assisted multi-objective constrained global optimization (KMCGO) method is proposed. In each cycle, the sample data obtained from the expensive function evaluations are first used to construct or update the kriging model. Then, the kriging-based predicted objective, the RMSE (root mean square error), and the feasibility probability form three objectives, which are optimized to generate the Pareto frontier set through multi-objective optimization. Finally, the sample data from the Pareto frontier set are further screened to obtain more promising and valuable sampling points. Test results on five benchmark functions, four design problems, and a fuel economy simulation optimization demonstrate the effectiveness of the proposed algorithm.
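The core of the screening step is a non-dominated filter over the three objectives (predicted value, RMSE, feasibility probability). A minimal sketch follows; the candidate triples are made-up illustrative scores, with the uncertainty and feasibility objectives negated so that all three are minimised, and none of this is the KMCGO code itself.

```python
import numpy as np

def pareto_front(points):
    """Return indices of non-dominated rows; every column is minimised."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Candidate infill points scored by (predicted value, -RMSE, -feasibility prob.):
# minimising all three favours low predictions, high model uncertainty,
# and high probability of feasibility.
cands = [(1.0, -0.5, -0.9), (2.0, -0.1, -0.2), (0.5, -0.4, -0.8), (1.5, -0.6, -0.95)]
front = pareto_front(cands)
```

The second candidate is worse in all three objectives than the first and is dropped; the remaining three trade off exploitation, exploration, and feasibility, which is exactly why several sampling points can be extracted per cycle.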


2021
pp. 146808742110652
Author(s):  
Jian Tang ◽  
Anuj Pal ◽  
Wen Dai ◽  
Chad Archer ◽  
James Yi ◽  
...  

Engine knock is an undesirable combustion phenomenon that can mechanically damage the engine. On the other hand, it is often desirable to operate the engine close to its borderline knock limit to optimize combustion efficiency. Traditionally, the borderline knock limit is detected by sweeping tests of the related control parameters for the worst knock, which is expensive and time-consuming; moreover, the detected borderline knock limit is often used for feedforward control without considering its stochastic characteristics or compensating for the current engine operating condition and the type of fuel used. In this paper, a stochastic Bayesian optimization method is used to obtain a tradeoff between stochastic knock intensity and fuel economy. The log-normally distributed knock intensity signal is converted to a Gaussian one using a proposed map to satisfy the assumptions of Kriging model development. Both deterministic and stochastic Kriging surrogate models are developed from test data using the Bayesian iterative optimization process. This study focuses on optimizing two competing objectives, knock intensity and indicated specific fuel consumption, using two control parameters: spark and intake valve timings. Test results at two different operating conditions show that the proposed learning algorithm not only reduces the time and cost required to predict the knock borderline but also provides, based on the trained surrogate models and the corresponding Pareto front, the control parameters with the best possible fuel economy.
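The distribution-conversion step can be illustrated with synthetic data: if the knock intensity is log-normally distributed, then taking the logarithm yields a Gaussian variable, which is what the (stochastic) Kriging noise model assumes. The snippet below uses a plain natural log as an illustrative stand-in for the paper's proposed map, with made-up distribution parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic knock-intensity-like data: log-normal, heavily right-skewed.
mu, sigma = 0.0, 0.7
knock = rng.lognormal(mean=mu, sigma=sigma, size=20000)

# Transforming to the log scale recovers a Gaussian variable, so the
# Gaussian-noise assumption of the Kriging model holds on this scale.
gauss = np.log(knock)

# Sample moments should be close to the generating Gaussian parameters.
print(gauss.mean(), gauss.std())
```

Predictions made on the transformed scale must of course be mapped back through the inverse transform (here, the exponential) before being interpreted as knock intensities.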


Author(s):  
David A. Romero ◽  
Cristina H. Amon ◽  
Susan Finger

In order to reduce the time and resources devoted to design-space exploration during simulation-based design and optimization, the use of surrogate models, or metamodels, has been proposed in the literature. Key to the success of metamodeling efforts are the experimental design techniques used to generate the combinations of input variables at which the computer experiments are conducted. Several adaptive sampling techniques have been proposed to tailor the experimental designs to the specific application at hand, using the already-acquired data to guide further exploration of the input space instead of using a fixed sampling scheme defined a priori. Though mixed results have been reported, it has been argued that adaptive sampling techniques can be more efficient, yielding better surrogate models with fewer sampling points. In this paper, we address the problem of adaptive sampling for single- and multi-response metamodels, with a focus on Multi-stage Multi-response Bayesian Surrogate Models (MMBSM). We compare distance-optimal Latin hypercube sampling, an entropy-based criterion, and the maximum cross-validation variance criterion, originally proposed for one-dimensional output spaces and implemented in this paper for multi-dimensional output spaces. Our results indicate that, for both single- and multi-response surrogate models, the entropy-based adaptive sampling approach leads to models that are more robust to the initial experimental design and at least as accurate (or better) when compared with the other sampling techniques using the same number of sampling points.
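The distance-optimal Latin hypercube baseline mentioned above can be sketched with a crude maximin restart search: generate many random Latin hypercube designs and keep the one whose smallest pairwise distance is largest. This is a common heuristic sketch, not the authors' specific construction.

```python
import numpy as np

def random_lhs(n, d, rng):
    """One random Latin hypercube design in the unit cube [0, 1]^d:
    one point per row and column of an n-by-n grid, jittered within cells."""
    cells = np.stack([rng.permutation(n) for _ in range(d)], axis=1)
    return (cells + rng.random((n, d))) / n

def min_pairwise_dist(x):
    """Smallest Euclidean distance between any two design points."""
    diff = x[:, None, :] - x[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    return d[np.triu_indices(len(x), k=1)].min()

def maximin_lhs(n, d, n_tries=200, seed=0):
    """Crude distance-optimal LHS: restart search keeping the design
    that maximises the minimum pairwise distance (maximin criterion)."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(n_tries):
        x = random_lhs(n, d, rng)
        score = min_pairwise_dist(x)
        if score > best_score:
            best, best_score = x, score
    return best

X = maximin_lhs(10, 2)
```

Unlike the adaptive criteria compared in the paper, this design is fixed a priori: it spreads points uniformly without using any acquired response data.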


2007
Vol 4 (3)
pp. 1069-1094
Author(s):  
M. Rivas-Casado ◽  
S. White ◽  
P. Bellamy

Abstract. River restoration appraisal requires the implementation of monitoring programmes that assess the river site before and after the restoration project. However, little work has yet been done on designing effective and efficient sampling strategies. Three main variables need to be considered when designing monitoring programmes: space, time and scale. The aim of this paper is to describe the methodology applied to analyse the variation of depth in space, scale and time so that more comprehensive monitoring programmes can be developed. Geostatistical techniques were applied to study the spatial dimension (sampling strategy and density), spectral analysis was used to study the scale at which depth shows cyclic patterns, whilst descriptive statistics were used to assess the temporal variation. A brief set of guidelines is summarised in the conclusion.
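The workhorse of the geostatistical part of such a study is the empirical semivariogram, which quantifies how depth dissimilarity grows with separation distance and thereby informs sampling density. A minimal sketch (illustrative, not the paper's code) with a toy depth transect:

```python
import numpy as np

def semivariogram(coords, values, lags, tol):
    """Empirical semivariogram gamma(h): mean of 0.5 * (z_i - z_j)^2 over all
    point pairs whose separation distance lies within tol of each lag h."""
    coords = np.atleast_2d(np.asarray(coords, float))
    values = np.asarray(values, float)
    i, j = np.triu_indices(len(values), k=1)
    h = np.linalg.norm(coords[i] - coords[j], axis=1)
    sq = 0.5 * (values[i] - values[j]) ** 2
    return np.array([sq[np.abs(h - lag) <= tol].mean() for lag in lags])

# Depths along a 10-point transect with a pure linear trend z = x,
# for which the semivariogram grows exactly as 0.5 * h^2.
gam = semivariogram([[x] for x in range(10)], np.arange(10.0),
                    lags=[1, 2, 3], tol=0.1)
```

The distance at which the semivariogram levels off (the range) is the scale beyond which samples are effectively uncorrelated, which is what makes it useful for choosing sampling spacing.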


2022
Vol 11 (1)
pp. 0-0

In this study, a fuzzy cooperative continuous static game (PQFCCSG) with n players, having fuzzy parameters in all of the cost functions and in the right-hand side of the constraints, is characterized. The fuzzy parameters are represented by piecewise quadratic fuzzy numbers. The α-Pareto optimal solution concept is specified. In addition, the stability sets of the first and second kind without differentiability are conceptualized and established. An illustrative numerical example is discussed for proper understanding and interpretation of the proposed concepts.


Sensors
2020
Vol 20 (18)
pp. 5332
Author(s):  
Carlos A. Duchanoy ◽  
Hiram Calvo ◽  
Marco A. Moreno-Armendáriz

Surrogate Modeling (SM) is often used to reduce the computational burden of time-consuming system simulations. Continuous advances in Artificial Intelligence (AI) and the spread of embedded sensors have led to the creation of Digital Twins (DT), Design Mining (DM), and Soft Sensors (SS). These methodologies represent a new challenge for the generation of surrogate models, since they require the implementation of elaborate artificial intelligence algorithms while minimizing the number of physical experiments performed. Several adaptive sequential sampling methodologies have been developed to reduce the number of assessments of a physical system; however, they are for the most part limited to Kriging models and Kriging-model-based Monte Carlo simulation. In this paper, we integrate a distinct adaptive sampling methodology into an automated machine learning methodology (AutoML) to assist in model selection while minimizing the number of system evaluations and maximizing the performance of surrogate models based on artificial intelligence algorithms. In each iteration, this framework uses a grid search algorithm to determine the best candidate models and performs leave-one-out cross-validation to calculate the performance at each sampled point. A Voronoi diagram is applied to partition the sampling region into local cells, and the Voronoi vertexes are considered as new candidate points. The performance at the sample points is used to estimate the accuracy of the model at the candidate points in order to select those that will most improve the model's accuracy. Then, the number of candidate models is reduced. Finally, the performance of the framework is tested on two examples to demonstrate the applicability of the proposed method.
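The Voronoi-based candidate generation can be sketched as follows: take the Voronoi vertices of the current sample set as candidate points (they are locally the points farthest from existing samples), clip them to the domain, and rank them by the leave-one-out error of the nearest sample. The leave-one-out errors below are random stand-ins; this is an illustrative sketch, not the paper's framework.

```python
import numpy as np
from scipy.spatial import Voronoi, cKDTree

def voronoi_candidates(X, loo_errors, bounds):
    """Candidate infill points: Voronoi vertices of the current sample set,
    clipped to the domain and ranked by the leave-one-out error of the
    nearest sample (a high error marks a poorly modelled region)."""
    lo, hi = np.asarray(bounds, float).T
    verts = Voronoi(X).vertices
    verts = verts[np.all((verts >= lo) & (verts <= hi), axis=1)]
    nearest = cKDTree(X).query(verts)[1]
    scores = np.asarray(loo_errors, float)[nearest]
    order = np.argsort(-scores)
    return verts[order], scores[order]

rng = np.random.default_rng(1)
X = rng.random((12, 2))     # current sampling points in [0, 1]^2
loo = rng.random(12)        # stand-in leave-one-out errors, one per sample
cands, scores = voronoi_candidates(X, loo, [(0, 1), (0, 1)])
```

The top-ranked candidates combine space-filling (far from existing samples) with error-driven refinement, which is the balance the adaptive sampling loop exploits.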


2018
Vol 78 (6)
pp. 1407-1416
Author(s):  
Santiago Sandoval ◽  
Jean-Luc Bertrand-Krajewski ◽  
Nicolas Caradot ◽  
Thomas Hofer ◽  
Günter Gruber

Abstract. The event mean concentrations (EMCs) that would have been obtained by four different stormwater sampling strategies are simulated by using total suspended solids (TSS) and flow rate time series (about one-minute time step and one year of data). These EMCs are compared to the reference EMCs calculated from the complete time series. The sampling strategies are assessed with datasets from four catchments: (i) Berlin, Germany, combined sewer overflow (CSO); (ii) Graz, Austria, CSO; (iii) Chassieu, France, separate sewer system; and (iv) Ecully, France, CSO. A sampling strategy in which samples are collected at constant time intervals over the rainfall event, with sampling volumes pre-set as proportional to the runoff volume discharged between two consecutive samples, leads to the most representative results. Recommended sampling time intervals are 5 min for Berlin and Chassieu (100 and 185 ha catchment areas, respectively) and 10 min for Graz and Ecully (335 and 245 ha, respectively), with relative sampling errors between 7% and 20% and uncertainties in sampling errors of about 5%. Uncertainties related to sampling volumes, TSS laboratory analyses and the beginning/ending of rainstorm events are reported as the most influential sources of uncertainty in the sampling errors and EMCs.
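The quantities being compared can be sketched directly: the reference EMC is the flow-weighted mean concentration over the full time series, while the recommended strategy takes samples at a constant time interval and weights each by the runoff volume of its interval. The hydrograph and pollutograph below are synthetic stand-ins, not the study's data.

```python
import numpy as np

def emc(conc, flow, dt=60.0):
    """Reference EMC: flow-weighted mean concentration over the full series."""
    conc, flow = np.asarray(conc, float), np.asarray(flow, float)
    return (conc * flow * dt).sum() / (flow * dt).sum()

def sampled_emc(conc, flow, interval, dt=60.0):
    """EMC under the recommended strategy: samples at a constant time interval
    (in time steps), each weighted by the runoff volume discharged over its
    sampling interval (volume-proportional compositing)."""
    conc, flow = np.asarray(conc, float), np.asarray(flow, float)
    idx = np.arange(0, len(conc), interval)
    vols = np.add.reduceat(flow * dt, idx)   # runoff volume per interval
    return (conc[idx] * vols).sum() / vols.sum()

rng = np.random.default_rng(0)
flow = rng.random(120) + 0.1                       # synthetic hydrograph
conc = 50.0 * np.exp(-np.linspace(0.0, 3.0, 120))  # first-flush-like pollutograph
rel_err = abs(sampled_emc(conc, flow, 10) - emc(conc, flow)) / emc(conc, flow)
```

The relative sampling error `rel_err` is the quantity the study reports per strategy and per catchment (7% to 20% for the recommended intervals).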


Author(s):  
Naozumi Tsuda ◽  
David B. Bogy

This report addresses a new optimization method in which the DIRECT algorithm is used in conjunction with a surrogate model. The DIRECT algorithm by itself can find the global optimum with a high convergence rate; however, the convergence rate can be much improved by coupling DIRECT with a surrogate model. The surrogate model known as the Kriging model is used in this research. It is determined using sampling points generated by the DIRECT algorithm. This model expresses a hyper-surface approximation of the cost function over the entire search space. Finding the optimum point on this hyper-surface is very fast because it is not necessary to solve the time-consuming air bearing equations. By using this optimum candidate as one of the DIRECT sampling points, we can eliminate many cost function evaluations. To illustrate the power of this approach, we first present some simple optimization examples using known difficult functions. Then we determine the optimum design of a slider with a 5 nm flying height (FH), starting from a design with a 7 nm FH.
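One coupling step of this idea can be sketched with SciPy, assuming SciPy ≥ 1.9 for `scipy.optimize.direct`: run DIRECT to generate sampling points, fit a surrogate to everything evaluated so far, and optimise the cheap surrogate to propose a candidate to feed back to DIRECT. A smooth toy function stands in for the air bearing cost, and an RBF interpolator stands in for the Kriging model; this is an illustrative sketch, not the report's implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import direct, minimize

def cost(x):
    """Stand-in for the expensive cost function (the real application
    solves the air bearing equations here)."""
    return (x[0] - 0.3) ** 2 + (x[1] + 0.1) ** 2

bounds = [(-1.0, 1.0), (-1.0, 1.0)]

# 1) A short DIRECT run generates the sampling points (recorded via the wrapper).
samples = []
direct(lambda x: (samples.append(np.array(x)), cost(x))[1], bounds, maxfun=60)

# 2) Fit the surrogate to every point DIRECT evaluated.
X = np.unique(np.array(samples), axis=0)
y = np.array([cost(x) for x in X])
surrogate = RBFInterpolator(X, y)

# 3) Optimising the cheap surrogate from the best sample yields a candidate
#    that would be fed back to DIRECT as an extra sampling point.
res = minimize(lambda x: surrogate(x[None])[0], x0=X[y.argmin()],
               bounds=bounds, method="L-BFGS-B")
candidate = res.x
```

Because step 3 touches only the surrogate, it costs essentially nothing compared with a true cost-function evaluation, which is where the claimed savings come from.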

