Efficient sequential experimental design for surrogate modeling of nested codes

2019 ◽  
Vol 23 ◽  
pp. 245-270 ◽  
Author(s):  
Sophie Marque-Pucheu ◽  
Guillaume Perrin ◽  
Josselin Garnier

In this paper we consider two nested computer codes, where the output of the first code is one of the inputs of the second. A predictor of the nested code is obtained by coupling the Gaussian predictors of the two codes. This predictor is non-Gaussian, and computing its statistical moments can be cumbersome. Sequential designs aimed at improving the accuracy of the nested predictor are proposed. One of the criteria makes it possible to choose which code to launch by taking into account the computational costs of the two codes. Finally, two adaptations of the non-Gaussian predictor are proposed in order to compute the prediction mean and variance either rapidly or exactly.
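The coupling described above can be illustrated with a small sketch. The following is not the authors' method, only a minimal numpy illustration under assumed toy "codes" (`sin` and a square): each code is emulated by a simple RBF Gaussian process, and the moments of the non-Gaussian nested predictor are estimated by sampling the first emulator and pushing the samples through the second.

```python
import numpy as np

def gp_fit_predict(X, y, Xs, ell=1.0, sf2=1.0, noise=1e-6):
    """Gaussian process regression with an RBF kernel (zero prior mean).
    Returns the posterior mean and variance at the test points Xs."""
    def k(a, b):
        return sf2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = sf2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, np.maximum(var, 0.0)

# Hypothetical toy codes: code2 takes code1's output as its input.
code1 = np.sin
code2 = lambda t: t ** 2

rng = np.random.default_rng(0)
X1 = np.linspace(0.0, np.pi, 8)    # design for code 1
Y1 = code1(X1)
X2 = np.linspace(-1.0, 1.0, 8)     # design for code 2 (input = code 1 output)
Y2 = code2(X2)

# Nested prediction at a new point x: sample the GP of code 1, push each
# sample through the GP of code 2, then take empirical moments.
x = np.array([1.0])
m1, v1 = gp_fit_predict(X1, Y1, x)
samples1 = rng.normal(m1, np.sqrt(v1), size=2000)
m2, v2 = gp_fit_predict(X2, Y2, samples1)
nested_mean = m2.mean()
nested_var = v2.mean() + m2.var()   # law of total variance
```

Even though both emulators are Gaussian, the distribution of the nested output is not, which is why the moments here are obtained by Monte Carlo rather than in closed form.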

Author(s):  
F Lei ◽  
XP Xie ◽  
XW Wang ◽  
YG Wang

In this article, the procedure and efficiency of the reduced-basis approach in structural design computation are studied. As a model order reduction approach, it provides fast evaluation of a structural system in an explicitly parameterized formulation. In theory, the original structural system is reduced by projecting it onto a lower-dimensional subspace. In practice, however, constructing that subspace is time-consuming because of the iterations of the adaptive procedure. To improve the efficiency of the method, some of its characteristics are analyzed. First, the accuracy of the subspace is evaluated and the computational costs of procedures with different approaches are studied. The results show that subspaces constructed by greedy adaptive procedures with different starting points have the same accuracy: the accuracy of the subspace is guaranteed by the adaptive procedure itself, while the computational costs depend on the number of its iterations. A modified adaptive procedure is therefore proposed to reduce the computational costs while guaranteeing accuracy. It begins with experimental design methods that provide a set of samples rather than a single sample, and then continues with the adaptive procedure. The starting set of samples is selected by one of the following experimental design methods: 2^k factorial design, standard Latin design, or Latin hypercube design. By integrating the experimental design, the modified adaptive procedure saves computational costs and retains the same accuracy as the traditional procedure. As an example, the outputs of a vehicle body front compartment subjected to a bending load are illustrated. The results show that the proposed procedure is efficient and applicable to many other structural design contexts.
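Two of the experimental design methods named above are easy to sketch. The snippet below is a generic numpy illustration (not tied to the article's structural model) of generating a 2^k factorial design and a Latin hypercube design, which could serve as the starting sample set for an adaptive procedure.

```python
import itertools
import numpy as np

def full_factorial_2k(k):
    """2^k factorial design: every combination of low/high (-1/+1) levels."""
    return np.array(list(itertools.product([-1.0, 1.0], repeat=k)))

def latin_hypercube(n, k, seed=None):
    """Latin hypercube design on [0, 1]^k: each of the n equal strata of
    every dimension contains exactly one sample."""
    rng = np.random.default_rng(seed)
    # One point per stratum, then shuffle each column to decouple dimensions.
    u = (np.arange(n)[:, None] + rng.random((n, k))) / n
    for j in range(k):
        u[:, j] = rng.permutation(u[:, j])
    return u

starts = full_factorial_2k(3)          # the 8 corners of a 3-D design cube
lhs = latin_hypercube(10, 3, seed=0)   # 10 space-filling points in 3-D
```

The factorial design probes the extremes of the parameter ranges, while the Latin hypercube spreads a fixed budget of samples evenly across each dimension, which is why both are common choices for seeding a reduced-basis construction.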


2006 ◽  
Vol 13 (04) ◽  
pp. 383-392 ◽  
Author(s):  
Fabio Dell'Anno ◽  
Silvio De Siena ◽  
Fabrizio Illuminati

We investigate the efficiency of inseparability criteria in detecting the entanglement properties of two-mode non-Gaussian states of the electromagnetic field. We focus our study on the relevant class of two-mode squeezed number states. These states combine the entangling capability of two-mode squeezers with the non-Gaussian character (nonclassicality) of number states. They allow for some exact analytical treatments, and include as a particular case the two-mode Gaussian squeezed vacuum. We show that the generalized PPT criterion recently proposed by Shchukin and Vogel, based on higher order statistical moments, is very efficient in detecting the entanglement for this class of non-Gaussian states.


2013 ◽  
Vol 26 (3) ◽  
pp. 1063-1083 ◽  
Author(s):  
Maxime Perron ◽  
Philip Sura

Abstract A common assumption in the earth sciences is the Gaussianity of data over time. However, several independent studies in the past few decades have shown this assumption to be mostly false. To be able to study non-Gaussian climate statistics, one must first compile a systematic climatology of the higher statistical moments (skewness and kurtosis; the third and fourth central statistical moments, respectively). Sixty-two years of daily data from the NCEP–NCAR Reanalysis I project are analyzed. The skewness and kurtosis of the data are found at each spatial grid point for the entire time domain. Nine atmospheric variables were chosen for their physical and dynamical relevance in the climate system: geopotential height, relative vorticity, quasigeostrophic potential vorticity, zonal wind, meridional wind, horizontal wind speed, vertical velocity in pressure coordinates, air temperature, and specific humidity. For each variable, plots of significant global skewness and kurtosis are shown for December–February and June–August at a specified pressure level. Additionally, the statistical moments are then zonally averaged to show the vertical dependence of the non-Gaussian statistics. This is a more comprehensive look at non-Gaussian atmospheric statistics than has been taken in previous studies on this topic.
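The two higher moments at the core of this climatology have simple sample estimators. As a generic illustration (using synthetic data, not the NCEP-NCAR reanalysis), the following computes skewness and excess kurtosis and shows how they separate Gaussian from non-Gaussian samples.

```python
import numpy as np

def skewness(x):
    """Third standardized central moment (0 for a Gaussian)."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return ((x - m) ** 3).mean() / s ** 3

def kurtosis(x, excess=True):
    """Fourth standardized central moment (excess form: 0 for a Gaussian)."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    k = ((x - m) ** 4).mean() / s ** 4
    return k - 3.0 if excess else k

rng = np.random.default_rng(0)
gauss = rng.normal(size=200_000)       # symmetric, mesokurtic
expo = rng.exponential(size=200_000)   # right-skewed, heavy-tailed
```

For the exponential sample, the estimates approach the theoretical skewness of 2 and excess kurtosis of 6, while both statistics stay near 0 for the Gaussian sample; applying these estimators grid point by grid point is exactly what produces a moment climatology of the kind described above.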


2013 ◽  
Vol 2 (1) ◽  
pp. 59
Author(s):  
C. Kasmi ◽  
M. Hélier ◽  
M. Darces ◽  
E. Prouff

Modelling the power-grid network is of fundamental interest to analyse the conducted propagation of unintentional and intentional electromagnetic interferences. The propagation is indeed highly influenced by the channel behaviour. In this paper, we investigate the effects of appliances and the position of cables in a low voltage network. First, the power-grid architecture is described. Then, the principle of Experimental Design is recalled. Next, the methodology is applied to power-grid modelling. Finally, we propose an analysis of the statistical moments of the experimental design results. Several outcomes are provided to describe the effects induced by parameter variability on the conducted propagation of spurious compromising emanations.


2013 ◽  
Vol 9 (6) ◽  
pp. 20130902 ◽  
Author(s):  
Caleb E. Strait ◽  
Benjamin Y. Hayden

While standard models of risky choice account for the first and second statistical moments of reward outcome distributions (mean and variance, respectively), they often ignore the third moment, skewness. Determining a decision-maker's attitude about skewness is useful because it can help constrain process models of the mental steps involved in risky choice. We measured three rhesus monkeys’ preferences for gambles whose outcome distributions had almost identical means and variances but differed in skewness. We tested five distributions of skewness: strong negative, weak negative, normal, weak positive and strong positive. Monkeys preferred positively skewed gambles to negatively skewed ones and preferred strongly skewed and normal (i.e. unskewed) gambles to weakly skewed ones. This pattern of preferences cannot be explained solely by monotonic deformations of the utility curve or any other popular single account, but can be accounted for by multiple interacting factors.
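The experimental logic above, matching the first two moments while varying the third, can be reproduced with a pair of illustrative two-outcome gambles (these are hypothetical numbers, not the gambles used in the study): a rare large win versus a rare large loss, with identical mean and variance but mirrored skewness.

```python
import numpy as np

def moments(outcomes, probs):
    """Mean, variance and skewness of a discrete gamble."""
    outcomes = np.asarray(outcomes, dtype=float)
    probs = np.asarray(probs, dtype=float)
    m = probs @ outcomes
    var = probs @ (outcomes - m) ** 2
    skew = probs @ (outcomes - m) ** 3 / var ** 1.5
    return m, var, skew

# Both gambles have mean 1 and variance 9; only the skewness differs in sign.
pos = moments([0.0, 10.0], [0.9, 0.1])   # rare large win  -> positive skew
neg = moments([2.0, -8.0], [0.9, 0.1])   # rare large loss -> negative skew
```

Because mean and variance are held fixed, any systematic preference between such a pair isolates the decision-maker's attitude toward the third moment alone.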


2017 ◽  
Vol 19 (3) ◽  
pp. 282-292 ◽  
Author(s):  
Stefan Buhl ◽  
Dominik Hain ◽  
Frank Hartmann ◽  
Christian Hasse

Due to their capability to capture cycle-to-cycle variations and sporadically occurring phenomena such as misfire and knock, scale-resolving simulations are becoming more and more important for internal combustion engine simulations. Compared to the frequently used unsteady Reynolds-averaged Navier-Stokes approaches, scale-resolving simulations require significantly greater computational costs due to their high spatial and temporal resolution as well as the need to compute several cycles to obtain sufficient statistics. It is well established that the appropriate treatment of boundary conditions is crucial in scale-resolving simulations and both temporally and spatially resolved fluctuations must be prescribed. However, different port modeling strategies can be found in the literature, especially with respect to the extent of the computational domain (boundary close to the flange vs. the entire system up to the plenum) and the numerical treatment of the intake/exhaust when the valves are closed (enabled vs. disabled). This study compares three different port modeling strategies, namely a long ports version, a short ports version and a version with short and temporarily disabled ports based on the well-established Darmstadt benchmark engine. The aim is to identify the requirements for scale-resolving simulations in terms of the treatment of the intake and the exhaust ports to obtain accurate statistics (mean and variance) and cycle-to-cycle variations of the in-cylinder flow field.


2011 ◽  
Vol 139 (12) ◽  
pp. 3964-3973 ◽  
Author(s):  
Jing Lei ◽  
Peter Bickel

Abstract The ensemble Kalman filter is now an important component of ensemble forecasting. While using the linear relationship between the observation and state variables makes it applicable for large systems, relying on linearity introduces nonnegligible bias since the true distribution will never be Gaussian. This paper analyzes the bias of the ensemble Kalman filter from a statistical perspective and proposes a debiasing method called the nonlinear ensemble adjustment filter. This new filter transforms the forecast ensemble in a statistically principled manner so that the updated ensemble has the desired mean and variance. It is also easily localizable and, hence, potentially useful for large systems. Its performance is demonstrated and compared with other Kalman filter and particle filter variants through various experiments on the Lorenz-63 and Lorenz-96 systems. The results show that the new filter is stable and accurate for challenging situations such as nonlinear, high-dimensional systems with sparse observations.
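The linear update that introduces the bias discussed above is the ensemble Kalman analysis step. As a generic reference point (a textbook stochastic EnKF with perturbed observations on a made-up 3-state toy problem, not the paper's nonlinear ensemble adjustment filter), it can be sketched as:

```python
import numpy as np

def enkf_update(ens, H, y, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.
    ens: (n_ens, n_state), H: (n_obs, n_state), y: (n_obs,), R: (n_obs, n_obs)."""
    n_ens = ens.shape[0]
    X = ens - ens.mean(axis=0)               # ensemble anomalies
    P = X.T @ X / (n_ens - 1)                # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens)
    return ens + (y_pert - ens @ H.T) @ K.T

rng = np.random.default_rng(0)
truth = np.array([1.0, -2.0, 0.5])
H = np.array([[1.0, 0.0, 0.0]])              # observe only the first component
R = np.array([[0.01]])
prior = rng.normal(0.0, 1.0, size=(100, 3))  # broad, uninformed prior ensemble
y = H @ truth
post = enkf_update(prior, H, y, R, rng)
```

The update shifts the ensemble mean toward the observation and shrinks the spread of the observed component, but it does so through a purely linear map of the anomalies, which is the assumption the nonlinear ensemble adjustment filter is designed to relax.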

