Sensitivity of Approximate Deconvolution Model Parameters in a Posteriori LES of Interfacial Turbulence

Author(s):  
Mahdi Saeedipour ◽  
Stéphane Vincent ◽  
Stefan Pirker
2020 ◽  
Author(s):  
Rudolf Debelak ◽  
Samuel Pawel ◽  
Carolin Strobl ◽  
Edgar C. Merkle

A family of score-based tests has been proposed in recent years for assessing the invariance of model parameters in several models of item response theory. These tests were originally developed in a maximum likelihood framework. This study aims to extend the theoretical framework of these tests to Bayesian maximum a posteriori estimates and to multiple-group IRT models. We propose two families of statistical tests, which are based on (a) an approximation using a pooled-variance method, or (b) a simulation-based approach built on asymptotic results. The resulting tests were evaluated in a simulation study that investigated their sensitivity to differential item functioning with respect to a categorical or continuous person covariate in the two- and three-parameter logistic models. Whereas the pooled-variance method was found to be practically useful with maximum likelihood as well as maximum a posteriori estimates, the simulation-based approach was found to require large sample sizes to yield satisfactory results.
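Below is a minimal, generic sketch of a score-based invariance test using a pooled (full-sample) variance estimate, in the spirit of a double-maximum statistic. It assumes the per-person score (gradient) contributions from an ML or MAP fit are already available; the function name and the exact statistic are illustrative and do not reproduce the authors' implementation, and critical values are omitted.

```python
import numpy as np

def score_test_statistic(scores, covariate):
    """Generic double-maximum score-based test statistic.

    scores:    (n_persons, n_params) per-person score (gradient) contributions
               evaluated at the fitted ML or MAP parameter estimates.
    covariate: (n_persons,) person covariate used to order the contributions.
    """
    order = np.argsort(covariate)
    s = scores[order]
    n, _ = s.shape
    # Pooled (full-sample) variance estimate of the score contributions;
    # assumed non-singular here for simplicity.
    v = s.T @ s / n
    evals, evecs = np.linalg.eigh(v)
    v_inv_sqrt = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
    # Cumulative score process, decorrelated and scaled by sqrt(n).
    csp = np.cumsum(s @ v_inv_sqrt, axis=0) / np.sqrt(n)
    # Double-maximum statistic: largest absolute fluctuation over persons and parameters.
    return np.abs(csp).max()
```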


1998 ◽  
Vol 06 (01n02) ◽  
pp. 99-115 ◽  
Author(s):  
Purnima Ratilal ◽  
Peter Gerstoft ◽  
Joo Thiam Goh

Based on waveguide physics, a subspace inversion approach is proposed. The ability to estimate a given parameter depends on the sensitivity of the acoustic wavefield to that parameter, and this sensitivity depends on frequency: at low frequencies the field is most sensitive to the bottom parameters, whereas at high frequencies it is most sensitive to the geometric parameters. Thus, the parameter vector to be determined is split into two subspaces, and only the part of the data that is most influenced by the parameters in each subspace is used. The data sets from the Geoacoustic Inversion Workshop (June 1997) are inverted to demonstrate the approach. In each subspace, genetic algorithms are used for the optimization; they provide the flexibility to search over a wide range of parameters and also help in selecting the data sets to be used in the inversion. During optimization, the responses from many environmental parameter sets are computed in order to estimate the a posteriori probabilities of the model parameters. Thus the uniqueness and uncertainty of the model parameters are assessed. Using data from several frequencies to estimate a smaller subspace of parameters iteratively provides stability and greater accuracy in the estimated parameters.
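The sketch below shows one way such a subspace optimization could look: a minimal real-coded genetic algorithm run over a single parameter subspace. The misfit callable (for example, the mismatch between observed and modelled fields restricted to the frequency band most sensitive to that subspace) is a placeholder the user must supply, and the operator choices (tournament selection, uniform crossover, Gaussian mutation) are generic rather than those of the paper.

```python
import numpy as np

def ga_subspace_invert(misfit, bounds, pop_size=64, n_gen=100, p_mut=0.1, seed=0):
    """Minimal real-coded genetic algorithm over one parameter subspace.

    misfit: callable mapping a parameter vector to a scalar data misfit.
    bounds: (n_params, 2) array of lower/upper search bounds for the subspace.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    best, best_fit = None, np.inf
    for _ in range(n_gen):
        fit = np.array([misfit(p) for p in pop])
        if fit.min() < best_fit:
            best, best_fit = pop[fit.argmin()].copy(), fit.min()
        # Tournament selection: the better of two random individuals becomes a parent.
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        winners = np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # Uniform crossover with a randomly permuted mate, then Gaussian mutation.
        mates = parents[rng.permutation(pop_size)]
        mask = rng.random(pop.shape) < 0.5
        children = np.where(mask, parents, mates)
        mutate = rng.random(pop.shape) < p_mut
        children = children + mutate * rng.normal(0.0, 0.05 * (hi - lo), size=pop.shape)
        pop = np.clip(children, lo, hi)
    return best, best_fit
```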


Geophysics ◽  
2011 ◽  
Vol 76 (2) ◽  
pp. E45-E58 ◽  
Author(s):  
Mohammad S. Shahraeeni ◽  
Andrew Curtis

We have developed an extension of the mixture-density neural network as a computationally efficient probabilistic method to solve nonlinear inverse problems. In this method, any postinversion (a posteriori) joint probability density function (PDF) over the model parameters is represented by a weighted sum of multivariate Gaussian PDFs. A mixture-density neural network estimates the weights, mean vector, and covariance matrix of the Gaussians given any measured data set. In one study, we have jointly inverted compressional- and shear-wave velocity for the joint PDF of porosity, clay content, and water saturation in a synthetic, fluid-saturated, dispersed sand-shale system. Results show that if the method is applied appropriately, the joint PDF estimated by the neural network is comparable to the Monte Carlo sampled a posteriori solution of the inverse problem. However, the computational cost of training and using the neural network is much lower than inversion by sampling (more than a factor of 10^4 in this case and potentially a much larger factor for 3D seismic inversion). To analyze the performance of the method on real exploration geophysical data, we have jointly inverted P-wave impedance and Poisson’s ratio logs for the joint PDF of porosity and clay content. Results show that the posterior model PDF of porosity and clay content is a good estimate of actual porosity and clay-content log values. Although the results may vary from one field to another, this fast, probabilistic method of solving nonlinear inverse problems can be applied to invert well logs and large seismic data sets for petrophysical parameters in any field.
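As an illustration of the idea, here is a minimal mixture-density network in PyTorch with diagonal-covariance components: the network maps a measured data vector to the weights, means, and standard deviations of a Gaussian mixture over the model parameters and is trained by minimizing the mixture negative log-likelihood. The architecture, the number of components, and the diagonal covariance are simplifying assumptions for brevity; the networks used in the paper need not match this sketch.

```python
import math
import torch
import torch.nn as nn

class MixtureDensityNetwork(nn.Module):
    """Maps a data vector (e.g. Vp, Vs) to a Gaussian-mixture posterior over
    model parameters (e.g. porosity, clay content, saturation)."""

    def __init__(self, n_in, n_out, n_components=5, n_hidden=64):
        super().__init__()
        self.n_out, self.n_comp = n_out, n_components
        self.body = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh())
        self.weight_head = nn.Linear(n_hidden, n_components)          # mixture weights
        self.mean_head = nn.Linear(n_hidden, n_components * n_out)    # component means
        self.logstd_head = nn.Linear(n_hidden, n_components * n_out)  # diagonal std devs

    def forward(self, x):
        h = self.body(x)
        logw = torch.log_softmax(self.weight_head(h), dim=-1)
        mu = self.mean_head(h).view(-1, self.n_comp, self.n_out)
        logstd = self.logstd_head(h).view(-1, self.n_comp, self.n_out)
        return logw, mu, logstd

def mdn_nll(logw, mu, logstd, y):
    """Negative log-likelihood of target parameters y under the predicted mixture."""
    y = y.unsqueeze(1)  # (batch, 1, n_out), broadcast against the components
    log_comp = -0.5 * (((y - mu) / logstd.exp()) ** 2
                       + 2.0 * logstd + math.log(2.0 * math.pi)).sum(-1)
    return -torch.logsumexp(logw + log_comp, dim=-1).mean()
```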


Geophysics ◽  
1993 ◽  
Vol 58 (4) ◽  
pp. 496-507 ◽  
Author(s):  
Mrinal K. Sen ◽  
Bimalendu B. Bhattacharya ◽  
Paul L. Stoffa

The resistivity interpretation problem involves the estimation of resistivity as a function of depth from the apparent resistivity values measured in the field as a function of electrode separation. This is commonly done either by curve matching using master curves or by more formal linearized inversion methods. The problems with linearized inversion schemes are fairly well known; they require that the starting model be close to the true solution. In this paper, we report the results from the application of a nonlinear global optimization method known as simulated annealing (SA) in the direct interpretation of resistivity sounding data. This method does not require a good starting model but is computationally more expensive. We used the heat bath algorithm of simulated annealing, in which the mean square error (the difference between observed and synthetic data) is used as the energy function that we attempt to minimize. Samples are drawn from the Gibbs probability distribution while the control parameter, the temperature, is slowly lowered, finally resulting in models that are very close to the globally optimal solutions. This method is also described in the framework of Bayesian statistics, in which the Gibbs distribution is identified as the a posteriori probability density function in model space. Computation of the true posterior distribution requires computation of the energy function at each point in model space. However, a fairly good estimate of the most significant portion(s) of the function can be obtained from a simulated annealing run in a reasonable computation time. This can be achieved by making several repeat runs of SA, each time starting with a new random number seed, so that the most significant portion of the model space is adequately sampled. Once the posterior density function is known, many measures of dispersion can be computed. In particular, we compute a mean model and the a posteriori covariance matrix. We have applied this method successfully to synthetic and field data. The resulting correlation and covariance matrices indicate how the model parameters affect one another and are very useful in relating geology to the resulting resistivity values.
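A compact sketch of the heat-bath variant is given below: each model parameter in turn is redrawn from its conditional Gibbs distribution on a discretized grid while the temperature is slowly lowered. The forward model that produces the energy (the mean-square misfit between observed and synthetic apparent resistivities) is left as a user-supplied callable, and the grid discretization and cooling schedule are illustrative assumptions rather than the paper's exact settings. Running the routine several times with different seeds gives the repeat samples used to characterize the posterior.

```python
import numpy as np

def heat_bath_sa(energy, grids, n_sweeps=200, t0=1.0, cooling=0.97, seed=1):
    """Heat-bath simulated annealing over a discretized model space.

    energy: callable returning the mean-square data misfit of a model vector.
    grids:  list of 1-D arrays, the allowed values of each model parameter
            (e.g. layer resistivities and thicknesses).
    """
    rng = np.random.default_rng(seed)
    model = np.array([g[rng.integers(len(g))] for g in grids], dtype=float)
    t = t0
    for _ in range(n_sweeps):
        for i, grid in enumerate(grids):
            # Energy of every candidate value of parameter i, others held fixed.
            e = np.array([energy(np.concatenate([model[:i], [v], model[i + 1:]]))
                          for v in grid])
            # Gibbs probabilities at the current temperature (shifted for stability).
            p = np.exp(-(e - e.min()) / t)
            p /= p.sum()
            model[i] = grid[rng.choice(len(grid), p=p)]
        t *= cooling  # slow cooling toward the globally optimal region
    return model
```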


1994 ◽  
Vol 02 (03) ◽  
pp. 251-266 ◽  
Author(s):  
PETER GERSTOFT

The data set from the Workshop on Acoustic Models in Signal Processing (May 1993) is inverted in order to find both the environmental parameters and the source position. Genetic algorithms are used for the optimization. When using genetic algorithms, the responses from many environmental parameter sets are computed in order to estimate the solution. All these samples of the parameter space are used to estimate the a posteriori probabilities of the model parameters. Thus the uniqueness and uncertainty of the model parameters are assessed.
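The sketch below illustrates how the models visited during such an optimization might be turned into approximate a posteriori marginals: every sampled parameter set is weighted by a Boltzmann factor of its misfit, and the weighted samples are binned per parameter. The Boltzmann weighting and its temperature are one common choice, assumed here for illustration; the exact weighting used in the paper may differ.

```python
import numpy as np

def posterior_marginals(samples, misfits, bounds, n_bins=25, temperature=1.0):
    """Estimate 1-D a posteriori marginals from all models visited by the GA.

    samples: (n_models, n_params) parameter sets evaluated during optimization.
    misfits: (n_models,) corresponding data misfits (energies).
    bounds:  list of (lo, hi) search bounds, one per parameter.
    """
    # Boltzmann weights, shifted by the best misfit for numerical stability.
    w = np.exp(-(misfits - misfits.min()) / temperature)
    w /= w.sum()
    marginals = []
    for j, (lo, hi) in enumerate(bounds):
        hist, edges = np.histogram(samples[:, j], bins=n_bins,
                                   range=(lo, hi), weights=w)
        marginals.append((edges, hist))  # spread of hist reflects parameter uncertainty
    return marginals
```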


Currently available visual explanation generation systems learn to justify a class prediction fluently. However, they may mention visual attributes that reflect a strong class prior, even though the evidence may not actually be present in the image. This is particularly concerning, as such agents fail to build trust with human users. We propose a model that focuses on the discriminative regions of the visible object and jointly predicts the class label and explains why the predicted label is appropriate for the image. The system also annotates images automatically using a hidden Markov model in which concepts are represented as states. The model parameters are estimated from a set of images and their manual annotations. This yields a large collection of annotations, obtained automatically, together with the a posteriori probabilities of the concepts present in the images.
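Given such a trained HMM, the a posteriori probability of each concept at each position of an image's feature sequence can be obtained with the standard forward-backward recursion, as in the minimal sketch below. The discrete visual-feature symbols and the specific matrices are assumptions for illustration only; they are not taken from the paper.

```python
import numpy as np

def concept_posteriors(pi, A, B, obs):
    """Forward-backward state posteriors for a discrete-observation HMM.

    pi:  (n_states,) initial concept probabilities.
    A:   (n_states, n_states) concept transition matrix.
    B:   (n_states, n_symbols) emission probabilities of visual-feature symbols.
    obs: sequence of observed feature symbols for one image (e.g. region codes).
    """
    T, n = len(obs), len(pi)
    alpha = np.zeros((T, n))
    beta = np.ones((T, n))
    # Forward pass: joint probability of the observations so far and each state.
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    # Backward pass: probability of the remaining observations given each state.
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)  # a posteriori concept probabilities
```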


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6011 ◽  
Author(s):  
Jan Steinbrener ◽  
Konstantin Posch ◽  
Jürgen Pilz

We present a novel approach for training deep neural networks in a Bayesian way. Compared to other Bayesian deep learning formulations, our approach allows for quantifying the uncertainty in model parameters while adding only very few additional parameters to be optimized. The proposed approach uses variational inference to approximate the intractable a posteriori distribution on the basis of a normal prior. Because the a posteriori uncertainty of the network parameters is represented per network layer and as a function of the estimated parameter expectation values, only very few additional parameters need to be optimized compared to a non-Bayesian network. We compare our approach to classical deep learning, Bernoulli dropout, and Bayes by Backprop using the MNIST dataset. Compared to classical deep learning, the test error is reduced by 15%. We also show that the uncertainty information obtained can be used to calculate credible intervals for the network prediction and to optimize the network architecture for the dataset at hand. To illustrate that our approach also scales to large networks and input vector sizes, we apply it to the GoogLeNet architecture on a custom dataset, achieving an average accuracy of 0.92. Using 95% credible intervals, all but one wrong classification result can be detected.
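The sketch below shows one simple way such credible intervals could be computed at prediction time: draw several weight samples from the approximate posterior, collect the resulting class probabilities, and flag a prediction whose 95% interval overlaps that of another class. The sampling callable and the overlap rule are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def credible_interval_prediction(sample_fn, x, n_samples=100, level=0.95):
    """Monte Carlo credible intervals for the class probabilities of a Bayesian net.

    sample_fn: hypothetical callable that draws one set of network weights from
               the approximate posterior and returns class probabilities for x.
    """
    draws = np.stack([sample_fn(x) for _ in range(n_samples)])  # (n_samples, n_classes)
    lo = np.percentile(draws, 100.0 * (1.0 - level) / 2.0, axis=0)
    hi = np.percentile(draws, 100.0 * (1.0 + level) / 2.0, axis=0)
    mean = draws.mean(axis=0)
    pred = int(mean.argmax())
    # Flag the prediction as uncertain if its interval overlaps any other class's.
    uncertain = any(hi[k] >= lo[pred] for k in range(len(mean)) if k != pred)
    return pred, (lo[pred], hi[pred]), uncertain
```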


1990 ◽  
Vol 2 (2) ◽  
pp. 216-225 ◽  
Author(s):  
Reza Shadmehr ◽  
David Z. D'Argenio

The feasibility of developing a neural network to perform nonlinear Bayesian estimation from sparse data is explored using an example from clinical pharmacology. The problem involves estimating parameters of a dynamic model describing the pharmacokinetics of the bronchodilator theophylline from limited plasma concentration measurements of the drug obtained in a patient. The estimation performance of a backpropagation-trained network is compared to that of the maximum likelihood estimator as well as the maximum a posteriori probability estimator. In the example considered, the estimator prediction errors (for model parameters and outputs) obtained from the trained neural network were similar to those obtained using the nonlinear Bayesian estimator.
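For orientation, a minimal maximum a posteriori fit of a sparsely sampled concentration profile is sketched below, using a one-compartment intravenous-bolus model with a normal prior on the log parameters and additive measurement noise. The one-compartment structure, the prior, and the error model are simplifying assumptions for illustration and do not reproduce the paper's theophylline model.

```python
import numpy as np
from scipy.optimize import minimize

def map_estimate(times, conc, dose, prior_mean, prior_cov, sigma=0.5):
    """MAP estimate of log(CL), log(V) for a one-compartment IV-bolus model
    fitted to sparse plasma concentration data.

    times, conc: measurement times (h) and observed concentrations.
    prior_mean, prior_cov: normal prior on the log parameters (assumed values).
    sigma: assumed additive measurement noise standard deviation.
    """
    prior_prec = np.linalg.inv(prior_cov)

    def model(logp):
        cl, v = np.exp(logp)
        return (dose / v) * np.exp(-(cl / v) * times)  # predicted concentrations

    def neg_log_post(logp):
        resid = conc - model(logp)
        d = logp - prior_mean
        # Gaussian likelihood plus Gaussian prior, up to additive constants.
        return 0.5 * np.sum(resid ** 2) / sigma ** 2 + 0.5 * d @ prior_prec @ d

    res = minimize(neg_log_post, prior_mean, method="Nelder-Mead")
    return np.exp(res.x)  # clearance and volume of distribution estimates
```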

