Lossless, scalable implicit likelihood inference for cosmological fields

2021 · Vol 2021 (11) · pp. 049
Author(s):
T. Lucas Makinen
Tom Charnock
Justin Alsing
Benjamin D. Wandelt

Abstract We present a comparison of simulation-based inference to full, field-based analytical inference in cosmological data analysis. To do so, we explore parameter inference for two cases where the information content is calculable analytically: Gaussian random fields whose covariance depends on parameters through the power spectrum; and correlated lognormal fields with cosmological power spectra. We compare two inference techniques: i) explicit field-level inference using the known likelihood and ii) implicit likelihood inference with maximally informative summary statistics compressed via Information Maximising Neural Networks (IMNNs). We find that a) summaries obtained from convolutional neural network compression do not lose information and therefore saturate the known field information content, both for the Gaussian covariance and the lognormal cases, b) simulation-based inference using these maximally informative nonlinear summaries nearly losslessly recovers the exact posteriors of field-level inference, bypassing the need to evaluate expensive likelihoods or invert covariance matrices, and c) even for this simple example, implicit, simulation-based inference incurs a much smaller computational cost than inference with an explicit likelihood. This work uses a new IMNN implementation in Jax that can take advantage of a fully differentiable simulation and inference pipeline. We also demonstrate that a single retraining of the IMNN summaries effectively achieves the theoretically maximal information, enhancing robustness to the choice of fiducial model at which the IMNN is trained.
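
The field-level likelihood that serves as the analytic benchmark here is cheap to write down in Fourier space, where a Gaussian random field's covariance is diagonal and set by the power spectrum. A minimal sketch in JAX (in the spirit of the Jax implementation the abstract mentions, not its actual code; the amplitude-slope parametrization P(k) = A k^-B and all names below are illustrative assumptions):

```python
# Hypothetical sketch: field-level Gaussian log-likelihood in Fourier space,
# where the covariance is diagonal and set by a power spectrum P(k) = A * k^(-B).
# The amplitude/slope parametrization is an assumption for illustration only.
import jax.numpy as jnp
from jax import grad

def log_likelihood(params, field):
    A, B = params
    n = field.shape[0]
    kx = jnp.fft.fftfreq(n) * n
    k = jnp.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
    k = k.at[0, 0].set(1.0)                # avoid division by zero (the DC mode
                                           # would normally be masked out)
    P = A * k ** (-B)                      # power spectrum = diagonal covariance
    f_k = jnp.fft.fft2(field) / n          # Fourier modes of the field
    # Gaussian log-likelihood: modes are independent with variance P(k)
    return -0.5 * jnp.sum(jnp.abs(f_k) ** 2 / P + jnp.log(P))

# Because the pipeline is differentiable, parameter gradients come for free:
score = grad(log_likelihood)(jnp.array([1.0, 2.0]), jnp.ones((32, 32)))
```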

2020 · Vol 223 (3) · pp. 1837-1863
Author(s):
M C Manassero
J C Afonso
F Zyserman
S Zlotnik
I Fomin

SUMMARY Simulation-based probabilistic inversions of 3-D magnetotelluric (MT) data are arguably the best option to deal with the nonlinearity and non-uniqueness of the MT problem. However, the computational cost associated with the modelling of 3-D MT data has so far precluded the community from adopting and/or pursuing full probabilistic inversions of large MT data sets. In this contribution, we present a novel and general inversion framework, driven by Markov Chain Monte Carlo (MCMC) algorithms, which combines (i) an efficient parallel-in-parallel structure to solve the 3-D forward problem, (ii) a reduced order technique to create fast and accurate surrogate models of the forward problem and (iii) adaptive strategies for both the MCMC algorithm and the surrogate model. In particular, and contrary to traditional implementations, the adaptation of the surrogate is integrated into the MCMC inversion. This circumvents the need for costly offline stages to build the surrogate and further increases the overall efficiency of the method. We demonstrate the feasibility and performance of our approach to invert for large-scale conductivity structures with two numerical examples using different parametrizations and dimensionalities. In both cases, we report staggering gains in computational efficiency compared to traditional MCMC implementations. Our method finally removes the main bottleneck of probabilistic inversions of 3-D MT data and opens up new opportunities for both stand-alone MT inversions and multi-observable joint inversions for the physical state of the Earth's interior.
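
To make the surrogate-inside-MCMC idea concrete, here is a deliberately minimal sketch on a 1-D toy problem: most proposals are scored with a cheap surrogate, and the expensive forward solve runs only occasionally, with each exact evaluation folded back into the surrogate's training set. The nearest-neighbour surrogate and refinement schedule are illustrative placeholders, not the reduced-order model of the paper:

```python
# Minimal sketch of MCMC with an online-adapted surrogate: most proposals are
# scored with a cheap surrogate; the expensive forward model is evaluated
# occasionally to refine it. Toy 1-D problem; not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)
d_obs, sigma = 1.3, 0.1

def forward(m):                      # stand-in for an expensive 3-D MT solve
    return np.sin(m) + 0.1 * m**2

def log_post(d_pred, m):             # Gaussian misfit + standard normal prior
    return -0.5 * ((d_pred - d_obs) / sigma) ** 2 - 0.5 * m**2

train_m, train_d = [0.0], [forward(0.0)]   # surrogate training set, grown online

def surrogate(m):                    # nearest-neighbour surrogate (simplest choice)
    return train_d[int(np.argmin([abs(m - t) for t in train_m]))]

m, lp = 0.0, log_post(forward(0.0), 0.0)
for it in range(5000):
    m_new = m + 0.3 * rng.standard_normal()
    refine = it % 50 == 0            # occasional exact solve adapts the surrogate
    d_new = forward(m_new) if refine else surrogate(m_new)
    if refine:
        train_m.append(m_new); train_d.append(d_new)
    lp_new = log_post(d_new, m_new)
    if np.log(rng.uniform()) < lp_new - lp:   # Metropolis accept/reject
        m, lp = m_new, lp_new
```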


2020 · Vol 143 (2)
Author(s):
Kamrul Hasan Rahi
Hemant Kumar Singh
Tapabrata Ray

Abstract Real-world design optimization problems commonly entail constraints that must be satisfied for the design to be viable. Mathematically, the constraints divide the search space into feasible (where all constraints are satisfied) and infeasible (where at least one constraint is violated) regions. The presence of multiple constraints, constricted and/or disconnected feasible regions, non-linearity and multi-modality of the underlying functions could significantly slow down the convergence of evolutionary algorithms (EA). Since each design evaluation incurs some time/computational cost, it is of significant interest to improve the rate of convergence to obtain competitive solutions with relatively fewer design evaluations. In this study, we propose to accomplish this using two mechanisms: (a) more intensified search by identifying promising regions through “bump-hunting,” and (b) use of infeasibility-driven ranking to exploit the fact that optimal solutions are likely to be located on constraint boundaries. Numerical experiments are conducted on a range of mathematical benchmarks and empirically formulated engineering problems, as well as a simulation-based wind turbine design optimization problem. The proposed approach shows up to 53.48% improvement in median objective values and up to 69.23% reduction in cost of identifying a feasible solution compared with a baseline EA.
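
The infeasibility-driven ranking in (b) can be illustrated in a few lines: rather than always ranking feasible solutions above infeasible ones, a fraction of marginally infeasible solutions is kept near the top so the population can approach the constraint boundary from both sides. The rule below is a hedged illustration, not the exact operator of the paper:

```python
# Sketch of infeasibility-driven ranking for a constrained EA: a few marginally
# infeasible solutions are retained near the top of the ranking, exploiting the
# fact that constrained optima often lie on constraint boundaries.
# The ranking rule is illustrative, not the paper's exact operator.
import numpy as np

def rank(pop_f, pop_violation, keep_infeasible=0.2):
    """pop_f: objective values; pop_violation: total constraint violation (>= 0)."""
    feas = pop_violation == 0
    order_feas = np.argsort(pop_f[feas])              # feasible: sort by objective
    order_inf = np.argsort(pop_violation[~feas])      # infeasible: sort by violation
    idx_feas = np.flatnonzero(feas)[order_feas]
    idx_inf = np.flatnonzero(~feas)[order_inf]
    n_keep = int(keep_infeasible * len(pop_f))
    # slot the least-violating infeasible solutions just behind the best feasible one
    return np.concatenate([idx_feas[:1], idx_inf[:n_keep],
                           idx_feas[1:], idx_inf[n_keep:]])

ranking = rank(np.array([3.0, 1.0, 2.5]), np.array([0.0, 0.2, 0.0]))
```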


Author(s):
Tong Zou
Sankaran Mahadevan
Akhil Sopory

A novel reliability-based design optimization (RBDO) method, using simulation-based techniques for reliability assessment and an efficient optimization approach, is presented in this paper. In RBDO, model-based reliability analysis must be performed to calculate the probability of not satisfying a reliability constraint and the gradient of this probability with respect to each design variable. Among model-based methods, the most widely used in RBDO is the first-order reliability method (FORM). However, FORM can be inaccurate for nonlinear problems and is not applicable to system reliability problems. This paper develops an efficient optimization methodology to perform RBDO using simulation-based techniques. By combining analytical and simulation-based reliability methods, accurate failure probability and sensitivity information is obtained. The use of simulation also enables both component-level and system-level reliabilities to be included in the RBDO formulation. Instead of using a traditional RBDO formulation in which optimization and reliability computations are nested, a sequential approach is developed to greatly reduce the computational cost. The efficiency of the proposed RBDO approach is further enhanced by using a multi-modal adaptive importance sampling technique for simulation-based reliability assessment, and by properly treating inactive reliability constraints during optimization. A vehicle side-impact problem is used to demonstrate the capabilities of the proposed method.
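
As a concrete illustration of the simulation-based reliability assessment, the sketch below estimates a failure probability P[g(X) <= 0] by importance sampling: draws come from a density centred near the failure region and are reweighted by the ratio of the nominal to the importance density. This single-mode toy stands in for the paper's multi-modal adaptive scheme; the limit-state function and centre point are assumptions:

```python
# Hedged sketch of simulation-based reliability assessment via importance
# sampling. Single-mode toy version; the paper uses a multi-modal adaptive scheme.
import numpy as np
from scipy import stats

def g(x):                              # limit-state function; failure when g(x) <= 0
    return 3.0 - x[:, 0] - x[:, 1]

rng = np.random.default_rng(1)
center = np.array([1.5, 1.5])          # assumed design point near the failure boundary
n = 100_000
x = rng.normal(center, 1.0, size=(n, 2))
# weights = (nominal standard normal density) / (importance density)
w = np.exp(stats.multivariate_normal(np.zeros(2)).logpdf(x)
           - stats.multivariate_normal(center).logpdf(x))
pf = np.mean((g(x) <= 0) * w)          # unbiased estimate of the failure probability
```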


2019 · Vol 30 (01) · pp. 181-223
Author(s):
Lukas Herrmann
Kristin Kirchner
Christoph Schwab

We propose and analyze several multilevel algorithms for the fast simulation of possibly nonstationary Gaussian random fields (GRFs) indexed, for example, by the closure of a bounded domain $\mathcal{D} \subset \mathbb{R}^d$ or, more generally, by a compact metric space $\mathcal{X}$ such as a compact $d$-manifold $\mathcal{M}$. A colored GRF $\mathcal{Z}$, admissible for our algorithms, solves the stochastic fractional-order equation $A^{\beta}\mathcal{Z} = \mathcal{W}$ for some $\beta > d/4$, where $A$ is a linear, local, second-order elliptic self-adjoint differential operator in divergence form and $\mathcal{W}$ is white noise on $\mathcal{X}$. We thus consider GRFs on $\mathcal{X}$ with covariance operators of the form $A^{-2\beta}$. The proposed algorithms numerically approximate samples of $\mathcal{Z}$ on nested sequences $(\mathcal{T}_\ell)_{\ell \ge 0}$ of regular, simplicial partitions $\mathcal{T}_\ell$ of $\mathcal{D}$ and $\mathcal{M}$, respectively. Work and memory to compute one approximate realization of the GRF $\mathcal{Z}_\ell$ on the triangulation $\mathcal{T}_\ell$ of $\mathcal{X}$ with consistency $O(N_\ell^{-\rho})$, for some consistency order $\rho > 0$, scale essentially linearly in $N_\ell = \#\mathcal{T}_\ell$, independent of the possibly low regularity of the GRF. The algorithms are based on a sinc quadrature for an integral representation of (the application of) the negative fractional-order elliptic "coloring" operator $A^{-\beta}$ to white noise $\mathcal{W}$. For the proposed numerical approximation, we prove bounds on the computational cost and the consistency error in various norms.
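
The sinc-quadrature idea can be sketched in matrix form via the Balakrishnan integral $A^{-\beta} = \frac{\sin(\pi\beta)}{\pi} \int_{-\infty}^{\infty} e^{(1-\beta)y} (e^y I + A)^{-1}\,dy$ (valid for $0 < \beta < 1$), discretized at equispaced nodes $y_k = kh$. The toy dense solve below stands in for sparse FEM solves on each triangulation; all sizes and parameters are illustrative:

```python
# Sketch of the sinc-quadrature idea in matrix form: apply A^(-beta) to a
# white-noise vector by quadrature over shifted resolvent solves. Toy dense
# solve; a real code would use the FEM stiffness matrix and sparse solvers.
import numpy as np

def frac_inverse_apply(A, w, beta, h=0.3, K=40):
    n = A.shape[0]
    out = np.zeros(n)
    for k in range(-K, K + 1):
        y = k * h
        # one resolvent solve per quadrature node
        out += h * np.exp((1 - beta) * y) * np.linalg.solve(np.exp(y) * np.eye(n) + A, w)
    return np.sin(np.pi * beta) / np.pi * out

A = np.diag([1.0, 4.0, 9.0])             # toy SPD operator with known eigenvalues
w = np.array([1.0, 1.0, 1.0])
z = frac_inverse_apply(A, w, beta=0.75)  # compare against eigenvalues**(-0.75)
```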


2011 · Vol 27 (5) · pp. 933-956
Author(s):
Thomas Flury
Neil Shephard

We note that likelihood inference can be based on an unbiased simulation-based estimator of the likelihood when it is used inside a Metropolis–Hastings algorithm. This result has recently been introduced in the statistics literature by Andrieu, Doucet, and Holenstein (2010, Journal of the Royal Statistical Society, Series B, 72, 269–342) and is perhaps surprising given the results on maximum simulated likelihood estimation. Bayesian inference based on simulated likelihood can be widely applied in microeconomics, macroeconomics, and financial econometrics. One way of generating unbiased estimates of the likelihood is through a particle filter. We illustrate these methods on four problems, producing rather generic methods. Taken together, these methods imply that if we can simulate from an economic model, we can carry out likelihood-based inference using its simulations.
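
The mechanics are easy to sketch: a bootstrap particle filter returns an estimate of the likelihood that is unbiased on the likelihood scale, and using that estimate inside Metropolis–Hastings still targets the exact posterior (the pseudo-marginal argument). Below, a toy linear-Gaussian state-space model with one autoregressive parameter; all settings are illustrative:

```python
# Sketch of pseudo-marginal MH: a bootstrap particle filter supplies an unbiased
# likelihood estimate, which is plugged directly into the accept/reject step.
import numpy as np

rng = np.random.default_rng(2)

def pf_loglik(phi, y, n_part=200):
    """Particle estimate of the log-likelihood for
    x_t = phi*x_{t-1} + e_t,  y_t = x_t + v_t,  e, v ~ N(0, 1)."""
    x = rng.standard_normal(n_part)
    ll = 0.0
    for yt in y:
        x = phi * x + rng.standard_normal(n_part)        # propagate particles
        logw = -0.5 * (yt - x) ** 2                      # measurement density kernel
        ll += np.log(np.mean(np.exp(logw)) / np.sqrt(2 * np.pi))
        w = np.exp(logw); w /= w.sum()
        x = rng.choice(x, size=n_part, p=w)              # resample
    return ll

y = rng.standard_normal(50)                              # stand-in data
phi, ll = 0.5, pf_loglik(0.5, y)
for _ in range(2000):                                    # pseudo-marginal MH chain
    phi_new = phi + 0.1 * rng.standard_normal()
    if abs(phi_new) >= 1:                                # uniform(-1, 1) prior on phi
        continue
    ll_new = pf_loglik(phi_new, y)
    if np.log(rng.uniform()) < ll_new - ll:
        phi, ll = phi_new, ll_new
```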


Author(s):
Baijnath Kaushik
Navdeep Kaur
Amit Kumar Kohli
...

The objective of this paper is to present a novel method for achieving maximum reliability in fault-tolerant optimal network design when networks have variable size. Reliability calculation is the most important and critical component of fault-tolerant optimal network design. A network must be supplied with certain parameters that guarantee proper functionality and maintainability in worst-case situations. Many alternative methods for measuring reliability have been proposed in the literature for optimal network design; most of these are analytical or simulation-based. Such methods compute reliability effectively when a network has limited size, but require significant computational effort as networks grow. A novel neural network method is therefore presented to achieve high reliability in fault-tolerant optimal network design for rapidly growing, variable-size networks. This paper compares analytical and simulation-based methods with neural network methods based on gradient descent with an improved learning rate. Results show that improved optimal network design with maximum reliability is achievable by the novel neural network at a manageable computational cost.
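
For context, the simulation-based baseline such comparisons typically use can be written in a few lines: sample independent link failures and count how often a source-sink pair stays connected. The bridge network, availabilities, and terminal choice below are hypothetical:

```python
# Hedged sketch of a standard simulation-based baseline: estimate two-terminal
# network reliability by sampling link failures and checking connectivity.
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]   # hypothetical bridge network
p_up = 0.9                                         # per-link availability
rng = np.random.default_rng(3)

def connected(up_edges, src=0, dst=3):
    seen, stack = {src}, [src]
    while stack:                                   # depth-first search over up links
        u = stack.pop()
        for a, b in up_edges:
            if a == u and b not in seen:
                seen.add(b); stack.append(b)
            elif b == u and a not in seen:
                seen.add(a); stack.append(a)
    return dst in seen

trials = 100_000
hits = sum(connected([e for e in edges if rng.uniform() < p_up])
           for _ in range(trials))
print(hits / trials)                               # Monte Carlo reliability estimate
```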


2020
Author(s):
Anna E. Sikorska-Senoner
Bettina Schaefli
Jan Seibert

Abstract. For extreme flood estimation, simulation-based approaches represent an interesting alternative to purely statistical approaches, particularly if hydrograph shapes are required. Such simulation-based methods are adopted within continuous simulation frameworks that rely on statistical analyses of continuous streamflow time series derived from a hydrological model fed with long precipitation time series. These frameworks are, however, affected by high computational demands, particularly if floods with return periods > 1000 years are of interest or if modelling uncertainty due to different sources (meteorological input or hydrological model) is to be quantified. Here, we propose three methods for reducing the computational requirements of the hydrological simulations for extreme flood estimation, so that long streamflow time series can be analysed at a reduced computational cost. These methods rely on simulating annual maxima and analyzing their simulated range to downsize the hydrological parameter ensemble to a small number of members suitable for continuous simulation frameworks. The methods are tested in a Swiss catchment with 10 000 years of synthetic streamflow data simulated with a weather generator. Our results demonstrate the reliability of the proposed downsizing methods for robust simulation of extreme floods together with their uncertainty. The methods are readily transferable to other situations where ensemble simulations are needed.
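
One plausible reading of the downsizing step, sketched below under stated assumptions (illustrative, not necessarily the paper's exact selection rule): compute each ensemble member's annual-maxima series, then retain the members that best track chosen quantiles of the full ensemble, giving a small set that still spans the simulated range:

```python
# Sketch of one plausible downsizing strategy: from a full ensemble of parameter
# sets, keep the members whose simulated annual maxima best track chosen
# quantiles (low / median / high) of the whole ensemble. Toy data throughout.
import numpy as np

rng = np.random.default_rng(4)
annual_max = rng.gumbel(100, 20, size=(500, 100))   # 500 members x 100 years

def downsize(annual_max, quantiles=(0.05, 0.5, 0.95)):
    picked = []
    for q in quantiles:
        target = np.quantile(annual_max, q, axis=0)          # ensemble quantile per year
        dist = np.mean((annual_max - target) ** 2, axis=1)   # member-to-quantile distance
        picked.append(int(np.argmin(dist)))
    return picked

members = downsize(annual_max)   # small subset reusable in the continuous framework
```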


2018
Author(s):
Z. Faidon Brotzakis
Michele Parrinello

Abstract Protein conformational transitions often involve many slow degrees of freedom. Knowledge of them gives distinctive advantages, since it provides chemical and mechanistic insight and accelerates the convergence of enhanced sampling techniques that rely on collective variables. In this study, we applied a recently developed variational approach to conformational dynamics, in combination with metadynamics, to the conformational transition of the moderate-size protein L99A T4 lysozyme. In order to find the slow modes of the system, we combined data from NMR experiments and short MD simulations. A metadynamics simulation based on this information reveals the presence of two intermediate states, at an affordable computational cost.
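
In its simplest linear form, the variational identification of slow modes reduces to a generalized eigenvalue problem built from time-lagged covariances of trajectory features (a TICA-like estimator; the sketch below is a simplified stand-in for the variational machinery actually used, with synthetic data):

```python
# Sketch of slow-mode estimation from time-lagged covariances: solve the
# generalized eigenproblem C(tau) v = lambda C(0) v; the largest lambda gives
# the slowest linear mode. Simplified, illustrative estimator.
import numpy as np
from scipy.linalg import eigh

def slow_modes(X, tau=10):
    """X: (n_frames, n_features) trajectory features."""
    X = X - X.mean(axis=0)                            # remove the mean
    C0 = X[:-tau].T @ X[:-tau] / (len(X) - tau)       # instantaneous covariance
    Ct = X[:-tau].T @ X[tau:] / (len(X) - tau)        # time-lagged covariance
    Ct = 0.5 * (Ct + Ct.T)                            # symmetrize (reversibility)
    vals, vecs = eigh(Ct, C0)                         # generalized eigenproblem
    order = np.argsort(vals)[::-1]                    # largest lambda = slowest mode
    return vals[order], vecs[:, order]

traj = np.cumsum(np.random.default_rng(5).standard_normal((5000, 3)), axis=0)
lam, modes = slow_modes(traj)     # modes[:, 0] approximates the slowest mode
```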


1988 · Vol 130 · pp. 557-557
Author(s):
August E. Evrard

The next move forward in simulations of cosmological structure is to include the hydrodynamics and thermal history of a gaseous component. The task is not an easy one. The dynamic range is wide in all interesting quantities (density, temperature, length scales, time scales, etc.). Generic initial mass distributions sampled from Gaussian random fields will, for many interesting power spectra, lead to a high degree of substructure present at all stages of the evolution. Grid-based hydrodynamic techniques currently lack the resolution necessary to evolve several levels of a clustering hierarchy simultaneously. A particle-based method known as SPH (Smoothed Particle Hydrodynamics; see Monaghan (1985) for a review) appears best suited for cosmological application. I have recently embedded the technique into the P3M N-body code, described by Efstathiou et al. (1985) and used extensively by Efstathiou and collaborators, most recently in investigations of the cold dark matter scenario.
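
The core of SPH is a kernel-weighted density estimate, rho_i = sum_j m_j W(|r_i - r_j|, h); pressure forces and artificial viscosity build on it. A minimal sketch with the standard cubic spline kernel in 3-D, all particle data synthetic:

```python
# Minimal sketch of the SPH density estimate with the standard cubic spline
# kernel in 3-D. Illustrative only; real codes use neighbour lists, not an
# all-pairs distance matrix.
import numpy as np

def cubic_spline_W(r, h):
    q = r / h
    sigma = 1.0 / (np.pi * h**3)                       # 3-D normalization
    return sigma * np.where(q < 1, 1 - 1.5 * q**2 + 0.75 * q**3,
                            np.where(q < 2, 0.25 * (2 - q) ** 3, 0.0))

def sph_density(pos, mass, h):
    diff = pos[:, None, :] - pos[None, :, :]           # pairwise separations
    r = np.sqrt((diff ** 2).sum(-1))
    return (mass[None, :] * cubic_spline_W(r, h)).sum(axis=1)

pos = np.random.default_rng(6).uniform(0, 1, (100, 3))
rho = sph_density(pos, mass=np.full(100, 1.0 / 100), h=0.2)
```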


1988 · Vol 32 (04) · pp. 297-304
Author(s):
Y. N. Chen
S. A. Mavrakis

Spectral fatigue analysis has frequently been applied to welded joints in steel offshore structures. Although, on a theoretical basis, the spectral formulation holds certain advantages over other formulations, such as the discrete, design-wave type of analysis, numerical methods developed on that basis generally suffer from a lack of precision and high computational cost. This paper synthesizes the uncertainties resulting from modeling errors heretofore regarded as unavoidable in an analysis. Such errors are traced to the approximations introduced in the handling of wave data, in the numerical integration of the response power spectra, and in the integration that leads to the determination of cumulative fatigue damage. For each of these sources of modeling error, a transparent, closed-form method is proposed which not only eliminates the potential errors but, surprisingly, improves the computational efficiency many times over. The sensitivity of fatigue damage to the variability of the shape parameter due to variability of the wave environment, for the so-called simplified analysis utilizing an idealized mathematical long-term probability density function (for example, the Weibull distribution), is also discussed.
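
For the narrow-band case, the closed-form route from response spectrum to cumulative damage is short enough to show in full: compute the spectral moments m0 and m2, the mean zero-crossing rate, and apply the Rayleigh-amplitude Miner sum D = (nu0 T / K) (2 sqrt(2 m0))^m Gamma(1 + m/2). The spectrum and S-N parameters below are illustrative placeholders:

```python
# Worked example of a closed-form spectral fatigue step: narrow-band Gaussian
# response, Rayleigh-distributed amplitudes, S-N curve N = K * S^(-m).
# All numbers are illustrative placeholders.
import numpy as np
from scipy.special import gamma
from scipy.integrate import trapezoid

omega = np.linspace(0.1, 3.0, 500)            # frequency grid (rad/s)
S = 1e4 * omega**2 * np.exp(-omega)           # toy response power spectrum
m0 = trapezoid(S, omega)                      # zeroth spectral moment
m2 = trapezoid(omega**2 * S, omega)           # second spectral moment
nu0 = np.sqrt(m2 / m0) / (2 * np.pi)          # mean zero-crossing rate (Hz)
m_sn, K = 3.0, 1e12                           # S-N curve parameters (assumed)
T = 20 * 365.25 * 86400                       # 20-year exposure in seconds
D = nu0 * T / K * (2 * np.sqrt(2 * m0)) ** m_sn * gamma(1 + m_sn / 2)
```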

