Bayesian Random Tomography of Particle Systems

2021 ◽  
Vol 8 ◽  
Author(s):  
Nima Vakili ◽  
Michael Habeck

Random tomography is a common problem in imaging science and refers to the task of reconstructing a three-dimensional volume from two-dimensional projection images acquired in unknown random directions. We present a Bayesian approach to random tomography. At the center of our approach is a meshless representation of the unknown volume as a mixture of spherical Gaussians. Each Gaussian can be interpreted as a particle such that the unknown volume is represented by a particle cloud. The particle representation allows us to speed up the computation of projection images and to represent a large variety of structures accurately and efficiently. We develop Markov chain Monte Carlo algorithms to infer the particle positions as well as the unknown orientations. Posterior sampling is challenging due to the high dimensionality and multimodality of the posterior distribution. We tackle these challenges by using Hamiltonian Monte Carlo and a global rotational sampling strategy. We test the approach on various simulated and real datasets.
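The projection step has a compact form: an isotropic 3D Gaussian integrated along any direction is a 2D Gaussian with the same width, so a projection image reduces to rotating the particle cloud, dropping the projection axis, and summing 2D Gaussians. The sketch below illustrates this idea under assumed array shapes and function names; it is not the authors' implementation.

```python
import numpy as np

def project_particles(positions, weights, sigma, R, grid):
    """Project a particle cloud (mixture of isotropic 3D Gaussians) to a 2D image.

    positions : (N, 3) particle centers
    weights   : (N,) mixture weights
    sigma     : common standard deviation of the spherical Gaussians
    R         : (3, 3) rotation matrix for the (unknown) viewing direction
    grid      : (M, 2) flattened 2D pixel coordinates
    """
    # Rotate the cloud and drop the axis along which we integrate;
    # the projection of each spherical Gaussian is a 2D Gaussian with the same sigma.
    xy = (positions @ R.T)[:, :2]                              # (N, 2)
    d2 = ((grid[:, None, :] - xy[None, :, :]) ** 2).sum(-1)    # (M, N) squared distances
    img = (weights * np.exp(-0.5 * d2 / sigma**2)).sum(-1)     # sum over particles
    return img / (2 * np.pi * sigma**2)
```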

Author(s):  
Galiya Z. Lotova

Abstract: Some problems of the theory of electron transfer in gases under the action of a strong external electric field are considered in the paper. Based on the three-dimensional ELSHOW algorithm, samples of states of particles in an electron avalanche are obtained at a given time moment in order to calculate the corresponding 'diffusion radii' and diffusion coefficients. Randomized projection estimators and kernel estimators (for test purposes) are constructed with the use of grouped samples for evaluation of the distribution density of particles in an avalanche. Test computations demonstrate the high efficiency of projection estimators for calculation of diffusive characteristics.
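A projection (orthogonal-series) density estimator expands the unknown density in an orthonormal basis and estimates each coefficient by a sample average of the corresponding basis function. The sketch below uses normalized Legendre polynomials on a finite interval and plain (ungrouped) samples; for grouped samples, as in the paper, the sample mean would be replaced by a weighted sum over group midpoints. All names and the choice of basis are illustrative assumptions, not the ELSHOW implementation.

```python
import numpy as np
from numpy.polynomial import legendre

def projection_density(sample, n_terms=8, a=-1.0, b=1.0):
    """Orthogonal-series ('projection') density estimator on [a, b]
    built from normalized Legendre polynomials (illustrative sketch)."""
    sample = np.asarray(sample, dtype=float)
    x = 2.0 * (sample - a) / (b - a) - 1.0          # map sample to [-1, 1]
    # c_k = E[phi_k(X)], estimated by the sample mean, with phi_k = sqrt((2k+1)/2) P_k
    coeffs = np.array([
        np.sqrt((2 * k + 1) / 2.0) *
        legendre.legval(x, np.eye(n_terms)[k]).mean()
        for k in range(n_terms)
    ])

    def density(t):
        u = 2.0 * (np.asarray(t, dtype=float) - a) / (b - a) - 1.0
        basis = np.stack([np.sqrt((2 * k + 1) / 2.0) *
                          legendre.legval(u, np.eye(n_terms)[k])
                          for k in range(n_terms)])
        # A truncated series can dip below zero; clip it and apply the Jacobian
        # of the rescaling from [-1, 1] back to [a, b].
        return np.clip(coeffs @ basis, 0.0, None) * 2.0 / (b - a)

    return density
```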


2021 ◽  
Author(s):  
Lars Gebraad ◽  
Sölvi Thrastarson ◽  
Andrea Zunino ◽  
Andreas Fichtner

Uncertainty quantification is an essential part of many studies in Earth science. It allows us, for example, to assess the quality of tomographic reconstructions, quantify hypotheses and make physics-based risk assessments. In recent years there has been a surge in applications of uncertainty quantification to seismological inverse problems, driven mainly by increasing computational power and the 'discovery' of optimal use cases for many algorithms (e.g., gradient-based Markov chain Monte Carlo, MCMC). Performing Bayesian inference with these methods allows seismologists to carry out advanced uncertainty quantification. Often, however, Bayesian inference is still prohibitively expensive due to large parameter spaces and computationally expensive physics.

Simultaneously, machine learning has found its way into parameter estimation in the geosciences. Recent works show that machine learning can both accelerate repetitive inferences [e.g. Shahraeeni & Curtis 2011, Cao et al. 2020] and speed up single-instance Monte Carlo algorithms using surrogate networks [Aleardi 2020]. These advances allow seismologists to use machine learning as a tool to bring accurate inference on the subsurface to scale.

In this work, we propose the novel inclusion of adjoint modelling in machine-learning-accelerated inverse problems. The aforementioned references train machine learning models on observations of the misfit function, with the aim of creating surrogate but accelerated models for the misfit computations, which in turn allows one to evaluate this function and its gradients much faster. This approach ignores that many physical models have an adjoint state, allowing one to compute gradients using only one additional simulation.

Including this information within gradient-based sampling yields performance gains both in training the surrogate and in sampling the true posterior. We show how machine learning models that approximate misfits and gradients, specifically trained using adjoint methods, accelerate various types of inversions and bring Bayesian inference to scale. Practically, the proposed method simply allows us to utilize information from previous MCMC samples in the algorithm's proposal step.

The proposed machinery applies in settings where models are run extensively and repetitively. Markov chain Monte Carlo algorithms, which may require millions of evaluations of the forward modelling equations, can be accelerated by off-loading these simulations to neural nets. This approach is also promising for tomographic monitoring, where experiments are repeatedly performed. Lastly, the efficiently trained neural nets can be used to learn a likelihood for a given dataset, to which different priors can subsequently be applied efficiently. We show examples of all these use cases.

Lars Gebraad, Christian Boehm and Andreas Fichtner, 2020: Bayesian Elastic Full-Waveform Inversion Using Hamiltonian Monte Carlo.
Ruikun Cao, Stephanie Earp, Sjoerd A. L. de Ridder, Andrew Curtis, and Erica Galetti, 2020: Near-real-time near-surface 3D seismic velocity and uncertainty models by wavefield gradiometry and neural network inversion of ambient seismic noise.
Mohammad S. Shahraeeni and Andrew Curtis, 2011: Fast probabilistic nonlinear petrophysical inversion.
Mattia Aleardi, 2020: Combining discrete cosine transform and convolutional neural networks to speed up the Hamiltonian Monte Carlo inversion of pre-stack seismic data.
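The core idea, training a surrogate on misfit values and their adjoint-computed gradients so that both the function and its derivative are matched, can be sketched as a Sobolev-style regression. The snippet below is a minimal illustration in PyTorch under assumed data shapes and network architecture; the abstract names no framework, and this is not the authors' code.

```python
import torch

def train_surrogate(m, chi, grad, hidden=64, epochs=2000, lr=1e-3, w_grad=1.0):
    """Fit a neural surrogate to misfit values and adjoint gradients.

    m    : (N, d) tensor of model parameters from previous forward runs
    chi  : (N,)   tensor of misfit values
    grad : (N, d) tensor of adjoint-computed gradients d(chi)/d(m)
    """
    d = m.shape[1]
    net = torch.nn.Sequential(
        torch.nn.Linear(d, hidden), torch.nn.Tanh(),
        torch.nn.Linear(hidden, hidden), torch.nn.Tanh(),
        torch.nn.Linear(hidden, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        m_req = m.clone().requires_grad_(True)
        pred = net(m_req).squeeze(-1)
        # Gradient of the surrogate misfit w.r.t. the model parameters,
        # kept in the graph so the gradient-matching term can be trained.
        pred_grad, = torch.autograd.grad(pred.sum(), m_req, create_graph=True)
        loss = (torch.mean((pred - chi) ** 2)
                + w_grad * torch.mean((pred_grad - grad) ** 2))
        loss.backward()
        opt.step()
    return net
```

Once trained, the surrogate's misfit and autograd gradient can stand in for the expensive forward and adjoint simulations inside a gradient-based MCMC proposal.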


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Chuanyuan Zhou ◽  
Zhenyu Liu ◽  
Chan Qiu ◽  
Jianrong Tan

Purpose: The conventional statistical method of three-dimensional tolerance analysis, such as the Monte Carlo simulation, requires numerous pseudo-random numbers and consumes enormous computation to increase the calculation accuracy. The purpose of this paper is to propose a novel method to overcome these problems.

Design/methodology/approach: Combining the quasi-Monte Carlo method with the unified Jacobian-torsor model, this paper proposes a three-dimensional tolerance analysis method based on edge sampling. By setting reasonable evaluation criteria, the sequence numbers representing relatively small deviations are excluded, and the remaining numbers, which represent deviations close to the tolerance limits while still complying with the tolerance requirements, are retained.

Findings: The case study illustrates the effectiveness and superiority of the proposed method: it reduces the sample size, diminishes the computation, predicts wider tolerance ranges and improves the accuracy of three-dimensional tolerance analysis of precision assemblies simultaneously.

Research limitations/implications: The proposed method applies only when the dimensional and geometric tolerances are interpreted in the three-dimensional tolerance representation model.

Practical implications: The proposed tolerance analysis method can quantitatively evaluate the impact of manufacturing errors on the product structure and provide a theoretical basis for structural design, process planning and manufacturing inspection.

Originality/value: The paper is original in proposing edge sampling as a sampling strategy for generating deviation numbers in tolerance analysis.
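A rough sketch of the sampling idea: generate low-discrepancy (quasi-Monte Carlo) points across the tolerance zone and retain only those near the tolerance limits. The keep-criterion and all names below are illustrative assumptions, not the paper's exact rule or code.

```python
import numpy as np
from scipy.stats import qmc

def edge_sample(tol_lo, tol_hi, n=1024, edge_frac=0.2, seed=0):
    """Quasi-Monte Carlo deviation samples, keeping only points near the
    tolerance limits ('edge sampling', illustrative criterion).

    tol_lo, tol_hi : (d,) lower/upper tolerance limits per feature deviation
    """
    tol_lo, tol_hi = np.asarray(tol_lo, float), np.asarray(tol_hi, float)
    sobol = qmc.Sobol(d=tol_lo.size, scramble=True, seed=seed)
    u = sobol.random(n)                           # low-discrepancy points in [0, 1)^d
    dev = qmc.scale(u, tol_lo, tol_hi)            # map onto the tolerance zone
    # Distance to the nearer limit, normalized by the zone width of each feature.
    rel = np.minimum(dev - tol_lo, tol_hi - dev) / (tol_hi - tol_lo)
    keep = (rel < edge_frac).any(axis=1)          # keep points near at least one limit
    return dev[keep]
```

The retained deviations would then be propagated through the unified Jacobian-torsor model to evaluate the assembly response.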


2017 ◽  
Vol 139 (11) ◽  
Author(s):  
Robin Schmidt ◽  
Matthias Voigt ◽  
Konrad Vogeler ◽  
Marcus Meyer

This paper will compare two approaches of sensitivity analysis, namely (i) the adjoint method which is used to obtain an initial estimate of the geometric sensitivity of the gas-washed surfaces to aerodynamic quantities of interest and (ii) a Monte Carlo type simulation with an efficient sampling strategy. For both approaches, the geometry is parameterized using a modified NACA parameterization. First, the sensitivity of those parameters is calculated using the linear (first-order) adjoint model. Since the effort of the adjoint computational fluid dynamics (CFD) solution is comparable to that of the initial flow CFD solution and the sensitivity calculation is simply a postprocessing step, this approach yields fast results. However, it relies on a linear model which may not be adequate to describe the relationship between relevant aerodynamic quantities and actual geometric shape variations for the derived amplitudes of shape variations. Second, in order to better capture nonlinear and interaction effects, a Monte Carlo type simulation with an efficient sampling strategy is used to carry out the sensitivity analysis. The sensitivities are expressed by means of the coefficient of importance (CoI), which is calculated based on modified polynomial regression and therefore able to describe relationships of higher order. The methods are applied to a typical high-pressure compressor (HPC) stage. The impact of a variable rotor geometry is calculated by three-dimensional (3D) CFD simulations using a steady Reynolds-averaged Navier–Stokes model. The geometric variability of the rotor is based on the analysis of a set of 400 blades which have been measured using high-precision 3D optical measurement techniques.
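The coefficient of importance from the Monte Carlo branch can be illustrated with a small response-surface calculation: fit a quadratic polynomial regression to sampled (parameter, quantity-of-interest) pairs and measure how much the coefficient of determination drops when one parameter is removed. The definition used below (CoI_i = R² of the full model minus R² of the model without parameter i) is a common convention assumed for illustration; the paper's modified regression may differ in detail.

```python
import numpy as np
from itertools import combinations_with_replacement

def _design(X, keep):
    """Quadratic polynomial design matrix built from the columns listed in `keep`."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in keep]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(keep, 2)]
    return np.column_stack(cols)

def _r2(X, y, keep):
    A = _design(X, keep)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    res = y - A @ beta
    return 1.0 - res.var() / y.var()

def coefficients_of_importance(X, y):
    """CoI_i = R^2(full quadratic model) - R^2(model omitting parameter i).

    X : (n_samples, d) sampled geometric parameters
    y : (n_samples,)   aerodynamic quantity of interest
    """
    d = X.shape[1]
    r2_full = _r2(X, y, list(range(d)))
    return np.array([r2_full - _r2(X, y, [j for j in range(d) if j != i])
                     for i in range(d)])
```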


1988 ◽  
Vol 102 ◽  
pp. 79-81
Author(s):  
A. Goldberg ◽  
S.D. Bloom

Abstract: Closed expressions for the first, second, and (in some cases) the third moment of atomic transition arrays now exist. Recently a method has been developed for getting to very high moments (up to the 12th and beyond) in cases where a "collective" state-vector (i.e. a state-vector containing the entire electric dipole strength) can be created from each eigenstate in the parent configuration. Both of these approaches give exact results. Herein we describe a statistical (or Monte Carlo) approach which requires only one representative state-vector |RV> for the entire parent manifold to get estimates of transition moments of high order. The representation is achieved through the random amplitudes associated with each basis vector making up |RV>. This also gives rise to the dispersion characterizing the method, which has been applied to a system (in the M shell) with ≈250,000 lines where we have calculated up to the 5th moment. It turns out that the dispersion in the moments decreases with the size of the manifold, making its application to very big systems statistically advantageous. A discussion of the method and these dispersion characteristics will be presented.
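The random-amplitude idea resembles stochastic trace estimation: one representative vector with random amplitudes over the parent manifold stands in for an average over all parent states. The sketch below estimates strength-weighted moments of the final-state energies under that assumption; the matrices, names, and normalization are illustrative and may not match the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_transition_moments(D, E_final, k_max=5, n_vectors=1):
    """Estimate normalized moments of the dipole-strength-weighted final-energy
    distribution using random-amplitude representative state vectors.

    D        : (n_final, n_parent) dipole matrix elements between basis states
    E_final  : (n_final,) final-state energies
    n_vectors: number of representative vectors (the paper uses a single |RV>)
    """
    n_parent = D.shape[1]
    moments = np.zeros(k_max + 1)
    for _ in range(n_vectors):
        # Random amplitudes over the parent-manifold basis, normalized to one vector.
        c = rng.standard_normal(n_parent)
        c /= np.linalg.norm(c)
        strength = np.abs(D @ c) ** 2            # dipole strength reaching each final state
        for k in range(k_max + 1):
            moments[k] += np.sum(strength * E_final**k)
    moments /= n_vectors
    return moments[1:] / moments[0]              # normalized moments mu_1 .. mu_k
```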


Author(s):  
D.W. Andrews ◽  
F.P. Ottensmeyer

Shadowing with heavy metals has been used for many years to enhance the topological features of biological macromolecular complexes. The three-dimensional features present in directionally shadowed specimens often simplify interpretation of projection images provided by other techniques. One difficulty with the method is the relatively large amount of metal needed to achieve sufficient contrast in bright field images. Thick shadow films are undesirable because they decrease resolution due to an increased tendency for microcrystalline aggregates to form, because decoration artefacts become more severe, and because increased cap thickness makes estimation of dimensions more uncertain. The large increase in contrast provided by the dark field mode of imaging allows the use of shadow replicas with a much lower average mass thickness. To form the images in Fig. 1, latex spheres of 0.087 μm average diameter were unidirectionally shadowed with platinum carbon (Pt-C) and a thin film of carbon was indirectly evaporated onto the specimen as a support.


Author(s):  
B. Carragher ◽  
M. Whittaker

Techniques for three-dimensional reconstruction of macromolecular complexes from electron micrographs have been used successfully for many years. These include methods which take advantage of the natural symmetry properties of the structure (for example helical or icosahedral) as well as those that use single-axis or other tilting geometries to reconstruct from a set of projection images. These techniques have traditionally relied on a very experienced operator to manually perform the often numerous and time-consuming steps required to obtain the final reconstruction. While the guidance and oversight of an experienced and critical operator will always be an essential component of these techniques, recent advances in computer technology, microprocessor-controlled microscopes and the availability of high-quality CCD cameras have provided the means to automate many of the individual steps. During data acquisition, automation provides benefits not only in terms of convenience and time saving but also in circumstances where manual procedures limit the quality of the final reconstruction.

