On the iterated a posteriori distribution in Bayesian statistics

2007, Vol 74 (00), pp. 163-171
Author(s): F. Recker

Geophysics, 1995, Vol 60 (4), pp. 1169-1177
Author(s): Mauricio D. Sacchi, Tadeusz J. Ulrych

We present a high‐resolution procedure to reconstruct common‐midpoint (CMP) gathers. First, we describe the forward and inverse transformations between offset and velocity space. Then, we formulate an underdetermined linear inverse problem in which the target is the artifact‐free, aperture‐compensated velocity gather. We show that a sparse inversion leads to a solution that resembles the infinite‐aperture velocity gather. The latter is the velocity gather that would have been estimated with a simple conjugate operator designed from an infinite‐aperture seismic array. This high‐resolution velocity gather is then used to reconstruct the offset space. The algorithm is formally derived using two basic principles. First, we use the principle of maximum entropy to translate prior information about the unknown parameters into a probabilistic framework, in other words, to assign a probability density function to our model. Second, we apply Bayes’s rule to relate the a priori probability density function (pdf) to the pdf corresponding to the experimental uncertainties (likelihood function) and thereby construct the a posteriori distribution of the unknown parameters. Finally, the model is evaluated by maximizing the a posteriori distribution. When the problem is correctly regularized, the algorithm converges to a solution characterized by different degrees of sparseness depending on the required resolution. The solutions exhibit minimum entropy when the entropy is measured in terms of Burg’s definition. We emphasize two crucial differences between our approach and the familiar Burg method of maximum entropy spectral analysis. First, Burg’s entropy is minimized rather than maximized, which is equivalent to inferring as much as possible about the model from the data. Second, our approach uses the data as constraints, in contrast with the classic maximum entropy spectral analysis approach, where the autocorrelation function is the constraint. This implies that we recover not only amplitude information but also phase information, which serves to extrapolate the data outside the original aperture of the array. The tradeoff is controlled by a single parameter that, under asymptotic conditions, reduces the method to a damped least‐squares solution. Finally, the high‐resolution or aperture‐compensated velocity gather is used to extrapolate near‐ and far‐offset traces.
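
As a rough illustration of the kind of sparse, Bayesian-regularized inversion described above, the Python sketch below applies iteratively reweighted damped least squares to a generic linear operator. The operator `L`, the Cauchy-style weights and the parameter names are illustrative assumptions, not the authors' exact formulation; in a real application `L` would be the offset-to-velocity (Radon-type) transform.

```python
import numpy as np

def sparse_map_inversion(L, d, mu=0.1, eps=1e-4, n_iter=20):
    """Iteratively reweighted damped least squares for d = L @ m with a
    sparseness-promoting (Cauchy-style) reweighting of the model vector m.
    L stands in for the offset-to-velocity (Radon-type) operator and d for
    the CMP data; names and parameter values are illustrative."""
    n = L.shape[1]
    # Start from the ordinary damped least-squares solution ...
    m = np.linalg.solve(L.T @ L + mu * np.eye(n), L.T @ d)
    for _ in range(n_iter):
        # ... then reweight: large-amplitude entries are penalized less,
        # which drives most entries toward zero (a sparse, high-resolution gather).
        w = 1.0 / (eps + m**2)
        m = np.linalg.solve(L.T @ L + mu * np.diag(w), L.T @ d)
    return m
```

If the reweighting loop is skipped, the result is just the damped least-squares solution, which mirrors the asymptotic limit mentioned above.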


Author(s): David Kipping

Astronomy has always been propelled by the discovery of new phenomena lacking precedent, often followed by new theories to explain their existence and properties. In the modern era of large surveys tiling the sky at ever higher precision and sampling rates, these serendipitous discoveries look set to continue, with recent examples including Boyajian’s Star, Fast Radio Bursts and ‘Oumuamua. Accordingly, we here look ahead and aim to provide a statistical framework for interpreting such events and for guiding future observations, under the basic premise that the phenomenon in question stochastically repeats at some unknown, constant rate, λ. Specifically, expressions are derived for 1) the a-posteriori distribution for λ, 2) the a-posteriori distribution for the recurrence time, and 3) the benefit-to-cost ratio of further observations relative to that of the inaugural event. Some rule-of-thumb results for each of these are: 1) λ < {0.7, 2.3, 4.6}/t1 at {50, 90, 95} per cent confidence (where t1 = time to obtain the first detection); 2) the recurrence time is t2 < {1, 9, 99} t1 at {50, 90, 95} per cent confidence, with a lack of repetition by time t2 yielding a p-value of 1/[1 + (t2/t1)]; and 3) follow-up for ≲ 10 t1 is expected to be scientifically worthwhile under an array of differing assumptions about the object’s intrinsic scientific value. We apply these methods to the Breakthrough Listen Candidate 1 signal and to tidal disruption events observed by TESS.
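
The quoted rules of thumb are straightforward to evaluate for a specific event. The helper below simply encodes the numbers stated above rather than re-deriving the posteriors; the function name and interface are assumptions for illustration.

```python
import numpy as np

def rule_of_thumb(t1, t2_elapsed=None):
    """Evaluate the quoted rule-of-thumb results for a phenomenon first
    detected after observing for a time t1 (same time units throughout)."""
    conf = np.array([0.50, 0.90, 0.95])                 # confidence levels as quoted
    lam_upper = np.array([0.7, 2.3, 4.6]) / t1          # upper limits on the rate lambda
    t2_upper = np.array([1.0, 9.0, 99.0]) * t1          # upper limits on the recurrence time
    out = {"confidence": conf,
           "lambda_upper": lam_upper,
           "t2_upper": t2_upper,
           "follow_up_worthwhile_up_to": 10.0 * t1}
    if t2_elapsed is not None:
        # p-value for a lack of repetition after waiting a further time t2_elapsed
        out["p_value_no_repeat"] = 1.0 / (1.0 + t2_elapsed / t1)
    return out
```

For example, `rule_of_thumb(t1=5.0, t2_elapsed=20.0)` gives λ upper limits of {0.14, 0.46, 0.92} per unit time and a non-repetition p-value of 0.2.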


Geophysics, 1991, Vol 56 (7), pp. 1003-1014
Author(s): F. J. Jacobs, P. A. G. van der Geest

A novel method for the inversion of band‐limited seismic traces to full-bandwidth reflectivity traces is based on a probabilistic spiky model of the reflectivity trace, in which position indicators and amplitudes of the spikes occur as random variables, and relies on relative-entropy inference from information theory. First, an a priori model for general reflectivity traces in the prospect is derived from nearby wells. Second, the a priori distribution is updated into an a posteriori distribution for the specific trace being studied by the addition of the Fourier data of the seismic trace within a passband. Uncertainty about the Fourier coefficients can be accounted for by specifying a noise variance, which is implicitly infinite outside the passband. The update with relative-entropy inference is justified by its relationship with Bayesian inference. Application of maximum a posteriori (MAP) estimation to the a posteriori distribution results in the most likely spiky reflectivity trace of full bandwidth. A numerical algorithm for obtaining the MAP estimates of spike positions and spike amplitudes is derived from the concept of continuation and is described in detail. The algorithm avoids searching among all possible patterns of spike positions.
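
A minimal sketch of the MAP idea, assuming a Gaussian likelihood on the passband Fourier coefficients and a smoothed sparsity penalty that is tightened by continuation. This is not the authors' algorithm (which works with explicit spike-position indicators); the penalty, the schedule and all names are illustrative assumptions.

```python
import numpy as np

def map_spiky_reflectivity(d_band, band_idx, n, sigma2=1e-3, lam=0.05,
                           eps_schedule=(1.0, 0.3, 0.1, 0.03, 0.01), n_steps=300):
    """MAP-style estimate of a full-bandwidth spiky reflectivity r from
    band-limited Fourier data d_band observed at the frequency indices
    band_idx, assuming Gaussian noise with variance sigma2 inside the
    passband (and, implicitly, infinite variance outside it).
    Continuation: the smoothed sparsity penalty sum(sqrt(r**2 + eps**2))
    is tightened step by step so the estimate becomes progressively spikier."""
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT matrix
    A = F[band_idx, :]                       # reflectivity -> passband coefficients
    r = np.zeros(n)
    for eps in eps_schedule:
        step = 1.0 / (1.0 / sigma2 + lam / eps)   # crude Lipschitz-based step size
        for _ in range(n_steps):
            resid = A @ r - d_band
            grad = (A.conj().T @ resid).real / sigma2 + lam * r / np.sqrt(r**2 + eps**2)
            r -= step * grad
    return r
```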


Sensors, 2020, Vol 20 (21), pp. 6011
Author(s): Jan Steinbrener, Konstantin Posch, Jürgen Pilz

We present a novel approach for training deep neural networks in a Bayesian way. Compared to other Bayesian deep learning formulations, our approach allows for quantifying the uncertainty in model parameters while only adding very few additional parameters to be optimized. The proposed approach uses variational inference to approximate the intractable a posteriori distribution on the basis of a normal prior. Because the a posteriori uncertainty of the network parameters is represented per network layer, as a function of the estimated parameter expectation values, only very few additional parameters need to be optimized compared with a non-Bayesian network. We compare our approach to classical deep learning, Bernoulli dropout and Bayes by Backprop using the MNIST dataset. Compared to classical deep learning, the test error is reduced by 15%. We also show that the uncertainty information obtained can be used to calculate credible intervals for the network predictions and to optimize the network architecture for the dataset at hand. To illustrate that our approach also scales to large networks and input vector sizes, we apply it to the GoogLeNet architecture on a custom dataset, achieving an average accuracy of 0.92. Using 95% credible intervals, all but one of the wrong classification results can be detected.
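
A sketch of what "very few additional parameters per layer" could look like in practice: a linear layer whose weight posterior is governed by a single variational scale parameter, with the posterior standard deviation tied to the magnitude of the weight means. This is one reading of the description above, implemented with the reparameterization trick in PyTorch; it is not the authors' code, and the multiplicative parameterization is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LayerwiseBayesLinear(nn.Module):
    """Linear layer with one extra variational parameter per layer: a log-scale
    rho. Weights are sampled as W = mu * (1 + softplus(rho) * eps), i.e. the
    posterior std is proportional to |mu| (an assumption, see lead-in), so only
    a single additional parameter per layer is optimized."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(out_features, in_features) * 0.05)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.rho = nn.Parameter(torch.tensor(-3.0))   # one scalar per layer

    def forward(self, x):
        sigma = F.softplus(self.rho)
        eps = torch.randn_like(self.mu)
        w = self.mu * (1.0 + sigma * eps)             # reparameterization trick
        return F.linear(x, w, self.bias)
```

Repeated stochastic forward passes through such layers at test time yield a predictive distribution from which credible intervals, such as the 95% intervals mentioned above, can be read off.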


Geophysics, 1991, Vol 56 (12), pp. 2008-2018
Author(s): Marc Lavielle

Inverse problems can be solved in different ways. One way is to define natural criteria of good recovery and build an objective function to be minimized. If, instead, we prefer a Bayesian approach, inversion can be formulated as an estimation problem where a priori information is introduced and the a posteriori distribution of the unobserved variables is maximized. When this distribution is a Gibbs distribution, these two methods are equivalent. Furthermore, global optimization of the objective function can be performed with a Monte Carlo technique, in spite of the presence of numerous local minima. Application to multitrace deconvolution is proposed. In traditional 1-D deconvolution, a set of uni‐dimensional processes models the seismic data, while a Markov random field is used for 2-D deconvolution. In fact, the introduction of a neighborhood system permits one to model the layer structure that exists in the earth and to obtain solutions that exhibit lateral coherency. Moreover, optimization of an appropriate objective function by simulated annealing allows one to control the fit with the input data as well as the spatial distribution of the reflectors. Extension to 3-D deconvolution is straightforward.
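
A schematic of the simulated-annealing optimization described above, assuming a simple energy made of a convolutional data misfit plus a Markov-random-field term that couples neighbouring traces. The energy, the move proposals and the cooling schedule are illustrative assumptions rather than the paper's exact objective function.

```python
import numpy as np

def anneal_deconvolution(data, wavelet, beta=0.5, n_iter=20000,
                         T0=1.0, cooling=0.9997, seed=0):
    """Simulated annealing for 2-D (multitrace) deconvolution. The unknown
    reflectivity R (n_samples x n_traces) is a sparse spike field; the energy
    combines the misfit of the convolved section with the observed data and a
    Markov-random-field term that rewards laterally coherent reflectors."""
    rng = np.random.default_rng(seed)
    n_t, n_x = data.shape
    R = np.zeros((n_t, n_x))

    def energy(R):
        synth = np.column_stack([np.convolve(R[:, j], wavelet, mode="same")
                                 for j in range(n_x)])
        misfit = np.sum((synth - data) ** 2)
        coherency = np.sum(np.abs(R[:, 1:] - R[:, :-1]))   # neighbouring traces should agree
        return misfit + beta * coherency

    E, T = energy(R), T0
    for _ in range(n_iter):
        i, j = rng.integers(n_t), rng.integers(n_x)
        R_new = R.copy()
        R_new[i, j] = rng.normal() if R[i, j] == 0.0 else 0.0   # toggle or remove a spike
        E_new = energy(R_new)
        # Metropolis acceptance rule at temperature T
        if E_new < E or rng.random() < np.exp((E - E_new) / T):
            R, E = R_new, E_new
        T *= cooling
    return R
```

For clarity the full energy is recomputed at every step; a practical implementation would update only the trace touched by the proposed move.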


2018, Vol 12 (4), pp. 245
Author(s): Luiz Henrique Marra da Silva Ribeiro, Matheus De Souza Costa, Luiz Alberto Beijo, Alberto Frank Lázaro Aguirre, Tatiane Gomes de Araújo, ...

The Bayesian approach in regression models has shown good results in parameter estimation, as it can increase accuracy and precision. The objective of the current study was to apply Bayesian statistics to model the yield of leaf dry matter (LM) and stem dry matter (SM), in kg ha-1, the leaf ratio (LR), and the crude protein content of leaves (CPL) and stems (CPS), in %, of Brachiaria grass as a function of varying N doses (0, 100, 200 and 300 kg ha-1 yr-1). Simple linear and second-degree polynomial regression models were analyzed. Information for the a priori distributions was obtained from the literature. The a posteriori distribution was generated using a Monte Carlo method via Markov chains. Parameter significance was assessed with 95% HPD (Highest Posterior Density) intervals. Model selection was performed using the DIC (Deviance Information Criterion), and adjustment quality was estimated with the means and 95% HPD ranges of the Bayesian R2 distributions. The models selected for the variables LM, SM and CPS were linear, while for LR and CPL they were second-degree polynomials. The lowest doses that maximize the response variables were: LM: 274 kg ha-1 yr-1, SM: 280 kg ha-1 yr-1, LR: 113 kg ha-1 yr-1, CPL: 265 kg ha-1 yr-1, CPS: 289 kg ha-1 yr-1. The Bayesian approach allowed the inclusion of a priori information verified in the literature and the identification of credible ranges for the doses that optimize each response.
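
A compact sketch of the workflow described above: a Metropolis sampler for a second-degree polynomial response with informative normal priors, plus a highest-posterior-density (HPD) interval helper. The prior means and standard deviations, the fixed noise level and the step size are placeholders, not the values used in the study.

```python
import numpy as np

def metropolis_quadratic(x, y, n_samp=20000, prior_mu=(0.0, 0.0, 0.0),
                         prior_sd=(100.0, 10.0, 1.0), sigma=1.0, step=0.05, seed=0):
    """Metropolis sampler for y = b0 + b1*x + b2*x**2 + normal noise,
    with independent normal priors on (b0, b1, b2). Returns posterior draws."""
    rng = np.random.default_rng(seed)
    prior_mu, prior_sd = np.asarray(prior_mu), np.asarray(prior_sd)

    def log_post(b):
        resid = y - (b[0] + b[1] * x + b[2] * x**2)
        return (-0.5 * np.sum(resid**2) / sigma**2
                - 0.5 * np.sum(((b - prior_mu) / prior_sd) ** 2))

    b, lp, draws = prior_mu.copy(), None, []
    lp = log_post(b)
    for _ in range(n_samp):
        b_new = b + step * rng.normal(size=3)
        lp_new = log_post(b_new)
        if np.log(rng.random()) < lp_new - lp:       # Metropolis acceptance
            b, lp = b_new, lp_new
        draws.append(b.copy())
    return np.array(draws)

def hpd_interval(samples, mass=0.95):
    """Shortest interval containing `mass` of the posterior draws."""
    s = np.sort(samples)
    k = int(np.ceil(mass * len(s)))
    widths = s[k - 1:] - s[:len(s) - k + 1]
    i = np.argmin(widths)
    return s[i], s[i + k - 1]
```

For the second-degree models, each posterior draw with b2 < 0 implies an optimizing dose of -b1/(2*b2), so an HPD interval for the optimum dose follows directly from the same draws.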


1994, Vol 88 (2), pp. 327-335
Author(s): John E. Roemer

A continuum of voters, indexed by income, have preferences over economic outcomes. Two political parties each represent the interests of given constituencies of voters: the rich and the poor. Parties/candidates put forth policies—for instance, tax policy, where taxes finance a public good. Voters are uncertain about the theory of the economy, the function that maps policies into economic outcomes. Parties argue, as well, for theories of the economy. Each voter has a prior probability distribution over possible theories of the economy; after parties announce their theories of the economy, each voter constructs an a posteriori distribution over such theories. Suppose that voters are unsure how efficiently the government converts tax revenues into the public good. Under reasonable assumptions the party representing the rich argues that the government is very inefficient and the party representing the poor argues the opposite. What appear as liberal and conservative ideological views emerge as simply good strategies in the electoral game.


Author(s): Arno J. Bleeker, Mark H.F. Overwijk, Max T. Otten

With the improvement of the optical properties of modern TEM objective lenses, the point resolution is pushed beyond 0.2 nm. The objective lens of the CM300 UltraTwin combines a Cs of 0.65 mm with a Cc of 1.4 mm. At 300 kV this results in a point resolution of 0.17 nm. Together with a high-brightness field-emission gun with an energy spread of 0.8 eV, the information limit is pushed down to 0.1 nm. The rotationally symmetric part of the phase contrast transfer function (pctf), whose first zero at Scherzer focus determines the point resolution, is mainly determined by the Cs and defocus. Apart from the rotationally symmetric part there is also the non-rotationally symmetric part of the pctf. Here the main contributors are not only two-fold astigmatism and beam tilt but also three-fold astigmatism. The two-fold astigmatism together with the beam tilt can be corrected in a straightforward way using the coma-free alignment and the objective stigmator. However, this only works well when the coefficient of three-fold astigmatism is negligible compared to the other aberration coefficients. Unfortunately this is not generally the case with modern high-resolution objective lenses. Measurements done at a CM300 SuperTwin FEG showed a three-fold astigmatism of 1100 nm, which is consistent with measurements done by others. A three-fold astigmatism of 1000 nm already significantly influences the image at a spatial frequency corresponding to 0.2 nm, which is even above the point resolution of the objective lens. In principle it is possible to correct for the three-fold astigmatism a posteriori when through-focus series are taken or when off-axis holography is employed. This is, however, not possible for single images. The only possibility is then to correct for the three-fold astigmatism in the microscope by the addition of a hexapole corrector near the objective lens.
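
The quoted point resolution follows from Cs and the electron wavelength through the rotationally symmetric pctf at Scherzer defocus. The sketch below evaluates the standard textbook expressions (relativistic wavelength, extended Scherzer defocus, first zero of the pctf); only the Cs value and the accelerating voltage come from the passage, and sign conventions for the defocus vary between texts.

```python
import numpy as np

def electron_wavelength(kv):
    """Relativistic electron wavelength in metres for an accelerating voltage in kV."""
    h, m0, e, c = 6.62607015e-34, 9.1093837015e-31, 1.602176634e-19, 2.99792458e8
    V = kv * 1e3
    return h / np.sqrt(2 * m0 * e * V * (1 + e * V / (2 * m0 * c**2)))

def scherzer(cs, kv):
    """Extended Scherzer defocus and point resolution for spherical aberration cs (m)."""
    lam = electron_wavelength(kv)
    defocus = -1.2 * np.sqrt(cs * lam)          # extended Scherzer defocus
    resolution = 0.66 * cs**0.25 * lam**0.75    # first zero of the pctf (common 0.66 prefactor)
    return defocus, resolution

def pctf(k, cs, defocus, kv):
    """Rotationally symmetric phase contrast transfer function sin(chi(k))."""
    lam = electron_wavelength(kv)
    chi = np.pi * lam * k**2 * defocus + 0.5 * np.pi * cs * lam**3 * k**4
    return np.sin(chi)

# Values quoted for the CM300 UltraTwin: Cs = 0.65 mm at 300 kV
df, res = scherzer(0.65e-3, 300)
# ~0.17-0.18 nm, consistent with the quoted 0.17 nm (the exact value depends
# on the convention used for the numerical prefactor).
print(f"point resolution ~ {res*1e9:.3f} nm")
```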


2005
Author(s): Damon U. Bryant, Ashley K. Smith, Sandra G. Alexander, Kathlea Vaughn, Kristophor G. Canali
