posterior probability density
Recently Published Documents


TOTAL DOCUMENTS: 54 (five years: 18)
H-INDEX: 10 (five years: 3)

2021 ◽  
Author(s):  
Tokunbo Ogunfunmi ◽  
Manas Deb

In Bayesian learning, the posterior probability density of a model parameter is estimated from the likelihood function and the prior probability of the parameter. The posterior probability density estimate is refined as more evidence becomes available. However, any non-trivial Bayesian model requires the computation of an intractable integral to obtain the probability density function (PDF) of the evidence. Markov Chain Monte Carlo (MCMC) is a well-known class of algorithms that solves this problem by generating samples from the posterior distribution directly, without computing this intractable integral. We present a novel perspective on the MCMC algorithm which views the samples of a probability distribution as a dynamical system of Information Theoretic particles in an Information Theoretic field. As our algorithm probes this field with a test particle, the test particle is subjected to Information Forces from the other Information Theoretic particles in the field. We use Information Theoretic Learning (ITL) techniques based on Rényi’s α-entropy function to derive an equation for the gradient of the Information Potential energy of this dynamical system of Information Theoretic particles. Using this equation, we compute the Hamiltonian of the dynamical system from the Information Potential energy and the kinetic energy, and use the Hamiltonian to generate the Markovian state trajectories of the system.
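The particle-field picture can be sketched in a few lines. Here we assume a Gaussian Parzen kernel over a set of "field" particles, take the Information Potential energy of a test particle as U(q) = −log p̂(q), and run a standard leapfrog/Hamiltonian Monte Carlo step; the 1-D setting, kernel choice, and all parameter values are our illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def potential(q, field, sigma):
    """U(q) = -log p_hat(q): information-potential energy of a test particle,
    with p_hat a Gaussian Parzen estimate built from the field particles."""
    k = np.exp(-((q - field) ** 2) / (2 * sigma ** 2))
    return -np.log(k.mean() + 1e-300)

def information_force(q, field, sigma):
    """-dU/dq = d log p_hat / dq: net 'information force' on the test particle."""
    k = np.exp(-((q - field) ** 2) / (2 * sigma ** 2))
    return (k * (field - q) / sigma ** 2).mean() / (k.mean() + 1e-300)

def hmc_step(q, field, sigma, eps=0.1, n_leap=10):
    """One HMC step: H = U(q) + p^2/2, leapfrog integration, Metropolis correction."""
    p = rng.standard_normal()
    h0 = potential(q, field, sigma) + 0.5 * p ** 2
    q_new = q
    for _ in range(n_leap):
        p += 0.5 * eps * information_force(q_new, field, sigma)
        q_new += eps * p
        p += 0.5 * eps * information_force(q_new, field, sigma)
    h1 = potential(q_new, field, sigma) + 0.5 * p ** 2
    return q_new if np.log(rng.random()) < h0 - h1 else q

# Field particles drawn from a standard normal target (for illustration only)
field = rng.standard_normal(500)
q, chain = 3.0, []
for _ in range(2000):
    q = hmc_step(q, field, sigma=0.5)
    chain.append(q)
```

The chain then samples the Parzen-smoothed density implied by the field particles; the smoothing bandwidth σ inflates the sampled variance slightly relative to the underlying target.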


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7325
Author(s):  
Mohamed Khalaf-Allah

At least four non-coplanar anchor nodes (ANs) are required for time-of-arrival (ToA)-based three-dimensional (3D) positioning to yield a unique position estimate. Direct method (DM) and particle filter (PF) algorithms were developed to address the three-anchor ToA-based 3D positioning problem. The proposed DM reduces this problem to the solution of a quadratic equation, exploiting knowledge about the workspace, to first estimate the x- or z-coordinate and then the remaining two coordinates. The implemented PF uses 1000 particles to represent the posterior probability density function (PDF) of the AN’s 3D position. The prediction step generates new particles by a resampling procedure. The ToA measurements determine the importance of these particles, enabling the posterior PDF to be updated and the 3D position to be estimated. Simulation results corroborate the viability of the developed DM and PF algorithms, in terms of accuracy and computational cost, in the pursuit and circumnavigation scenarios, even with a horizontally coplanar arrangement of the three ANs. It is therefore possible to enable applications requiring real-time positioning, such as unmanned aerial vehicle (UAV) autonomous docking and circling a stationary (or moving) position, without an excessive number of ANs.
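A bootstrap particle filter for the three-anchor case can be sketched as below. Anchor geometry, noise level, and jitter scale are assumed for illustration; as in the abstract, the prior is confined to the workspace (here z ≥ 0), which resolves the mirror ambiguity of a coplanar anchor arrangement.

```python
import numpy as np

rng = np.random.default_rng(1)

anchors = np.array([[0., 0., 0.], [10., 0., 0.], [5., 10., 0.]])  # three coplanar ANs
true_pos = np.array([4., 3., 5.])      # node to be localized (values assumed)
sigma_r = 0.1                          # std of range (ToA x speed) noise, metres

def ranges(p):
    return np.linalg.norm(anchors - p, axis=1)

n = 1000                               # particle count, as in the abstract
# Prior confined to the workspace (z >= 0): excludes the mirror solution
particles = rng.uniform([0., 0., 0.], [10., 10., 10.], size=(n, 3))

for _ in range(80):
    z = ranges(true_pos) + sigma_r * rng.standard_normal(3)       # ToA measurement
    pred = np.linalg.norm(particles[:, None, :] - anchors[None, :, :], axis=2)
    logw = -0.5 * ((pred - z) ** 2).sum(axis=1) / sigma_r ** 2    # importance weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est = w @ particles                # posterior-mean position estimate
    idx = rng.choice(n, size=n, p=w)   # resampling
    particles = particles[idx] + 0.1 * rng.standard_normal((n, 3))  # prediction jitter
```

Working in log-weights before normalizing avoids underflow when the first uniform particle cloud is far from the likelihood peak.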


2020 ◽  
pp. 147592172097935
Author(s):  
Meijie Zhao ◽  
Yong Huang ◽  
Wensong Zhou ◽  
Hui Li

In this article, a new Bayesian approach for guided-wave-based multidamage localization employing Gibbs sampling is proposed. Using the time-of-flight (ToF) information embedded in guided wave signals, the posterior probability distributions of three parameter groups, namely the horizontal and vertical coordinates (x, y) of the multidamage locations and the wave velocity v, are characterized by Gibbs samples. To obtain the analytical form of the conditional posterior probability density function of each parameter group given the other two and the available ToF data, a first-order Taylor expansion of the nonlinear ToF-based damage localization model with respect to each parameter group is performed. Two Gibbs sampling algorithms are proposed, which differ in how they address the posterior uncertainty of the prediction-error parameter; both algorithms iteratively sample from the conditional posterior probability density functions of the three parameter groups. The effective number of dimensions for Gibbs sampling is therefore always three, regardless of the number of defects. The final damage localization results are obtained by grouping all ToFs and then comparing the posterior uncertainty of the localization results of each grouping scheme, selecting the most reliable sampling results among all candidates. The proposed method not only identifies the group velocity but also localizes multiple defects within the same damage localization framework. Furthermore, it can quantify the uncertainty of multidamage localization to automatically find the most reliable damage locations. The effectiveness and robustness of the proposed algorithms are validated by both numerical and experimental examples.
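The block structure can be illustrated with a toy single-scatterer version. The paper derives analytical Gaussian conditionals via linearization; as a simpler stand-in, this sketch cycles a Metropolis-within-Gibbs sweep over the same three parameter groups (x, y, v). The sensor layout, noise level, and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Actuator-sensor pairs on a unit plate (geometry assumed for illustration)
act = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
sen = np.array([[0.5, 0.], [0., 0.5], [1., 0.5], [0.5, 1.]])
true_xy, true_v, sig = np.array([0.3, 0.6]), 5.0, 1e-3

def tof(xy, v):
    """ToF of the scattered wave: actuator -> damage -> sensor, at velocity v."""
    return (np.linalg.norm(act - xy, axis=1) + np.linalg.norm(sen - xy, axis=1)) / v

data = tof(true_xy, true_v) + sig * rng.standard_normal(len(act))

def log_post(x, y, v):
    if v <= 0:
        return -np.inf
    r = data - tof(np.array([x, y]), v)
    return -0.5 * (r ** 2).sum() / sig ** 2    # flat priors assumed

x, y, v = 0.4, 0.5, 4.5
chain = []
for _ in range(5000):
    # one sweep: update each of the three groups (x, y, v) in turn
    for i, step in enumerate((0.01, 0.01, 0.05)):
        prop = [x, y, v]
        prop[i] += step * rng.standard_normal()
        if np.log(rng.random()) < log_post(*prop) - log_post(x, y, v):
            x, y, v = prop
    chain.append((x, y, v))
chain = np.array(chain[1000:])
```

Note that each sweep touches exactly three one-dimensional conditionals, which is the dimensionality argument the abstract makes for the multi-defect case.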


Author(s):  
S J Schmidt ◽  
A I Malz ◽  
J Y H Soo ◽  
I A Almosallam ◽  
M Brescia ◽  
...  

Abstract Many scientific investigations of photometric galaxy surveys require redshift estimates, whose uncertainty properties are best encapsulated by photometric redshift (photo-z) posterior probability density functions (PDFs). A plethora of photo-z PDF estimation methodologies abound, producing discrepant results with no consensus on a preferred approach. We present the results of a comprehensive experiment comparing twelve photo-z algorithms applied to mock data produced for the Rubin Observatory Legacy Survey of Space and Time (LSST) Dark Energy Science Collaboration (DESC). By supplying perfect prior information, in the form of the complete template library and a representative training set, as inputs to each code, we demonstrate the impact of the assumptions underlying each technique on the output photo-z PDFs. In the absence of a notion of true, unbiased photo-z PDFs, we evaluate and interpret multiple metrics of the ensemble properties of the derived photo-z PDFs as well as traditional reductions to photo-z point estimates. We report systematic biases and overall over/under-breadth of the photo-z PDFs of many popular codes, which may indicate avenues for improvement in the algorithms or implementations. Furthermore, we draw attention to the limitations of established metrics for assessing photo-z PDF accuracy; though we identify the conditional density estimate (CDE) loss as a promising metric of photo-z PDF performance in the case where true redshifts are available but true photo-z PDFs are not, we emphasize the need for science-specific performance metrics.
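The CDE loss mentioned at the end is straightforward to estimate when PDFs are tabulated on a shared redshift grid: it is E[∫ p̂(z|x)² dz] − 2 E[p̂(z_true|x)] up to an additive constant that does not depend on the estimator. A minimal sketch (array names and grid conventions are our assumptions):

```python
import numpy as np

def cde_loss(pdf_grid, z_grid, z_true):
    """Empirical CDE loss for photo-z PDFs on a shared redshift grid.

    pdf_grid : (n_gal, n_z) array, row i holding p_hat(z | x_i) on z_grid
    z_grid   : (n_z,) uniform redshift grid
    z_true   : (n_gal,) true (spectroscopic) redshifts

    Returns E[int p_hat(z|x)^2 dz] - 2 E[p_hat(z_true|x)], i.e. the CDE loss
    up to an additive constant independent of the estimator (lower is better).
    """
    dz = z_grid[1] - z_grid[0]
    term1 = ((pdf_grid ** 2).sum(axis=1) * dz).mean()
    # evaluate each galaxy's PDF at its own true redshift by linear interpolation
    term2 = np.mean([np.interp(zt, z_grid, p) for zt, p in zip(z_true, pdf_grid)])
    return term1 - 2.0 * term2
```

A sharp, unbiased PDF scores lower than an equally sharp but biased one, which is exactly the behaviour the metric is meant to reward even though no "true PDF" is ever needed.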


2020 ◽  
pp. 1-39
Author(s):  
Xiaoyi Han ◽  
Lung-Fei Lee ◽  
Xingbai Xu

Abstract This paper studies asymptotic properties of a posterior probability density and Bayesian estimators of spatial econometric models in the classical statistical framework. We focus on the high-order spatial autoregressive model with spatial autoregressive disturbance terms, due to a computational advantage of Bayesian estimation. We also study the asymptotic properties of Bayesian estimation of the spatial autoregressive Tobit model, as an example of nonlinear spatial models. Simulation studies show that even when the sample size is small or moderate, the posterior distribution of parameters is well approximated by a normal distribution, and Bayesian estimators have satisfactory performance, as classical large sample theory predicts.


Algorithms ◽  
2020 ◽  
Vol 13 (6) ◽  
pp. 144
Author(s):  
Christin Bobe ◽  
Daan Hanssens ◽  
Thomas Hermans ◽  
Ellen Van De Vijver

Often, multiple geophysical measurements are sensitive to the same subsurface parameters. In this case, joint inversions are mostly preferred over two (or more) separate inversions of the geophysical data sets because of the expected reduction of non-uniqueness in the joint inverse solution. This reduction can be quantified using Bayesian inversion. However, standard Markov chain Monte Carlo (MCMC) approaches are computationally expensive for most geophysical inverse problems. We present the Kalman ensemble generator (KEG) method as an efficient alternative to standard MCMC inversion approaches. As a proof of concept, we provide two synthetic studies of joint inversion of frequency domain electromagnetic (FDEM) and direct current (DC) resistivity data for a parameter model with vertical variation in electrical conductivity. In both studies, the joint framework shows a considerable improvement over the separate inversions: (1) a reduction of uncertainty in the posterior probability density function and (2) an ensemble mean that is closer to the synthetic true electrical conductivities. Finally, we apply the KEG joint inversion to FDEM and DC resistivity field data. The joint field-data inversion improves on the separate inversions in the same way as observed in the synthetic studies.
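The core of a KEG-like scheme is a single ensemble Kalman update, and the benefit of joint inversion is visible even in a linear toy problem. The two "instrument" sensitivities, noise levels, and the two-layer conductivity model below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)

def keg_update(M, D, d_obs, R):
    """One Kalman-ensemble update (the core of KEG-like schemes).
    M: (n_par, n_ens) parameter ensemble; D: (n_obs, n_ens) predicted data;
    d_obs: (n_obs,) observed data; R: (n_obs, n_obs) data-error covariance."""
    n_ens = M.shape[1]
    A = M - M.mean(axis=1, keepdims=True)
    Y = D - D.mean(axis=1, keepdims=True)
    C_md = A @ Y.T / (n_ens - 1)               # parameter-data covariance
    C_dd = Y @ Y.T / (n_ens - 1)               # data covariance
    K = C_md @ np.linalg.inv(C_dd + R)         # Kalman gain
    d_pert = d_obs[:, None] + np.linalg.cholesky(R) @ rng.standard_normal(D.shape)
    return M + K @ (d_pert - D)                # perturbed-observation update

# Toy two-layer conductivity and two linear stand-in instruments
m_true = np.array([1.0, 2.0])
G_fdem = np.array([[1.0, 0.5]])                # stand-in FDEM sensitivity
G_dc = np.array([[0.2, 1.0]])                  # stand-in DC-resistivity sensitivity

n_ens = 500
M0 = rng.standard_normal((2, n_ens)) + 1.0     # prior ensemble
d1, d2 = G_fdem @ m_true, G_dc @ m_true        # noise-free data, for simplicity

# Separate inversion: FDEM only
M_sep = keg_update(M0, G_fdem @ M0, d1, np.array([[0.01]]))
# Joint inversion: stack both data sets into one update
G_joint = np.vstack([G_fdem, G_dc])
M_joint = keg_update(M0, G_joint @ M0, np.concatenate([d1, d2]),
                     np.diag([0.01, 0.01]))
```

The joint ensemble ends up both tighter (lower posterior variance) and closer to the truth than the single-data-set ensemble, mirroring findings (1) and (2) above.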


2020 ◽  
pp. 147592172092125
Author(s):  
Xiaoyou Wang ◽  
Rongrong Hou ◽  
Yong Xia ◽  
Xiaoqing Zhou

Existing studies on sparse Bayesian learning for structural damage detection usually assume that the posterior probability density functions follow standard distributions, which makes it possible to circumvent the intractable integration of the evidence by means of numerical sampling or analytical derivation. Moreover, the uncertainties of each mode are usually quantified by a common parameter to simplify the calculation. These assumptions may not be realistic in practice. This study proposes a sparse Bayesian method for structural damage detection suitable for both standard and nonstandard probability distributions. The uncertainty corresponding to each mode is assumed to be different. Variational Bayesian inference is developed and the posterior probability density function of each unknown is derived individually. The parameters are found to follow the gamma distribution, whereas the distribution of the damage index cannot be obtained directly because of the nonlinear relationship in its posterior probability density function. The delayed rejection adaptive Metropolis algorithm is then adopted to generate numerical samples of the damage index. The coupled damage index and parameters in the variational Bayesian inference are calculated successively via an iterative process. A laboratory-tested frame is utilised to verify the effectiveness of the proposed method. The results indicate that the sparse damage can be accurately detected. The proposed method has the advantages of high accuracy and broad applicability.
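Delayed rejection adaptive Metropolis (DRAM) combines two ideas: adapt the proposal scale from the chain history, and on a first rejection retry once with a narrower proposal, using a corrected second-stage acceptance probability. A minimal 1-D sketch on a stand-in skewed density (the target, scales, and schedule are our assumptions, not the paper's damage-index posterior):

```python
import numpy as np

rng = np.random.default_rng(4)

def log_target(x):
    """Stand-in for a nonstandard 1-D conditional posterior of a damage index
    (skewed, non-Gaussian; purely illustrative)."""
    return -0.5 * x ** 2 - np.log1p(np.exp(-4.0 * x))

def dram_1d(n_iter=20000, x0=0.0, s0=1.0, shrink=0.2):
    """Minimal 1-D DRAM sketch: Haario-style scale adaptation plus one
    delayed-rejection stage with a proposal narrowed by 'shrink'."""
    x, lt_x, s = x0, log_target(x0), s0
    chain = []
    for i in range(n_iter):
        y1 = x + s * rng.standard_normal()
        lt_y1 = log_target(y1)
        a1 = min(1.0, np.exp(lt_y1 - lt_x))
        if rng.random() < a1:
            x, lt_x = y1, lt_y1
        else:
            # delayed-rejection stage: narrower proposal, corrected acceptance
            y2 = x + shrink * s * rng.standard_normal()
            lt_y2 = log_target(y2)
            a1_rev = min(1.0, np.exp(lt_y1 - lt_y2))
            q_num = np.exp(-0.5 * ((y1 - y2) / s) ** 2)   # q1(y2 -> y1)
            q_den = np.exp(-0.5 * ((y1 - x) / s) ** 2)    # q1(x -> y1)
            num = np.exp(lt_y2) * q_num * (1.0 - a1_rev)
            den = np.exp(lt_x) * q_den * (1.0 - a1)
            if den > 0 and rng.random() < num / den:
                x, lt_x = y2, lt_y2
        chain.append(x)
        if i >= 1000 and i % 200 == 0:     # adapt scale from chain history
            s = 2.4 * np.std(chain[500:]) + 1e-3
    return np.array(chain)

chain = dram_1d()
```

The second-stage ratio follows the standard delayed-rejection construction for symmetric first-stage proposals; the extra stage mainly rescues rejections when the adapted scale overshoots a narrow conditional.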


Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1374
Author(s):  
Guolei Zhu ◽  
Yingmin Wang ◽  
Qi Wang

To improve the robustness and positioning accuracy of matched field processing (MFP) in underwater acoustic systems, we propose a conditional probability constraint matched field processing (MFP-CPC) algorithm in this paper, which protects the main lobe and suppresses the side lobes of the adaptive MFP (AMFP) via constraint parameters, such as the posterior probability density of source locations obtained from the Bayesian criterion under the assumption of white Gaussian noise. Under such constraints, the proposed MFP-CPC algorithm not only retains the high resolution of AMFP but also improves robustness. To evaluate the algorithm, simulated and experimental data from an uncertain shallow-ocean environment are used. The results show that MFP-CPC is robust for both a moored source and a moving source. In addition, the localization and tracking performance of the proposed algorithm is consistent with the trajectory of the moving source.


2020 ◽  
Author(s):  
Mehrdad Pakzad ◽  
Mahnaz Khalili ◽  
Shaghayegh Vahidravesh

Abstract. Markov chain Monte Carlo (MCMC) sampling obtains a set of samples by a directed random walk, mapping the posterior probability density of the model parameters in a Bayesian framework. We perform earthquake waveform inversion to retrieve the focal angles, or the elements of the moment tensor, and the source location using a Bayesian MCMC method with constraints on the first-motion polarities and the double-couple percentage, using full Green functions and a data covariance matrix. The algorithm tests the compatibility with the polarities and also checks the double-couple percentage of every site before the time-consuming synthetic seismogram computation for every sample of the moment tensor at every trial source position. Other than large earthquakes, the method is especially suitable for weak events (M


2019 ◽  
Vol 24 (1) ◽  
pp. 349-354
Author(s):  
Trond Mannseth

Abstract Assimilation of a sequence of linearly dependent data vectors, $\{d_l\}_{l=1}^{L}$ such that $d_l = B_l d_L$ for $l = 1, \dots, L-1$, is considered for a parameter estimation problem. Such a data sequence can occur, for example, in the context of multilevel data assimilation. Since some information is used several times when linearly dependent data vectors are assimilated, the associated data-error covariances must be modified. I develop a condition that the modified covariances must satisfy in order to sample correctly from the posterior probability density function of the uncertain parameter in the linear-Gaussian case. It is shown that this condition is a generalization of the well-known condition that must be satisfied when assimilating the same data vector multiple times. I also briefly discuss some qualitative and computational issues related to practical use of the developed condition.
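The well-known special case referred to above — assimilating the same data vector L times — is easy to verify numerically in the linear-Gaussian setting: inflating the data-error covariance to L times its true value at every step reproduces the single-assimilation posterior exactly. A minimal check (toy prior, forward operator, and data, all assumed):

```python
import numpy as np

def kalman_update(m, C, G, d, Cd):
    """Linear-Gaussian Bayes update: prior N(m, C), data d = G x + noise, cov Cd."""
    K = C @ G.T @ np.linalg.inv(G @ C @ G.T + Cd)
    return m + K @ (d - G @ m), (np.eye(len(m)) - K @ G) @ C

# Toy prior, forward operator, and data (illustrative values)
m0, C0 = np.zeros(2), np.eye(2)
G = np.array([[1.0, 0.5]])
d = np.array([1.2])
Cd = np.array([[0.04]])

# Assimilate once with the true data-error covariance ...
m_once, C_once = kalman_update(m0, C0, G, d, Cd)

# ... or assimilate the SAME data L times with the covariance inflated to L * Cd
L = 4
m, C = m0, C0
for _ in range(L):
    m, C = kalman_update(m, C, G, d, L * Cd)
```

The agreement is exact because the posterior precision accumulates additively: C0⁻¹ + L · Gᵀ(L·Cd)⁻¹G = C0⁻¹ + GᵀCd⁻¹G. The paper's condition generalizes this to sequences that are linearly dependent rather than identical.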

