Entropy, Information, and the Updating of Probabilities

Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 895
Author(s):  
Ariel Caticha

This paper is a review of a particular approach to the method of maximum entropy as a general framework for inference. The discussion emphasizes pragmatic elements in the derivation. An epistemic notion of information is defined in terms of its relation to the Bayesian beliefs of ideally rational agents. The method of updating from a prior to a posterior probability distribution is designed through an eliminative induction process. The logarithmic relative entropy is singled out as a unique tool for updating (a) that is of universal applicability, (b) that recognizes the value of prior information, and (c) that recognizes the privileged role played by the notion of independence in science. The resulting framework—the ME method—can handle arbitrary priors and arbitrary constraints. It includes the MaxEnt and Bayes’ rules as special cases and, therefore, unifies entropic and Bayesian methods into a single general inference scheme. The ME method goes beyond the mere selection of a single posterior, and also addresses the question of how much less probable other distributions might be, which provides a direct bridge to the theories of fluctuations and large deviations.
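For a discrete distribution, maximizing the logarithmic relative entropy subject to an expectation constraint yields an exponential-family posterior, p_i ∝ q_i exp(−λ f_i), with the multiplier λ fixed by the constraint. A minimal sketch (the die example and the bisection solver are illustrative, not from the paper):

```python
import math

def me_update(prior, f, F, lam_lo=-50.0, lam_hi=50.0, tol=1e-12):
    """Update a discrete prior q to the posterior p that maximizes the
    relative entropy S[p, q] = -sum_i p_i log(p_i / q_i) subject to the
    expectation constraint sum_i p_i f_i = F.  The maximizer has the
    exponential form p_i ∝ q_i exp(-lam * f_i); lam is found by bisection."""
    def expectation(lam):
        w = [q * math.exp(-lam * fi) for q, fi in zip(prior, f)]
        Z = sum(w)
        return sum(wi * fi for wi, fi in zip(w, f)) / Z
    # E[f] is strictly decreasing in lam (its derivative is -Var[f]),
    # so bisection on [lam_lo, lam_hi] converges to the unique root.
    lo, hi = lam_lo, lam_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if expectation(mid) > F:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [q * math.exp(-lam * fi) for q, fi in zip(prior, f)]
    Z = sum(w)
    return [wi / Z for wi in w]

# Classic example: uniform prior on a die, constrain the mean to 4.5.
prior = [1 / 6] * 6
faces = [1, 2, 3, 4, 5, 6]
post = me_update(prior, faces, 4.5)
```

The resulting posterior tilts probability toward the high faces, exactly the exponential reweighting the ME method prescribes.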

2013 ◽  
Vol 734-737 ◽  
pp. 3071-3074
Author(s):  
Guo Dong Zhang ◽  
Zhong Liu

Aiming at the phenomenon that chaff and corner reflectors released by a surface ship can interfere with target selection by a missile seeker, this paper proposes a multi-target selection method based on prior information about the distribution of false targets and the Support Vector Machine (SVM). By analyzing the distribution law of false targets, we obtain two classification principles, which are used to train the SVM on the characteristics of true and false targets. The trained SVM is then applied to target selection in the seeker. The method has the advantages of simple programming and high classification accuracy, and the simulation experiments in this paper confirm its correctness and effectiveness.
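The train-then-classify idea can be illustrated with a minimal linear SVM trained by sub-gradient descent on the regularized hinge loss (a Pegasos-style stand-in for a full SVM solver, not the paper's implementation); the two echo features and all training data below are hypothetical:

```python
def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Minimal linear SVM: sub-gradient descent on the regularized hinge
    loss.  X: list of feature vectors, y: labels in {-1, +1}."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:   # margin violated: hinge step + regularization
                w = [wj - lr * (lam * wj - yi * xj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:            # margin satisfied: regularization shrinkage only
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical echo features (e.g. scaled amplitude and extent) per target;
# +1 = true target, -1 = false target (chaff / corner reflector).
X = [[2.0, 2.5], [1.8, 3.0], [2.2, 2.8], [0.5, 0.4], [0.3, 0.8], [0.7, 0.6]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
```

Once trained offline on the two classification principles, only the cheap `predict` step needs to run onboard the seeker.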


2002 ◽  
Vol 39 (2) ◽  
pp. 253-261 ◽  
Author(s):  
Frenkel Ter Hofstede ◽  
Youngchan Kim ◽  
Michel Wedel

The authors propose a general model that includes the effects of discrete and continuous heterogeneity as well as self-stated and derived attribute importance in hybrid conjoint studies. Rather than use the self-stated importances as prior information, as has been done in several previous approaches, the authors consider them data and therefore include them in the formulation of the likelihood, which helps investigate the relationship of self-stated and derived importances at the individual level. The authors formulate several special cases of the model and estimate them using the Gibbs sampler. The authors reanalyze Srinivasan and Park's (1997) data and show that the current model predicts real choices better than competing models do. The posterior credible intervals of the predictions of models with the different heterogeneity specifications overlap, so there is no clear superior specification of heterogeneity. However, when different sources of data are used—that is, full profile evaluations, self-stated importances, or both—clear differences arise in the accuracy of predictions. Moreover, the authors find that including the self-stated importances in the likelihood leads to much better predictions than does considering them prior information.


Author(s):  
Munir S Pathan ◽  
S M Pradhan ◽  
T Palani Selvam

Abstract In this study, the Bayesian probabilistic approach is applied to the estimation of the actual dose using personnel monitoring dose records of occupational workers. To implement the Bayesian approach, the probability distribution of the uncertainty in the reported dose as a function of the actual dose is derived. Using this uncertainty distribution function and prior knowledge of the dose levels generally observed in a monitoring period, the posterior probability distribution of the actual dose is estimated. The posterior distributions of the monitoring periods in a year are convolved to arrive at the annual actual dose distribution. The estimated actual dose distributions show a significant deviation from the reported annual doses, particularly for low annual doses.
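The pipeline described (per-period Bayesian update, then convolution of the period posteriors into an annual distribution) can be sketched on a discretized dose grid. The Gaussian uncertainty model, its sigma, the exponential prior, and the reported doses below are all illustrative assumptions, not the distributions derived in the paper:

```python
import numpy as np

# Dose grid (arbitrary units) on which all distributions are discretized.
grid = np.arange(0.0, 5.0, 0.01)

def posterior_actual_dose(reported, prior, sigma=0.1):
    """Posterior over the actual dose for one monitoring period, assuming
    an illustrative Gaussian model p(reported | actual)."""
    likelihood = np.exp(-0.5 * ((reported - grid) / sigma) ** 2)
    post = prior * likelihood
    return post / post.sum()

# Illustrative prior: low doses dominate a typical monitoring period.
prior = np.exp(-grid / 0.3)
prior /= prior.sum()

# The annual dose is the sum of the period doses, so its distribution is
# the convolution of the per-period posteriors.
reported_doses = [0.12, 0.05, 0.30, 0.08]
annual = np.array([1.0])
for r in reported_doses:
    annual = np.convolve(annual, posterior_actual_dose(r, prior))
annual /= annual.sum()
```

Because the prior pulls each low reported dose toward the typically observed levels, the convolved annual distribution can deviate noticeably from the plain sum of reported doses, which is the effect the paper highlights.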


2017 ◽  
Vol 13 (8) ◽  
pp. 155014771772671
Author(s):  
Jiuqing Wan ◽  
Shaocong Bu ◽  
Jinsong Yu ◽  
Liping Zhong

This article proposes a hybrid dynamic belief propagation for simultaneous localization and mapping in the mobile robot network. The positions of landmarks and the poses of moving robots at each time slot are estimated simultaneously in an online and distributed manner, by fusing the odometry data of each robot and the measurements of robot–robot or robot–landmark relative distance and angle. The joint belief state of all robots and landmarks is encoded by a factor graph, and the marginal posterior probability distribution of each variable is inferred by belief propagation. We show how to calculate, broadcast, and update messages between neighboring nodes in the factor graph. Specifically, we combine parametric and nonparametric techniques to tackle the problems arising from non-Gaussian distributions and nonlinear models. Simulation and experimental results on a publicly available dataset show the validity of our algorithm.


2020 ◽  
Vol 09 (04) ◽  
pp. 2050017
Author(s):  
Benjamin D. Donovan ◽  
Randall L. McEntaffer ◽  
Casey T. DeRoo ◽  
James H. Tutt ◽  
Fabien Grisé ◽  
...  

The soft X-ray grating spectrometer on board the Off-plane Grating Rocket Experiment (OGRE) hopes to achieve the highest resolution soft X-ray spectrum of an astrophysical object when it is launched via suborbital rocket. Paramount to the success of the spectrometer is the performance of the [Formula: see text] reflection gratings populating its reflection grating assembly. To test current grating fabrication capabilities, a grating prototype for the payload was fabricated via electron-beam lithography at The Pennsylvania State University’s Materials Research Institute and was subsequently tested for performance at the Max Planck Institute for Extraterrestrial Physics’ PANTER X-ray Test Facility. Bayesian modeling of the resulting data via Markov chain Monte Carlo (MCMC) sampling indicated that the grating achieved the OGRE single-grating resolution requirement of [Formula: see text] at the 94% confidence level. The resulting [Formula: see text] posterior probability distribution suggests, though, that this confidence level is likely a conservative estimate, since only a finite [Formula: see text] parameter space was sampled and the model could not constrain the upper bound of [Formula: see text] to less than infinity. Raytrace simulations of the tested system found that the observed data can be reproduced with a grating performing at [Formula: see text]. It is therefore postulated that the behavior of the obtained [Formula: see text] posterior probability distribution can be explained by a finite measurement limit of the system and not a finite limit on [Formula: see text]. Implications of these results and improvements to the test setup are discussed.


JAMIA Open ◽  
2020 ◽  
Author(s):  
Xiang Gao ◽  
Qunfeng Dong

Abstract A common research task in COVID-19 studies often involves the prevalence estimation of certain medical outcomes. Although point estimates with confidence intervals are typically obtained, a better approach is to estimate the entire posterior probability distribution of the prevalence, which can be easily accomplished with a standard Bayesian approach using binomial likelihood and its conjugate beta prior distribution. Using two recently published COVID-19 data sets, we performed Bayesian analysis to estimate the prevalence of infection fatality in Iceland and asymptomatic children in the United States.
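The conjugate update the authors describe is one line: a Beta(a, b) prior combined with k positives among n observations gives a Beta(a + k, b + n − k) posterior for the prevalence. A minimal sketch with hypothetical counts (not the published Iceland or pediatric data):

```python
import math

def beta_posterior(k, n, a=1.0, b=1.0):
    """Posterior for a prevalence theta given k positives out of n tested,
    with a Beta(a, b) prior (a = b = 1 is the uniform prior).  Conjugacy
    gives the posterior in closed form: Beta(a + k, b + n - k)."""
    return a + k, b + n - k

def beta_pdf(theta, a, b):
    """Beta density, evaluated via log-gamma for numerical stability."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(theta)
                    + (b - 1) * math.log(1 - theta))

# Hypothetical counts: 10 events observed among 1797 tested individuals.
a_post, b_post = beta_posterior(k=10, n=1797)
post_mean = a_post / (a_post + b_post)              # posterior mean
post_mode = (a_post - 1) / (a_post + b_post - 2)    # posterior mode = k / n
```

Having the full posterior density, rather than a point estimate with a confidence interval, lets one read off any credible interval or tail probability directly.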


2018 ◽  
Vol 23 (4) ◽  
pp. 65 ◽  
Author(s):  
Kaijun Peng ◽  
Jieqing Tan ◽  
Zhiming Li ◽  
Li Zhang

In this paper, a ternary 4-point rational interpolation subdivision scheme is presented, and necessary and sufficient conditions for its continuity are analyzed. The generalization incorporates existing schemes as special cases: Hassan–Ivrissimtzis’s scheme, Siddiqi–Rehan’s scheme, and Siddiqi–Ahmad’s scheme. Furthermore, the fractal behavior of the scheme is investigated and analyzed; the parameter range that produces fractal curves is a neighborhood of the singular point of the rational scheme, which makes parameter selection convenient when reconstructing fractal curves and surfaces.
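The abstract does not give the rational scheme's masks, so the sketch below instead uses the classic binary 4-point interpolatory scheme (Dyn–Levin–Gregory) purely to illustrate the refinement structure such interpolatory subdivision schemes share: old vertices are kept, new vertices are inserted from a 4-point stencil, and a tension parameter controls the limit curve:

```python
def refine_4point(points, w=1 / 16):
    """One step of the classic binary 4-point interpolatory subdivision
    scheme (Dyn-Levin-Gregory) -- NOT the ternary rational scheme of the
    paper, only an illustration of the refinement structure.  w is the
    tension parameter; w = 1/16 reproduces cubic polynomials."""
    n = len(points)
    out = []
    for i in range(n - 1):
        p0 = points[max(i - 1, 0)]          # clamp the stencil at the ends
        p1, p2 = points[i], points[i + 1]
        p3 = points[min(i + 2, n - 1)]
        out.append(p1)                      # old vertex kept (interpolatory)
        out.append((0.5 + w) * (p1 + p2) - w * (p0 + p3))  # inserted vertex
    out.append(points[-1])
    return out

# Refine a coarse polyline a few times; original vertices survive each step.
curve = [0.0, 1.0, 0.0, -1.0, 0.0]
for _ in range(3):
    curve = refine_4point(curve)
```

In schemes of this family, varying the tension or shape parameter moves the limit curve between smooth and fractal regimes, which is the behavior the paper analyzes for its rational parameter.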


1992 ◽  
Vol 4 (3) ◽  
pp. 415-447 ◽  
Author(s):  
David J. C. MacKay

Although Bayesian analysis has been in use since Laplace, the Bayesian method of model-comparison has only recently been developed in depth. In this paper, the Bayesian approach to regularization and model-comparison is demonstrated by studying the inference problem of interpolating noisy data. The concepts and methods described are quite general and can be applied to many other data modeling problems. Regularizing constants are set by examining their posterior probability distribution. Alternative regularizers (priors) and alternative basis sets are objectively compared by evaluating the evidence for them. “Occam's razor” is automatically embodied by this process. The way in which Bayes infers the values of regularizing constants and noise levels has an elegant interpretation in terms of the effective number of parameters determined by the data set. This framework is due to Gull and Skilling.
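MacKay's re-estimation of the regularizing constant can be sketched for a ridge-regularized linear model: iterate alpha = gamma / ||w||^2, where gamma = sum_i lambda_i / (lambda_i + alpha) is the effective number of parameters determined by the data. The polynomial basis, the assumed-known noise precision, and the synthetic data below are illustrative assumptions:

```python
import numpy as np

def evidence_alpha(Phi, t, beta=100.0, iters=50):
    """Set the regularizing constant alpha by the evidence framework's
    fixed-point iteration: alpha = gamma / ||w||^2, with gamma the
    effective number of parameters.  Phi: design matrix, t: targets,
    beta: noise precision (assumed known here for simplicity)."""
    eig = np.linalg.eigvalsh(beta * Phi.T @ Phi)    # eigenvalues lambda_i
    alpha = 1.0
    for _ in range(iters):
        A = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi
        w = beta * np.linalg.solve(A, Phi.T @ t)    # posterior mean weights
        gamma = np.sum(eig / (eig + alpha))         # well-determined params
        alpha = gamma / (w @ w)
    return alpha, gamma, w

# Noisy samples of a cubic, fitted with a degree-5 polynomial basis:
# the data should "determine" roughly the cubic's worth of parameters.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
t = x**3 - 0.5 * x + rng.normal(0, 0.1, x.size)
Phi = np.vander(x, 6, increasing=True)
alpha, gamma, w = evidence_alpha(Phi, t)
```

The appeal of the framework is visible here: alpha is set by the data themselves, and gamma stays below the nominal six basis functions, quantifying how many parameters the data actually pin down.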

