Approaching probabilistic and deterministic nomic truths in an inductive probabilistic way

Synthese ◽  
2021 ◽  
Author(s):  
Theo A. F. Kuipers

Abstract Theories of truth approximation in terms of truthlikeness (or verisimilitude) almost always deal with (non-probabilistically) approaching deterministic truths, either actual or nomic. This paper deals first with approaching a probabilistic nomic truth, viz. a true probability distribution. It assumes a multinomial probabilistic context, hence with a lawlike true, but usually unknown, probability distribution. We first show that this true multinomial distribution can be approached by Carnapian inductive probabilities. Next we deal with the corresponding deterministic nomic truth, that is, the set of conceptually possible outcomes with a positive true probability. We introduce Hintikkian inductive probabilities, based on a prior distribution over the relevant deterministic nomic theories and on conditional Carnapian inductive probabilities, and first show that they again enable probabilistic approximation of the true distribution. Finally, we show, in terms of a kind of success theorem based on Niiniluoto's estimated distance from the truth, in what sense Hintikkian inductive probabilities enable the probabilistic approximation of the relevant deterministic nomic truth. In sum, the (realist) truth approximation perspective on Carnapian and Hintikkian inductive probabilities leads to the unification of the inductive probability field and the field of truth approximation.
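As a minimal illustration of the first step (hypothetical code, not from the paper): Carnap's λ-continuum assigns the next outcome of category i the inductive probability (n_i + λ/k)/(n + λ), given counts n_i over k categories with total n. With no data this is uniform; as data accumulate it approaches the true multinomial distribution.

```python
import random

def carnap_estimate(counts, lam=2.0):
    """Carnapian lambda-continuum inductive probabilities:
    P(next = i) = (n_i + lam/k) / (n + lam)."""
    n, k = sum(counts), len(counts)
    return [(c + lam / k) / (n + lam) for c in counts]

# Simulate draws from a lawlike but "unknown" multinomial distribution
# and watch the Carnapian estimate approach it.
random.seed(0)
true_p = [0.5, 0.3, 0.2]
counts = [0, 0, 0]
for _ in range(10_000):
    r = random.random()
    counts[0 if r < 0.5 else 1 if r < 0.8 else 2] += 1
est = carnap_estimate(counts)
```

The choice λ = 2 is illustrative; any λ > 0 gives the same limiting behavior, differing only in how strongly the uniform prior is weighted against the data.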

1989 ◽  
Vol 3 (4) ◽  
pp. 453-475 ◽  
Author(s):  
P.J.M. Van Laarhoven ◽  
C.G.E. Boender ◽  
E.H.L. Aarts ◽  
A. H. G. Rinnooy Kan

Simulated annealing is a probabilistic algorithm for approximately solving large combinatorial optimization problems. The algorithm can mathematically be described as the generation of a series of Markov chains, in which each Markov chain can be viewed as the outcome of a random experiment with unknown parameters (the probability of sampling a cost function value). Assuming a probability distribution on the values of the unknown parameters (the prior distribution) and given the sequence of configurations resulting from the generation of a Markov chain, we use Bayes's theorem to derive the posterior distribution on the values of the parameters. Numerical experiments are described which show that the posterior distribution can be used to predict accurately the behavior of the algorithm corresponding to the next Markov chain. This information is also used to derive optimal rules for choosing some of the parameters governing the convergence of the algorithm.
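A minimal Metropolis-style simulated annealing sketch (hypothetical code; the paper's Bayesian analysis of the chain parameters is not reproduced here). Each segment at a fixed temperature corresponds to one Markov chain; uphill moves are accepted with probability exp(-Δ/t), and geometric cooling shrinks t between steps.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, alpha=0.95, steps=2000):
    """Minimize cost() by a random walk over neighbor() moves with
    temperature-dependent (Metropolis) acceptance and geometric cooling."""
    x, t = x0, t0
    best = x
    for _ in range(steps):
        y = neighbor(x)
        d = cost(y) - cost(x)
        # Always accept downhill moves; accept uphill with prob exp(-d/t).
        if d <= 0 or random.random() < math.exp(-d / t):
            x = y
            if cost(x) < cost(best):
                best = x
        t *= alpha  # cooling schedule between (implicit) Markov chains
    return best

random.seed(1)
xmin = simulated_annealing(lambda v: (v - 3) ** 2,
                           lambda v: v + random.choice([-1, 1]),
                           x0=40)
```

The cooling rate alpha and the chain length per temperature are exactly the kinds of parameters the paper's posterior analysis would tune adaptively.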


2006 ◽  
Vol 6 (1) ◽  
Author(s):  
Ettore Damiano

This paper considers the problem of an agent's choice under uncertainty in a new framework. The agent does not know the true probability distribution over the state space but is objectively informed that it belongs to a specified set of probabilities. Maintaining the hypothesis that this agent is a subjective expected utility maximizer, we address the question of how the objective information influences her subjective prior. Three plausible rules are proposed. The first, named state independence, states that the subjective probability should not depend on how the uncertain states are 'labeled'. Location-consistency, the second property, assumes that 'similar' objective sets of probabilities result in 'similar' subjective priors. The third rule is an 'update-consistency' rule: suppose the agent selects some probability p and is then told that the likelihood assigned by p to some event A is in fact correct; this should not cause her to revise her choice of p. Another property, alternative to update-consistency, is also proposed. When an agent forms her subjective prior by assigning subjective probabilities to events in some ordered sequence, this property requires that the resulting prior be independent of that order. This last property, named order independence, is shown to be equivalent to update-consistency. A class of sets of probabilities is found on which state independence, location-consistency and update-consistency (order independence) uniquely determine a selection rule. Some intuition is given regarding why these properties work in this collection of problems.


Author(s):  
Baisravan HomChaudhuri

Abstract This paper focuses on distributionally robust controller design for avoiding dynamic and stochastic obstacles whose exact probability distribution is unknown. The true probability distribution of the disturbance associated with an obstacle, although unknown, is considered to belong to an ambiguity set that includes all the probability distributions sharing the same first two moments. The controller thus focuses on ensuring the satisfaction of the probabilistic collision avoidance constraints for all probability distributions in the ambiguity set, hence making the solution robust to the true probability distribution of the stochastic obstacles. Techniques from robust optimization are used to model the distributionally robust probabilistic (chance) constraints as a semi-definite programming (SDP) problem with linear matrix inequality (LMI) constraints that can be solved in a computationally tractable fashion. Simulation results for a robot obstacle avoidance problem show the efficacy of our method.
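The paper's SDP/LMI formulation is not reproduced here, but the scalar special case of a two-moment ambiguity set is easy to sketch (hypothetical code). For a halfspace constraint a·w ≤ b on an uncertain point w with known mean μ and covariance Σ, the worst case over all distributions with those moments reduces, Chebyshev-style, to the deterministic check a·μ + sqrt((1−ε)/ε)·sqrt(aᵀΣa) ≤ b for a chance-constraint level 1−ε.

```python
import math

def dr_halfspace_ok(a, b, mu, Sigma, eps):
    """Check the distributionally robust chance constraint
    inf_P P(a.w <= b) >= 1 - eps over all distributions of w
    with mean mu and covariance Sigma (moment-based ambiguity set)."""
    k = len(a)
    mean_term = sum(a[i] * mu[i] for i in range(k))
    var = sum(a[i] * Sigma[i][j] * a[j] for i in range(k) for j in range(k))
    kappa = math.sqrt((1 - eps) / eps)  # worst-case safety factor
    return mean_term + kappa * math.sqrt(var) <= b

# Obstacle mean at the origin, unit covariance, 90% avoidance level:
ok_far = dr_halfspace_ok([1.0, 0.0], 3.5, [0.0, 0.0],
                         [[1.0, 0.0], [0.0, 1.0]], eps=0.1)
ok_near = dr_halfspace_ok([1.0, 0.0], 2.5, [0.0, 0.0],
                          [[1.0, 0.0], [0.0, 1.0]], eps=0.1)
```

Note how the factor sqrt((1−ε)/ε) grows as ε shrinks: demanding higher confidence against an unknown distribution pushes the constraint boundary further from the obstacle mean.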


2020 ◽  
Vol 43 (2) ◽  
pp. 183-209
Author(s):  
Llerzy Esneider Torres Ome ◽  
Jose Rafael Tovar Cuevas

The main difficulties when using the Bayesian approach are obtaining information from the specialist and obtaining values of the hyperparameters of the probability distribution assumed to represent knowledge external to the data. In addition, a large part of the literature on this subject is characterized by considering conjugate prior distributions for the parameter of interest. A method is proposed to find the hyperparameters of a nonconjugate prior distribution. The following scenarios were considered for Bernoulli trials: four prior distributions (Beta, Kumaraswamy, Truncated Gamma and Truncated Weibull) and four scenarios for the generating process. Two necessary, but not sufficient, conditions were identified to ensure the existence of a vector of values for the hyperparameters. The Truncated Weibull prior distribution performed the worst. The methodology was used to estimate the prevalence of two sexually transmitted infections in a Colombian indigenous community.
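For contrast with the nonconjugate case the paper treats, the conjugate Beta prior admits a closed-form elicitation (a sketch, not the paper's method): given an elicited prior mean and variance, method of moments recovers the hyperparameters, and the requirement var < mean·(1−mean) is an example of a necessary condition for such a vector of hyperparameter values to exist.

```python
def beta_hyperparams_from_moments(mean, var):
    """Method-of-moments elicitation for a Beta(a, b) prior on a
    Bernoulli parameter. Requires 0 < mean < 1 and
    var < mean * (1 - mean), a necessary existence condition."""
    if not (0.0 < mean < 1.0) or not (0.0 < var < mean * (1.0 - mean)):
        raise ValueError("no Beta distribution matches these moments")
    common = mean * (1.0 - mean) / var - 1.0
    return mean * common, (1.0 - mean) * common

# Specialist believes prevalence is around 20% with variance 0.01:
a, b = beta_hyperparams_from_moments(0.2, 0.01)  # approximately (3, 12)
```

For the nonconjugate priors in the paper (Kumaraswamy, truncated Gamma, truncated Weibull), no such closed form exists, which is precisely why a numerical search over the hyperparameter vector is needed.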


Entropy ◽  
2022 ◽  
Vol 24 (1) ◽  
pp. 125
Author(s):  
Damián G. Hernández ◽  
Inés Samengo

Inferring the value of a property of a large stochastic system is a difficult task when the number of samples is insufficient to reliably estimate the probability distribution. The Bayesian estimator of the property of interest requires knowledge of the prior distribution, and in many situations, it is not clear which prior should be used. Several estimators have been developed so far in which the proposed prior is individually tailored for each property of interest; such is the case, for example, for the entropy, the amount of mutual information, or the correlation between pairs of variables. In this paper, we propose a general framework to select priors that is valid for arbitrary properties. We first demonstrate that only certain aspects of the prior distribution actually affect the inference process. We then expand the sought prior as a linear combination of a one-dimensional family of indexed priors, each of which is obtained through a maximum entropy approach with constrained mean values of the property under study. In many cases of interest, only one or very few components of the expansion turn out to contribute to the Bayesian estimator, so it is often valid to only keep a single component. The relevant component is selected by the data, so no handcrafted priors are required. We test the performance of this approximation with a few paradigmatic examples and show that it performs well in comparison to the ad hoc methods previously proposed in the literature. Our method highlights the connection between Bayesian inference and equilibrium statistical mechanics, since the most relevant component of the expansion can be argued to be that with the right temperature.
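To see why the prior matters in the undersampled regime, here is a baseline Dirichlet-prior Monte Carlo estimator of entropy (a sketch, not the paper's maximum-entropy expansion): posterior draws are obtained by normalizing Gamma variates, and the resulting entropy estimate shifts visibly with the pseudocount β of the prior.

```python
import math
import random

def posterior_entropy_samples(counts, beta=1.0, n_samples=2000):
    """Draw p ~ Dirichlet(counts + beta) from the posterior (via
    normalized Gamma variates) and return Monte Carlo samples of the
    entropy H(p) in nats."""
    alphas = [c + beta for c in counts]
    out = []
    for _ in range(n_samples):
        g = [random.gammavariate(a, 1.0) for a in alphas]
        s = sum(g)
        p = [x / s for x in g]
        out.append(-sum(pi * math.log(pi) for pi in p if pi > 0))
    return out

random.seed(2)
counts = [8, 1, 1, 0, 0]  # few samples over 5 outcomes: prior-dominated
flat = posterior_entropy_samples(counts, beta=1.0)
sparse = posterior_entropy_samples(counts, beta=0.1)
h_flat = sum(flat) / len(flat)
h_sparse = sum(sparse) / len(sparse)
```

The uniform prior (β = 1) pulls the estimate toward the maximum log 5, while the sparse prior (β = 0.1) keeps it lower; with so few counts the data cannot arbitrate, which is the gap the paper's data-selected prior components aim to close.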


2020 ◽  
Author(s):  
RuShan Gao ◽  
Karen H. Rosenlof

We use a simple model to derive a mortality probability distribution for a patient as a function of days since diagnosis (considering diagnoses made between 25 February and 29 March 2020). The peak of the mortality probability is the 13th day after diagnosis. The overall shape and peak location of this probability curve are similar to the onset-to-death probability distribution in a case study using Chinese data. The total mortality probability of a COVID-19 patient in the US diagnosed between 25 February and 29 March is about 21%. We speculate that this high value is caused by severe under-testing of the population to identify all COVID-19 patients. With this probability, and an assumption that the true mortality probability is 2.4%, we estimate that 89% of all SARS-CoV-2 infection cases were not diagnosed during this period. When the same method is applied to data extended to 25 April, we find that the total mortality probability of a patient diagnosed in the US after 1 April is about 6.4%, significantly lower than for the earlier period. We attribute this drop to increasingly available tests. Given the assumption that the true mortality probability is 2.4%, we estimate that 63% of all SARS-CoV-2 infection cases were not diagnosed during this period (1 - 25 April).
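The 89% and 63% figures follow from a one-line calculation (a sketch of the arithmetic, under the abstract's assumption of a 2.4% true mortality probability):

```python
def undiagnosed_fraction(observed_mortality, true_mortality=0.024):
    """If the true mortality probability is true_mortality but the
    mortality probability among *diagnosed* patients is observed_mortality,
    diagnosed cases make up roughly a true_mortality / observed_mortality
    fraction of all infections; the remainder went undiagnosed."""
    return 1.0 - true_mortality / observed_mortality

early = undiagnosed_fraction(0.21)   # 25 Feb - 29 Mar: about 0.89
late = undiagnosed_fraction(0.064)   # 1 - 25 Apr: about 0.63
```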


Entropy ◽  
2021 ◽  
Vol 23 (9) ◽  
pp. 1122
Author(s):  
Serafín Moral ◽  
Andrés Cano ◽  
Manuel Gómez-Olmedo

Kullback–Leibler divergence KL(p,q) is the standard measure of error when a true probability distribution p is approximated by a probability distribution q. Its efficient computation is essential in many tasks, such as approximate computation, or as a measure of error when learning a probability distribution. For high-dimensional probability distributions, such as those associated with Bayesian networks, a direct computation can be unfeasible. This paper considers the case of efficiently computing the Kullback–Leibler divergence of two probability distributions, each coming from a different Bayesian network, possibly with different structures. The approach is based on an auxiliary deletion algorithm to compute the necessary marginal distributions, using a cache of operations with potentials in order to reuse past computations whenever necessary. The algorithms are tested with Bayesian networks from the bnlearn repository. Computer code in Python is provided, based on pgmpy, a library for working with probabilistic graphical models.
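For reference, the brute-force definition over an explicitly enumerable joint distribution is straightforward (hypothetical code; the paper's contribution is precisely avoiding this enumeration for Bayesian networks via variable elimination with cached potentials):

```python
import math

def kl_divergence(p, q):
    """KL(p || q) in nats for discrete distributions given as aligned
    probability lists. Infinite when q assigns zero probability to an
    outcome that p does not; 0 * log(0/q) is taken as 0 by convention."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi == 0.0:
            continue
        if qi == 0.0:
            return math.inf
        total += pi * math.log(pi / qi)
    return total

kl_same = kl_divergence([0.5, 0.5], [0.5, 0.5])
kl_half = kl_divergence([1.0, 0.0], [0.5, 0.5])   # log 2 nats
kl_inf = kl_divergence([0.5, 0.5], [1.0, 0.0])    # q misses support of p
```

For a Bayesian network over n binary variables this list would have 2^n entries, which is why marginal-based computation with reused intermediate results is needed.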


Fluids ◽  
2021 ◽  
Vol 6 (10) ◽  
pp. 343
Author(s):  
Nissrine Akkari ◽  
Fabien Casenave ◽  
Thomas Daniel ◽  
David Ryckelynck

In this paper, Bayesian methods based on deep neural networks are studied. We are interested in variational autoencoders, in which an encoder approaches the true posterior and the decoder approaches the direct probability. Specifically, we applied these autoencoders to unsteady and compressible fluid flows in aircraft engines. We used inferential methods to compute a sharp approximation of the posterior probability of these parameters with the transient dynamics of the training velocity fields and to generate plausible velocity fields. An important application is the initialization of transient numerical simulations of unsteady fluid flows and large eddy simulations in fluid dynamics. By Bayes' theorem, the choice of the prior distribution is very important for the computation of the posterior probability, which is proportional to the product of the likelihood with the prior probability. Hence, we propose a new inference model based on a new prior defined by a density estimate over the realizations of the kernel proper orthogonal decomposition coefficients of the available training data. We show numerically that this inference model improves the results obtained with the usual standard normal prior distribution. This inference model was constructed using a new algorithm that improves the convergence of the parametric optimization of the encoder probability distribution approaching the posterior. This latter probability distribution is data-targeted, like the prior distribution. This new generative approach can also be seen as an improvement of the kernel proper orthogonal decomposition method, for which one does not usually have a robust technique for expressing the pre-image, in the input physical space, of the stochastic reduced field living in the high-dimensional feature space with a kernel inner product.
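The density-estimate prior can be illustrated in one dimension with a Gaussian kernel density estimate (a sketch only; the paper builds its prior over the latent space of the autoencoder from kernel-POD coefficients, and the bandwidth here is an arbitrary assumption):

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a 1-D density estimate p(x) built from training
    realizations (standing in for kernel-POD coefficients of
    training snapshots); each sample contributes a Gaussian bump."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

# Data-targeted prior from three "training coefficients":
prior = gaussian_kde([-1.0, 0.0, 1.0], bandwidth=0.5)
# Coarse Riemann check that the estimate integrates to ~1:
mass = sum(prior(-6.0 + 0.01 * i) * 0.01 for i in range(1200))
```

Unlike a standard normal prior, this density concentrates mass where training realizations actually lie, which is the sense in which the proposed prior is data-targeted.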

