true probability
Recently Published Documents

TOTAL DOCUMENTS: 32 (five years: 10)
H-INDEX: 8 (five years: 0)

Author(s):  
Giovanni Immordino ◽  
Anna Maria C. Menichini ◽  
Maria Grazia Romano

Abstract: In a setting in which an agent has a behavioral bias that causes an underestimation or overestimation of the health consequences of consuming sin goods, the paper studies how a social planner can affect the demand for such goods through education and taxation. When only optimistic consumers are present, the two instruments can be substitutes or complements, depending on the elasticity of demand for the sin good with respect to taxation. When consumers are heterogeneous, the correcting effect that taxation has on optimistic consumers has unintended distorting effects on both pessimistic and rational ones. In this framework, educational measures, by bringing biased consumers’ perceptions closer to the true probability of health damages, are more effective than taxation.


Synthese ◽  
2021 ◽  
Author(s):  
Alexandru Baltag ◽  
Soroush Rafiee Rad ◽  
Sonja Smets

Abstract: We propose a new model for forming and revising beliefs about unknown probabilities. To go beyond what is known with certainty and represent the agent’s beliefs about probability, we consider a plausibility map, associating to each possible distribution a plausibility ranking. Beliefs are defined as in Belief Revision Theory, in terms of truth in the most plausible worlds (or, more generally, truth in all worlds that are plausible enough). We consider two forms of conditioning or belief update, corresponding to the acquisition of two types of information: (1) learning observable evidence obtained by repeated sampling from the unknown distribution; and (2) learning higher-order information about the distribution. The first changes only the plausibility map (via a ‘plausibilistic’ version of Bayes’ Rule) but leaves the given set of possible distributions essentially unchanged; the second rules out some distributions, thus shrinking the set of possibilities without changing their plausibility ordering. We look at the stability of beliefs under either type of learning, defining two related notions (safe belief and statistical knowledge), as well as a measure of the verisimilitude of a given plausibility model. We prove a number of convergence results, showing how our agent’s beliefs track the true probability after repeated sampling, and how she eventually gains, in a sense, (statistical) knowledge of that true probability. Finally, we sketch the contours of a dynamic doxastic logic for statistical learning.
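The two update types can be illustrated with a toy sketch (a simplification of the paper's formalism, using hypothetical Bernoulli candidates in place of arbitrary distributions): sampling re-ranks the candidates via a likelihood-based plausibility update, while higher-order information eliminates candidates outright.

```python
import numpy as np

# Toy illustration: candidate Bernoulli parameters stand in for the set
# of possible distributions; the plausibility map ranks them.
candidates = np.array([0.2, 0.5, 0.8])   # hypothetical unknown biases
plausibility = np.ones(3)                # start with a flat ranking

# Type (1): repeated sampling updates plausibility multiplicatively by
# each sample's likelihood (a 'plausibilistic' analogue of Bayes' Rule);
# it reorders the candidates but eliminates none of them.
samples = [1, 1, 0, 1, 1, 1, 0, 1]
for x in samples:
    likelihood = candidates if x == 1 else 1 - candidates
    plausibility *= likelihood

# Type (2): higher-order information rules out distributions, shrinking
# the set of possibilities without changing the relative ranking,
# e.g. learning "the bias is at most 0.6":
keep = candidates <= 0.6
candidates, plausibility = candidates[keep], plausibility[keep]

# Belief = truth in the most plausible remaining world.
belief = candidates[np.argmax(plausibility)]
print(belief)
```

After six successes in eight samples, the most plausible candidate is 0.8; the higher-order information then removes it, shifting belief to 0.5.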


Entropy ◽  
2021 ◽  
Vol 23 (9) ◽  
pp. 1122
Author(s):  
Serafín Moral ◽  
Andrés Cano ◽  
Manuel Gómez-Olmedo

The Kullback–Leibler divergence KL(p,q) is the standard measure of error when a true probability distribution p is approximated by a probability distribution q. Its efficient computation is essential in many tasks, such as approximate computation or measuring error when learning a probability distribution. For high-dimensional probabilities, such as those associated with Bayesian networks, direct computation can be infeasible. This paper considers the problem of efficiently computing the Kullback–Leibler divergence of two probability distributions, each coming from a different Bayesian network, possibly with different structures. The approach is based on an auxiliary deletion algorithm to compute the necessary marginal distributions, using a cache of operations with potentials in order to reuse past computations whenever possible. The algorithms are tested with Bayesian networks from the bnlearn repository. Computer code in Python is provided, built on pgmpy, a library for working with probabilistic graphical models.
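As a minimal sketch of the quantity being computed (on small explicit joint tables, not the paper's deletion-with-caching algorithm, which avoids materializing the joints):

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p, q) = sum_x p(x) * log(p(x) / q(x)), in nats.

    Assumes q(x) > 0 wherever p(x) > 0; terms with p(x) == 0
    contribute nothing by convention."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Two joints over binary variables (A, B), flattened to 4 outcomes.
# In the paper these would be defined by two Bayesian networks, possibly
# with different structures; here they are just explicit tables.
p = [0.40, 0.10, 0.10, 0.40]   # A and B strongly correlated
q = [0.25, 0.25, 0.25, 0.25]   # independent uniform approximation

print(kl_divergence(p, q))
```

The direct sum has a term per joint configuration, so its cost grows exponentially with the number of variables; this is exactly why a factored computation over the networks' marginals is needed.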


Synthese ◽  
2021 ◽  
Author(s):  
Theo A. F. Kuipers

Abstract: Theories of truth approximation in terms of truthlikeness (or verisimilitude) almost always deal with (non-probabilistically) approaching deterministic truths, either actual or nomic. This paper deals first with approaching a probabilistic nomic truth, viz. a true probability distribution. It assumes a multinomial probabilistic context, hence a lawlike true, but usually unknown, probability distribution. We first show that this true multinomial distribution can be approached by Carnapian inductive probabilities. Next we deal with the corresponding deterministic nomic truth, that is, the set of conceptually possible outcomes with positive true probability. We introduce Hintikkian inductive probabilities, based on a prior distribution over the relevant deterministic nomic theories and on conditional Carnapian inductive probabilities, and first show that they again enable probabilistic approximation of the true distribution. Finally, we show, in terms of a kind of success theorem based on Niiniluoto’s estimated distance from the truth, in what sense Hintikkian inductive probabilities enable probabilistic approximation of the relevant deterministic nomic truth. In sum, the (realist) truth approximation perspective on Carnapian and Hintikkian inductive probabilities unifies the fields of inductive probability and truth approximation.
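The first claim can be illustrated numerically with Carnap's λ-continuum of inductive methods, which assigns the next observation probability (n_i + λ/k)/(n + λ) for category i after n observations over k categories (a standard form; the particular λ and the true distribution below are illustrative assumptions):

```python
import numpy as np

def carnap_prob(counts, lam=2.0):
    """Carnapian inductive probability for the next observation.

    counts: observed frequencies n_i over k categories.
    Returns (n_i + lam/k) / (n + lam) for each category, which
    converges to the observed relative frequencies as n grows."""
    counts = np.asarray(counts, float)
    k, n = len(counts), counts.sum()
    return (counts + lam / k) / (n + lam)

# Sample from a hypothetical true multinomial and watch the inductive
# probabilities approach it (the probabilistic nomic truth).
rng = np.random.default_rng(0)
true_p = [0.5, 0.3, 0.2, 0.0]            # category 4 has zero true probability
counts = np.zeros(4)
for _ in range(5000):
    counts[rng.choice(4, p=true_p)] += 1

print(np.round(carnap_prob(counts), 3))
```

Note that the Carnapian estimate still assigns category 4 a small positive probability even though it never occurs; identifying the set of outcomes with positive true probability (the deterministic nomic truth) is exactly what the Hintikkian construction, with its prior over deterministic nomic theories, addresses.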


2020 ◽  
Author(s):  
RuShan Gao ◽  
Karen H. Rosenlof

We use a simple model to derive a mortality probability distribution for a patient as a function of days since diagnosis (considering diagnoses made between 25 February and 29 March 2020). The peak of the mortality probability falls on the 13th day after diagnosis. The overall shape and peak location of this probability curve are similar to the onset-to-death probability distribution in a case study using Chinese data. The total mortality probability of a COVID-19 patient in the US diagnosed between 25 February and 29 March is about 21%. We speculate that this high value is caused by severe under-testing of the population to identify all COVID-19 patients. With this probability, and under the assumption that the true mortality probability is 2.4%, we estimate that 89% of all SARS-CoV-2 infections were not diagnosed during this period. When the same method is applied to data extended to 25 April, we find that the total mortality probability of a patient diagnosed in the US after 1 April is about 6.4%, significantly lower than for the earlier period. We attribute this drop to increasingly available tests. Under the same assumption that the true mortality probability is 2.4%, we estimate that 63% of all SARS-CoV-2 infections were not diagnosed during this period (1 - 25 April).
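The undiagnosed-fraction estimates follow from simple arithmetic: if the true mortality probability per infection is p_true but deaths per diagnosed case imply p_obs, then diagnosed cases make up roughly p_true/p_obs of all infections. A sketch using the abstract's figures:

```python
def undiagnosed_fraction(observed_mortality, true_mortality=0.024):
    """Estimated fraction of infections never diagnosed, assuming deaths
    are fully observed and the true mortality probability is known."""
    return 1.0 - true_mortality / observed_mortality

# Figures from the abstract:
print(undiagnosed_fraction(0.21))    # 25 Feb - 29 Mar period, ~0.89
print(undiagnosed_fraction(0.064))   # 1 - 25 Apr period, ~0.63
```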


Author(s):  
Baisravan HomChaudhuri

Abstract: This paper focuses on distributionally robust controller design for avoiding dynamic and stochastic obstacles whose exact probability distribution is unknown. The true probability distribution of the disturbance associated with an obstacle, although unknown, is assumed to belong to an ambiguity set comprising all probability distributions that share the same first two moments. The controller thus ensures satisfaction of the probabilistic collision avoidance constraints for all probability distributions in the ambiguity set, making the solution robust to the true probability distribution of the stochastic obstacles. Techniques from robust optimization are used to model the distributionally robust probabilistic (chance) constraints as a semi-definite programming (SDP) problem with linear matrix inequality (LMI) constraints that can be solved in a computationally tractable fashion. Simulation results for a robot obstacle avoidance problem show the efficacy of the method.
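For a linear constraint under a moment-based ambiguity set, the distributionally robust chance constraint admits a well-known closed-form reformulation (a textbook moment-bound result, not necessarily the paper's exact SDP/LMI construction; the numbers below are illustrative):

```python
import numpy as np

def dr_chance_constraint_satisfied(a, b, mu, Sigma, eps):
    """Check P(a @ x <= b) >= 1 - eps for ALL distributions of x with
    mean mu and covariance Sigma, via the standard reformulation for
    moment ambiguity sets:
        a @ mu + sqrt((1 - eps) / eps) * sqrt(a @ Sigma @ a) <= b
    The paper's SDP/LMI formulation handles the full dynamic
    obstacle-avoidance problem; this checks one such constraint."""
    margin = np.sqrt((1 - eps) / eps) * np.sqrt(a @ Sigma @ a)
    return a @ mu + margin <= b

# Hypothetical obstacle with uncertain position: mean at the origin,
# unit covariance; require the halfspace constraint to hold with
# probability at least 0.9 for every distribution in the ambiguity set.
a = np.array([1.0, 0.0])
mu = np.zeros(2)
Sigma = np.eye(2)

print(dr_chance_constraint_satisfied(a, b=4.0, mu=mu, Sigma=Sigma, eps=0.1))
print(dr_chance_constraint_satisfied(a, b=2.0, mu=mu, Sigma=Sigma, eps=0.1))
```

With eps = 0.1 the required safety margin is sqrt(0.9/0.1) = 3 standard deviations, so the constraint holds for b = 4 but fails for b = 2; worst-case robustness demands a much larger margin than a Gaussian assumption would.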


Author(s):  
Jianing Li ◽  
Yanyan Lan ◽  
Jiafeng Guo ◽  
Jun Xu ◽  
Xueqi Cheng

Neural language models based on recurrent neural networks (RNNLM) have significantly improved performance in text generation, yet the quality of generated text, as measured by Turing Test pass rate, is still far from satisfactory. Some researchers propose adversarial training or reinforcement learning to improve quality; however, such methods usually introduce great challenges in training and parameter tuning. Through our analysis, we find that the problem with RNNLM comes from using maximum likelihood estimation (MLE) as the objective function, which requires the generated distribution to precisely recover the true distribution. This requirement favors high generation diversity, which restricts generation quality. That is unsuitable when the overall quality is low, since high generation diversity usually indicates many errors rather than diverse good samples. In this paper, we propose differentiated distribution recovery, DDR for short. The key idea is to make the optimal generation probability proportional to the β-th power of the true probability, with β > 1. In this way, generation quality can be greatly improved by sacrificing diversity on noise and rare patterns. Experiments on synthetic data and two public text datasets show that our DDR method achieves a more flexible quality-diversity trade-off and a higher Turing Test pass rate than baseline methods including RNNLM, SeqGAN, and LeakGAN.
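The core of the DDR target is easy to sketch: raise the true distribution to the power β and renormalize, which shifts probability mass from rare (often noisy) outcomes to frequent high-quality ones (the token distribution below is a made-up example):

```python
import numpy as np

def ddr_target(p, beta=2.0):
    """Optimal DDR generation distribution: proportional to p**beta.
    beta > 1 sharpens the distribution, trading diversity on rare
    (often noisy) outcomes for quality on frequent ones;
    beta = 1 recovers the MLE target, i.e. p itself."""
    q = np.asarray(p, float) ** beta
    return q / q.sum()

p = np.array([0.5, 0.3, 0.15, 0.05])   # hypothetical token distribution
q = ddr_target(p, beta=2.0)

print(np.round(q, 3))
```

With β = 2 the most likely token's probability rises from 0.50 to about 0.68 while the rarest falls from 0.05 to under 0.01, which is exactly the quality-for-diversity trade the abstract describes; tuning β moves along that trade-off.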

