Bayes Rule
Recently Published Documents


TOTAL DOCUMENTS: 245 (five years: 40)

H-INDEX: 24 (five years: 3)

2021
Author(s): Ola Hössjer, Daniel Andrés Díaz-Pachón, J. Sunil Rao

Philosophers frequently define knowledge as justified, true belief. In this paper we build a mathematical framework that makes it possible to define an agent's learning (increased degree of true belief) and knowledge in precise ways. This is achieved by phrasing belief in terms of epistemic probabilities, defined from Bayes' Rule. The degree of true belief is then quantified by means of active information $I^+$, a comparison between the degree of belief of the agent and that of a completely ignorant person. Learning has occurred when either the agent's strength of belief in a true proposition has increased in comparison with the ignorant person ($I^+>0$), or the strength of belief in a false proposition has decreased ($I^+<0$). Knowledge additionally requires that learning occurs for the right reason, and in this context we introduce a framework of parallel worlds, of which one is true and the others are counterfactuals. We also generalize the framework of learning and knowledge acquisition to a sequential setting, where information and data are updated over time. The theory is illustrated using examples of coin tossing, historical events, future events, replication of studies, and causal inference.
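A minimal numerical sketch of this learning criterion, under the assumption (consistent with the usual definition of active information, though not spelled out in the abstract) that $I^+$ is the log-ratio of the agent's epistemic probability to that of the ignorant person; the two-hypothesis coin model and all numbers are illustrative:

```python
import math

def active_information(agent_prob: float, ignorant_prob: float) -> float:
    """I+ as the log-ratio of the agent's degree of belief to an
    ignorant (here: uniform) prior. Positive values mean the agent
    believes the proposition more strongly than the ignorant person."""
    return math.log(agent_prob / ignorant_prob)

# True proposition in this toy world: "the coin is biased towards heads".
# The ignorant person assigns 1/2 to each hypothesis; the agent updates
# via Bayes' rule after seeing 8 heads in 10 tosses.
prior = {"fair": 0.5, "biased": 0.5}
lik = {"fair": 0.5**10, "biased": 0.7**8 * 0.3**2}  # binomial coefficient cancels
post_biased = prior["biased"] * lik["biased"] / (
    prior["fair"] * lik["fair"] + prior["biased"] * lik["biased"]
)

i_plus = active_information(post_biased, prior["biased"])
print(f"posterior = {post_biased:.3f}, I+ = {i_plus:.3f}")  # I+ > 0: learning
```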


Author(s): Carlos Alós-Ferrer, Alexander Jaudas, Alexander Ritschel

When confronted with new information, rational decision makers should update their beliefs through Bayes' rule. In economics, however, new information often includes win-loss feedback (profits vs. losses, success vs. failure, upticks vs. downticks). Previous research using a well-established belief-updating paradigm shows that, in this case, reinforcement learning (focusing on past performance) creates high error rates, and increasing monetary incentives fails to elicit higher performance. But do incentives fail to increase effort, or does effort fail to increase performance? We use pupil dilation to show that higher incentives do result in increased cognitive effort, but that this effort fails to translate into increased performance in this paradigm. The failure amounts to a "reinforcement paradox": increasing incentives makes win-loss cues more salient, and hence effort is often misallocated in the form of an increased reliance on reinforcement processes. Our study also serves as an example of how pupil-dilation measurements can inform economics.
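The abstract does not spell out the paradigm's details, but the contrast it draws can be sketched with a hypothetical two-urn task: a Bayesian updates the probability of the profitable urn via Bayes' rule, while a caricatured reinforcement learner simply chases the last win-loss cue. Everything below (urn probabilities, step size, feedback sequence) is illustrative:

```python
# Two urns: draws from urn A are "wins" with probability 0.7, from urn B
# with probability 0.3. The subject sees win/loss feedback and must judge
# which urn is being drawn from.

def bayes_update(p_a: float, win: bool, p_win_a: float = 0.7,
                 p_win_b: float = 0.3) -> float:
    """One step of Bayes' rule on the binary hypothesis A vs. B."""
    lik_a = p_win_a if win else 1.0 - p_win_a
    lik_b = p_win_b if win else 1.0 - p_win_b
    return p_a * lik_a / (p_a * lik_a + (1.0 - p_a) * lik_b)

def reinforcement_update(p_a: float, win: bool, step: float = 0.2) -> float:
    """Caricatured reinforcement: drift towards A after a win and away
    after a loss, ignoring the likelihoods entirely."""
    return min(1.0, p_a + step) if win else max(0.0, p_a - step)

p_bayes = p_reinf = 0.5
for win in (True, True, False, True):      # illustrative feedback sequence
    p_bayes = bayes_update(p_bayes, win)
    p_reinf = reinforcement_update(p_reinf, win)

print(f"Bayesian P(A) = {p_bayes:.3f}, reinforcement 'P(A)' = {p_reinf:.3f}")
# The two rules diverge whenever the salient win-loss cue is a noisy
# signal of the underlying hypothesis.
```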


2021
Author(s): Thomas W. Keelin, Ronald A. Howard

Users of probability distributions frequently need to convert data (empirical, simulated, or elicited) into a continuous probability distribution and to update that distribution when new data becomes available. Often it is unclear which traditional probability distribution(s) to use, fitting to data is laborious and unsatisfactory, little insight emerges, and updating with Bayes' rule is impractical. Here we offer an alternative: a family of continuous probability distributions (the metalogs), fitting methods, and tools that provide sufficient shape and boundedness flexibility to closely match virtually any probability distribution and most data sets; involve a single set of simple closed-form equations; stimulate potentially valuable insights when applied to empirical data; are simply fit to data with ordinary least squares; are easy to combine (as when weighting the opinions of multiple experts); and, under certain conditions, are easily updated in closed form according to Bayes' rule when new data becomes available. The Bayesian updating method is presented through a readily understandable example: a fisherman updating his catch probabilities when changing the river on which he fishes. While metalog applications have been shown to improve decision-making, the methods and results herein are broadly applicable to virtually any use of continuous probability in any field of human endeavor. Diverse data sets may be explored and modeled in these new ways with freely available spreadsheets and tools.
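The "simple fit" via ordinary least squares is concrete enough to sketch: the metalog quantile function is linear in its coefficients, with basis functions following the series published in Keelin (2016). The data set and four-term choice below are illustrative:

```python
import numpy as np

def metalog_basis(y: np.ndarray, k: int = 4) -> np.ndarray:
    """First k metalog basis functions at cumulative probabilities y
    (0 < y < 1), following the series in Keelin (2016)."""
    logit = np.log(y / (1.0 - y))
    cols = [np.ones_like(y), logit, (y - 0.5) * logit, y - 0.5]
    return np.column_stack(cols[:k])

# Hypothetical sample to be turned into a continuous distribution.
x = np.sort(np.array([1.2, 1.9, 2.3, 2.8, 3.1, 3.9, 4.4, 5.2, 6.0, 7.5]))
y = (np.arange(1, len(x) + 1) - 0.5) / len(x)   # empirical probabilities

# The "simple fit": ordinary least squares for the metalog coefficients.
a, *_ = np.linalg.lstsq(metalog_basis(y), x, rcond=None)

# The result is a closed-form quantile function: probability in, quantile out.
median = metalog_basis(np.array([0.5])) @ a
print(f"coefficients = {np.round(a, 3)}, fitted median = {median[0]:.3f}")
```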


Synthese
2021
Author(s): Alexandru Baltag, Soroush Rafiee Rad, Sonja Smets

We propose a new model for forming and revising beliefs about unknown probabilities. To go beyond what is known with certainty and represent the agent’s beliefs about probability, we consider a plausibility map, associating to each possible distribution a plausibility ranking. Beliefs are defined as in Belief Revision Theory, in terms of truth in the most plausible worlds (or, more generally, truth in all the worlds that are plausible enough). We consider two forms of conditioning or belief update, corresponding to the acquisition of two types of information: (1) learning observable evidence obtained by repeated sampling from the unknown distribution; and (2) learning higher-order information about the distribution. The first changes only the plausibility map (via a ‘plausibilistic’ version of Bayes’ Rule) but leaves the given set of possible distributions essentially unchanged; the second rules out some distributions, thus shrinking the set of possibilities without changing their plausibility ordering. We look at the stability of beliefs under either of these types of learning, defining two related notions (safe belief and statistical knowledge), as well as a measure of the verisimilitude of a given plausibility model. We prove a number of convergence results, showing how our agent’s beliefs track the true probability after repeated sampling, and how she eventually gains, in a sense, (statistical) knowledge of that true probability. Finally, we sketch the contours of a dynamic doxastic logic for statistical learning.
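A toy finite version of the two updates can make the distinction concrete. The multiplicative (log-likelihood) re-ranking below is an assumption standing in for the paper's ‘plausibilistic’ Bayes’ Rule, not its exact definition; the candidate coin biases and sample are illustrative:

```python
import math

# Possible worlds: candidate coin biases, each with a plausibility score
# (higher = more plausible). Beliefs = what holds in the most plausible world.
worlds = {0.2: 0.0, 0.5: 0.0, 0.8: 0.0}

def observe_sample(worlds, heads: int, tails: int):
    """Type-1 update: re-rank worlds by the log-likelihood of the sample,
    leaving the set of worlds unchanged."""
    return {b: s + heads * math.log(b) + tails * math.log(1.0 - b)
            for b, s in worlds.items()}

def higher_order_info(worlds, predicate):
    """Type-2 update: rule out worlds, keeping the survivors' ranking."""
    return {b: s for b, s in worlds.items() if predicate(b)}

worlds = observe_sample(worlds, heads=7, tails=3)        # sampling evidence
worlds = higher_order_info(worlds, lambda b: b >= 0.5)   # "not tails-biased"

print(f"most plausible bias: {max(worlds, key=worlds.get)}")  # -> 0.8
```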


Synthese
2021
Author(s): Richard Pettigrew

In a series of papers over the past twenty years, and in a new book, Igor Douven (sometimes in collaboration with Sylvia Wenmackers) has argued that Bayesians are too quick to reject versions of inference to the best explanation that cannot be accommodated within their framework. In this paper, I survey their worries and attempt to answer them using a series of pragmatic and purely epistemic arguments that I take to show that Bayes’ Rule really is the only rational way to respond to your evidence.


Entropy
2021
Vol 23 (8), pp. 1021
Author(s): James Fullwood, Arthur J. Parzygnat

We provide a stochastic extension of the Baez–Fritz–Leinster characterization of the Shannon information loss associated with a measure-preserving function. This recovers the conditional entropy and a closely related information-theoretic measure that we call conditional information loss. Although not functorial, these information measures are semi-functorial, a concept we introduce that is definable in any Markov category. We also introduce the notion of an entropic Bayes’ rule for information measures, and we provide a characterization of conditional entropy in terms of this rule.
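For a deterministic measure-preserving map $f\colon (X,p)\to (Y,q)$, the Baez–Fritz–Leinster information loss is $H(p)-H(q)$, which equals the conditional entropy $H(X\mid f(X))$; a quick numerical check (distribution and map chosen arbitrarily):

```python
import math
from collections import defaultdict

def entropy(p) -> float:
    """Shannon entropy (bits) of a probability mass function given as a dict."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

# Toy distribution p on X = {0,1,2,3} and a deterministic map f to Y = {0,1}.
p = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}
f = lambda x: x % 2

q = defaultdict(float)                  # pushforward distribution on Y
for x, px in p.items():
    q[f(x)] += px

# Conditional entropy H(X | f(X)) computed directly from the fibres of f.
h_cond = sum(qy * entropy({x: px / qy for x, px in p.items() if f(x) == y})
             for y, qy in q.items())

print(f"H(p) - H(q) = {entropy(p) - entropy(q):.4f}")   # 0.8755
print(f"H(X | f(X)) = {h_cond:.4f}")                    # 0.8755, they agree
```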


2021
Vol 131, pp. 158-160
Author(s): Martijn JL. Bours

2021
Vol 18 (1), pp. 78-99
Author(s): Sulian Wang, Chen Wang

The present study investigates the quality of quantile judgments on a quantity of interest that follows the lognormal distribution, which is skewed and bounded from below with a long right tail. We conduct controlled experiments in which subjects predict the losses from a future typhoon based on losses from past typhoons. Our experiments find underconfidence in the 50% prediction intervals, primarily driven by overestimation of the 75th percentiles. We further perform exploratory analyses to disentangle sampling errors and judgmental biases in the overall miscalibration. Finally, we show that the correlations of log-transformed judgments between subjects are smaller than is justified by the overlapping information structure. This leads to overconfident aggregate predictions under Bayes' rule if the low correlations are treated as an indicator of independent information.
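The last point can be sketched in a simple Gaussian model: when correlated expert judgments are aggregated as if independent, the aggregate's variance is understated, so the prediction looks more precise than it is. The equal-weight aggregation, variances, and correlation below are illustrative, not the paper's model:

```python
import numpy as np

# Two experts give unbiased Gaussian estimates of the same quantity, each
# with variance 1, but their errors share information (correlation rho).
rho, var = 0.6, 1.0
cov = np.array([[var, rho * var],
                [rho * var, var]])
w = np.array([0.5, 0.5])                     # equal-weight aggregate

naive_var = w @ np.diag([var, var]) @ w      # variance if treated as independent
true_var = w @ cov @ w                       # variance acknowledging correlation

print(f"assumed-independent variance: {naive_var:.2f}")  # 0.50
print(f"actual aggregate variance:    {true_var:.2f}")   # 0.80
# Underestimating the correlation makes the aggregate look more precise
# than it really is: the overconfidence the abstract describes.
```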

