A Bayesian Account of Memory for Text

2019 ◽  
Author(s):  
Mark Andrews

The study of memory for texts has had a long tradition of research in psychology. According to most general accounts, the recognition or recall of items in a text is based on querying a memory representation that is built up on the basis of background knowledge. The objective of this paper is to describe and thoroughly test a Bayesian model of these general accounts. In particular, we present a model that describes how we use our background knowledge to form memories in terms of Bayesian inference of statistical patterns in the text, followed by posterior predictive inference of the words that are typical of those inferred patterns. This provides us with precise predictions about which words will be remembered, whether veridically or erroneously, from any given text. We tested these predictions using behavioural data from a memory experiment using a large sample of randomly chosen texts from a representative corpus of British English. The results show that the probability of remembering any given word in the text, whether falsely or veridically, is well predicted by the Bayesian model. Moreover, compared to nontrivial alternative models of text memory, by every measure used in the analyses, the predictions of the Bayesian model were superior, often overwhelmingly so. We conclude that these results provide strong evidence in favour of the Bayesian account of text memory that we have presented in this paper.
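
The two-stage account described above — Bayesian inference of latent statistical patterns, then posterior predictive inference of the words typical of those patterns — can be sketched with a toy two-topic example (the topics, vocabulary, and probabilities below are invented for illustration and are not the paper's actual model):

```python
# Toy sketch of the two-stage account: (1) Bayesian inference of the latent
# "topic" generating a text, (2) posterior predictive probability of each
# word, which predicts what will be remembered, veridically or falsely.
# All topics and probabilities here are invented for illustration.

topics = {
    "finance": {"bank": 0.4, "money": 0.4, "river": 0.1, "water": 0.1},
    "nature":  {"bank": 0.1, "money": 0.1, "river": 0.4, "water": 0.4},
}
prior = {"finance": 0.5, "nature": 0.5}

text = ["river", "water", "bank"]  # the studied text

# Stage 1: posterior over latent topics given the observed words.
posterior = {}
for t, p_words in topics.items():
    like = prior[t]
    for w in text:
        like *= p_words[w]
    posterior[t] = like
z = sum(posterior.values())
posterior = {t: v / z for t, v in posterior.items()}

# Stage 2: posterior predictive probability of each vocabulary word.
# An unstudied word with high predictive probability is a candidate
# false memory under this account.
vocab = ["bank", "money", "river", "water"]
predictive = {w: sum(posterior[t] * topics[t][w] for t in topics)
              for w in vocab}
print(posterior)
print(predictive)
```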

2021 ◽  
Author(s):  
Dmytro Perepolkin ◽  
Benjamin Goodrich ◽  
Ullrika Sahlin

This paper extends the application of indirect Bayesian inference to probability distributions defined in terms of quantiles of the observable quantities. Quantile-parameterized distributions are characterized by high shape flexibility and interpretability of their parameters, and are therefore useful for elicitation on observables. To encode uncertainty in the quantiles elicited from experts, we propose a Bayesian model based on the metalog distribution and a version of the Dirichlet prior. The resulting “hybrid” expert elicitation protocol for characterizing uncertainty in parameters using questions about the observable quantities is discussed and contrasted to parametric and predictive elicitation.
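
To make the elicitation target concrete, here is a minimal sketch of fitting a three-term metalog exactly to three elicited quantiles (the quantile values are invented; the paper's protocol additionally places a Dirichlet-based prior over the elicited quantiles, which this sketch omits):

```python
import math

# Sketch: fit a 3-term metalog distribution exactly to three elicited
# quantiles (the values below are made up for illustration).
# Q(y) = a1 + a2*L(y) + a3*(y - 0.5)*L(y),  with  L(y) = ln(y / (1 - y)).

elicited = {0.1: 10.0, 0.5: 20.0, 0.9: 40.0}

def L(y):
    return math.log(y / (1.0 - y))

# At y = 0.5, L(0.5) = 0, so a1 is simply the elicited median.
a1 = elicited[0.5]

# Remaining 2x2 linear system from the 0.1 and 0.9 quantiles:
# [L(y), (y - 0.5) * L(y)] . [a2, a3] = q - a1
y_lo, y_hi = 0.1, 0.9
b_lo, b_hi = elicited[y_lo] - a1, elicited[y_hi] - a1
m = [[L(y_lo), (y_lo - 0.5) * L(y_lo)],
     [L(y_hi), (y_hi - 0.5) * L(y_hi)]]
det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
a2 = (b_lo * m[1][1] - m[0][1] * b_hi) / det   # Cramer's rule
a3 = (m[0][0] * b_hi - b_lo * m[1][0]) / det

def Q(y):
    """Metalog quantile function with the fitted coefficients."""
    return a1 + a2 * L(y) + a3 * (y - 0.5) * L(y)

for y in sorted(elicited):
    print(y, round(Q(y), 6))  # reproduces the elicited quantiles
```

Because the quantile function is linear in its coefficients, exact fitting reduces to a small linear solve; with more elicited quantiles than terms, a least-squares fit would be used instead.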


Entropy ◽  
2020 ◽  
Vol 22 (8) ◽  
pp. 890
Author(s):  
Sergey Oladyshkin ◽  
Farid Mohammadi ◽  
Ilja Kroeker ◽  
Wolfgang Nowak

Gaussian process emulators (GPE) are a machine learning approach that replicates computationally demanding models using training runs of that model. Constructing such a surrogate is very challenging and, in the context of Bayesian inference, the training runs should be well invested. The current paper offers a fully Bayesian view on GPEs for Bayesian inference, accompanied by Bayesian active learning (BAL). We introduce three BAL strategies that adaptively identify training sets for the GPE using information-theoretic arguments. The first strategy relies on Bayesian model evidence, which indicates the GPE’s quality of matching the measurement data; the second is based on relative entropy, which indicates the relative information gain for the GPE; and the third is founded on information entropy, which indicates the missing information in the GPE. We illustrate the performance of our three strategies using analytical and carbon-dioxide benchmarks. The paper shows evidence of convergence against a reference solution and demonstrates quantification of post-calibration uncertainty by comparing the three strategies. We conclude that the Bayesian model evidence-based and relative entropy-based strategies outperform the entropy-based strategy, because the latter can be misleading during BAL; of the first two, the relative entropy-based strategy demonstrates superior performance.
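
The entropy-based strategy — placing the next training run where the surrogate is most uncertain — can be sketched with a tiny pure-Python Gaussian process (the kernel, toy model, and candidate points are illustrative; the evidence- and relative-entropy-based strategies additionally require measurement data and are not shown):

```python
import math

# Sketch of the information-entropy BAL strategy: fit a tiny GP surrogate
# to a few training runs of an "expensive" model (here just sin(x)), then
# pick the next training point where the GP's predictive variance -- its
# missing information -- is largest. All settings are illustrative.

def expensive_model(x):
    return math.sin(x)

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel."""
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

train_x = [0.0, 2.0, 4.0]
train_y = [expensive_model(x) for x in train_x]  # used by the GP mean;
                                                 # only variance matters here
noise = 1e-6
K = [[rbf(a, b) + (noise if i == j else 0.0)
      for j, b in enumerate(train_x)] for i, a in enumerate(train_x)]

def gp_variance(x):
    """Predictive variance k(x,x) - k*^T K^-1 k* at a candidate point."""
    k_star = [rbf(x, a) for a in train_x]
    v = solve(K, k_star)
    return rbf(x, x) - sum(ks * vi for ks, vi in zip(k_star, v))

candidates = [0.5, 1.0, 3.0, 5.0, 6.0]
next_x = max(candidates, key=gp_variance)
print(next_x)  # the candidate farthest (in kernel terms) from the data
```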


2011 ◽  
Vol 128-129 ◽  
pp. 637-641
Author(s):  
Lan Luo ◽  
Qiong Hai Dai ◽  
Chun Xiang Xu ◽  
Shao Quan Jiang

Cipher algorithms are categorized as block ciphers, stream ciphers, and hash functions, and they are weighed under faithful transmission, which is known as the independent condition. In faithful transmission, ciphers are studied in terms of their root cipher. Intelligent application of ciphers is a direction that uses the Bayesian model from cognitive science. Bayesian inference is a rational engine for solving such problems within a probabilistic framework, and is consequently at the heart of most probabilistic models for weighing ciphers. The approach of this paper is to rank ciphers, considered as suitably weighted ciphers for various kinds of networks, according to their root ciphers. The paper also shows other kinds of transformation among the different cipher algorithms themselves.


2005 ◽  
Vol 08 (01) ◽  
pp. 1-12 ◽  
Author(s):  
FRANCISCO VENEGAS-MARTÍNEZ

This paper develops a Bayesian model for pricing derivative securities with prior information on volatility. Prior information is given in terms of expected values of levels and rates of precision (the inverse of variance). We provide several approximate formulas for valuing European call options on the basis of asymptotic and polynomial approximations of Bessel functions.
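
The underlying idea — averaging option values over prior uncertainty in volatility — can be sketched by Monte Carlo, replacing the paper's Bessel-function approximations with direct simulation (all market inputs and the Gamma prior on precision are illustrative):

```python
import math
import random

# Sketch (not the paper's formulas): price a European call by averaging
# Black-Scholes prices over a prior on the precision (inverse variance)
# of returns. Gamma prior parameters and market inputs are illustrative.

random.seed(0)

S0, K, r, T = 100.0, 100.0, 0.05, 1.0  # spot, strike, rate, maturity

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, T, sigma):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Prior on precision tau = 1/sigma^2: Gamma(shape, scale), chosen so that
# E[tau] = shape * scale = 25, i.e. sigma around 0.2.
shape, scale = 25.0, 1.0
draws = 50_000
price = 0.0
for _ in range(draws):
    tau = random.gammavariate(shape, scale)
    price += bs_call(S0, K, r, T, 1.0 / math.sqrt(tau))
price /= draws
print(round(price, 3))  # prior-averaged call price
```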


2003 ◽  
Vol 15 (5) ◽  
pp. 1063-1088 ◽  
Author(s):  
James M. Coughlan ◽  
A. L. Yuille

This letter argues that many visual scenes are based on a “Manhattan” three-dimensional grid that imposes regularities on the image statistics. We construct a Bayesian model that implements this assumption and estimates the viewer orientation relative to the Manhattan grid. For many images, these estimates are good approximations to the viewer orientation (as estimated manually by the authors). These estimates also make it easy to detect outlier structures that are unaligned to the grid. To determine the applicability of the Manhattan world model, we implement a null hypothesis model that assumes that the image statistics are independent of any three-dimensional scene structure. We then use the log-likelihood ratio test to determine whether an image satisfies the Manhattan world assumption. Our results show that if an image is estimated to be Manhattan, then the Bayesian model's estimates of viewer direction are almost always accurate (according to our manual estimates), and vice versa.
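
The decision rule in the final step is a standard log-likelihood ratio test; a minimal sketch with invented orientation-histogram models and edge counts:

```python
import math

# Sketch of the decision rule: compare the log-likelihood of observed edge
# orientations under a "Manhattan" model (mass concentrated at the grid
# directions, 0 and 90 degrees) against a null model that is uniform over
# orientations. Bin probabilities and counts are invented for illustration.

bins = [0, 30, 60, 90]               # orientation bin labels (degrees)
p_manhattan = [0.4, 0.1, 0.1, 0.4]   # peaks at the grid directions
p_uniform   = [0.25, 0.25, 0.25, 0.25]

counts = [40, 8, 12, 40]             # observed edge counts per bin

llr = sum(n * (math.log(pm) - math.log(pu))
          for n, pm, pu in zip(counts, p_manhattan, p_uniform))

is_manhattan = llr > 0.0             # accept the Manhattan hypothesis
print(round(llr, 3), is_manhattan)
```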


2019 ◽  
Vol 10 (2) ◽  
pp. 691-707
Author(s):  
Jason C. Doll ◽  
Stephen J. Jacquemin

Researchers often test ecological hypotheses relating to a myriad of questions spanning assemblage structure, population dynamics, demography, abundance, growth rate, and more, using mathematical models that explain trends in data. To aid the evaluation process when faced with competing hypotheses, we employ statistical methods to evaluate the validity of multiple hypotheses with the goal of deriving the most robust conclusions possible. In fisheries management and ecology, frequentist methodologies have largely dominated this approach. However, in recent years, researchers have increasingly used Bayesian inference methods to estimate model parameters. Our aim with this perspective is to provide the practicing fisheries ecologist with an accessible introduction to Bayesian model selection. Here we discuss Bayesian inference methods for model selection in the context of fisheries management and ecology, with empirical examples to guide researchers in the use of these methods. We discuss three methods for selecting among competing models: for comparing two models, the Bayes factor; and for more complex models, the Watanabe–Akaike information criterion (WAIC) and leave-one-out cross-validation. We also describe what kinds of information to report when conducting Bayesian inference. We conclude with final thoughts on these model selection techniques.
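
As a concrete illustration of one of the criteria discussed, WAIC can be computed directly from a matrix of pointwise posterior log-likelihoods; this sketch uses made-up numbers and the variance-based estimate of the effective number of parameters:

```python
import math

# Sketch of WAIC from posterior samples. Rows = posterior draws,
# columns = data points; entries are log p(y_i | theta_s).
# The numbers below are made up for illustration.

log_lik = [
    [-1.1, -0.8, -2.0],
    [-0.9, -0.7, -1.8],
    [-1.0, -0.9, -2.2],
]
S, N = len(log_lik), len(log_lik[0])

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

lppd = 0.0    # log pointwise predictive density
p_waic = 0.0  # effective number of parameters (variance estimate)
for i in range(N):
    col = [log_lik[s][i] for s in range(S)]
    lppd += math.log(mean([math.exp(x) for x in col]))
    p_waic += var(col)

waic = -2.0 * (lppd - p_waic)  # deviance scale; lower is better
print(round(waic, 4))
```

In practice the pointwise log-likelihood matrix comes from the MCMC fit itself, and models are compared by their WAIC values on the same data.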


Author(s):  
Jan Sprenger ◽  
Stephan Hartmann

Is simplicity a virtue of a good scientific theory, and are simpler theories more likely to be true or predictively successful? If so, how much should simplicity count vis-à-vis predictive accuracy? We address this question using Bayesian inference, focusing on the context of statistical model selection and an interpretation of simplicity via the degrees of freedom of a model. We rebut claims that the epistemic value of simplicity can be proven by pointing to its particular role in Bayesian model selection strategies (e.g., the BIC or the MML). Instead, we show that Bayesian inference in the context of model selection is usually done in a philosophically eclectic, instrumental fashion that is more attuned to practical applications than to philosophical foundations. Thus, these techniques cannot justify a particular “appropriate weight of simplicity in model selection”.
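
For readers unfamiliar with the selection strategies at issue, the BIC trades goodness of fit against degrees of freedom as follows (the log-likelihood values are invented for illustration):

```python
import math

# Sketch: BIC penalizes model complexity via the number of free parameters:
#   BIC = k * ln(n) - 2 * ln(L_hat)
# where k = degrees of freedom, n = sample size, L_hat = maximized
# likelihood. The log-likelihoods below are invented.

def bic(k, n, log_lik_hat):
    return k * math.log(n) - 2.0 * log_lik_hat

n = 100
simple_model  = bic(k=2, n=n, log_lik_hat=-120.0)  # fewer parameters
complex_model = bic(k=6, n=n, log_lik_hat=-115.0)  # better fit, more params

# Lower BIC is preferred: the complex model's fit gain (here 5 log-lik
# units) must outweigh its extra k * ln(n) simplicity penalty -- and
# here it does not.
print(round(simple_model, 3), round(complex_model, 3))
```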


1990 ◽  
Vol 71 (1) ◽  
pp. 307-320
Author(s):  
David J. Johnstone

The Bayesian inference based on the information “significant at .05” depends logically on the sample size, n. If n is sufficiently large, the locution “significant at .05,” taken by itself, implies not strong evidence against the null hypothesis but strong evidence in its favor. More particularly, for large n, a report which says merely “significant at .05,” without further information, should be interpreted as evidence against the null only if for some reason peculiar to the test in question it is considered subjectively that the sample observation x is very probably significant not only at 5% but at 1% or lower. This result holds for any “point” (simple) null hypothesis and is demonstrated here in the context of a simple example. Note that for the purpose of interpreting the expression “significant at .05” per se, it is supposed that the exact value of x is unknown.
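
The dependence on n can be made concrete with a standard conjugate example (the H1 prior scale tau = 1 is an illustrative choice): at the .05 boundary of a z-test, the Bayes factor in favour of the null grows with n.

```python
import math

# Sketch of the point above: the evidential meaning of "significant at .05"
# depends on n. For a z-test of H0: mu = 0 with known sigma = 1 and an H1
# prior mu ~ N(0, tau^2), the Bayes factor BF01 at the .05 boundary
# (xbar = 1.96 * sigma / sqrt(n)) grows with n, eventually favoring H0.

def normal_pdf(x, var):
    return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def bf01_at_05(n, tau=1.0, sigma=1.0):
    xbar = 1.96 * sigma / math.sqrt(n)  # just significant at .05
    v0 = sigma ** 2 / n                 # sampling variance under H0
    v1 = tau ** 2 + v0                  # marginal variance under H1
    return normal_pdf(xbar, v0) / normal_pdf(xbar, v1)

for n in (10, 100, 10_000):
    print(n, round(bf01_at_05(n), 3))  # BF01 > 1 means evidence for H0
```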


Entropy ◽  
2019 ◽  
Vol 21 (11) ◽  
pp. 1081 ◽  
Author(s):  
Sergey Oladyshkin ◽  
Wolfgang Nowak

We show a link between Bayesian inference and information theory that is useful for model selection, assessment of information entropy and experimental design. We align Bayesian model evidence (BME) with relative entropy and cross entropy in order to simplify computations using prior-based (Monte Carlo) or posterior-based (Markov chain Monte Carlo) BME estimates. On the one hand, we demonstrate how Bayesian model selection can profit from information theory to estimate BME values via posterior-based techniques. To this end, we invoke various assumptions, including relations to several information criteria. On the other hand, we demonstrate how relative entropy can profit from BME to assess information entropy during Bayesian updating and to assess utility in Bayesian experimental design. Specifically, we emphasize that relative entropy can be computed from both prior- and posterior-based sampling techniques while avoiding unnecessary multidimensional integration. Prior-based computation does not require any assumptions; posterior-based estimates, however, require at least one assumption. We illustrate the performance of the discussed estimates of BME, information entropy and experiment utility using a transparent, non-linear example. The multivariate Gaussian posterior estimate requires the fewest assumptions and shows the best performance for BME estimation, information entropy and experiment utility from posterior-based sampling.
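
A prior-based (Monte Carlo) BME estimate of the kind discussed can be sketched in a conjugate Gaussian toy model, where the evidence is also available in closed form for checking (all numbers are illustrative):

```python
import math
import random

# Sketch of a prior-based Monte Carlo estimate of Bayesian model evidence
# (BME): draw parameters from the prior and average the likelihood.
# Conjugate Gaussian toy model, so the estimate can be checked analytically.

random.seed(1)

tau, sigma, y = 1.0, 0.5, 0.7  # prior sd, noise sd, observed datum
# Model: theta ~ N(0, tau^2),  y | theta ~ N(theta, sigma^2)

def normal_pdf(x, mu, sd):
    z = (x - mu) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

draws = 200_000
bme_mc = sum(normal_pdf(y, random.gauss(0.0, tau), sigma)
             for _ in range(draws)) / draws

# Analytic marginal for checking: y ~ N(0, tau^2 + sigma^2)
bme_exact = normal_pdf(y, 0.0, math.sqrt(tau * tau + sigma * sigma))
print(round(bme_mc, 4), round(bme_exact, 4))
```

Prior-based sampling like this needs no assumptions but can be inefficient when the posterior is far from the prior, which is what motivates the posterior-based estimates discussed in the abstract.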


Author(s):  
Erdinç Sayan

My focus in this paper is on how the basic Bayesian model can be amended to reflect the role of idealizations and approximations in the confirmation or disconfirmation of any hypothesis. I suggest the following as a plausible way of incorporating idealizations and approximations into the Bayesian condition for incremental confirmation: Theory T is confirmed by observation P relative to background knowledge B iff Pr(PΔ | T & (T & I ⊢ PT) & B) > Pr(PΔ | ~T & (T & I ⊢ PT) & B), where I is the conjunction of idealizations and approximations used in deriving the prediction PT from T, PΔ expresses the discrepancy between the prediction PT and the actual observation P, and ⊢ stands for logical entailment. This formulation has the virtue of explicitly taking into account the essential use made of idealizations and approximations, as well as the fact that theoretically based predictions that utilize such assumptions will not, in general, exactly fit the data. A non-probabilistic analogue of this confirmation condition, which I also offer, avoids the 'old evidence problem,' which has been a headache for classical Bayesianism.

