A simple way to look at a Bayesian model for the statistics of earthquake prediction

1981 ◽  
Vol 71 (6) ◽  
pp. 1929-1931
Author(s):  
John G. Anderson

Abstract: A simple urn model is presented for earthquake prediction statistics. This model is equivalent to the Bayesian models of Collins (1977), Guagenti and Scirocco (1980), and Kijko (1981).
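
The abstract does not reproduce the urn construction itself, so the following is only a minimal sketch, assuming the standard urn/Beta-Binomial correspondence: the urn holds some "event" and "no-event" balls, each observed time window adds one ball of the observed kind, and the fraction of "event" balls gives the posterior predictive probability of an event in the next window. The prior counts and observation sequence below are illustrative only.

```python
# A minimal sketch, assuming the standard urn/Beta-Binomial correspondence
# (not necessarily the paper's exact construction): starting from `a` "event"
# balls and `b` "no-event" balls, each observed window adds one ball of the
# observed kind, reproducing the Beta(a, b) -> Beta(a + k, b + n - k) update.

def urn_probability(a, b, observations):
    """Return P(event in next window) after a sequence of 0/1 observations."""
    for x in observations:
        if x:
            a += 1  # add an "event" ball
        else:
            b += 1  # add a "no-event" ball
    return a / (a + b)  # posterior predictive probability of an event

if __name__ == "__main__":
    # Uniform Beta(1, 1) prior (one ball of each kind), 2 events in 10 windows.
    print(urn_probability(1, 1, [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]))  # -> 0.25
```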

2016 ◽  
Vol 27 (12) ◽  
pp. 1562-1572 ◽  
Author(s):  
Georgie Powell ◽  
Zoe Meredith ◽  
Rebecca McMillin ◽  
Tom C. A. Freeman

According to Bayesian models, perception and cognition depend on the optimal combination of noisy incoming evidence with prior knowledge of the world. Individual differences in perception should therefore be jointly determined by a person’s sensitivity to incoming evidence and his or her prior expectations. It has been proposed that individuals with autism have flatter prior distributions than do nonautistic individuals, which suggests that prior variance is linked to the degree of autistic traits in the general population. We tested this idea by studying how perceived speed changes during pursuit eye movement and at low contrast. We found that individual differences in these two motion phenomena were predicted by differences in thresholds and autistic traits when combined in a quantitative Bayesian model. Our findings therefore support the flatter-prior hypothesis and suggest that individual differences in prior expectations are more systematic than previously thought. In order to be revealed, however, individual differences in sensitivity must also be taken into account.
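
As a rough illustration of the model class described here (not the authors' fitted model), the sketch below combines a Gaussian slow-speed prior with a Gaussian likelihood; all parameter values are invented. Noisier measurements, as at low contrast, pull the posterior mean toward the prior, while a flatter prior weakens that pull, which is the pattern the flatter-prior hypothesis predicts for individuals high in autistic traits.

```python
# A minimal sketch (invented parameters) of a Gaussian prior-times-likelihood
# account of perceived speed: the percept is the posterior mean, pulled toward
# a slow-speed prior by an amount set by sensory noise and prior width.

def perceived_speed(true_speed, sensory_sd, prior_mean=0.0, prior_sd=3.0):
    """Posterior mean of a Gaussian slow-speed prior combined with a Gaussian likelihood."""
    w = prior_sd**2 / (prior_sd**2 + sensory_sd**2)  # weight given to the measurement
    return w * true_speed + (1 - w) * prior_mean

# High contrast: a precise measurement dominates the prior (near-veridical).
print(perceived_speed(10.0, sensory_sd=0.5))                 # ~9.7
# Low contrast: a noisy measurement is pulled toward the slow-speed prior.
print(perceived_speed(10.0, sensory_sd=3.0))                 # 5.0
# A flatter prior (larger prior_sd) weakens that pull even at low contrast.
print(perceived_speed(10.0, sensory_sd=3.0, prior_sd=10.0))  # ~9.2
```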


2019 ◽  
Author(s):  
Sean Tauber ◽  
Danielle Navarro ◽  
Amy Perfors ◽  
Mark Steyvers

Recent debates in the psychological literature have raised questions about the assumptions that underpin Bayesian models of cognition and what inferences they license about human cognition. In this paper we revisit this topic, arguing that there are 2 qualitatively different ways in which a Bayesian model could be constructed. The most common approach uses a Bayesian model as a normative standard upon which to license a claim about optimality. In the alternative approach, a descriptive Bayesian model need not correspond to any claim that the underlying cognition is optimal or rational, and is used solely as a tool for instantiating a substantive psychological theory. We present 3 case studies in which these 2 perspectives lead to different computational models and license different conclusions about human cognition. We demonstrate how the descriptive Bayesian approach can be used to answer different sorts of questions than the optimal approach, especially when combined with principled tools for model evaluation and model selection. More generally we argue for the importance of making a clear distinction between the 2 perspectives. Considerable confusion results when descriptive models and optimal models are conflated, and if Bayesians are to avoid contributing to this confusion it is important to avoid making normative claims when none are intended.


Radiocarbon ◽  
2010 ◽  
Vol 52 (4) ◽  
pp. 1667-1680 ◽  
Author(s):  
Israel Finkelstein ◽  
Eli Piasetzky

Mazar and Bronk Ramsey (2008) recently proposed that the Iron I/IIA transition in the Levant took place during the first half of the 10th century. In the first part of this article, we challenge their method and conclusions. We argue against the inclusion of charcoal in their model, which could lead to an "old wood effect." We also argue that in dealing with a transition date, all available data must be taken into consideration. In the second part of the article, we propose Bayesian Model I for the Iron I/IIA transition, which is based on 2 sets of data, for the periods immediately before and after this transition. Our model, along with the other 11 published Bayesian models for this transition that used only short-lived samples, agrees with the Low Chronology system for the Iron Age strata in the Levant and negates all other proposals, including Mazar's Modified Conventional Chronology. The Iron I/IIA transition occurred during the second half of the 10th century. In the third part of the article, we present a new insight on the Iron I/IIA transition. We propose that the late Iron I cities came to an end in a gradual process and interpret this proposal with Bayesian Model II.

Mazar and Bronk Ramsey (2008) recently challenged Sharon et al. (2007; also Boaretto et al. 2005) and us (e.g. Finkelstein and Piasetzky 2003, 2007a,b) regarding the date of the transition from the Iron I to the Iron IIA in the Levant. While we and Sharon et al. placed this transition in the second half of the 10th century BCE, Mazar and Bronk Ramsey positioned it "during the first half of the 10th century BCE" and argued that "the second half of the 10th century BCE should be included in the Iron IIA" (Mazar and Bronk Ramsey 2008:178). We discuss some problems in the methodology of Mazar and Bronk Ramsey that may have influenced their results. In particular, we discuss 1) the exclusion of data and 2) the inclusion of charcoal samples, and 3) show that even according to Mazar and Bronk Ramsey, excluding these samples positions the late Iron I/IIA transition in the late 10th century. Finally, we present our own 2 Bayesian models for the Iron I/IIA transition.
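
The published models are built in OxCal on calibrated radiocarbon determinations; the sketch below is only a toy illustration of the underlying two-phase logic, working directly in calendar years BCE with Gaussian errors, a flat prior on the transition year, and invented dates. It shows how constraining "late Iron I" samples to predate the transition and "early Iron IIA" samples to postdate it yields a posterior over the transition year.

```python
import numpy as np
from scipy.stats import norm

# Toy inputs (invented): measured calendar dates in years BCE and a 1-sigma error.
before = np.array([960, 950, 945])   # samples from contexts ending in late Iron I
after = np.array([915, 920, 905])    # samples from early Iron IIA contexts
sigma = 20.0

# Flat prior over candidate transition years; each "before" sample must truly
# predate the transition, each "after" sample must postdate it.
grid = np.arange(1000, 850, -1)      # candidate transition years BCE
log_post = np.array([
    norm.logcdf((before - t) / sigma).sum() + norm.logcdf((t - after) / sigma).sum()
    for t in grid
])
post = np.exp(log_post - log_post.max())
post /= post.sum()

# For these toy inputs the mode lands between the two groups of dates,
# in the second half of the 10th century BCE.
print("posterior mode of the transition:", grid[post.argmax()], "BCE")
```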


Synthese ◽  
2021 ◽  
Author(s):  
Leah Henderson ◽  
Alexander Gebharter

Abstract: Psychological studies show that the beliefs of two agents in a hypothesis can diverge even if both agents receive the same evidence. This phenomenon of belief polarisation is often explained by invoking biased assimilation of evidence, where the agents’ prior views about the hypothesis affect the way they process the evidence. We suggest, using a Bayesian model, that even if such influence is excluded, belief polarisation can still arise by another mechanism. This alternative mechanism involves differential weighting of the evidence arising when agents have different initial views about the reliability of their sources of evidence. We provide a systematic exploration of the conditions for belief polarisation in Bayesian models which incorporate opinions about source reliability, and we discuss some implications of our findings for the psychological literature.
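
A minimal sketch of this mechanism (not the authors' exact model) is given below. Two sources report on a hypothesis H, one for and one against; a reliable source is assumed to report the truth with probability 0.9, an unreliable one at chance. Both agents start at P(H) = 0.5 and see the same reports, but they begin with opposite priors about which source is reliable, and updating jointly over H and the two reliabilities moves their beliefs in opposite directions.

```python
from itertools import product

def report_prob(says_h, h, reliable):
    """P(a source issues this report | H, source reliability)."""
    if not reliable:
        return 0.5                      # an unreliable source reports at chance
    return 0.9 if says_h == h else 0.1  # a reliable source mostly tracks the truth

def posterior_h(prior_h, trust1, trust2, reports):
    """Joint Bayesian update over (H, R1, R2); reports = [(source, says_h), ...]."""
    joint = {}
    for h, r1, r2 in product([True, False], repeat=3):
        p = ((prior_h if h else 1 - prior_h)
             * (trust1 if r1 else 1 - trust1)
             * (trust2 if r2 else 1 - trust2))
        for source, says_h in reports:
            p *= report_prob(says_h, h, r1 if source == 1 else r2)
        joint[(h, r1, r2)] = p
    z = sum(joint.values())
    return sum(v for (h, _, _), v in joint.items() if h) / z

# The same evidence for both agents: three "H" reports from source 1 and
# three "not H" reports from source 2.
evidence = [(1, True)] * 3 + [(2, False)] * 3
print("Agent A:", posterior_h(0.5, trust1=0.9, trust2=0.2, reports=evidence))  # ~0.95
print("Agent B:", posterior_h(0.5, trust1=0.2, trust2=0.9, reports=evidence))  # ~0.05
# Identical priors on H, identical evidence, different reliability priors:
# the agents' beliefs in H move in opposite directions (polarisation).
```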


2016 ◽  
Author(s):  
Hilary Barth ◽  
Ellen Lesser ◽  
Jessica Taggart ◽  
Emily Slusser

A large collection of estimation phenomena (e.g., biases arising when adults or children estimate remembered locations of objects in bounded spaces; Huttenlocher, Newcombe, & Sandberg, 1994) is commonly explained in terms of complex Bayesian models. We provide evidence that some of these phenomena may be modeled instead by a simpler non-Bayesian alternative. Undergraduates and 9- to 10-year-olds completed a speeded linear position estimation task. Bias in both groups’ estimates could be explained in terms of a simple psychophysical model of proportion estimation. Moreover, some individual data were not compatible with the requirements of the more complex Bayesian model.
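
The simpler alternative referred to here is a psychophysical proportion-estimation account; the sketch below shows one common single-cycle power-model form of such a model (cf. Spence 1990; Hollands & Dyre 2000) with an illustrative exponent, not the parameters fitted in the study.

```python
import numpy as np

def estimated_proportion(p, beta):
    """Single-cycle power model of proportion estimation (biased estimate of p)."""
    return p**beta / (p**beta + (1 - p)**beta)

positions = np.linspace(0.05, 0.95, 7)
print(np.round(estimated_proportion(positions, beta=0.7), 3))
# With beta < 1 the model overestimates small proportions and underestimates
# large ones, the kind of bias pattern reported in position estimation tasks.
```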


2011 ◽  
Vol 34 (4) ◽  
pp. 211-212 ◽  
Author(s):  
Timothy T. Rogers ◽  
Mark S. Seidenberg

Abstract: We distinguish between literal and metaphorical applications of Bayesian models. When intended literally, an isomorphism exists between the elements of representation assumed by the rational analysis and the mechanism that implements the computation. Thus, observation of the implementation can externally validate assumptions underlying the rational analysis. In other applications, no such isomorphism exists, so it is not clear how the assumptions that allow a Bayesian model to fit data can be independently validated.


1975 ◽  
Vol 4 (3) ◽  
pp. 245-250
Author(s):  
Kenneth Kaminsky ◽  
Eugene Luks ◽  
Paul Nelson

1981 ◽  
Vol 20 (03) ◽  
pp. 174-178 ◽  
Author(s):  
A. I. Barnett ◽  
J. Cynthia ◽  
F. Jane ◽  
Nancy Gutensohn ◽  
B. Davies

A Bayesian model that provides probabilistic information about the spread of malignancy in a Hodgkin’s disease patient has been developed at the Tufts New England Medical Center. In assessing the model’s reliability, it seemed important to use it to make predictions about patients other than those relevant to its construction. The accuracy of these predictions could then be tested statistically. This paper describes such a test, based on 243 Hodgkin’s disease patients of known pathologic stage. The results obtained were supportive of the model, and the test procedure might interest those wishing to determine whether the imperfections that attend any attempt to make probabilistic forecasts have gravely damaged their accuracy.
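
The paper's own test procedure is not spelled out in this abstract; as a generic illustration of how probabilistic forecasts of this kind can be tested statistically, the sketch below groups patients by predicted probability and compares observed with expected counts using a Hosmer-Lemeshow-style chi-square statistic. All numbers are invented.

```python
import numpy as np
from scipy.stats import chi2

# Invented predictions and outcomes: predicted probability that each patient's
# disease has spread, and whether spread was actually observed (0/1).
predicted = np.array([0.12, 0.18, 0.35, 0.41, 0.55, 0.63, 0.78, 0.82, 0.90, 0.95])
observed = np.array([0, 0, 1, 0, 1, 1, 1, 1, 1, 1])

# Group patients by predicted probability and compare observed with expected
# counts in each group (a Hosmer-Lemeshow-style goodness-of-fit statistic).
groups = np.array_split(np.argsort(predicted), 3)
stat = 0.0
for idx in groups:
    n = len(idx)
    expected = predicted[idx].sum()
    obs = observed[idx].sum()
    stat += (obs - expected) ** 2 / (expected * (1 - expected / n))
dof = len(groups) - 2
print("chi-square:", round(stat, 2), " p-value:", round(float(chi2.sf(stat, dof)), 3))
```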


1996 ◽  
Vol 35 (04/05) ◽  
pp. 309-316 ◽  
Author(s):  
M. R. Lehto ◽  
G. S. Sorock

Abstract: Bayesian inferencing as a machine learning technique was evaluated for identifying pre-crash activity and crash type from accident narratives describing 3,686 motor vehicle crashes. It was hypothesized that a Bayesian model could learn from a computer search for 63 keywords related to accident categories. Learning was described in terms of the ability to accurately classify previously unclassifiable narratives not containing the original keywords. When narratives contained keywords, the results obtained using both the Bayesian model and keyword search corresponded closely to expert ratings (P(detection)≥0.9, and P(false positive)≤0.05). For narratives not containing keywords, when the threshold used by the Bayesian model was varied between p>0.5 and p>0.9, the overall probability of detecting a category assigned by the expert varied between 67% and 12%. False positives correspondingly varied between 32% and 3%. These latter results demonstrated that the Bayesian system learned from the results of the keyword searches.
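
The original system is not reproduced here; the sketch below illustrates the general approach with a toy naive Bayes classifier. Narratives that a keyword search can already classify serve as training data, and a new narrative is assigned a category only when its posterior probability clears a threshold, mirroring the p>0.5 to p>0.9 range reported above. The documents, categories, and threshold are all illustrative.

```python
import math
from collections import defaultdict

def train(narratives, labels):
    """Estimate per-category word counts and a shared vocabulary."""
    word_counts = defaultdict(lambda: defaultdict(int))
    cat_counts = defaultdict(int)
    vocab = set()
    for text, cat in zip(narratives, labels):
        cat_counts[cat] += 1
        for w in text.lower().split():
            word_counts[cat][w] += 1
            vocab.add(w)
    return word_counts, cat_counts, vocab

def classify(text, word_counts, cat_counts, vocab, threshold=0.9):
    """Return (category, posterior) if the posterior clears the threshold, else (None, posterior)."""
    total = sum(cat_counts.values())
    log_post = {}
    for cat, n in cat_counts.items():
        lp = math.log(n / total)
        denom = sum(word_counts[cat].values()) + len(vocab)  # add-one smoothing
        for w in text.lower().split():
            lp += math.log((word_counts[cat][w] + 1) / denom)
        log_post[cat] = lp
    m = max(log_post.values())
    z = sum(math.exp(v - m) for v in log_post.values())
    probs = {c: math.exp(v - m) / z for c, v in log_post.items()}
    best = max(probs, key=probs.get)
    return (best, probs[best]) if probs[best] >= threshold else (None, probs[best])

# Toy training data standing in for narratives a keyword search already labelled.
docs = ["driver rear ended vehicle ahead", "car skidded on icy road surface"]
cats = ["rear-end", "loss-of-control"]
wc, cc, vocab = train(docs, cats)
# A new narrative without the original keywords is categorised only when its
# posterior clears the threshold.
print(classify("driver struck vehicle ahead", wc, cc, vocab))  # ('rear-end', ~0.91)
```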

