BAT.jl: A Julia-Based Tool for Bayesian Inference

2021 ◽  
Vol 2 (3) ◽  
Author(s):  
Oliver Schulz ◽  
Frederik Beaujean ◽  
Allen Caldwell ◽  
Cornelius Grunwald ◽  
Vasyl Hafych ◽  
...  

We describe the development of a multi-purpose software package for Bayesian statistical inference, BAT.jl, written in the Julia language. The major design considerations and implemented algorithms are summarized here, together with a test suite that ensures the proper functioning of the algorithms. We also give an extended example from the realm of physics that demonstrates the functionalities of BAT.jl.
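BAT.jl itself exposes a Julia API, which is not reproduced here. As an illustration of the core machinery such packages implement, the following is a minimal Python sketch of random-walk Metropolis sampling, the simplest MCMC algorithm in this family; the target density and tuning parameters are illustrative assumptions, not BAT.jl's interface.

```python
import numpy as np

def metropolis_hastings(log_density, x0, n_samples=10_000, step=0.5, seed=0):
    """Random-walk Metropolis sampler for a 1-D log target density."""
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    x, logp = x0, log_density(x0)
    for i in range(n_samples):
        x_new = x + step * rng.standard_normal()
        logp_new = log_density(x_new)
        # Accept with probability min(1, p_new / p_old), in log space.
        if np.log(rng.random()) < logp_new - logp:
            x, logp = x_new, logp_new
        samples[i] = x
    return samples

# Toy posterior: a standard normal density.
draws = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0)
print(draws.mean(), draws.std())  # approximately 0 and 1
```

Production samplers such as those in BAT.jl layer adaptive tuning, multiple chains and convergence diagnostics on top of this basic accept/reject loop.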

2019 ◽  
Author(s):  
Jan Sprenger

The replication crisis poses an enormous challenge to the epistemic authority of science and the logic of statistical inference in particular. Two prominent features of Null Hypothesis Significance Testing (NHST) arguably contribute to the crisis: the lack of guidance for interpreting non-significant results and the impossibility of quantifying support for the null hypothesis. In this paper, I argue that popular alternatives to NHST, such as confidence intervals and Bayesian inference, also fail to deliver a satisfactory logic for evaluating hypothesis tests. As an alternative, I motivate and explicate the concept of corroboration of the null hypothesis. Finally, I show how degrees of corroboration give an interpretation to non-significant results, combat publication bias and mitigate the replication crisis.
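Sprenger's corroboration measure is not given in the abstract. As a contrast case, the following Python sketch shows one standard way support for a null hypothesis can be quantified, via a Bayes factor for a point null against a uniform alternative in a binomial model; the data values are illustrative assumptions.

```python
import numpy as np
from scipy.special import betaln

def bf01_binomial(k, n, theta0=0.5):
    """Bayes factor for H0: theta = theta0 against H1: theta ~ Uniform(0, 1).
    The binomial coefficient cancels in the ratio, so it is omitted."""
    log_m0 = k * np.log(theta0) + (n - k) * np.log(1 - theta0)
    log_m1 = betaln(k + 1, n - k + 1)  # integral of the binomial kernel
    return np.exp(log_m0 - log_m1)

# 52 heads in 100 flips: the data mildly support the fair-coin null.
print(bf01_binomial(52, 100))  # BF01 > 1 quantifies evidence *for* H0
```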


2021 ◽  
Author(s):  
Alexander Kanonirov ◽  
Ksenia Balabaeva ◽  
Sergey Kovalchuk

The relevance of this study lies in improving the understanding of machine learning models. We present a method for interpreting clustering results and apply it to the case of clinical pathways modeling. The method is based on statistical inference and yields a description of the clusters by determining the influence of particular features on the differences between them. Based on the proposed approach, it is possible to identify the characteristic features of each cluster. Finally, we compare the method with a Bayesian inference explanation and with the interpretation of medical experts [1].
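The abstract does not spell out the exact statistical test used. A common realization of the idea, sketched below in Python under that assumption, compares each feature's distribution inside a cluster against the remaining data with a two-sample test and flags significant features as characteristic of that cluster.

```python
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

# Synthetic stand-in data: two well-separated groups, three features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(3, 1, (100, 3))])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# For each cluster, test each feature: cluster members vs. everyone else.
for c in np.unique(labels):
    for j in range(X.shape[1]):
        inside, outside = X[labels == c, j], X[labels != c, j]
        _, p = stats.mannwhitneyu(inside, outside)
        if p < 0.01:
            print(f"cluster {c}: feature {j} is characteristic (p={p:.1e})")
```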


Author(s):  
Jan Sprenger

Bayesianism and frequentism are the two grand schools of statistical inference, divided by fundamentally different philosophical assumptions and mathematical methods. Bayesian inference models the subjective credibility of a hypothesis given a body of evidence, whereas frequentists focus on the reliability of inferential procedures. This chapter gives an overview of the principles, varieties and criticisms of Bayesianism and frequentism; compares the two schools, including an examination of Deborah Mayo's account of frequentism, an innovative proposal that makes degrees of severity the crucial concept; and applies them to salient topics in scientific inference, such as p-values, confidence intervals and optional stopping.
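Optional stopping is a natural place to watch the two schools diverge. As an illustrative sketch (not taken from the chapter), the following Python simulation shows how repeatedly peeking at accumulating data and stopping at the first significant p-value inflates the frequentist false-positive rate well beyond the nominal level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, looks, alpha = 2000, range(10, 101, 10), 0.05

false_positives = 0
for _ in range(n_sims):
    data = rng.standard_normal(100)  # H0 is true: mean = 0
    # Peek after every 10 observations; stop at the first p < alpha.
    if any(stats.ttest_1samp(data[:n], 0).pvalue < alpha for n in looks):
        false_positives += 1

print(false_positives / n_sims)  # well above the nominal 0.05
```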


1994 ◽  
Vol 88 (2) ◽  
pp. 412-423 ◽  
Author(s):  
Bruce Western ◽  
Simon Jackman

Regression analysis in comparative research suffers from two distinct problems of statistical inference. First, because the data constitute all the available observations from a population, conventional inference based on the long-run behavior of a repeatable data mechanism is not appropriate. Second, the small and collinear data sets of comparative research yield imprecise estimates of the effects of explanatory variables. We describe a Bayesian approach to statistical inference that provides a unified solution to these two problems. This approach is illustrated in a comparative analysis of unionization.
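The abstract does not give the authors' exact model. A minimal sketch of the general approach, assuming a conjugate Gaussian prior on the regression coefficients, shows how prior information yields a ridge-like posterior mean that stays stable where ordinary least squares breaks down on small, collinear data.

```python
import numpy as np

def bayes_linreg(X, y, tau=1.0, sigma=1.0):
    """Posterior mean and covariance of beta under
    beta ~ N(0, tau^2 I) and y ~ N(X beta, sigma^2 I)."""
    d = X.shape[1]
    precision = X.T @ X / sigma**2 + np.eye(d) / tau**2
    cov = np.linalg.inv(precision)
    mean = cov @ X.T @ y / sigma**2
    return mean, cov

# Small, collinear design: two nearly identical regressors, 15 cases.
rng = np.random.default_rng(2)
x1 = rng.standard_normal(15)
X = np.column_stack([x1, x1 + 0.01 * rng.standard_normal(15)])
y = X @ np.array([1.0, 1.0]) + 0.5 * rng.standard_normal(15)

print(np.linalg.lstsq(X, y, rcond=None)[0])  # OLS: unstable under collinearity
print(bayes_linreg(X, y, tau=1.0)[0])        # posterior mean: shrunk, stable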


Acta Acustica ◽  
2021 ◽  
Vol 5 ◽  
pp. 45
Author(s):  
Glen McLachlan ◽  
Piotr Majdak ◽  
Jonas Reijniers ◽  
Herbert Peremans

Over the decades, Bayesian statistical inference has become a staple technique for modelling human multisensory perception. Many studies have successfully shown how sensory and prior information can be combined to optimally interpret our environment. Because of the multiple sound localisation cues available in the binaural signal, sound localisation models based on Bayesian inference are a promising way of explaining behavioural human data. An interesting aspect is the consideration of dynamic localisation cues obtained through self-motion. Here we provide a review of recent developments in modelling dynamic sound localisation, with a particular focus on Bayesian inference. Further, we describe a theoretical Bayesian framework capable of modelling dynamic and active listening situations in humans in a static auditory environment. To demonstrate its potential for future implementations, we provide results from two simplified versions of that framework.
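The authors' framework is not reproduced here. As a toy illustration of the underlying principle, the following Python sketch combines a single noisy direction cue with a prior over azimuth on a discrete grid; the cue model, prior width and noise level are all assumptions for illustration.

```python
import numpy as np

azimuth = np.linspace(-90, 90, 181)           # candidate directions (degrees)
prior = np.exp(-0.5 * (azimuth / 30) ** 2)    # prior favouring frontal directions
prior /= prior.sum()

def localise(cue_obs, cue_noise=10.0):
    """Posterior over azimuth given one noisy direction cue (toy stand-in
    for a binaural cue such as the interaural time difference)."""
    likelihood = np.exp(-0.5 * ((cue_obs - azimuth) / cue_noise) ** 2)
    posterior = likelihood * prior
    return posterior / posterior.sum()

post = localise(cue_obs=45.0)
print(azimuth[np.argmax(post)])  # MAP estimate, pulled slightly toward the prior
```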


2020 ◽  
Author(s):  
Noah N'Djaye Nikolai van Dongen ◽  
Eric-Jan Wagenmakers ◽  
Jan Sprenger

A tradition that goes back to Karl R. Popper assesses the value of a statistical test primarily by its severity: was it an honest and stringent attempt to prove the theory wrong? For "error statisticians" such as Deborah Mayo (1996, 2018), and frequentists more generally, severity is a key virtue in hypothesis tests. Conversely, failure to incorporate severity into statistical inference, as allegedly happens in Bayesian inference, counts as a major methodological shortcoming. Our paper pursues a double goal: First, we argue that the error-statistical explication of severity has substantive drawbacks (i.e., neglect of research context; lack of connection to specificity of predictions; problematic similarity of degrees of severity to one-sided p-values). Second, we argue that severity matters for Bayesian inference via the value of specific, risky predictions: severity boosts the expected evidential value of a Bayesian hypothesis test. We illustrate severity-based reasoning in Bayesian statistics by means of a practical example and discuss its advantages and potential drawbacks.
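The second claim can be made concrete with a small simulation (an illustrative sketch, not the authors' example): when the alternative makes a specific, risky prediction that happens to be right, the expected log Bayes factor against the null is much larger than under a vague alternative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, sigma, true_mu, sims = 20, 1.0, 0.5, 5000

def log_bf10(xbar, prior_mean, prior_sd):
    """log BF10 for H0: mu = 0 vs H1: mu ~ N(prior_mean, prior_sd^2),
    with known sigma; xbar is the sample mean of n observations."""
    se = sigma / np.sqrt(n)
    log_m1 = stats.norm.logpdf(xbar, prior_mean, np.hypot(prior_sd, se))
    log_m0 = stats.norm.logpdf(xbar, 0.0, se)
    return log_m1 - log_m0

xbars = rng.normal(true_mu, sigma / np.sqrt(n), sims)  # H1 is in fact true
print(np.mean(log_bf10(xbars, 0.5, 0.1)))  # specific, risky H1: strong evidence
print(np.mean(log_bf10(xbars, 0.0, 2.0)))  # vague H1: much weaker evidence
```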


2019 ◽  
Author(s):  
Jean-Philippe Bernardy ◽  
Rasmus Blanck ◽  
Stergios Chatzikyriakidis ◽  
Shalom Lappin ◽  
Aleksandre Maskharashvili


Author(s):  
Jan Sprenger ◽  
Stephan Hartmann

Subjective Bayesianism is often criticized for a lack of objectivity: (i) it opens the door to the influence of values and biases, (ii) evidence judgments can vary substantially between scientists, (iii) it is not suited for informing policy decisions. We rebut these concerns by bridging the debates on scientific objectivity and Bayesian inference in statistics. First, we show that the above concerns arise equally for frequentist statistical inference. Second, we argue that the involved senses of objectivity are epistemically inert. Third, we show that Subjective Bayesianism promotes other, epistemically relevant senses of scientific objectivity—most notably by increasing the transparency of scientific reasoning.


1976 ◽  
Vol 8 (7) ◽  
pp. 741-752 ◽  
Author(s):  
E S Sheppard

The framework of Bayesian inference is proposed as a structure for unifying those highly disparate approaches to entropy modelling that have appeared in geography to date, and is used to illuminate the possibilities and shortcomings of some of these models. The inadequacy of most descriptive entropy statistics for measuring the information in a spatially autocorrelated map is described. The contention that entropy maximization in itself provides theoretical justification for spatial models is critically evaluated. It is concluded that entropy should, first and foremost, be regarded as a technique for expanding our methods of statistical inference and hypothesis testing, rather than a tool for theory construction.
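To make the inferential reading of entropy concrete, the following Python sketch (an illustration, not from the paper) derives the maximum-entropy distribution of trips over zones subject to a fixed mean travel cost, which yields the familiar exponential, gravity-model form; the cost values and target mean are assumed.

```python
import numpy as np
from scipy.optimize import brentq

costs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # travel costs of five zones
target_mean = 2.0                              # observed mean trip cost

def probs(beta):
    w = np.exp(-beta * costs)
    return w / w.sum()

# Maximum entropy subject to a mean-cost constraint gives
# p_i proportional to exp(-beta * c_i); solve for beta numerically.
beta = brentq(lambda b: probs(b) @ costs - target_mean, 0.0, 10.0)
p = probs(beta)
print(p, -(p * np.log(p)).sum())  # trip shares and their Shannon entropy
```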

