Bayesian Data Analysis
Recently Published Documents

TOTAL DOCUMENTS: 137 (five years: 47)
H-INDEX: 16 (five years: 3)

2021
Author(s): Derek Powell

Bayesian theories of cognitive science hold that cognition is fundamentally probabilistic, but people’s explicit probability judgments often violate the laws of probability. Two recent proposals, the “Probability Theory plus Noise” (Costello & Watts, 2014) and “Bayesian Sampler” (Zhu et al., 2020) theories of probability judgments, both seek to account for these biases while maintaining that mental credences are fundamentally probabilistic. These theories fit quite differently into the larger project of Bayesian cognitive science, but their many similarities complicate comparisons of their predictive accuracy. In particular, comparing the models demands a careful accounting of model complexity. Here, I cast these theories into a Bayesian data analysis framework that supports principled model comparison using information criteria. Comparing the fits of both models on data collected by Zhu and colleagues (2020), I find the data are best explained by a modified version of the Bayesian Sampler model under which people may hold informative priors about probabilities.
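
As a concrete illustration of the two models being compared: in their standard formulations, PT+N assumes each mental sample is misread with probability d, while the Bayesian Sampler regularizes the sample proportion toward a symmetric Beta(β, β) prior. A minimal numpy sketch of both judgment processes (parameter values are illustrative, not fitted):

```python
import numpy as np

rng = np.random.default_rng(1)

def pt_plus_noise(p, d, n_samples, size):
    """Probability Theory plus Noise: each mental sample is read
    correctly with probability 1-d and flipped with probability d."""
    samples = rng.random((size, n_samples)) < p
    flips = rng.random((size, n_samples)) < d
    return np.mean(np.where(flips, ~samples, samples), axis=1)

def bayesian_sampler(p, beta, n_samples, size):
    """Bayesian Sampler: count successes in N samples, then
    regularize with a symmetric Beta(beta, beta) prior."""
    k = rng.binomial(n_samples, p, size)
    return (k + beta) / (n_samples + 2 * beta)

p = 0.8
print(pt_plus_noise(p, d=0.1, n_samples=10, size=100_000).mean())        # ~ (1-2d)p + d = 0.74
print(bayesian_sampler(p, beta=1.0, n_samples=10, size=100_000).mean())  # ~ (10*0.8 + 1)/12 = 0.75
```

Both processes yield expected judgments that are linear in the underlying credence p, which is one reason the models are so hard to distinguish and why a complexity-aware comparison is needed.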


Author(s): Rolf Behrens, Hayo Zutz, Julian Busse

Abstract: The energy distribution (spectrum) of pulsed photon radiation can hardly be measured with active devices. Therefore, a thermoluminescence detector (TLD)-based few-channel spectrometer is used in combination with Bayesian data analysis to resolve this problem. The spectrometer consists of 30 TLD layers interspaced with absorbers made of plastics and of metals of increasing atomic number and thickness. The main idea behind the device is that, for radiation impinging perpendicular to the front of the spectrometer, the deeper the radiation penetrates, the higher its energy. From the doses measured in the TLD layers and from further prior information, the photon spectrum is deduced using a Bayesian data analysis, yielding absolute spectra and doses including their uncertainties and coverage intervals. The spectrometer was successfully used in two different scenarios: first, for spectrometry of the radiation fields of two different industrial-type open-beam pulsed X-ray generators, and second, in three different radiation fields of a medical accelerator.
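
The unfolding step can be pictured as inferring a spectrum φ from layer doses d ≈ Rφ, where the response matrix R encodes how deeply each energy penetrates. The paper's actual response functions and priors are not reproduced here; the following toy numpy sketch (all shapes and numbers invented) only illustrates the inference pattern of recovering a spectrum, with uncertainty intervals, from depth-dose readings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy response matrix: deeper layers respond mostly to higher energies
# (penetration increases with photon energy). Shapes are illustrative.
n_layers, n_bins = 8, 5
depth = np.arange(1, n_layers + 1)[:, None]          # layer depth index
mu = np.array([2.0, 1.2, 0.7, 0.4, 0.25])[None, :]   # attenuation per energy bin
R = np.exp(-mu * depth)                               # dose per unit fluence

phi_true = np.array([0.1, 0.3, 0.4, 0.15, 0.05])      # "true" spectrum
sigma = 0.01
d_meas = R @ phi_true + rng.normal(0, sigma, n_layers)

def log_post(phi):
    if np.any(phi < 0):
        return -np.inf                                 # nonnegativity prior
    resid = d_meas - R @ phi
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis over the spectrum
phi = np.full(n_bins, 0.2)
lp = log_post(phi)
draws = []
for it in range(50_000):
    prop = phi + rng.normal(0, 0.02, n_bins)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        phi, lp = prop, lp_prop
    if it > 10_000 and it % 10 == 0:
        draws.append(phi.copy())

draws = np.array(draws)
print("posterior mean:", draws.mean(axis=0).round(3))
print("95% intervals:", np.percentile(draws, [2.5, 97.5], axis=0).round(3))
```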


2021
Author(s): Michael Höfler

Bayesian data analysis allows a researcher to assess whether a claim about an effect (e.g. effect > 0, effect > Δ, |effect| < Δ) is justified given the data and a prior distribution expressing his or her personal belief before seeing the data. However, the recipients of the analysis might use different priors, so it remains unclear whether they would share the claim. "Reverse Bayes" analysis and the "sufficiently sceptical prior" address this problem by asking how strongly one may believe in the absence of an effect and still be convinced otherwise by the data. A method called "Region of Evidence" is presented that takes this idea and extends it to any normal prior (and a normally distributed estimate). It visualises all the priors that, had they been used, would support the claim, including those that favour a positive or negative effect. Since the method depends only on an estimate and its standard error, it can easily be applied to previously published results. The paper describes the method and its implementation in a new Stata command called arevi, which can be freely used and modified.
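
Because the method needs only an estimate, its standard error, and a normal prior, the underlying computation is simple conjugate updating. arevi itself is a Stata command; the sketch below re-implements the core idea in Python for the illustrative claim "effect > 0" with a 95% posterior-probability threshold (both are assumptions for the example, not taken from the paper):

```python
import numpy as np
from scipy.stats import norm

def supports_claim(b, se, mu0, tau, threshold=0.95):
    """Posterior P(effect > 0) for a normal prior N(mu0, tau^2)
    combined with a normal estimate b (standard error se)."""
    w_data, w_prior = 1 / se**2, 1 / tau**2
    post_var = 1 / (w_data + w_prior)
    post_mean = post_var * (w_data * b + w_prior * mu0)
    return norm.sf(0, loc=post_mean, scale=np.sqrt(post_var)) >= threshold

# Published result: estimate 0.4 with standard error 0.15 (illustrative)
b, se = 0.4, 0.15
mu0_grid = np.linspace(-1, 1, 201)    # prior means, incl. sceptical ones
tau_grid = np.linspace(0.01, 1, 200)  # prior standard deviations
region = np.array([[supports_claim(b, se, m, t) for m in mu0_grid]
                   for t in tau_grid])
print(f"{region.mean():.0%} of the scanned priors support 'effect > 0'")
```

Plotting `region` over the (mu0, tau) grid gives the visualisation the abstract describes: the set of all priors under which the claim would be supported.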


2021
Author(s): Guilherme D. Garcia, Ronaldo Mangueira Lima Jr

In this paper, we introduce the basics of Bayesian data analysis and demonstrate how to run a regression model in R using linguistic data. Throughout the paper, we compare Bayesian and frequentist statistics, highlighting the advantages of a Bayesian approach. We also show how to run a simple model and how to visualize effects of interest. Finally, we suggest additional readings for those interested in Bayesian analysis more generally.
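
The paper demonstrates the workflow in R; as a language-agnostic illustration of the same idea, here is a minimal conjugate Bayesian regression in Python on simulated data (variable names, priors, and the known-noise assumption are all illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated "linguistic" data: vowel duration (ms) as a function of
# speech rate (z-scored). Names and numbers are invented.
n = 100
rate = rng.normal(0, 1, n)
duration = 120 - 8 * rate + rng.normal(0, 10, n)

# Conjugate Bayesian linear regression with a known noise sd (sigma)
# and independent N(0, tau^2) priors on intercept and slope.
X = np.column_stack([np.ones(n), rate])
sigma, tau = 10.0, 50.0
prior_prec = np.eye(2) / tau**2
post_cov = np.linalg.inv(X.T @ X / sigma**2 + prior_prec)
post_mean = post_cov @ (X.T @ duration) / sigma**2

slope_mean = post_mean[1]
slope_sd = np.sqrt(post_cov[1, 1])
print(f"slope: posterior mean {slope_mean:.2f}, "
      f"95% credible interval [{slope_mean - 1.96*slope_sd:.2f}, "
      f"{slope_mean + 1.96*slope_sd:.2f}]")
```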


2021
Author(s): Shravan Vasishth, Himanshu Yadav, Daniel Schad, Bruno Nicenboim

Although Bayesian data analysis has the great advantage that one need not specify the sample size in advance of running an experiment, there are nevertheless situations where it becomes necessary to have at least an initial ballpark estimate for a target sample size; grant applications are one example. In this paper, we adapt the simulation-based method proposed by Wang and Gelfand (2002; "A simulation-based approach to Bayesian sample size determination for performance under a given model and for separating models," Statistical Science, 193-208) for a Bayes-factor-based design analysis. We demonstrate how relatively complex hierarchical models (which are commonly used in psycholinguistics) can be used to determine approximate sample sizes for planning experiments. The code is available for researchers to adapt for their own purposes and applications at https://osf.io/hjgrm/.
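
The flavor of the Wang-and-Gelfand approach can be shown on a deliberately simple model: assume a true effect, simulate many datasets at each candidate sample size, and record how often the Bayes factor clears a decision threshold. The sketch below uses a normal-mean model with known σ, where the Bayes factor has a closed form (the paper itself uses hierarchical psycholinguistic models fit with Stan; all numbers here are illustrative):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def bf10(xbar, n, sigma=1.0, tau=0.5):
    """Bayes factor for H1: mu ~ N(0, tau^2) vs H0: mu = 0, with known
    sigma; both marginal likelihoods are normal densities in xbar."""
    m1 = norm.pdf(xbar, 0, np.sqrt(tau**2 + sigma**2 / n))
    m0 = norm.pdf(xbar, 0, np.sqrt(sigma**2 / n))
    return m1 / m0

# Assumed true effect and design criterion (illustrative numbers)
mu_true, sigma, n_sims = 0.25, 1.0, 5_000
for n in (50, 100, 200, 400):
    xbars = rng.normal(mu_true, sigma / np.sqrt(n), n_sims)
    power = np.mean(bf10(xbars, n, sigma) > 10)
    print(f"n = {n:4d}: P(BF10 > 10) = {power:.2f}")
```

The target sample size is then the smallest n for which the simulated probability of conclusive evidence reaches the desired level.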


2021
Author(s): Ian Barr

The microscopic rate constants that govern an enzymatic reaction can be measured directly only in certain experimental set-ups, such as stopped-flow, continuous-flow, or temperature-jump assays; most enzymology proceeds from steady-state conditions, which lead to a set of easily observable parameters such as kcat, KM, and observed kinetic isotope effects (ᴰkcat). This paper further develops a model to estimate microscopic rate constants from steady-state data for a set of reversible, four-step reactions. It uses the Bayesian modeling software Stan and demonstrates the benefits of Bayesian data analysis in estimating these rate constants. In contrast to the optimization methods often employed to estimate kinetic constants, a Bayesian treatment is better equipped to quantify the uncertainty of each parameter: sampling from the posterior distribution using Hamiltonian Monte Carlo immediately gives parameter estimates as the mean or median of the posterior, along with credible intervals that express each parameter's uncertainty.
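
The paper fits a reversible four-step mechanism in Stan; the sketch below uses a simpler two-step Michaelis-Menten stand-in (E + S ⇌ ES → E + P, where kcat = k2 and KM = (k_off + k2)/k_on) to show how posterior sampling quantifies which rate constants steady-state data do and do not pin down. All values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Observables from steady state: kcat = k2, KM = (k_off + k2) / k_on.
kcat_obs, km_obs = 50.0, 2.0e-4   # illustrative measurements
rel_err = 0.10                    # assumed 10% relative error

def log_post(log_k):
    k_on, k_off, k2 = np.exp(log_k)
    kcat, km = k2, (k_off + k2) / k_on
    # Lognormal measurement model on both observables
    ll = (-0.5 * (np.log(kcat / kcat_obs) / rel_err) ** 2
          - 0.5 * (np.log(km / km_obs) / rel_err) ** 2)
    lp = -0.5 * np.sum(((log_k - np.log([1e6, 1e3, 1e2])) / 3.0) ** 2)  # broad priors
    return ll + lp

# Random-walk Metropolis in log space (a stand-in for the paper's HMC)
log_k = np.log([1e6, 1e3, 1e2])
lp = log_post(log_k)
draws = []
for it in range(60_000):
    prop = log_k + rng.normal(0, 0.1, 3)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        log_k, lp = prop, lp_prop
    if it > 10_000:
        draws.append(log_k.copy())

draws = np.exp(np.array(draws))
for name, col in zip(["k_on", "k_off", "k2"], draws.T):
    lo, hi = np.percentile(col, [2.5, 97.5])
    print(f"{name}: median {np.median(col):.3g}, 95% CrI [{lo:.3g}, {hi:.3g}]")
```

In this toy posterior, k2 is tightly constrained (it equals the observed kcat), while k_on and k_off are only jointly constrained through KM, so their marginal intervals stay wide: exactly the kind of uncertainty structure that an optimizer's point estimate would hide.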


2021
Author(s): Minh-Hoang Nguyen

Given the reproducibility crisis (or replication crisis), more psychologists and social-cultural scientists are becoming involved with Bayesian inference. The current article therefore provides a brief overview of programs (or software) for Bayesian data analysis and the steps needed to conduct it in the social sciences.


2021
Author(s): Todd E. Hudson

This textbook bypasses the need for advanced mathematics by providing in-text computer code, allowing students to explore Bayesian data analysis without the calculus background normally considered a prerequisite for this material. The approach goes beyond "frequentist" concepts of p-values and null hypothesis testing, using the full power of modern probability theory to solve real-world problems. The book offers a fully self-contained course and demonstrates analysis techniques throughout with worked examples crafted specifically for students in the behavioral and neural sciences. It presents two general algorithms that help students solve the measurement and model-selection (also called "hypothesis testing") problems most frequently encountered in real-world applications.

