Bayesian Statistical Methods in Psychology

Psychology ◽  
2021 ◽  
Author(s):  
Sarah Depaoli

The use of Bayesian statistics within psychology is on the rise, and this trajectory will likely continue to accelerate in the coming years. There are many different reasons why a researcher may want to implement Bayesian methodology. First, there are cases where models are too “complex” for traditional (frequentist) methods to handle. Second, Bayesian methods are sometimes preferred if only small samples are available, since the use of priors can improve estimation accuracy with minimal data. Third, the researcher may prefer to include background information in the estimation process, and this can be done via the priors. Finally, Bayesian methods produce results that are rich with detail and can be more informative about the population parameters. Specifically, information surrounding the entire posterior distribution is provided through Bayesian estimation, as opposed to a point estimate obtained through traditional (frequentist) methods. All of these reasons make Bayesian methods attractive to the psychological sciences. This bibliography begins with a section on General Overviews, which presents works that provide general introductions to Bayesian methods. A subsection within this overview section covers Papers Introducing Bayesian Methods to Subfields in Psychology, and a second subsection includes Resources for Particular Model Types Popular in Psychological Research. Next, some of the more comprehensive Bayesian Textbooks are presented, and this is followed by a treatment of the Philosophy that underlies Bayesian statistics. The next section is Markov Chain Monte Carlo and Samplers. One of the most common tools for Bayesian estimation is the Markov chain Monte Carlo (MCMC) algorithm. MCMC is used to construct chains through samplers, and these chains represent draws from the posterior. A subsection on Convergence is included here to highlight the importance of assessing Markov chain convergence. 
This is followed by a section on Prior Distributions, which includes subsections on Expert Elicitation of Priors and the Data-Prior Conflict. A section on Software Resources is presented, which covers some of the main software programs implementing Bayesian statistical modeling. Finally, a section on Model Assessment and Fit is presented. Each of these sections and subsections was selected to highlight an understanding of Bayesian statistics, the role it plays in psychology, and proper implementation.
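The ideas the abstract touches on — an informative prior, a small sample, and an MCMC sampler whose chain of draws represents the posterior — can be illustrated with a minimal sketch. This is not any method from the bibliography itself; the data, the Normal prior, and the random-walk Metropolis sampler below are illustrative assumptions, estimating a mean with known unit variance:

```python
import math
import random

random.seed(42)

# Hypothetical small sample; goal: the posterior of the mean mu,
# assuming the observation noise has known standard deviation 1.
data = [1.2, 0.8, 1.5, 1.1, 0.9]

def log_prior(mu):
    # Informative Normal(1.0, 0.5) prior encoding background knowledge
    return -0.5 * ((mu - 1.0) / 0.5) ** 2

def log_likelihood(mu):
    return sum(-0.5 * (x - mu) ** 2 for x in data)

def log_posterior(mu):
    return log_prior(mu) + log_likelihood(mu)

def metropolis(n_draws, start=0.0, step=0.5):
    """Random-walk Metropolis: the chain's draws approximate the posterior."""
    chain, mu = [], start
    lp = log_posterior(mu)
    for _ in range(n_draws):
        proposal = mu + random.gauss(0.0, step)
        lp_prop = log_posterior(proposal)
        # Accept with probability min(1, posterior ratio)
        if math.log(random.random()) < lp_prop - lp:
            mu, lp = proposal, lp_prop
        chain.append(mu)
    return chain

chain = metropolis(5000)
burned = chain[1000:]  # discard burn-in before summarizing
posterior_mean = sum(burned) / len(burned)
```

Unlike a frequentist point estimate, the chain gives the whole posterior: any quantile, interval, or density summary can be computed directly from `burned`, and the prior visibly pulls the estimate toward 1.0 when the sample is small.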

Geophysics ◽  
2019 ◽  
Vol 84 (6) ◽  
pp. R1003-R1020 ◽  
Author(s):  
Georgia K. Stuart ◽  
Susan E. Minkoff ◽  
Felipe Pereira

Bayesian methods for full-waveform inversion allow quantification of uncertainty in the solution, including determination of interval estimates and posterior distributions of the model unknowns. Markov chain Monte Carlo (MCMC) methods produce posterior distributions subject to fewer assumptions, such as normality, than deterministic Bayesian methods. However, MCMC is a computationally expensive process that requires repeated solution of the wave equation for different velocity samples, and a large proportion of these samples (often 40%–90%) is ultimately rejected. We have evaluated a two-stage MCMC algorithm that uses a coarse-grid filter to quickly reject unacceptable velocity proposals, thereby reducing the computational expense of solving the velocity inversion problem and quantifying uncertainty. Our filter stage uses operator upscaling, which provides near-perfect speedup in parallel with essentially no communication between processes and produces data that are highly correlated with those obtained from the full fine-grid solution. Four numerical experiments demonstrate the efficiency and accuracy of the method. The two-stage MCMC algorithm produces the same results (i.e., posterior distributions and uncertainty information, such as medians and highest posterior density intervals) as the Metropolis-Hastings MCMC. Thus, no information needed for uncertainty quantification is compromised when replacing the one-stage MCMC with the more computationally efficient two-stage MCMC. In four representative experiments, the two-stage method reduces the time spent on rejected models by one-third to one-half, which is important because most of the models tried during the course of the MCMC algorithm are rejected. Furthermore, the two-stage MCMC algorithm reduces the overall time per trial by as much as 40% while increasing the acceptance rate from 9% to 90%.
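The core of the two-stage idea — screen each proposal with a cheap approximate posterior, and only run the expensive fine-grid evaluation on proposals that survive the first stage, with a second-stage acceptance ratio that corrects the approximation so the chain still targets the true posterior — can be sketched in a few lines. This is a generic illustration, not the authors' implementation: the two one-dimensional toy densities below stand in for the wave-equation solves, and the acceptance rule follows the standard two-stage (delayed-acceptance) construction:

```python
import math
import random

random.seed(0)

# Stand-ins: the "fine" log posterior represents the expensive forward
# model; the "coarse" one is a cheap approximation used only as a filter.
def log_post_fine(x):
    return -0.5 * x * x        # pretend this needs a full fine-grid solve

def log_post_coarse(x):
    return -0.55 * x * x       # cheap coarse-grid approximation

def two_stage_step(x, step=1.0):
    """One two-stage Metropolis step; returns (new_state, fine_solve_used)."""
    y = x + random.gauss(0.0, step)
    # Stage 1: accept/reject on the coarse model (symmetric proposal),
    # so clearly bad proposals are rejected without a fine solve.
    a1 = log_post_coarse(y) - log_post_coarse(x)
    if math.log(random.random()) >= a1:
        return x, False
    # Stage 2: correct with the fine model; the ratio divides out the
    # coarse approximation, so the chain targets the true posterior.
    a2 = (log_post_fine(y) - log_post_fine(x)) - a1
    if math.log(random.random()) < a2:
        return y, True
    return x, True

x, fine_solves, chain = 0.0, 0, []
for _ in range(5000):
    x, used_fine = two_stage_step(x)
    fine_solves += used_fine
    chain.append(x)
```

The savings come from `fine_solves` being well below the number of trials: proposals the coarse filter rejects never touch the expensive model, while the stage-2 correction keeps the posterior (and hence medians and highest-posterior-density intervals) unchanged.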

