frequentist statistics
Recently Published Documents

TOTAL DOCUMENTS: 44 (five years: 23)
H-INDEX: 6 (five years: 3)

Author(s): Danilo Quagliotti

Abstract: The assessment of systematic behavior based on frequentist statistics was analyzed in the context of micro/nano metrology. The proposed method agrees with the well-known GUM recommendations. The investigation covered three case studies, with definition of the model equations and establishment of traceability. Systematic behavior was modeled for Sq roughness parameters and step-height measurements obtained from different types of optical microscopes, in comparison with a calibrated contact instrument. The sequence of case studies demonstrated that the method applies to micrographs when their elements are averaged. Moreover, several influence factors that typically cause inaccuracy at the micro and nano length scales were analyzed in relation to the correction of systematic behavior, viz. the number of repeated measurements, the time sequence of the acquired micrographs, and the instrument-operator chain. Applying the method individually to the elements of the micrographs, by contrast, proved inconvenient and too onerous for industry. Finally, the method was also examined against the framework of the metrological characteristics defined in ISO 25178-600, with hints at possible future developments.
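As a rough illustration of the GUM-style correction the abstract alludes to, the sketch below subtracts an estimated bias from a measured mean and combines the repeatability and bias uncertainties in quadrature. The step-height numbers and the coverage factor k = 2 are hypothetical, not taken from the study.

```python
import math

def correct_systematic(measured_mean, bias_estimate, u_repeat, u_bias, k=2.0):
    """Correct a measured mean for an estimated systematic error (bias) and
    combine uncertainties GUM-style: corrected = measured - bias,
    u_c = sqrt(u_repeat^2 + u_bias^2), expanded uncertainty U = k * u_c."""
    corrected = measured_mean - bias_estimate
    u_c = math.sqrt(u_repeat**2 + u_bias**2)
    return corrected, u_c, k * u_c

# Hypothetical step-height example: the optical microscope reads 101.2 nm,
# and comparison with a calibrated contact instrument suggests a +1.0 nm bias.
val, u_c, U = correct_systematic(101.2, 1.0, u_repeat=0.4, u_bias=0.3)
```

With these invented numbers the corrected value is 100.2 nm with a combined standard uncertainty of 0.5 nm (expanded: 1.0 nm at k = 2).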


2021
Author(s): Catriona Silvey, Zoltan Dienes, Elizabeth Wonnacott

In psychology, we often want to know whether or not an effect exists. The traditional way of answering this question is to use frequentist statistics. However, a significance test against a null hypothesis of no effect cannot distinguish between two states of affairs: evidence of absence of an effect, and absence of evidence for or against an effect. Bayes factors can make this distinction; however, uptake of Bayes factors in psychology has so far been low, for two reasons. First, they require researchers to specify the range of effect sizes their theory predicts. Researchers are often unsure how to do this, leading to the use of inappropriate default values that may give misleading results. Second, many implementations of Bayes factors have a substantial technical learning curve. We present a case study and simulations demonstrating a simple method for generating a range of plausible effect sizes from the output of frequentist mixed-effects models. Bayes factors calculated using these estimates give intuitively reasonable results across a range of real effect sizes. The approach offers a principled way to estimate effect sizes and produces results comparable to a state-of-the-art method without requiring researchers to learn new statistical software.
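The effect-size-to-Bayes-factor route described above can be sketched with a normal approximation (in the style of the calculator popularized by Dienes): the frequentist model supplies an effect estimate and standard error, and the theory-motivated prior on the effect is a normal distribution centered on zero. All numbers below are hypothetical, and Python stands in for the authors' actual tooling.

```python
import math

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def bf10_normal(effect, se, prior_sd):
    """Bayes factor for H1: delta ~ Normal(0, prior_sd) vs H0: delta = 0,
    using a normal approximation to the likelihood of the estimated effect.
    The marginal likelihood under H1 is Normal(0, sqrt(se^2 + prior_sd^2))."""
    m_h1 = normal_pdf(effect, 0.0, math.sqrt(se**2 + prior_sd**2))
    m_h0 = normal_pdf(effect, 0.0, se)
    return m_h1 / m_h0

# Hypothetical mixed-model output: effect = 0.30, SE = 0.10, with a
# plausible-effect-size prior SD of 0.25 derived from frequentist estimates.
bf = bf10_normal(effect=0.30, se=0.10, prior_sd=0.25)
```

With these numbers the Bayes factor is about 18, i.e. the data are roughly eighteen times more likely under H1 than under the point null.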


2021, pp. 165-180
Author(s): Timothy E. Essington

The chapter “Bayesian Statistics” gives a brief overview of the Bayesian approach to statistical analysis. It starts off by examining the difference between frequentist statistics and Bayesian statistics. Next, it introduces Bayes’ theorem and explains how the theorem is used in statistics and model selection, with the prosecutor’s fallacy given as a practice example. The chapter then goes on to discuss priors and Bayesian parameter estimation. It concludes with some final thoughts on Bayesian approaches. The chapter does not answer the question “Should ecologists become Bayesian?” However, to the extent that alternative models can be posed as alternative values of parameters, Bayesian parameter estimation can help assign probabilities to those hypotheses.
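The prosecutor's fallacy mentioned above comes down to confusing P(evidence | innocence) with P(innocence | evidence). A minimal numeric sketch, with all numbers invented for illustration, shows how Bayes' theorem separates the two:

```python
# Bayes' theorem applied to the prosecutor's fallacy: a tiny
# P(match | innocent) does not imply a tiny P(innocent | match).
p_match_given_guilty = 1.0         # assume a guilty suspect always matches
p_match_given_innocent = 1e-6      # hypothetical random-match probability
prior_guilty = 1 / 100_000         # suspect drawn from a pool of 100,000

# Total probability of observing a match, then Bayes' theorem:
p_match = (p_match_given_guilty * prior_guilty
           + p_match_given_innocent * (1 - prior_guilty))
posterior_guilty = p_match_given_guilty * prior_guilty / p_match
posterior_innocent = 1 - posterior_guilty
```

Even though the match probability for an innocent person is one in a million, the posterior probability of innocence is about 9%, because the prior pool of possible suspects is large.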


2021
Author(s): Guilherme D. Garcia, Ronaldo Mangueira Lima Jr

In this paper, we introduce the basics of Bayesian data analysis and demonstrate how to run a regression model in R using linguistic data. Throughout the paper, we compare Bayesian and frequentist statistics, highlighting the different advantages of a Bayesian approach. We also show how to run a simple model and how to visualize effects of interest. Finally, we suggest additional readings to those interested in Bayesian analysis more generally.
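The paper's own demonstration uses R; as a language-neutral companion, here is a minimal sketch of the underlying Bayesian machinery in plain Python: a grid-approximation posterior for a regression slope with a Normal(0, 1) prior and a known noise standard deviation. The data and settings are invented for illustration.

```python
import math

# Grid-approximation posterior for the slope b of y = b * x + noise,
# with noise sd assumed known and a Normal(0, 1) prior on b.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.1, 2.3, 2.8, 4.2, 5.1]   # invented data
sigma = 0.5                       # assumed-known noise sd

def log_lik(b):
    return sum(-0.5 * ((y - b * x) / sigma) ** 2 for x, y in zip(xs, ys))

def log_prior(b):
    return -0.5 * b ** 2          # log of a Normal(0, 1) density, up to a constant

grid = [i / 1000 for i in range(-2000, 4001)]        # b in [-2, 4], step 0.001
log_post = [log_lik(b) + log_prior(b) for b in grid]
m = max(log_post)                                    # subtract max for stability
weights = [math.exp(lp - m) for lp in log_post]
total = sum(weights)
post_mean = sum(b * w for b, w in zip(grid, weights)) / total
```

Because everything here is Gaussian, the grid result closely matches the exact conjugate answer (posterior mean about 1.02 for these data); real analyses would use a sampler rather than a grid.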


2021, pp. 53-79
Author(s): Matt Grossmann

The “science wars” were resolved surprisingly quietly. Throughout the 1980s and 1990s, critics of science from humanities disciplines fought with scientists over the extent to which science is a social and biased process or a path to truth. Today, there are few absolute relativists or adherents of scientific purity and far more acknowledgment that science involves biased truth-seeking. Continuing (but less vicious) wars over Bayesian and frequentist statistics likewise ignore some key agreements: tests of scientific claims require clarifying assumptions and some way to account for confirmation bias, either by building it into the model or by establishing more severe tests for the sufficiency of evidence. This sedation was accompanied by shifts within social science disciplines. Debates over both simplistic models of human nature (especially over rational choice theory) and over what constituted proper quantitative and qualitative methods died down as nearly everyone became theoretically and methodologically pluralist in practice. I herald this evolution, pointing to its benefits in the topics we cover, the ideas we consider, the evidence we generate, and how we evaluate and integrate our knowledge.


2021, pp. 104973152110082
Author(s): Daniel J. Dunleavy, Jeffrey R. Lacasse

In this article, we offer a primer on "classical" frequentist statistics. In doing so, we aim to (1) provide social workers with a nuanced overview of common statistical concepts and tools, (2) clarify ways in which these ideas have often been misused or misinterpreted in research and practice, and (3) help social workers better understand what frequentist statistics can and cannot offer. We begin broadly, with foundational issues in the philosophy of statistics. We then outline the Fisherian and Neyman–Pearson approaches to statistical inference and the practice of null hypothesis significance testing. Next, we discuss key statistical concepts, including α, power, p values, effect sizes, and confidence intervals, exploring several common misconceptions about their use and interpretation. We close by considering some limitations of frequentist statistics and by offering an opinionated discussion of how social workers may promote more fruitful, responsible, and thoughtful statistical practice.
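As a concrete reminder of what these frequentist quantities do and do not mean, here is a minimal sketch (a z-test with known standard deviation and invented numbers): the p value is the probability of data at least this extreme assuming H0 is true, not the probability that H0 is true.

```python
import math

def z_test_two_sided(xbar, mu0, sd, n):
    """Two-sided z-test for a mean with known sd: returns z, p, and a 95% CI.
    The p value is P(|Z| >= |z| given H0), NOT P(H0 given the data)."""
    se = sd / math.sqrt(n)
    z = (xbar - mu0) / se
    # two-sided p value from the standard normal survival function
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    ci = (xbar - 1.96 * se, xbar + 1.96 * se)
    return z, p, ci

# Hypothetical sample: mean 10.4 from n = 25, testing H0: mu = 10, sd = 1.
z, p, ci = z_test_two_sided(xbar=10.4, mu0=10.0, sd=1.0, n=25)
```

Here z = 2.0 and p is about 0.046: significant at α = 0.05, and, consistently, the 95% confidence interval (roughly 10.01 to 10.79) excludes the null value 10.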


2021, Vol 8 (1), pp. 27-44
Author(s): Wim Westera

This article presents three empirical studies on the effectiveness of serious games for learning and motivation, comparing the results obtained with frequentist (classical) statistics to those obtained with Bayesian statistics. For a long time it was technically impracticable to apply Bayesian statistics and benefit from its conceptual superiority, but the emergence of automated sampling algorithms and user-friendly tools has radically simplified its use. The three studies include two within-subjects designs and one between-subjects design; unpaired t-tests, mixed factorial ANOVAs, and multiple linear regression are used for the analyses. Overall, the games are found to have clear positive effects on learning and motivation, although the Bayesian results are stricter, more informative, and conceptually advantageous. Accordingly, the paper calls for more emphasis on Bayesian statistics in serious games research and beyond, so as to reduce the present domination of the frequentist paradigm.
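One of the frequentist tools named above, the unpaired t-test, can be sketched in a few lines; the Welch variant below does not assume equal variances. The post-test scores are invented and unrelated to the article's data.

```python
import math
import statistics

def welch_t(a, b):
    """Unpaired (Welch) t statistic and Welch-Satterthwaite degrees of freedom."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    df = se2**2 / ((va / na)**2 / (na - 1) + (vb / nb)**2 / (nb - 1))
    return t, df

game = [72, 80, 77, 85, 74, 81]      # hypothetical post-test scores, game group
control = [68, 70, 75, 66, 73, 71]   # hypothetical scores, control group
t, df = welch_t(game, control)
```

For these invented scores t is about 3.24 with df near 8.8; a Bayesian reanalysis of the same comparison would additionally require a prior on the group difference.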


2020, pp. 0193841X2097761
Author(s): David Rindskopf

Because of the different philosophy of Bayesian statistics, in which parameters are random variables and data are considered fixed, the analysis and presentation of results differ from those of frequentist statistics. Most importantly, the probabilities that a parameter lies in certain regions of the parameter space are crucial quantities in Bayesian statistics that are not calculable (or considered important) in the frequentist approach underlying much of traditional statistics. In this article, I discuss the implications of these differences for the presentation of the results of Bayesian analyses. In doing so, I present more detailed guidelines than are usually provided and explain the rationale for my suggestions.
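The region probabilities the abstract refers to are easy to compute once a posterior is in hand. A minimal sketch with a conjugate normal-normal model and invented numbers:

```python
import math

def normal_posterior(prior_mu, prior_sd, xbar, se):
    """Conjugate normal-normal update: returns posterior mean and sd.
    Precisions (1/variance) add; the posterior mean is precision-weighted."""
    w = 1 / prior_sd**2 + 1 / se**2
    post_mu = (prior_mu / prior_sd**2 + xbar / se**2) / w
    return post_mu, math.sqrt(1 / w)

def prob_greater(mu, sd, c):
    """Posterior probability that the parameter exceeds c."""
    return 0.5 * (1 - math.erf((c - mu) / (sd * math.sqrt(2))))

# Hypothetical analysis: Normal(0, 1) prior, observed estimate 0.5 with SE 0.2.
mu, sd = normal_posterior(prior_mu=0.0, prior_sd=1.0, xbar=0.5, se=0.2)
p_positive = prob_greater(mu, sd, 0.0)
```

With these invented numbers the posterior mean is about 0.48 and P(parameter > 0) is about 0.99, exactly the kind of region probability a frequentist confidence interval cannot supply.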


2020
Author(s): Xenia Schmalz, José Biurrun Manresa, Lei Zhang

The use of Bayes factors is becoming increasingly common in the psychological sciences. It is therefore important that researchers understand the logic behind the Bayes factor in order to interpret it correctly, along with the strengths and weaknesses of the Bayesian approach. As education for psychological scientists focuses on frequentist statistics, resources are needed for researchers and students who want to learn more about this alternative approach. The aim of the current article is to provide such an overview for a psychological researcher. We cover the general logic behind Bayesian statistics, explain how the Bayes factor is calculated and how to set the priors in popular software packages to reflect the prior beliefs of the researcher, and finally provide a set of recommendations and caveats for interpreting Bayes factors.
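The "how the Bayes factor is calculated" step can be shown exactly in the simplest case: binomial data, a point null of theta = 0.5, and a Beta prior under H1, where both marginal likelihoods have closed forms. The counts below are invented.

```python
import math

def log_beta(a, b):
    """Log of the Beta function via log-gamma."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def bf10_binomial(k, n, a=1.0, b=1.0):
    """Bayes factor for H1: theta ~ Beta(a, b) vs H0: theta = 0.5, given
    k successes in n trials. The marginal under H1 integrates the binomial
    likelihood against the prior; the binomial coefficients cancel in the ratio."""
    log_m1 = log_beta(a + k, b + n - k) - log_beta(a, b)
    log_m0 = n * math.log(0.5)
    return math.exp(log_m1 - log_m0)

bf = bf10_binomial(k=60, n=100)          # 60/100 successes, uniform Beta(1, 1) prior
bf_strong = bf10_binomial(k=70, n=100)   # more extreme data, same prior
```

With 60 successes in 100 trials and a uniform prior the Bayes factor is about 0.9, close to 1, so the data barely discriminate between the hypotheses; 70 successes pushes it into the hundreds. This also illustrates how the conclusion depends on the prior chosen under H1.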


2020, Vol 3 (3), pp. 300-308
Author(s): Bence Palfi, Zoltan Dienes

Psychologists are often interested in whether an independent variable has a different effect in condition A than in condition B. To test such a question, one needs to directly compare the effect of that variable in the two conditions (i.e., test the interaction). Yet many researchers tend to stop when they find a significant test in one condition and a nonsignificant test in the other condition, deeming this as sufficient evidence for a difference between the two conditions. In this Tutorial, we aim to raise awareness of this inferential mistake when Bayes factors are used with conventional cutoffs to draw conclusions. For instance, some researchers might falsely conclude that there must be good-enough evidence for the interaction if they find good-enough Bayesian evidence for the alternative hypothesis, H1, in condition A and good-enough Bayesian evidence for the null hypothesis, H0, in condition B. The case study we introduce highlights that ignoring the test of the interaction can lead to unjustified conclusions and demonstrates that the principle that any assertion about the existence of an interaction necessitates the direct comparison of the conditions is as true for Bayesian as it is for frequentist statistics. We provide an R script of the analyses of the case study and a Shiny app that can be used with a 2 × 2 design to develop intuitions on this issue, and we introduce a rule of thumb with which one can estimate the sample size one might need to have a well-powered design.
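The Tutorial's central point can be made concrete with a small numeric sketch (all numbers invented; a normal-approximation Bayes factor stands in for whatever model a reader might use): evidence for H1 in one condition plus evidence for H0 in the other does not by itself amount to evidence for the interaction.

```python
import math

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def bf10_normal(effect, se, prior_sd):
    """BF for H1: delta ~ Normal(0, prior_sd) vs the point null delta = 0."""
    return (normal_pdf(effect, 0.0, math.sqrt(se**2 + prior_sd**2))
            / normal_pdf(effect, 0.0, se))

# Hypothetical condition-level estimates from a 2 x 2 design:
effect_a, se_a = 0.40, 0.15   # condition A: looks like evidence for H1
effect_b, se_b = 0.10, 0.15   # condition B: leans toward H0

bf_a = bf10_normal(effect_a, se_a, prior_sd=0.25)

# The interaction is the DIFFERENCE of effects and must be tested directly:
diff = effect_a - effect_b
se_diff = math.sqrt(se_a**2 + se_b**2)   # SE of a difference of two estimates
bf_interaction = bf10_normal(diff, se_diff, prior_sd=0.25)
```

With these numbers condition A alone yields a Bayes factor of about 7 for H1, yet the Bayes factor for the interaction itself is only about 1.2: the direct comparison remains inconclusive.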

