New Statistics, Old Schools: An Overview of Current Introductory Undergraduate and Graduate Statistics Pedagogy Practices

2021
pp. 009862832110306
Author(s): Marc A. Sestir, Lindsay A. Kennedy, Jennifer J. Peszka, Joanna G. Bartley

Background: A philosophical shift in statistics toward the "New Statistics" (NS; Cumming, G. (2014). The new statistics: Why and how. Psychological Science, 25(1), 7–29) over conventional null hypothesis significance testing (NHST) raises the question of appropriate material coverage in undergraduate statistics courses.
Objective: We examined current practices in statistics pedagogy at the graduate and undergraduate levels for both NS and NHST.
Method: Using an online survey of a nationwide sample of current graduate students (n = 452) and graduate faculty (n = 162), we examined statistics pedagogy and perceptions of the best approaches for teaching undergraduate statistics.
Results: In undergraduate statistics courses, coverage of NS material involves modest instruction in effect sizes and confidence intervals, while NHST remains dominant. Graduate courses have more balanced coverage. Effect size estimation was regarded as the most important NS knowledge for success in graduate school and the topic most in need of increased undergraduate coverage.
Conclusion: Undergraduate statistics courses could increase NS coverage, particularly effect size estimation, to better align with and prepare students for graduate work.
Teaching Implications: This research summarizes graduate program expectations and graduate student experiences regarding undergraduate statistics that current instructors can use to shape the content of their classes.

2020
Author(s): Giulia Bertoldo, Claudio Zandonella Callegher, Gianmarco Altoè

It is widely appreciated that many studies in psychological science suffer from low statistical power. One consequence of analyzing underpowered studies with thresholds of statistical significance is a high risk of finding exaggerated effect size estimates, in either the right or the wrong direction. These inferential risks can be directly quantified in terms of Type M (magnitude) error and Type S (sign) error, which directly communicate the consequences of design choices for effect size estimation. Given a study design, the Type M error is the factor by which a statistically significant effect is, on average, exaggerated. The Type S error is the probability of finding a statistically significant result in the direction opposite to the plausible one. Ideally, these errors should be considered during a prospective design analysis in the design phase of a study to determine the appropriate sample size. However, they can also be considered when evaluating studies' results in a retrospective design analysis. In the present contribution we aim to facilitate consideration of these errors in psychological research practice. To this end, we illustrate how to account for Type M and Type S errors in a design analysis using one of the most common effect size measures in psychology: the Pearson correlation coefficient. We provide various examples and make the R functions freely available so that researchers can perform design analysis for their own research projects.
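The Type M and Type S definitions above lend themselves to a direct Monte Carlo estimate. The authors' R functions are not reproduced here; the Python sketch below is an illustrative re-implementation under stated assumptions: a hypothetical true correlation (ρ = 0.15), sample size (n = 30), α = .05, and the Fisher z approximation for the significance test. All function names and parameter values are illustrative choices, not the authors' code.

```python
import math
import random

def fisher_p(r, n):
    """Two-sided p-value for H0: rho = 0 via the Fisher z approximation."""
    z = math.atanh(r) * math.sqrt(n - 3)
    # two-sided standard-normal tail probability: 2 * (1 - Phi(|z|))
    return math.erfc(abs(z) / math.sqrt(2))

def design_analysis(rho=0.15, n=30, alpha=0.05, n_sim=5000, seed=1):
    """Monte Carlo estimate of power, Type M and Type S error for Pearson r,
    given an assumed plausible true correlation rho and sample size n."""
    rng = random.Random(seed)
    significant = []  # sample correlations that reach significance
    for _ in range(n_sim):
        xs, ys = [], []
        for _ in range(n):
            x = rng.gauss(0, 1)
            # y constructed so that corr(x, y) = rho in the population
            y = rho * x + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
            xs.append(x)
            ys.append(y)
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        sxx = sum((a - mx) ** 2 for a in xs)
        syy = sum((b - my) ** 2 for b in ys)
        r = sxy / math.sqrt(sxx * syy)
        if fisher_p(r, n) < alpha:
            significant.append(r)
    power = len(significant) / n_sim
    # Type S: share of significant results with the wrong sign
    type_s = sum(1 for r in significant if r * rho < 0) / len(significant)
    # Type M: average exaggeration factor of significant effects
    type_m = sum(abs(r) for r in significant) / len(significant) / abs(rho)
    return power, type_m, type_s
```

With these illustrative values the design is underpowered, so the estimated Type M error lands well above 1 (significant effects exaggerate the assumed true correlation several-fold), while the Type S error stays small but nonzero.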

