Erratum

1989 ◽  
Vol 12 (1) ◽  
pp. 187-187
Author(s):  
Robyn M. Dawes

In my comments in BBS (Random generators, ganzfields, analysis, and theory, 1987, 10:581-82) regarding psi, I mistakenly ascribed to Professor Honorton the position that "good experimenters knew in advance" roughly what effect size to expect. He does not make that assertion in the paper I cited (1985), and in fact regards it as a rather foolish one (personal communication, 6/25/88). This incorrect assertion was based on my inference - not his - that the most plausible alternative to optional stopping as an explanation of the negative correlation between sample size and effect size (and even z-scores) was prior knowledge leading to the necessity of sampling fewer observations when the expectation of the estimated effect size was larger.

Honorton, C. (1985). The Ganzfeld psi experiment: A critical appraisal. Journal of Parapsychology 49:51-91.

2020 ◽  
pp. 1-12
Author(s):  
Kimberly H. Wood ◽  
Adeel A. Memon ◽  
Raima A. Memon ◽  
Allen Joop ◽  
Jennifer Pilkington ◽  
...  

Background: Cognitive and sleep dysfunction are common non-motor symptoms in Parkinson’s disease (PD). Objective: Determine the relationship between slow wave sleep (SWS) and cognitive performance in PD. Methods: Thirty-two PD participants were evaluated with polysomnography and a comprehensive level II neurocognitive battery, as defined by the Movement Disorders Society Task Force for diagnosis of PD-mild cognitive impairment. Raw scores for each test were transformed into z-scores using normative data. Z-scores were averaged to obtain domain scores, and domain scores were averaged to determine the Composite Cognitive Score (CCS), the primary outcome. Participants were grouped by percent of SWS into High SWS and Low SWS groups and compared on CCS and other outcomes using 2-sided t-tests or Mann-Whitney U. Correlations of cognitive outcomes with sleep architecture and EEG spectral power were performed. Results: Participants in the High SWS group demonstrated better global cognitive function (CCS) (p = 0.01, effect size: r = 0.45). In exploratory analyses, the High SWS group showed better performance in domains of executive function (effect size: Cohen’s d = 1.05), language (d = 0.95), and processing speed (d = 1.12). Percentage of SWS was correlated with global cognition and executive function, language, and processing speed. Frontal EEG delta power during N3 was correlated with the CCS and executive function. Cognition was not correlated with subjective sleep quality. Conclusion: Increased SWS and higher delta spectral power are associated with better cognitive performance in PD. This demonstrates the significant relationship between sleep and cognitive function and suggests that interventions to improve sleep might improve cognition in individuals with PD.
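Because the Methods describe a concrete scoring pipeline (raw scores to normative z-scores, z-scores averaged into domain and composite scores, group comparison and correlations), a minimal R sketch of those steps may help; every variable name, normative value, and the SWS cut-off below is an illustrative assumption, not the study's actual data, tests, or thresholds.

```r
# Minimal sketch of the scoring and comparison steps (hypothetical data).
set.seed(1)
dat <- data.frame(
  trails_raw = rnorm(32, 50, 10),   # hypothetical raw test scores
  naming_raw = rnorm(32, 40, 8),
  pct_sws    = runif(32, 0, 30)     # percent of sleep spent in slow wave sleep
)

# raw scores -> z-scores using (assumed) normative means and SDs
dat$trails_z <- (dat$trails_raw - 50) / 10
dat$naming_z <- (dat$naming_raw - 40) / 8

# average z-scores into a composite cognitive score (CCS)
dat$ccs <- rowMeans(dat[, c("trails_z", "naming_z")])

# split into High vs. Low SWS groups (median split assumed) and compare
dat$sws_group <- ifelse(dat$pct_sws >= median(dat$pct_sws), "High", "Low")
t.test(ccs ~ sws_group, data = dat)   # or wilcox.test() for non-normal outcomes
cor.test(dat$pct_sws, dat$ccs)        # correlation of SWS with global cognition
```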


2019 ◽  
Author(s):  
Rob Cribbie ◽  
Nataly Beribisky ◽  
Udi Alter

Many bodies recommend that a sample planning procedure, such as a traditional NHST a priori power analysis, be conducted during the planning stages of a study. Power analysis allows the researcher to estimate how many participants are required in order to detect a minimally meaningful effect size at a specific level of power and Type I error rate. However, there are several drawbacks to the procedure that render it "a mess." Specifically, identifying the minimally meaningful effect size is often difficult yet unavoidable if the procedure is to be conducted properly, the procedure is not precision oriented, and it does not guide the researcher to collect as many participants as is feasible. In this study, we explore how these three theoretical issues are reflected in applied psychological research in order to better understand whether they are concerns in practice. To investigate how power analysis is currently used, we reviewed the reporting of 443 power analyses in high-impact psychology journals in 2016 and 2017. We found that researchers rarely use a minimally meaningful effect size as the rationale for the effect size chosen in a power analysis. Further, precision-based approaches and collecting the maximum feasible sample size are almost never used in tandem with power analyses. In light of these findings, we suggest that researchers focus on tools beyond traditional power analysis when planning sample sizes, such as collecting the maximum sample size feasible.
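As a point of reference for the procedure criticized above, here is a minimal sketch of a traditional a priori power analysis in base R; the minimally meaningful effect size of d = 0.30 is an illustrative assumption, not a recommendation.

```r
# Sample size needed to detect an assumed minimally meaningful effect
# (d = 0.30) at 80% power and a 5% Type I error rate.
power.t.test(delta = 0.30, sd = 1,         # with sd = 1, delta equals Cohen's d
             sig.level = 0.05, power = 0.80,
             type = "two.sample")           # returns n per group (about 176)
```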


2021 ◽  
Vol 3 (1) ◽  
pp. 61-89
Author(s):  
Stefan Geiß

Abstract: This study uses Monte Carlo simulation techniques to estimate the minimum required levels of intercoder reliability in content analysis data for testing correlational hypotheses, depending on sample size, effect size, and coder behavior under uncertainty. The ensuing procedure is analogous to power calculations for experimental designs. In the most widespread sample size/effect size settings, the rule of thumb that chance-adjusted agreement should be ≥.80 or ≥.667 corresponds to the simulation results, yielding acceptable α and β error rates. However, the simulation allows precise power calculations that can take the specifics of each study's context into account, moving beyond one-size-fits-all recommendations. Studies with low sample sizes and/or low expected effect sizes may need coder agreement above .800 to test a hypothesis with sufficient statistical power. In studies with high sample sizes and/or high expected effect sizes, coder agreement below .667 may suffice. Such calculations can help both in evaluating and in designing studies. Particularly in pre-registered research, higher sample sizes may be used to compensate for low expected effect sizes and/or borderline coding reliability (e.g., when constructs are hard to measure). I supply equations, easy-to-use tables, and R functions to facilitate use of this framework, along with example code as an online appendix.
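To make the logic of such a simulation concrete, here is a deliberately simplified Monte Carlo sketch in R; it is not the author's procedure (which models coder behavior under uncertainty for content analysis categories), but it illustrates how coding error, sample size, and effect size jointly determine the power to detect a correlation.

```r
# Simplified illustration: coding error is modeled as continuous noise, and
# 'reliability' is the assumed share of observed variance that is true-score
# variance. Empirical power = share of simulations with p < alpha.
simulate_power <- function(n, rho, reliability, n_sims = 2000, alpha = 0.05) {
  p_values <- replicate(n_sims, {
    true_x  <- rnorm(n)
    y       <- rho * true_x + sqrt(1 - rho^2) * rnorm(n)
    coded_x <- sqrt(reliability) * true_x + sqrt(1 - reliability) * rnorm(n)
    cor.test(coded_x, y)$p.value
  })
  mean(p_values < alpha)
}

set.seed(1)
simulate_power(n = 200, rho = 0.20, reliability = 0.70)  # lower coding quality
simulate_power(n = 200, rho = 0.20, reliability = 0.90)  # higher coding quality
```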


2005 ◽  
Vol 35 (1) ◽  
pp. 1-20 ◽  
Author(s):  
G. K. Huysamen

Criticisms of traditional null hypothesis significance testing (NHST) became more pronounced during the 1960s and reached a climax during the past decade. Among other criticisms, NHST says nothing about the size of the population parameter of interest, and its result is influenced by sample size. Estimation of confidence intervals around point estimates of the relevant parameters, model fitting, and Bayesian statistics represent some major departures from conventional NHST. Testing non-nil null hypotheses, determining the optimal sample size to uncover only substantively meaningful effect sizes, and reporting effect-size estimates may be regarded as minor extensions of NHST. Although there seems to be growing support for the estimation of confidence intervals around point estimates of the relevant parameters, it is unlikely that NHST-based procedures will disappear in the near future. In the meantime, it is widely accepted that effect-size estimates should be reported as a mandatory adjunct to conventional NHST results.
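A minimal R sketch of the reporting practice endorsed in the closing sentence: a conventional NHST result accompanied by a confidence interval around the point estimate and a standardized effect-size estimate. The data and group labels are simulated and purely illustrative.

```r
# Illustrative data: two groups of 40 observations each.
set.seed(1)
group_a <- rnorm(40, mean = 100, sd = 15)
group_b <- rnorm(40, mean = 108, sd = 15)

nhst <- t.test(group_b, group_a)   # Welch t-test: p-value plus 95% CI for the difference
pooled_sd <- sqrt((var(group_a) + var(group_b)) / 2)
cohens_d  <- (mean(group_b) - mean(group_a)) / pooled_sd

nhst$p.value    # conventional NHST result
nhst$conf.int   # confidence interval around the point estimate of the difference
cohens_d        # standardized effect-size estimate reported as an adjunct
```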


2018 ◽  
Vol 52 (4) ◽  
pp. 341-350 ◽  
Author(s):  
Michael FW Festing

Scientists using laboratory animals are under increasing pressure to justify their sample sizes using a "power analysis". In this paper I review the three methods currently used to determine sample size: "tradition" or "common sense", the "resource equation", and the "power analysis". I explain how, using the "KISS" approach, scientists can make a provisional choice of sample size using any method and then easily estimate the effect size likely to be detectable according to a power analysis. Should they want to be able to detect a smaller effect, they can increase their provisional sample size and recalculate the detectable effect size. This approach is simple, requires no software, and provides justification for the sample size in the terms used in a power analysis.
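The paper's point is that this step needs no software; purely to illustrate the same "provisional sample size, then detectable effect size" relationship, a base R sketch follows. The group sizes are illustrative assumptions, and with sd = 1 the returned delta is the detectable effect in standard-deviation units (Cohen's d).

```r
# Effect size detectable with 80% power at alpha = 0.05 for a given group size.
detectable_effect <- function(n_per_group) {
  power.t.test(n = n_per_group, sd = 1,
               sig.level = 0.05, power = 0.80,
               type = "two.sample")$delta
}

detectable_effect(8)    # provisional choice, e.g. 8 animals per group
detectable_effect(12)   # larger groups -> a smaller effect becomes detectable
```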


2020 ◽  
pp. 28-63
Author(s):  
A. G. Vinogradov

The article belongs to a special modern genre of scholarly publication, the so-called tutorial: an article devoted to presenting the latest methods of design, modeling, or analysis in an accessible format in order to disseminate best practices. The article acquaints Ukrainian psychologists with the basics of using the R programming language for the analysis of empirical research data. It discusses the current state of world psychology in connection with the crisis of confidence that arose from the low reproducibility of empirical research. This problem is caused by the poor quality of psychological measurement tools, insufficient attention to adequate sample planning, typical statistical hypothesis testing practices, and so-called "questionable research practices." The tutorial demonstrates methods for determining sample size as a function of the expected effect size and the desired statistical power, performing basic variable transformations, and carrying out statistical analysis of psychological research data in the R language and environment. It presents a minimal set of R functions required for modern reliability analysis of measurement scales, sample size calculation, and point and interval estimation of effect size for the four designs most widely used in psychology for analyzing the interdependence of two variables. These typical problems include testing differences between means and variances in two or more samples and correlations between continuous and categorical variables. Practical information on data preparation, import, basic transformations, and the application of basic statistical methods in the cloud version of RStudio is provided.
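In the spirit of the tutorial (though not its actual code), a minimal R sketch of two of the listed tasks: sample size calculation for an expected correlation, and point and interval estimation of the effect from data. It assumes the pwr package is installed; the expected effect size of r = .30 and the simulated data are illustrative assumptions.

```r
library(pwr)   # assumed to be installed; provides pwr.r.test()

# Required sample size to detect an expected correlation of r = .30
# with 80% power at alpha = .05 (about n = 85).
pwr.r.test(r = 0.30, sig.level = 0.05, power = 0.80)

# Point and interval estimation of the effect from (simulated) data.
set.seed(1)
x <- rnorm(85)
y <- 0.30 * x + sqrt(1 - 0.30^2) * rnorm(85)
cor.test(x, y)   # estimate of r with a 95% confidence interval
```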

