Performing High-Powered Studies Efficiently With Sequential Analyses

2017 ◽  
Author(s):  
Daniel Lakens

Designing an experiment with high statistical power is a practical challenge when effect size estimates are inaccurate, as they often are in psychology. This challenge can be addressed by performing sequential analyses while data collection is still in progress. At an interim analysis, data collection can be stopped whenever the results are convincing enough to conclude an effect is present, more data can be collected, or the study can be terminated whenever it is extremely unlikely the predicted effect would be observed if data collection were continued. Such interim analyses can be performed while controlling the Type I error rate. Sequential analyses can greatly improve the efficiency with which data are collected. Additional flexibility is provided by adaptive designs in which sample sizes are increased based on the observed effect size. The need for pre-registration, ways to prevent experimenter bias, and a comparison between Bayesian approaches and NHST are discussed. Sequential analyses, which are widely used in large-scale medical trials, provide an efficient way to perform high-powered, informative experiments. I hope this introduction will provide a practical primer that allows researchers to incorporate sequential analyses in their research.
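The core claim of the abstract above, that interim looks are only safe with a corrected threshold, can be illustrated with a small simulation. This is a sketch in Python, not code from the article; the two-look Pocock constant 2.178 and all sample sizes are illustrative assumptions. Under the null hypothesis, stopping at an interim look using the unadjusted fixed-N threshold of 1.96 inflates the Type I error rate, while a Pocock-style boundary applied at both looks keeps it near the nominal 5%.

```python
import math
import random

def two_look_error_rate(z_crit, n_sims=20000, n_interim=50, n_final=100, seed=1):
    """Monte Carlo Type I error rate for a two-look design under the null.

    A one-sample z-test (known sd = 1, true mean = 0) is run at an interim
    look (n_interim) and, if not significant there, again at the final look.
    """
    random.seed(seed)
    rejections = 0
    for _ in range(n_sims):
        data = [random.gauss(0.0, 1.0) for _ in range(n_final)]
        z_interim = (sum(data[:n_interim]) / n_interim) * math.sqrt(n_interim)
        z_final = (sum(data) / n_final) * math.sqrt(n_final)
        if abs(z_interim) > z_crit or abs(z_final) > z_crit:
            rejections += 1
    return rejections / n_sims

naive = two_look_error_rate(1.96)    # fixed-N threshold reused at both looks
pocock = two_look_error_rate(2.178)  # Pocock constant for two looks, alpha = .05
print(f"naive: {naive:.3f}, Pocock-corrected: {pocock:.3f}")
```

In runs like this, the naive procedure rejects in roughly 8% of null datasets, while the corrected boundary stays close to 5%, which is the efficiency-for-error-control trade-off sequential designs formalize.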

2019 ◽  
Author(s):  
Eduard Klapwijk ◽  
Wouter van den Bos ◽  
Christian K. Tamnes ◽  
Nora Maria Raschle ◽  
Kathryn L. Mills

Many workflows and tools that aim to increase the reproducibility and replicability of research findings have been suggested. In this review, we discuss the opportunities these efforts offer for the field of developmental cognitive neuroscience, in particular developmental neuroimaging. We focus on issues broadly related to statistical power and to flexibility and transparency in data analyses. Critical considerations relating to statistical power include challenges in the recruitment and testing of young populations, how to increase the value of studies with small samples, and the opportunities and challenges of working with large-scale datasets. Developmental studies involve challenges such as choices about age groupings, lifespan modelling, analyses of longitudinal change, and data that can be processed and analyzed in a multitude of ways. Flexibility in data acquisition, analysis, and description may thereby greatly impact results. We discuss methods for improving transparency in developmental neuroimaging and how preregistration can improve methodological rigor. While outlining challenges that may arise before, during, and after data collection, we highlight solutions and resources that help to overcome some of them. Since the number of useful tools and techniques is ever-growing, we emphasize that many practices can be implemented stepwise.


2020 ◽  
Vol 8 (1) ◽  
pp. 25-29 ◽  
Author(s):  
Matthew H. Goldberg ◽  
Sander van der Linden

In a large-scale replication effort, Klein et al. (2018, https://doi.org/10.1177/2515245918810225) investigated the variation in replicability and effect size across many different samples and settings. The authors concluded that, for any given effect being studied, heterogeneity across samples and settings does not explain failures to replicate. In the current commentary, we argue that the heterogeneity observed indeed has implications for replication failures, as well as for statistical power and theory development. We argue that psychological scientific research questions should be contextualized—considering how historical, political, or cultural circumstances might affect study results. We discuss how a perspectivist approach to psychological science is a fruitful way for designing research that aims to explain effect size heterogeneity.


2018 ◽  
Author(s):  
Brice Beffara Bret ◽  
Amélie Beffara Bret ◽  
Ladislas Nalborczyk

Despite many cultural, methodological, and technical improvements, one of the major obstacles to the reproducibility of results remains pervasive low statistical power. In response to this problem, much attention has recently been drawn to sequential analyses. This type of procedure has been shown to be more efficient (to require fewer observations and therefore fewer resources) than classical fixed-N procedures. However, these procedures are subject to both intrapersonal and interpersonal biases during data collection and data analysis. In this tutorial, we explain how automation can be used to prevent these biases. We show how to synchronise open and free experiment software programs with the Open Science Framework and how to automate sequential data analyses in R. This tutorial is intended for researchers with beginner-level experience in R; no previous experience with sequential analyses is required.
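The stopping logic such a tutorial automates can be sketched language-agnostically. The snippet below is a Python sketch (the tutorial itself works in R with the Open Science Framework) of an automated sequential rule that evaluates the test only at preregistered sample sizes, using a corrected per-look threshold; the look schedule and the three-look Pocock-style constant 2.289 are illustrative assumptions, not the tutorial's own values.

```python
import math

# Preregistered looks and a per-look corrected critical value
# (illustrative Pocock-style constant for three looks at alpha = .05).
LOOKS = (20, 40, 60)
Z_CRIT = 2.289

def sequential_decision(observations, looks=LOOKS, z_crit=Z_CRIT):
    """Return an automated stopping decision for a one-sample design (sd = 1).

    The test is evaluated only when the sample size exactly reaches a
    preregistered look, which keeps the analyst from peeking in between.
    """
    n = len(observations)
    if n not in looks:
        return "continue"  # between looks: keep collecting
    z = (sum(observations) / n) * math.sqrt(n)
    if abs(z) > z_crit:
        return "stop: effect detected"
    if n == looks[-1]:
        return "stop: maximum N reached, no effect"
    return "continue"

print(sequential_decision([0.1] * 15))  # before the first look -> continue
print(sequential_decision([1.2] * 20))  # large effect at the first look -> stop
```

Automating the decision rule this way removes the analyst's discretion at each look, which is exactly the kind of intrapersonal bias the tutorial targets.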



2019 ◽  
Vol 227 (4) ◽  
pp. 261-279 ◽  
Author(s):  
Frank Renkewitz ◽  
Melanie Keiner

Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect such bias in meta-analytic results. We present an evaluation of the performance of six of these tools. To assess the Type I error rate and the statistical power of these methods, we simulated a large variety of literatures that differed with regard to true effect size, heterogeneity, number of available primary studies, and sample sizes of these primary studies; furthermore, simulated studies were subjected to different degrees of publication bias. Our results show that across all simulated conditions, no method consistently outperformed the others. Additionally, all methods performed poorly when true effect sizes were heterogeneous or primary studies had a small chance of being published, irrespective of their results. This suggests that in many actual meta-analyses in psychology, bias will remain undiscovered no matter which detection method is used.
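One widely used tool in this family, Egger's regression test for funnel-plot asymmetry, can be sketched in miniature. This is a hypothetical Python sketch, not the authors' simulation code; the one-sided selection rule, study counts, and sample-size range are invented for illustration. Studies with a true effect of zero are filtered by a directional significance criterion, and the regression of the standardized effect on precision then yields the inflated intercept that signals bias.

```python
import random

def egger_intercept(effects, ses):
    """Core of Egger's test: intercept of the ordinary least-squares regression
    of the standard normal deviate (effect / se) on precision (1 / se).
    An intercept far from zero indicates funnel-plot asymmetry."""
    ys = [e / s for e, s in zip(effects, ses)]
    xs = [1.0 / s for s in ses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return my - slope * mx

random.seed(7)
effects, ses = [], []
while len(effects) < 40:
    n_study = random.randint(20, 200)  # illustrative study sample size
    se = 1.0 / n_study ** 0.5          # standard error shrinks with n
    effect = random.gauss(0.0, se)     # true effect is zero
    if effect / se > 1.96:             # only 'significant' results get published
        effects.append(effect)
        ses.append(se)

print(f"Egger intercept under selective publication: {egger_intercept(effects, ses):.2f}")
```

Under this selection rule every published standardized effect exceeds 1.96 regardless of precision, so the intercept lands far above zero; with no selection it would scatter around zero. The abstract's point is that such tools degrade badly once effects are heterogeneous or selection is less extreme.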


2019 ◽  
Author(s):  
Curtis David Von Gunten ◽  
Bruce D Bartholow

A primary psychometric concern with laboratory-based inhibition tasks has been their reliability. However, a reliable measure may be neither necessary nor sufficient for reliably detecting effects (statistical power). The current study used a bootstrap sampling approach to systematically examine how the number of participants, the number of trials, the magnitude of an effect, and study design (between- vs. within-subject) jointly contribute to power in five commonly used inhibition tasks. The results demonstrate the shortcomings of relying solely on measurement reliability when determining the number of trials to use in an inhibition task: high internal reliability can be accompanied by low power, and low reliability can be accompanied by high power. For instance, adding trials once sufficient reliability has been reached can still yield large gains in power. The dissociation between reliability and power was particularly apparent in between-subject designs, where the number of participants contributed greatly to power but little to reliability, and where the number of trials contributed greatly to reliability but only modestly (depending on the task) to power. For between-subject designs, the probability of detecting small-to-medium-sized effects with 150 participants (total) was generally less than 55%. However, effect size was positively associated with the number of trials. Thus, researchers have some control over effect size, and this needs to be considered when conducting power analyses using analytic methods that take such effect sizes as an argument. Results are discussed in the context of recent claims regarding the role of inhibition tasks in experimental and individual difference designs.
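The trade-off this abstract describes, trials buying measurement precision while participants buy power, can be sketched with a toy Monte Carlo simulation. This is a hypothetical Python sketch with invented effect and variance parameters, not the authors' bootstrap procedure: each subject's observed score is a true effect plus subject-level noise plus trial-level noise shrunk by the number of trials, and power is estimated from repeated simulated within-subject experiments.

```python
import math
import random
from statistics import NormalDist, mean, stdev

def simulated_power(n_subjects, n_trials, effect=0.1, subj_sd=0.2,
                    trial_sd=1.0, n_sims=2000, alpha=0.05, seed=3):
    """Monte Carlo power of a one-sample test on subject-level mean differences.

    Each subject's observed mean difference = true effect + subject noise
    + trial noise shrunk by sqrt(n_trials); a normal approximation to the
    t-test is used, which is reasonable at the subject counts simulated here.
    """
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(n_sims):
        scores = [effect + rng.gauss(0.0, subj_sd)
                  + rng.gauss(0.0, trial_sd / math.sqrt(n_trials))
                  for _ in range(n_subjects)]
        z = mean(scores) / (stdev(scores) / math.sqrt(n_subjects))
        if abs(z) > z_crit:
            hits += 1
    return hits / n_sims

few_trials = simulated_power(40, 10)
many_trials = simulated_power(40, 100)
print(f"power with 10 trials: {few_trials:.2f}, with 100 trials: {many_trials:.2f}")
```

With these assumed parameters, going from 10 to 100 trials per subject raises power substantially at a fixed 40 subjects, because trial noise, not subject variability, dominates the per-subject score; once subject-level variability dominates instead, extra trials stop helping, which is the dissociation the study documents.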


2018 ◽  
Vol 6 (2) ◽  
pp. 147 ◽  
Author(s):  
Ahmad Syahid ◽  
H. Husni

This study aims to analyze the character values in Sirah Nabawiyah Ar-Rahiq Al-Makhtum by Syafiyyurrahman al-Mubarakfuri, in order to reveal the character education of the Prophet Muhammad and to analyze its relevance to current educational goals. The research method used is library research with a deductive approach and content analysis; data were collected using library techniques. The research shows that prophetic values based on the history of the Prophet Muhammad are strongly relevant to the concept of contemporary character education, especially regarding religiosity, honesty, tolerance, discipline, hard work, creativity, independence, democracy, curiosity, national spirit, love of the motherland, respect for achievement, friendliness, love of peace, fondness for reading, care for the environment, social care, and responsibility.


1990 ◽  
Vol 43 (2) ◽  
pp. 301-311 ◽  
Author(s):  
WINNIE Y. YOUNG ◽  
JANIS S. HOUSTON ◽  
JAMES H. HARRIS ◽  
R. GENE HOFFMAN ◽  
LAURESS L. WISE
