Holding replication studies to mainstream standards of evidence

2018, Vol 41
Author(s): Duane T. Wegener, Leandre R. Fabrigar

Replications can make theoretical contributions, but are unlikely to do so if their findings are open to multiple interpretations (especially violations of psychometric invariance). Thus, just as studies demonstrating novel effects are often expected to empirically evaluate competing explanations, replications should be held to similar standards. Unfortunately, this is rarely done, thereby undermining the value of replication research.

2019, Vol 54 (1), pp. 29-39
Author(s): John L. Luckner, Rashida Banerjee, Sara Movahedazarhouligh, Kaitlyn Millen

Current federal legislation emphasizes the use of programs, interventions, strategies, and activities that have been demonstrated through research to be effective. One way to increase the quantity and quality of research that guides practice is to conduct replication research. The purpose of this study was to undertake a systematic review of the replication research on self-determination conducted between 2007 and 2017. Using the methods of Cook and colleagues, we identified 80 intervention studies on topics related to self-determination, of which 31 were coded as replications. Intervention study trends, the rate of replication studies, the percentage of agreement between findings of original and replication studies, the amount of author overlap, and the types of research designs used are reported, along with recommendations for future research.


2017, Vol 28 (2), pp. 101-119
Author(s): Scott J. Peters, Nielsen Pereira

Even as the importance of replication research has become more widely understood, the field of gifted education remains almost completely devoid of replication studies. An area in which replication is a particular problem is student identification research, since instrument validity is a necessary prerequisite for any sound psychometric decision. To begin to address this issue, our study sought to replicate the internal validity structure of three teacher rating instruments. The goal was to determine whether data gathered using these instruments fit their published internal validity structures. Results indicated that all three instruments failed to meet traditional fit criteria, to varying degrees, and that further replication or instrument revision is needed before these instruments can be used with confidence.


2000, Vol 67 (3), pp. 155-161
Author(s): Jennifer Greenwood Klein, G. Ted Brown, Mary Lysyk

It is common for researchers to close their published studies with an urgent call for further research. Unfortunately, very few published studies have replicated original studies. The purpose of this article is to provide a framework for understanding issues related to replication research that will assist occupational therapy researchers, clinicians, managers, students, and educators in realizing the importance of conducting and publishing replication research to establish evidence-based practice. Various areas related to replication research are explored. In addition, a computerized literature search using the search term ‘replication’ was completed; only four articles published between 1982 and 1998 were discovered. The article concludes with recommendations to ensure that replication studies are included in the occupational therapy literature and utilized in clinical practice.


2016, Vol 37 (4), pp. 235-243
Author(s): William J. Therrien, Hannah M. Mathews, Shanna Eisner Hirsch, Michael Solis

Despite the importance of replication for building an evidence base, there has been no formal examination to date of replication research in special education. In this review, we examined the extent and nature of replication of intervention research in special education using an “article progeny” approach and a three-pronged definition of replication (direct, conceptual, intervention overlap). In this approach, original articles (i.e., parent studies) were selected via a stratified, random sampling procedure. Next, we examined all articles that referenced the parent articles (i.e., child studies) to determine the extent and nature of the replication of the original studies. Seventy-five percent of the parent studies were replicated by at least one child study. Across all parent studies, there were 39 replication child studies. Although there was a high overall replication rate, there were a limited number of conceptual replications, and no direct replication studies were identified.


2021
Author(s): Suzanne Hoogeveen, Michiel van Elk

The Cognitive Science of Religion (CSR) is a relatively young but prolific field that has offered compelling insights into religious minds and practices. However, many empirical findings within this field are still preliminary, and their reliability remains to be determined. In this paper, we first argue that it is crucial to critically evaluate the CSR literature and to adopt open science practices, and replication research in particular, to move the field forward. Second, we highlight the outcomes of previous replications and make suggestions for future replication studies in the CSR, with a particular focus on neuroscience, developmental psychology, and qualitative research. Finally, we provide a ‘replication script’ with advice on how to select, conduct, and organize replication research. Our approach is illustrated with a ‘glimpse behind the scenes’ of the recently launched Cross-Cultural Religious Replication Project, in the hope of inspiring scholars of religion to embrace open science and replication in their own research.


2018, Vol 13 (4), pp. 411-417
Author(s): Simine Vazire

The credibility revolution (sometimes referred to as the “replicability crisis”) in psychology has brought about many changes in the standards by which psychological science is evaluated. These changes include (a) greater emphasis on transparency and openness, (b) a move toward preregistration of research, (c) more direct-replication studies, and (d) higher standards for the quality and quantity of evidence needed to make strong scientific claims. What are the implications of these changes for productivity, creativity, and progress in psychological science? These questions can and should be studied empirically, and I present my predictions here. The productivity of individual researchers is likely to decline, although some changes (e.g., greater collaboration, data sharing) may mitigate this effect. The effects of these changes on creativity are likely to be mixed: Researchers will be less likely to pursue risky questions; more likely to use a broad range of methods, designs, and populations; and less free to define their own best practices and standards of evidence. Finally, the rate of scientific progress—the most important shared goal of scientists—is likely to increase as a result of these changes, although one’s subjective experience of making progress will likely become rarer.


2021
Author(s): Jessica Kay Flake

An increased focus on transparency and replication in science has stimulated reform in research practices and dissemination. As a result, the research culture is changing: the use of preregistration is on the rise, access to data and materials is increasing, and large-scale replication studies are more common. In this paper, I discuss two problems the methodological reform movement is now ready to tackle given the progress thus far and how educational psychology is particularly well suited to contribute. The first problem is that there is a lack of transparency and rigor in measurement development and use. The second problem is caused by the first; replication research is difficult and potentially futile as long as the first problem persists. I describe how to expand transparent practices into measure use and how construct validation can be implemented to bolster the validity of replication studies.


2021
Author(s): Jessica Kay Flake, Mairead Shaw, Raymond Luong

Yarkoni describes a grim state of psychological science in which the gross misspecification of our models and the specificity of our operationalizations produce claims so narrow in their generality that no one would be interested in them. We consider this an issue of the generalizability of construct validity and discuss how construct validation research should precede large-scale replication research. We provide ideas for a path forward by suggesting that psychologists take a few steps back. By retooling large-scale replication studies, psychologists can execute the descriptive research needed to assess the generalizability of constructs. We provide examples of reusing large-scale replication data to conduct construct validation research post hoc. We also discuss proof-of-concept research that is ongoing at the Psychological Science Accelerator. Big-team psychology makes large-scale construct validity and generalizability research feasible and worthwhile. We assert that no one needs to quit the field; in fact, there is plenty of work to do. The optimistic interpretation is that if psychologists focus less on generating new ideas and more on organizing, synthesizing, measuring, and assessing constructs from existing ideas, we can keep busy for at least 100 years.


2018, Vol 49 (1), pp. 111-115
Author(s): Faiza M. Jamil

I appreciate the opportunity to respond to the thoughtful comments made by Alan Schoenfeld (2018) and Jon Star (2018) in their commentaries on replication studies in this issue of JRME, including their comments on our study of teacher expectancy effects (Jamil, Larsen, & Hamre, 2018). I have decided to write this rejoinder in the form of a personal reflection. As academics, we carry the tremendous burden of expertise, and perhaps that is partly why, as pointed out by Schoenfeld (2018), the academic reward system focuses so heavily on novelty and innovation. With our expertise, we are supposed to have all the answers, solve all the problems, and do so in brilliant, new ways. Replication studies are undervalued because they not only, by definition, recreate past research but, perhaps, also bring into question another scholar’s expertise. Star (2018) even states that one of the three criteria of an outstanding replication study is that it “convincingly shows that there is reason to believe that the results of the original study may be flawed” (p. 99). Although this rigorous examination is precisely the way to build trust in the quality of our findings and move the field forward, it is also what makes it challenging to have candid conversations about what we do not know.


2016, Vol 37 (4), pp. 223-234
Author(s): Bryan G. Cook, Lauren W. Collins, Sara C. Cook, Lysandra Cook

Replication research is essential to scientific knowledge. Reviews of replication studies often search electronically for replicat* as a textword, which does not identify studies that replicate previous research but do not self-identify as such. We examined whether the 83 intervention studies published in six non-categorical research journals in special education in 2013 and 2014 might be considered replications, regardless of whether they used the term replicat*, by applying criteria related to (a) the stated purpose of the study and (b) the comparison of the study’s findings with the results of previous studies. We coded 26 intervention studies as replications. Authors of 17 of these studies reported that their findings solely agreed with the results of the original study(ies). Author overlap occurred for half of the replication studies. The likelihood of findings being reproduced did not vary as a function of author overlap. We discuss implications and recommendations based on these findings.

