Open Science 2.0: Large-Scale Collaborative Education Research

Author(s):  
Matthew C. Makel ◽  
Kendal N. Smith ◽  
Matthew McBee ◽  
Scott J. Peters ◽  
Erin Miller

Concerns about the replication crisis and false findings have spread through a number of fields, including educational and psychological research. In some pockets, education has begun to adopt open science reforms that have proven useful in other fields. These include preregistration, open materials and data, and registered reports. These reforms are necessary and offer education research a path to increased credibility and social impact. But they all operate at the level of individual researchers’ behavior. In this paper, we discuss models of large-scale collaborative research practices and how they can be applied to educational research. The combination of large-scale collaboration with open and transparent research practices offers education researchers an exciting new method for falsifying theories, verifying what we know, resolving disagreements, and exploring new questions.

AERA Open ◽  
2019 ◽  
Vol 5 (4) ◽  
pp. 233285841989196
Author(s):  
Matthew C. Makel ◽  
Kendal N. Smith ◽  
Matthew T. McBee ◽  
Scott J. Peters ◽  
Erin M. Miller

Concerns about the replication crisis and unreliable findings have spread through several fields, including education and psychological research. In some areas of education, researchers have begun to adopt reforms that have proven useful in other fields. These include preregistration, open materials and data, and registered reports. These reforms offer education research a path toward increased credibility and social impact. In this article, we discuss models of large-scale collaborative research practices and how they can be applied to education research. We describe five types of large-scale collaboration: collaborations in which participating teams run different studies, multiteam collaboration projects, collaborative analysis, preregistered adversarial collaboration, and persistent collaboration. The combination of large-scale collaboration with open and transparent research practices offers education researchers an opportunity to test theories, verify what is known about a topic, resolve disagreements, and explore new questions.


2020 ◽  
Vol 43 (2) ◽  
pp. 91-107
Author(s):  
Matthew C. Makel ◽  
Kendal N. Smith ◽  
Erin M. Miller ◽  
Scott J. Peters ◽  
Matthew T. McBee

Existing research practices in gifted education leave considerable room for improvement if they are to provide useful, generalizable evidence to various stakeholders. In this article, we first review the field’s current research practices and consider the quality and utility of its research findings. Next, we discuss how open science practices increase the transparency of research so readers can more effectively evaluate its validity. Third, we introduce five large-scale collaborative research models that are being used in other fields and discuss how they could be implemented in gifted education research. Finally, we review potential challenges and limitations of implementing collaborative research models in gifted education. We believe greater use of large-scale collaboration will help the field overcome some of its methodological challenges and provide more precise and accurate information about gifted education.


2021 ◽  
Author(s):  
Bradley David McAuliff ◽  
Melanie B. Fessinger ◽  
Anthony Perillo ◽  
Jennifer Torkildson Perillo

As the field of psychology and law begins to embrace more transparent and accessible science, many questions arise about what open science actually is and how to do it. In this chapter, we contextualize this reform by examining fundamental concerns about psychological research—irreproducibility and replication failures, false-positive errors, and questionable research practices—that threaten its validity and credibility. Next, we turn to psychology’s response by reviewing the concept of open science and explaining how to implement specific practices—preregistration, registered reports, open materials/data/code, and open access publishing—designed to make research more transparent and accessible. We conclude by weighing the implications of open science for the field of psychology and law, specifically with respect to how we conduct and evaluate research, as well as how we train the next generation of psychological scientists and share scientific findings in applied settings.


2022 ◽  
Author(s):  
Bermond Scoggins ◽  
Matthew Peter Robertson

The scientific method is predicated on transparency -- yet the pace at which transparent research practices are being adopted by the scientific community is slow. The replication crisis in psychology showed that published findings employing statistical inference are threatened by undetected errors, data manipulation, and data falsification. To mitigate these problems and bolster research credibility, open data and preregistration have increasingly been adopted in the natural and social sciences. While many political science and international relations journals have committed to implementing these reforms, the extent of open science practices is unknown. We bring large-scale text analysis and machine learning classifiers to bear on the question. Using population-level data -- 93,931 articles across the top 160 political science and IR journals between 2010 and 2021 -- we find that approximately 21% of all statistical inference papers have open data, and 5% of all experiments are preregistered. Despite this shortfall, the example of leading journals in the field shows that change is feasible and can be effected quickly.
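
To illustrate the kind of automated screening such a project involves, here is a minimal sketch in R that flags articles whose full text mentions open data or preregistration. It is a simple dictionary-based stand-in for the machine-learning classifiers described above; the articles/ directory and the term lists are assumptions made for the example.

    # Simplified sketch, not the authors' pipeline: a dictionary-based screen for
    # open-data and preregistration statements. The articles/ directory and the
    # term lists below are illustrative assumptions.
    library(stringr)

    article_files <- list.files("articles", pattern = "\\.txt$", full.names = TRUE)
    read_text <- function(f) paste(readLines(f, warn = FALSE), collapse = " ")
    texts <- vapply(article_files, read_text, character(1))

    open_data_terms <- regex("dataverse|osf\\.io|data availability|replication (data|files)",
                             ignore_case = TRUE)
    prereg_terms <- regex("preregist|pre-regist|registered report|aspredicted",
                          ignore_case = TRUE)

    results <- data.frame(
      file      = basename(article_files),
      open_data = str_detect(texts, open_data_terms),
      prereg    = str_detect(texts, prereg_terms)
    )

    # Proportion of screened articles flagged for each practice
    colMeans(results[, c("open_data", "prereg")])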


2019 ◽  
Author(s):  
Dustin Fife ◽  
Joseph Lee Rodgers

In light of the “replication crisis,” some (e.g., Nelson, Simmons, & Simonsohn, 2018) advocate for greater policing and transparency in research methods. Others (Baumeister, 2016; Finkel, Eastwick, & Reis, 2017; Goldin-Meadow, 2016; Levenson, 2017) argue against rigid requirements that may inadvertently restrict discovery. We embrace both positions and argue that proper understanding and implementation of the well-established paradigm of Exploratory Data Analysis (EDA; Tukey, 1977) is necessary to push beyond the replication crisis. Unfortunately, many don’t realize EDA exists (Goldin-Meadow, 2016), fail to understand the philosophy and proper tools for exploration (Baumeister, 2016), or reject EDA as unscientific (Lindsay, 2015). This mistreatment is unfortunate and usually stems from misunderstanding the nature and goal of EDA. We develop an expanded typology that situates EDA, confirmatory data analysis (CDA), and rough CDA in the same framework as fishing, p-hacking, and HARKing, and argue that most, if not all, questionable research practices (QRPs) would be resolved by understanding and implementing the EDA/CDA gradient. We argue that most psychological research is “rough CDA,” which has often, if inadvertently, used the wrong tools. We conclude with guidelines for how these typologies can be integrated into a cumulative research program that is necessary to move beyond the replication crisis.


2018 ◽  
Author(s):  
Jennifer Bastart ◽  
Richard Anthony Klein ◽  
Hans IJzerman

Replication is one key component of a robust, cumulative knowledge base. It plays a critical function in assessing the stability of the scientific literature. Replication involves closely repeating the procedure of a study and determining whether the results are similar to the original. For decades, behavioral scientists were reluctant to publish replications, for reasons both epistemic and pragmatic. First, original studies were viewed as conclusive in most cases, and failures to replicate were often attributed to mistakes by the replicating researcher. In addition, failures to replicate may be caused by numerous factors; this inherent ambiguity made replications less desirable to journals. Replication successes, on the other hand, were expected and considered to contribute little beyond what was already known. Finally, editorial policies did not encourage the publication of replications, leaving the robustness of scientific findings largely unreported. A series of events ultimately led the research community to reconsider replication and research practices at large: the discovery of several cases of large-scale scientific misconduct (i.e., fraud), the invention and/or application of new statistical tools to assess strength of evidence, high-profile publications suggesting that some common practices may be less robust than previously assumed, failures to replicate some major findings of the field, and the creation of new online tools aimed at promoting transparency in the field. To deal with what is often regarded as the crisis of confidence, initiatives have been developed to increase the transparency of research practices, including (but not limited to) preregistration of studies, effect size predictions and sample size/power estimation, and, of course, replications. Replication projects themselves have evolved in quality, from early replications with samples as small as those of the problematically small original studies to large-scale “Many Labs” collaborative projects. Ultimately, the development of higher-quality replication projects and open science tools has led (and will continue to lead) to a clearer understanding of human behavior and cognition and has contributed to a clearer distinction between exploratory and confirmatory behavioral science. The current bibliography gives an overview of the history of replications, of the development of tools and guidelines, and of review papers discussing the theoretical implications of replications.


2020 ◽  
Author(s):  
Dwight Kravitz ◽  
Stephen Mitroff

Large-scale replication failures have shaken confidence in the social sciences, psychology in particular. Most researchers acknowledge the problem, yet there is widespread debate about the causes and solutions. Using “big data,” the current project demonstrates that unintended consequences of three common questionable research practices (retaining pilot data, adding data after checking for significance, and not publishing null findings) can explain the lion’s share of the replication failures. A massive dataset was randomized to create a true null effect between two conditions, and then these three practices were applied. These practices produced false discovery rates far greater than 5% (the generally accepted rate), and were strong enough to obscure, or even reverse, the direction of real effects. These demonstrations suggest that much of the replication crisis might be explained by simple, misguided experimental choices. This approach also produces empirically based corrections to account for these practices when they are unavoidable, providing a viable path forward.
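
The logic of this demonstration can be conveyed with a small simulation. The sketch below, in R, applies one of the three practices -- adding data after checking for significance -- to data generated under a true null effect; the starting sample size, step size, and stopping rule are illustrative assumptions rather than the study's actual parameters.

    # Minimal sketch, not the authors' analysis: optional stopping under a true null.
    # Starting n, step size, and maximum n are illustrative assumptions.
    set.seed(1)

    optional_stopping_significant <- function(n_start = 20, n_max = 60, step = 10) {
      x <- rnorm(n_start)
      y <- rnorm(n_start)              # both groups drawn from the same distribution
      repeat {
        p <- t.test(x, y)$p.value      # peek at the result
        if (p < .05 || length(x) >= n_max) return(p < .05)
        x <- c(x, rnorm(step))         # not significant yet: add more data and re-test
        y <- c(y, rnorm(step))
      }
    }

    # Rate of false discoveries across simulated null studies, typically well above
    # the nominal 5%
    mean(replicate(5000, optional_stopping_significant()))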


2019 ◽  
Author(s):  
Simon Dennis ◽  
Paul Michael Garrett ◽  
Hyungwook Yim ◽  
Jihun Hamm ◽  
Adam F Osth ◽  
...  

Pervasive internet and sensor technologies promise to revolutionize psychological science. However, the data collected using these technologies is often very personal -- indeed, the value of the data is often directly related to how personal it is. At the same time, driven by the replication crisis, there is a sustained push to publish data to open repositories. These movements are in fundamental conflict. In this paper, we propose a way to navigate this issue. We argue that there are significant advantages to be gained by ceding the ownership of data to the participants who generate it. We then provide desiderata for a privacy-preserving platform. In particular, we suggest that researchers should use an interface to perform experiments and run analyses rather than observing the stimuli themselves. We argue that this method not only improves privacy but will also encourage greater compliance with good research practices than is possible with open repositories.
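
To make the proposed interface concrete, the sketch below (in R) shows one way analyses might be mediated: the researcher submits an analysis specification and receives only an aggregate result, never the raw records. The function, the whitelist, and the field names are hypothetical illustrations, not the authors' platform.

    # Conceptual sketch only, not the authors' platform: an analysis interface that
    # returns aggregates while raw, participant-owned data stay server-side.
    # All names (run_remote_analysis, spec fields, the whitelist) are hypothetical.
    run_remote_analysis <- function(raw_data, spec) {
      allowed <- c("mean", "sd", "median")   # whitelist of aggregate statistics
      stopifnot(spec$stat %in% allowed, spec$variable %in% names(raw_data))
      f <- match.fun(spec$stat)
      # Only the aggregate result leaves the participant-controlled environment
      list(stat = spec$stat, variable = spec$variable, value = f(raw_data[[spec$variable]]))
    }

    # A researcher's request, submitted without ever seeing raw_data directly:
    # run_remote_analysis(raw_data, spec = list(stat = "mean", variable = "reaction_time"))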


2021 ◽  
Vol 4 (2) ◽  
pp. 251524592110178
Author(s):  
Kristina Wiebels ◽  
David Moreau

Containers have become increasingly popular in computing and software engineering and are gaining traction in scientific research. They allow packaging up all code and dependencies to ensure that analyses run reliably across a range of operating systems and software versions. Despite being a crucial component for reproducible science, containerization has yet to become mainstream in psychology. In this tutorial, we describe the logic behind containers, what they are, and the practical problems they can solve. We walk the reader through the implementation of containerization within a research workflow with examples using Docker and R. Specifically, we describe how to use existing containers, build personalized containers, and share containers alongside publications. We provide a worked example that includes all steps required to set up a container for a research project and can easily be adapted and extended. We conclude with a discussion of the possibilities afforded by the large-scale adoption of containerization, especially in the context of cumulative, open science, toward a more efficient and inclusive research ecosystem.
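
To give a flavor of the workflow such a tutorial covers, a project container for an R analysis can be described in a short Dockerfile like the sketch below. The base-image tag, the use of renv, and the file names (renv.lock, analysis.R) are illustrative assumptions, not the tutorial's exact setup.

    # Minimal sketch of a project container for an R analysis; image tag, renv usage,
    # and file names are illustrative assumptions.
    FROM rocker/r-ver:4.2.2

    # Copy the project (code, data, renv.lock) into the image
    WORKDIR /project
    COPY . /project

    # Restore the exact package versions recorded in renv.lock
    RUN Rscript -e "install.packages('renv'); renv::restore()"

    # Run the analysis when the container starts
    CMD ["Rscript", "analysis.R"]

Building the image with docker build -t myproject . and running it with docker run --rm myproject then reproduces the analysis on any machine with Docker installed; the image can also be pushed to a registry or archived with docker save and shared alongside the publication.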

