A Path to Greater Credibility: Large-Scale Collaborative Education Research

AERA Open ◽  
2019 ◽  
Vol 5 (4) ◽  
Article 233285841989196
Author(s):  
Matthew C. Makel ◽  
Kendal N. Smith ◽  
Matthew T. McBee ◽  
Scott J. Peters ◽  
Erin M. Miller

Concerns about the replication crisis and unreliable findings have spread through several fields, including education and psychological research. In some areas of education, researchers have begun to adopt reforms that have proven useful in other fields. These include preregistration, open materials and data, and registered reports. These reforms offer education research a path toward increased credibility and social impact. In this article, we discuss models of large-scale collaborative research practices and how they can be applied to education research. We discuss five types of large-scale collaboration: participating teams running different studies, multiteam collaboration projects, collaborative analysis, preregistered adversarial collaboration, and persistent collaboration. The combination of large-scale collaboration with open and transparent research practices offers education researchers the opportunity to test theories, verify what is known about a topic, resolve disagreements, and explore new questions.
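
Several of these collaboration types, multiteam projects in particular, pool evidence from independently run studies, and the standard aggregation tool is a random-effects meta-analysis across teams. Below is a minimal sketch of that aggregation step, assuming each team reports an effect size and standard error; the function name and all numbers are illustrative, not from the article.

```python
# Minimal sketch: pooling effect sizes from several collaborating teams
# with a DerSimonian-Laird random-effects meta-analysis. The effect sizes
# and standard errors below are illustrative placeholders, not real data.
import numpy as np

def random_effects_estimate(effects, ses):
    """Return the pooled effect and its standard error."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(ses, dtype=float) ** 2

    # Fixed-effect weights and pooled estimate (starting point for DL).
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)

    # DerSimonian-Laird estimate of between-team heterogeneity (tau^2).
    q = np.sum(w * (effects - fixed) ** 2)
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights incorporate the heterogeneity estimate.
    w_star = 1.0 / (variances + tau2)
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se_pooled

# Five hypothetical teams running the same preregistered study.
pooled, se = random_effects_estimate(
    effects=[0.31, 0.18, 0.42, 0.05, 0.27],
    ses=[0.10, 0.12, 0.15, 0.09, 0.11],
)
print(f"pooled effect = {pooled:.3f} +/- {1.96 * se:.3f}")
```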

2019 ◽  
Author(s):  
Matthew C. Makel ◽  
Kendal N. Smith ◽  
Matthew McBee ◽  
Scott J. Peters ◽  
Erin Miller

Concerns about the replication crisis and false findings have spread through a number of fields, including educational and psychological research. In some pockets of education, researchers have begun to adopt open science reforms that have proven useful in other fields. These include preregistration, open materials and data, and registered reports. These reforms are necessary and offer education research a path to increased credibility and social impact, but they all operate at the level of individual researchers’ behavior. In this paper, we discuss models of large-scale collaborative research practices and how they can be applied to educational research. The combination of large-scale collaboration with open and transparent research practices offers education researchers an exciting new method for falsifying theories, verifying what we know, resolving disagreements, and exploring new questions.


2020 ◽  
Vol 43 (2) ◽  
pp. 91-107
Author(s):  
Matthew C. Makel ◽  
Kendal N. Smith ◽  
Erin M. Miller ◽  
Scott J. Peters ◽  
Matthew T. McBee

Existing research practices in gifted education leave considerable room for improvement if they are to provide useful, generalizable evidence to various stakeholders. In this article, we first review the field’s current research practices and consider the quality and utility of its research findings. Next, we discuss how open science practices increase the transparency of research so readers can more effectively evaluate its validity. Third, we introduce five large-scale collaborative research models that are being used in other fields and discuss how they could be implemented in gifted education research. Finally, we review potential challenges and limitations to implementing collaborative research models in gifted education. We believe greater use of large-scale collaboration will help the field overcome some of its methodological challenges and provide more precise and accurate information about gifted education.


2019 ◽  
Author(s):  
Dustin Fife ◽  
Joseph Lee Rodgers

In light of the “replication crisis,” some (e.g., Nelson, Simmons, & Simonsohn, 2018) advocate for greater policing and transparency in research methods. Others (Baumeister, 2016; Finkel, Eastwick, & Reis, 2017; Goldin-Meadow, 2016; Levenson, 2017) argue against rigid requirements that may inadvertently restrict discovery. We embrace both positions and argue that proper understanding and implementation of the well-established paradigm of Exploratory Data Analysis (EDA; Tukey, 1977) is necessary to push beyond the replication crisis. Unfortunately, many researchers do not realize EDA exists (Goldin-Meadow, 2016), fail to understand the philosophy and proper tools of exploration (Baumeister, 2016), or reject EDA as unscientific (Lindsay, 2015). This mistreatment of EDA is unfortunate and usually stems from misunderstanding its nature and goals. We develop an expanded typology that situates EDA, confirmatory data analysis (CDA), and rough CDA in the same framework as fishing, p-hacking, and HARKing, and argue that most, if not all, questionable research practices (QRPs) would be resolved by understanding and implementing the EDA/CDA gradient. We argue that most psychological research is “rough CDA,” which has often, and inadvertently, used the wrong tools. We conclude with guidelines for integrating these typologies into the kind of cumulative research program needed to move beyond the replication crisis.
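
One concrete way to respect the EDA/CDA distinction, in the spirit of the gradient described above, is to explore freely on a random half of the data and run the confirmatory test only on the held-out half. A minimal sketch follows, using simulated data; the variable and column names are hypothetical, not the authors’ own procedure.

```python
# Minimal sketch of keeping exploration (EDA) and confirmation (CDA)
# on separate data, one concrete safeguard against p-hacking and HARKing.
# The DataFrame and column names are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(42)
df = pd.DataFrame({"x": rng.normal(size=200)})
df["y"] = 0.3 * df["x"] + rng.normal(size=200)

# Split once, up front: explore freely on one half, confirm on the other.
explore = df.sample(frac=0.5, random_state=1)
confirm = df.drop(explore.index)

# Exploration: look at anything, plot anything, generate hypotheses.
print(explore.corr())

# Confirmation: test only the hypothesis written down after exploration,
# on data the exploration never touched.
r, p = stats.pearsonr(confirm["x"], confirm["y"])
print(f"confirmatory r = {r:.3f}, p = {p:.4f}")
```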


2020 ◽  
Author(s):  
Dwight Kravitz ◽  
Stephen Mitroff

Large-scale replication failures have shaken confidence in the social sciences, psychology in particular. Most researchers acknowledge the problem, yet there is widespread debate about its causes and solutions. Using “big data,” the current project demonstrates that unintended consequences of three common questionable research practices (retaining pilot data, adding data after checking for significance, and not publishing null findings) can explain the lion’s share of replication failures. A massive dataset was randomized to create a true null effect between two conditions, and then these three practices were applied. They produced false discovery rates far greater than the generally accepted 5% rate and were strong enough to obscure, or even reverse, the direction of real effects. These demonstrations suggest that much of the replication crisis might be explained by simple, misguided experimental choices. The approach also yields empirically based corrections to account for these practices when they are unavoidable, providing a viable path forward.
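
The inflation from one of these practices, adding data after checking for significance (optional stopping), is straightforward to reproduce in simulation. The sketch below applies just that practice to a true null effect; the sample sizes, step size, and cap are illustrative choices, not the parameters of the article’s big-data analysis.

```python
# Minimal sketch of how optional stopping (adding data after checking
# for significance) inflates the false discovery rate above the nominal
# 5% even when the true effect is null.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_start, n_step, n_max = 5000, 20, 10, 60

false_positives = 0
for _ in range(n_sims):
    a = list(rng.normal(size=n_start))  # condition A, true null
    b = list(rng.normal(size=n_start))  # condition B, same distribution
    while True:
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:               # "significant": stop and report
            false_positives += 1
            break
        if len(a) >= n_max:        # give up at the sample-size cap
            break
        a.extend(rng.normal(size=n_step))  # otherwise collect more data
        b.extend(rng.normal(size=n_step))

print(f"false discovery rate: {false_positives / n_sims:.3f} (nominal: 0.05)")
```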


2021 ◽  
Author(s):  
Bradley David McAuliff ◽  
Melanie B. Fessinger ◽  
Anthony Perillo ◽  
Jennifer Torkildson Perillo

As the field of psychology and law begins to embrace more transparent and accessible science, many questions arise about what open science actually is and how to do it. In this chapter, we contextualize this reform by examining fundamental concerns about psychological research—irreproducibility and replication failures, false-positive errors, and questionable research practices—that threaten its validity and credibility. Next, we turn to psychology’s response by reviewing the concept of open science and explaining how to implement specific practices—preregistration, registered reports, open materials/data/code, and open access publishing—designed to make research more transparent and accessible. We conclude by weighing the implications of open science for the field of psychology and law, specifically with respect to how we conduct and evaluate research, as well as how we train the next generation of psychological scientists and share scientific findings in applied settings.


2018 ◽  
Vol 68 (12) ◽  
pp. 2857-2859
Author(s):  
Cristina Mihaela Ghiciuc ◽  
Andreea Silvana Szalontay ◽  
Luminita Radulescu ◽  
Sebastian Cozma ◽  
Catalina Elena Lupusoru ◽  
...  

There is increasing interest in the analysis of salivary biomarkers for medical practice. The objective of this article was to identify the specificity and sensitivity of quantification methods used in biosensors or portable devices for the determination of salivary cortisol and salivary α-amylase. No biosensors or portable devices for salivary α-amylase and cortisol are yet used on a large scale in clinical studies. Such devices would be useful for more real-time measurement in future psychological research.
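
For reference, the sensitivity and specificity used to compare such quantification methods reduce to simple confusion-matrix arithmetic. A minimal sketch follows; the counts are hypothetical, not values from the review.

```python
# Minimal sketch of the sensitivity/specificity arithmetic used to
# compare quantification methods. The confusion-matrix counts are
# hypothetical, not data from the article.
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return sensitivity, specificity

# E.g., a hypothetical cortisol biosensor validated against a lab assay:
sens, spec = sensitivity_specificity(tp=45, fn=5, tn=38, fp=12)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```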


2021 ◽  
pp. 001316442110086
Author(s):  
Tenko Raykov ◽  
Natalja Menold ◽  
Jane Leer

Two- and three-level designs in educational and psychological research can involve entire populations of Level-3 and possibly Level-2 units, such as schools and educational districts nested within a given state, or neighborhoods and counties in a state. Such designs are increasingly relevant in empirical research owing to the growing popularity of large-scale studies in these and cognate disciplines. The present note discusses a readily applicable procedure for point-and-interval estimation of the proportions of second- and third-level variance in such multilevel settings, which can also inform model choice for subsequent analyses of response variables of interest. The method is developed within the latent variable modeling framework, is readily used with widely available software, and is illustrated with an example.
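
In standard three-level random-intercept notation (the article’s exact model may differ), the variance proportions being estimated can be written as follows.

```latex
% Three-level random-intercept model: individual i in Level-2 unit j
% within Level-3 unit k (standard notation; the article's model may differ).
\[
Y_{ijk} = \gamma_{000} + u_{k} + v_{jk} + e_{ijk},
\qquad
u_{k} \sim N(0,\sigma^2_{3}),\;
v_{jk} \sim N(0,\sigma^2_{2}),\;
e_{ijk} \sim N(0,\sigma^2_{1}).
\]
% The Level-2 and Level-3 variance proportions are then
\[
\rho_{2} = \frac{\sigma^2_{2}}{\sigma^2_{1} + \sigma^2_{2} + \sigma^2_{3}},
\qquad
\rho_{3} = \frac{\sigma^2_{3}}{\sigma^2_{1} + \sigma^2_{2} + \sigma^2_{3}}.
\]
```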


Author(s):  
Sean Brantley ◽  
Michael Wilkinson ◽  
Jing Feng

This study investigates the usefulness of placebos and video games as psychological research tools. One proposed mechanism underlying the placebo effect is participants’ expectations. Such expectation effects exist in sports psychology and healthcare, but findings are inconsistent on whether similar effects influence participants’ cognitive performance. Concurrently, using video games as task environments is an emerging methodology for studying expertise and collecting large-scale behavioral data. This study therefore examines the expectancy effect induced by researcher instructions on in-game performance. In the instructional expectancy condition, in-game successes were framed using emoting (e.g., emoting under the pretense that performance would subsequently increase), compared against a control group. Preliminary results showed no evidence of a difference in in-game performance between expectancy conditions. Potential mechanisms behind this null result are discussed.
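
For context, the preliminary comparison described above amounts to a two-group test of in-game performance. A minimal sketch with simulated scores follows, assuming a simple between-subjects design; the data and parameters are hypothetical.

```python
# Minimal sketch of the two-group comparison implied above: in-game
# performance under an expectancy instruction versus a control group.
# The scores are simulated under a true null, echoing the null
# preliminary results; all values are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
expectancy = rng.normal(loc=100, scale=15, size=40)  # hypothetical scores
control = rng.normal(loc=100, scale=15, size=40)

# Welch's t-test avoids assuming equal variances across groups.
t, p = stats.ttest_ind(expectancy, control, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```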


2021 ◽  
Author(s):  
Robert Duiveman

Cities are turning to urban living labs and research consortia to co-create knowledge that can better enable them to address pervasive policy problems. Collaborations within such practices help researchers, officials, and local stakeholders find new ways of dealing with urban issues and develop new relations with one another. Interestingly, success in the latter is often closely related to accomplishing the former. Besides analysing this phenomenon in terms of learning, as is common, this paper also delves into the power dynamics involved in collaborative knowledge development. This perspective contributes to a better understanding of how puzzling and powering are simultaneously involved in making research relevant to policy-making. Through two collaborative research consortia in the Netherlands, we demonstrate how developing knowledge involves restructuring both the problems themselves and the urban practices involved in governing them. Collaborative research practices are predominantly concerned with learning as long as restructuring the problem leads to research findings that are meaningful to all actors. Power becomes manifest when one actor insists on restructuring (often reproducing) problems in a manner judged unacceptable by others. Analysis of the two case studies shows how the familiar three faces of power express themselves in collaborative knowledge development. We recommend that these new practices develop methods for orchestrating power alongside methodologies for structuring learning through collaborative research.

