Collaboration in Giftedness and Talent Development Research

2020 ◽  
Vol 43 (2) ◽  
pp. 91-107
Author(s):  
Matthew C. Makel ◽  
Kendal N. Smith ◽  
Erin M. Miller ◽  
Scott J. Peters ◽  
Matthew T. McBee

Existing research practices in gifted education leave considerable room for improvement if they are to provide useful, generalizable evidence to stakeholders. In this article, we first review the field’s current research practices and consider the quality and utility of its research findings. Next, we discuss how open science practices increase the transparency of research so readers can more effectively evaluate its validity. Third, we introduce five large-scale collaborative research models being used in other fields and discuss how they could be implemented in gifted education research. Finally, we review potential challenges and limitations of implementing collaborative research models in gifted education. We believe greater use of large-scale collaboration will help the field overcome some of its methodological challenges and provide more precise and accurate information about gifted education.

2019 ◽  
Vol 30 (2) ◽  
pp. 111-123
Author(s):  
Kendal N. Smith ◽  
Matthew C. Makel

In response to concerns about the credibility of many published research findings, open science reforms such as preregistration, data sharing, and alternative forms of publication are increasingly being adopted across scientific communities. Although journals in giftedness and advanced academics research have already implemented several of these practices, they remain unfamiliar to some researchers. In this informal conversation, Kendal Smith and Matthew Makel discuss how they came to know and use open science practices, the values of open science, benefits and objections, and their future aspirations for open science practices in gifted education research. Their conversation aims to help make open science practices more understandable and actionable for both early career and established researchers.


2019 ◽  
Author(s):  
Matthew C. Makel ◽  
Kendal N. Smith ◽  
Erin M. Miller ◽  
Scott J. Peters ◽  
Matthew T. McBee

Students have numerous opportunities to learn outside the classroom. However, with great choice comes great variability in both quality and intent. To evaluate the effectiveness of out-of-school programs generally, as well as individual programs specifically, we must know their intended effects (program goals) as well as their actual effects (program outcomes). Despite numerous existing evaluations and even more numerous claims, many programs and evaluations lack strong empirical support, suffer from biases, and are vulnerable to perverse incentives. In this paper, we propose greater adoption of large-scale collaborative research to provide more precise and accurate information about the effects of participation and their heterogeneity across contexts and demographic groups, to resolve disagreements, and to explore new questions with greater confidence in previous findings. Implementing the large-scale collaborative research practices that have helped catalyze a credibility revolution in psychology will help stakeholders make more informed decisions.


2019 ◽  
Author(s):  
Matthew C. Makel ◽  
Kendal N. Smith ◽  
Matthew T. McBee ◽  
Scott J. Peters ◽  
Erin M. Miller

Concerns about the replication crisis and false findings have spread through a number of fields, including educational and psychological research. In some pockets, education has begun to adopt open science reforms that have proven useful in other fields. These include preregistration, open materials and data, and registered reports. These reforms are necessary and offer education research a path to increased credibility and social impact. But they all operate at the level of individual researchers’ behavior. In this paper, we discuss models of large-scale collaborative research practices and how they can be applied to educational research. The combination of large-scale collaboration with open and transparent research practices offers education researchers an exciting new method for falsifying theories, verifying what we know, resolving disagreements, and exploring new questions.


2021 ◽  
Author(s):  
Jesse Fox ◽  
Katy E Pearce ◽  
Adrienne L Massanari ◽  
Julius Matthew Riles ◽  
Łukasz Szulc ◽  
...  

The open science (OS) movement has advocated for increased transparency in certain aspects of research. Communication is taking its first steps toward OS as some journals have adopted OS guidelines codified by another discipline. We find this pursuit troubling because OS prioritizes openness while insufficiently addressing essential ethical principles: respect for persons, beneficence, and justice. Some recommended open science practices increase the potential for harm for marginalized participants, communities, and researchers. We elaborate how OS can serve as a marginalizing force within academia and the research community, as it overlooks the needs of marginalized scholars and excludes some forms of scholarship. We challenge the current instantiation of OS and propose a divergent agenda for the future of Communication research centered on ethical, inclusive research practices.


2021 ◽  
Author(s):  
Kathryn R. Wentzel

In this article, I comment on the potential benefits and limitations of open science reforms for improving the transparency and accountability of research and for enhancing the credibility of research findings within communities of policy and practice. First, I discuss the role of replication and reproducibility of research in promoting better quality studies, the identification of generalizable principles, and relevance for practitioners and policymakers. Second, I suggest that greater attention to theory might contribute to the impact of open science practices, and I discuss ways in which theory has implications for sampling, measurement, and research design. Ambiguities concerning the aims of preregistration and registered reports are also highlighted. In conclusion, I discuss structural roadblocks to open science reform and reflect on the relevance of these reforms for educational psychology.


2021 ◽  
Author(s):  
Robert Duiveman

Cities are turning to urban living labs and research consortia to co-create knowledge that can better enable them to address pervasive policy problems. Collaborations within such practices help researchers, officials, and local stakeholders find new ways of dealing with urban issues and develop new relations with one another. Interestingly, success in the latter is often closely related to accomplishing the former. In addition to analysing this phenomenon in terms of learning, as is common, this paper also delves into the power dynamics involved in collaborative knowledge development. This perspective contributes to a better understanding of how puzzling and powering are simultaneously involved in making research relevant to policy-making. By presenting two collaborative research consortia in the Netherlands, we demonstrate how developing knowledge involves restructuring both problems and the urban practices involved in governing those problems. Collaborative research practices are predominantly concerned with learning as long as restructuring the problem leads to research findings that are meaningful to all actors. Power becomes manifest when one actor insists on restructuring (often reproducing) problems in a manner judged unacceptable by others. Analysis of the two case studies shows how the familiar three faces of power express themselves in collaborative knowledge development. We recommend that these new practices be supported not only by a methodology for structuring learning through collaborative research but also by methods for better orchestrating power.


2021 ◽  
Author(s):  
Robert Schulz ◽  
Georg Langen ◽  
Robert Prill ◽  
Michael Cassel ◽  
Tracey Weissgerber

Introduction: While transparent reporting of clinical trials is essential for assessing the risk of bias and translating research findings into clinical practice, earlier studies have shown that deficiencies are common. This study examined current clinical trial reporting and transparent research practices in sports medicine and orthopedics. Methods: The sample included clinical trials published in the top 25% of sports medicine and orthopedics journals over eight months. Two independent reviewers assessed pre-registration, open data, and criteria related to scientific rigor, the study sample, and data analysis. Results: The sample included 163 clinical trials from 27 journals. While the majority of trials mentioned rigor criteria, essential details were often missing. Sixty percent (confidence interval [CI] 53-68%) of trials reported sample size calculations, but only 32% (CI 25-39%) justified the expected effect size. Few trials indicated the blinding status of all main stakeholders (4%; CI 1-7%). Only 18% (CI 12-24%) included information on randomization type, method, and concealed allocation. Most trials reported participants' sex/gender (95%; CI 92-98%) and information on inclusion and exclusion criteria (78%; CI 72-84%). Only 20% (CI 14-26%) of trials were pre-registered. No trials deposited data in open repositories. Conclusions: These results will aid the sports medicine and orthopedics community in developing tailored interventions to improve reporting. While authors typically mention blinding, randomization, and other rigor criteria, essential details are often missing. Greater acceptance of open science practices, like pre-registration and open data, is needed. Although these practices have been widely encouraged, uptake remains low, so we discuss systemic interventions that may improve clinical trial reporting. Registration: https://doi.org/10.17605/OSF.IO/9648H
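
The proportions above are reported with 95% confidence intervals. As a minimal sketch of how such an interval can be reproduced, assume the reported 60% corresponds to 98 of the 163 trials and that a simple normal-approximation (Wald) interval was used; the abstract states neither detail, so both are assumptions.

```python
# Minimal sketch: 95% confidence interval for a reported proportion.
# Assumes 98/163 trials (~60%) and a Wald interval; neither is stated
# in the abstract above.
from statsmodels.stats.proportion import proportion_confint

count, nobs = 98, 163  # assumed counts behind the reported 60%
low, high = proportion_confint(count, nobs, alpha=0.05, method="normal")
print(f"{count / nobs:.0%} (CI {low:.0%}-{high:.0%})")  # 60% (CI 53%-68%)
```

With these assumed counts, the Wald interval reproduces the reported 53-68%; a Wilson score interval (method="wilson") would give a slightly different range.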


2022 ◽  
Author(s):  
Bermond Scoggins ◽  
Matthew Peter Robertson

The scientific method is predicated on transparency, yet the pace at which transparent research practices are being adopted by the scientific community is slow. The replication crisis in psychology showed that published findings employing statistical inference are threatened by undetected errors, data manipulation, and data falsification. To mitigate these problems and bolster research credibility, open data and preregistration have increasingly been adopted in the natural and social sciences. While many political science and international relations journals have committed to implementing these reforms, the extent of open science practices is unknown. We bring large-scale text analysis and machine learning classifiers to bear on the question. Using population-level data (93,931 articles across the top 160 political science and IR journals between 2010 and 2021), we find that approximately 21% of all statistical inference papers have open data, and 5% of all experiments are preregistered. Despite this shortfall, the example of leading journals in the field shows that change is feasible and can be effected quickly.
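
As a toy illustration of the kind of text classification this entails, a bag-of-words classifier can be trained to flag open-data statements in article text. The authors' actual features, training data, and models are not described here, so everything below, including the labeled examples, is an invented generic sketch.

```python
# Toy sketch: flag open-data statements with tf-idf + logistic regression.
# Labeled examples are invented for illustration; this is not the study's
# actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Replication data are available on the Harvard Dataverse.",
    "All data and code are posted in a public OSF repository.",
    "Transcripts are confidential and cannot be shared.",
    "We thank seminar participants for helpful comments.",
]
train_labels = [1, 1, 0, 0]  # 1 = open-data statement, 0 = not

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

test = ["Data are available in a public repository."]
print(clf.predict(test), clf.predict_proba(test))
```

At the scale of the study (93,931 articles), a classifier like this would be trained on a hand-labeled sample and then applied to the full corpus.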


2019 ◽  
Vol 6 (12) ◽  
pp. 190738 ◽  
Author(s):  
Jerome Olsen ◽  
Johanna Mosen ◽  
Martin Voracek ◽  
Erich Kirchler

The replicability of research findings has recently been disputed across multiple scientific disciplines. In constructive reaction, the research culture in psychology is facing fundamental changes, but investigations of research practices that led to these improvements have almost exclusively focused on academic researchers. By contrast, we investigated the statistical reporting quality and selected indicators of questionable research practices (QRPs) in psychology students' master's theses. In a total of 250 theses, we investigated utilization and magnitude of standardized effect sizes, along with statistical power, the consistency and completeness of reported results, and possible indications of p-hacking and further testing. Effect sizes were reported for 36% of focal tests (median r = 0.19), and only a single formal power analysis was reported for sample size determination (median observed power 1 − β = 0.67). Statcheck revealed inconsistent p-values in 18% of cases, while 2% led to decision errors. There were no clear indications of p-hacking or further testing. We discuss our findings in the light of promoting open science standards in teaching and student supervision.
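
Statcheck recomputes the p-value implied by a reported test statistic and its degrees of freedom and compares it with the reported p-value; a mismatch large enough to flip significance counts as a decision error. Below is a minimal sketch of that logic for two-tailed t-tests only, using a hypothetical check_t_report helper; statcheck's actual implementation also covers F, r, chi-square, and z tests.

```python
# Minimal statcheck-style consistency check for two-tailed t-tests.
# Hypothetical helper for illustration; not statcheck's actual code.
from scipy import stats

def check_t_report(t: float, df: int, reported_p: float,
                   alpha: float = 0.05, tol: float = 0.0005) -> str:
    recomputed = 2 * stats.t.sf(abs(t), df)  # two-tailed p-value
    if abs(recomputed - reported_p) <= tol:
        return "consistent"
    if (recomputed < alpha) != (reported_p < alpha):
        # Inconsistency large enough to flip significance at alpha.
        return f"decision error (recomputed p = {recomputed:.4f})"
    return f"inconsistent (recomputed p = {recomputed:.4f})"

# t(28) = 2.10 implies p ≈ .045, so a reported p = .03 is merely
# inconsistent, while a reported p = .06 flips significance: a decision error.
print(check_t_report(t=2.10, df=28, reported_p=0.03))
print(check_t_report(t=2.10, df=28, reported_p=0.06))
```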

