The devil is in the details: Reporting and transparent research practices in sports medicine and orthopedic clinical trials

Author(s):  
Robert Schulz ◽  
Georg Langen ◽  
Robert Prill ◽  
Michael Cassel ◽  
Tracey Weissgerber

Introduction: While transparent reporting of clinical trials is essential to assess the risk of bias and translate research findings into clinical practice, earlier studies have shown that deficiencies are common. This study examined current clinical trial reporting and transparent research practices in sports medicine and orthopedics. Methods: The sample included clinical trials published in the top 25% of sports medicine and orthopedics journals over eight months. Two independent reviewers assessed pre-registration, open data and criteria related to scientific rigor, the study sample, and data analysis. Results: The sample included 163 clinical trials from 27 journals. While the majority of trials mentioned rigor criteria, essential details were often missing. Sixty percent (confidence interval [CI] 53-68%) of trials reported sample size calculations, but only 32% (CI 25-39%) justified the expected effect size. Few trials indicated the blinding status of all main stakeholders (4%; CI 1-7%). Only 18% (CI 12-24%) included information on randomization type, method, and concealed allocation. Most trials reported participants' sex/gender (95%; CI 92-98%) and information on inclusion and exclusion criteria (78%; CI 72-84%). Only 20% (CI 14-26%) of trials were pre-registered. No trials deposited data in open repositories. Conclusions: These results will aid the sports medicine and orthopedics community in developing tailored interventions to improve reporting. While authors typically mention blinding, randomization and other factors, essential details are often missing. Greater acceptance of open science practices, like pre-registration and open data, is needed. Although these practices have been widely encouraged, adoption remains limited; we therefore discuss systemic interventions that may improve clinical trial reporting. Registration: https://doi.org/10.17605/OSF.IO/9648H
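Only a third of trials justified the expected effect size, even though that single number drives the whole sample size calculation. As a minimal sketch of the standard two-group calculation (normal approximation, two-sided α = 0.05, power = 0.80; the function name and the example effect size are illustrative, not taken from the study):

```python
import math

def n_per_group(d, z_alpha=1.96, z_beta=0.8416):
    """Approximate sample size per group for a two-sample comparison.

    d is the expected standardized effect size (Cohen's d); z_alpha and
    z_beta are standard-normal quantiles for two-sided alpha = 0.05 and
    power = 0.80. The normal approximation slightly undercounts relative
    to an exact t-based calculation.
    """
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)
```

For a medium expected effect (d = 0.5) this gives 63 participants per group. Because n scales with 1/d², an unjustified, optimistic d can halve the required sample, which is exactly why the reviewers scored effect-size justification separately from the calculation itself.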

2021 ◽  
Vol 39 (15_suppl) ◽  
pp. 1082-1082
Author(s):  
Kinisha Gala ◽  
Ankit Kalucha ◽  
Samuel Martinet ◽  
Anushri Goel ◽  
Kalpana Devi Narisetty ◽  
...  

1082 Background: Primary endpoints of clinical trials frequently include subgroup-analyses. Several solid cancers such as aTNBC are heterogeneous, which can lead to unpredictable control arm performance impairing accurate assumptions for sample size calculations. We explore the value of a comprehensive clinical trial results repository in assessing control arm heterogeneity with aTNBC as the pilot. Methods: We identified P2/3 trials reporting median overall survival (mOS) and/or median progression-free survival (mPFS) in unselected aTNBC through a systematic search of PubMed, clinical trials databases and conference proceedings. Trial arms with sample sizes ≤25 or evaluating drugs no longer in development were excluded. Due to inconsistency among PD-L1 assays, PD-L1 subgroup analyses were not assessed separately. The primary aim was a descriptive analysis of control arm mOS and mPFS across all randomized trials in first line (1L) aTNBC. Secondary aims were to investigate time-to-event outcomes in control arms in later lines and to assess time-trends in aTNBC experimental and control arm outcomes. Results: We included 33 trials published between June 2013 and February 2021. The mOS of control arms in 1L was 18.7mo (range 12.6-22.8) across 5 trials with single-agent (nab-)paclitaxel [(n)P], and 18.1mo (similar range) for 7 trials including combination regimens (Table). The mPFS of control arms in 1L was 4.9mo (range 3.8-5.6) across 5 trials with single-agent (n)P, and 5.6mo (range 3.8-6.1) across 8 trials including combination regimens. Control arm mOS was 13.1mo (range 9.4-17.4) for 3 trials in first and second line (1/2L) and 8.7mo (range 6.7-10.8) across 5 trials in 2L and beyond. R² for the mOS best-fit lines across control and experimental arms over time was 0.09, 0.01 and 0.04 for 1L, 1/2L and 2L and beyond, respectively.
Conclusions: Median time-to-event outcomes of control arms in 1L aTNBC show considerable heterogeneity, even among trials with comparable regimens and large sample sizes. Disregarding important prognostic factors at stratification can lead to imbalances between arms, which may jeopardize accurate sample size calculations, trial results and interpretation. Optimizing stratification and assumptions for power calculations is of utmost importance in aTNBC and beyond. A digitized trial results repository with precisely defined patient populations and treatment settings could improve accuracy of assumptions during clinical trial design.[Table: see text]
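The R² values quoted above come from ordinary least-squares fits of outcome against publication time. A self-contained sketch of that computation (the year/outcome pairs in the test are hypothetical, not the repository's data):

```python
def r_squared(years, outcomes):
    # Least-squares linear fit of an outcome (e.g. control-arm mOS in
    # months) on publication year, returning the coefficient of
    # determination R^2 of the best-fit line.
    n = len(years)
    mx = sum(years) / n
    my = sum(outcomes) / n
    sxx = sum((x - mx) ** 2 for x in years)
    sxy = sum((x - mx) * (y - my) for x, y in zip(years, outcomes))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2
                 for x, y in zip(years, outcomes))
    ss_tot = sum((y - my) ** 2 for y in outcomes)
    return 1 - ss_res / ss_tot
```

R² values as low as 0.01-0.09 mean publication year explains almost none of the variation in arm outcomes, supporting the authors' point that the heterogeneity is not a simple secular trend.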


2020 ◽  
Vol 43 (2) ◽  
pp. 91-107
Author(s):  
Matthew C. Makel ◽  
Kendal N. Smith ◽  
Erin M. Miller ◽  
Scott J. Peters ◽  
Matthew T. McBee

Existing research practices in gifted education have many areas for potential improvement so that they can provide useful, generalizable evidence to various stakeholders. In this article, we first review the field’s current research practices and consider the quality and utility of its research findings. Next, we discuss how open science practices increase the transparency of research so readers can more effectively evaluate its validity. Third, we introduce five large-scale collaborative research models that are being used in other fields and discuss how they could be implemented in gifted education research. Finally, we review potential challenges and limitations to implementing collaborative research models in gifted education. We believe greater use of large-scale collaboration will help the field overcome some of its methodological challenges to help provide more precise and accurate information about gifted education.


eLife ◽  
2016 ◽  
Vol 5 ◽  
Author(s):  
Erin C McKiernan ◽  
Philip E Bourne ◽  
C Titus Brown ◽  
Stuart Buck ◽  
Amye Kenall ◽  
...  

Open access, open data, open source and other open scholarship practices are growing in popularity and necessity. However, widespread adoption of these practices has not yet been achieved. One reason is that researchers are uncertain about how sharing their work will affect their careers. We review literature demonstrating that open research is associated with increases in citations, media attention, potential collaborators, job opportunities and funding opportunities. These findings are evidence that open research practices bring significant benefits to researchers relative to more traditional closed practices.


2020 ◽  
Author(s):  
Emily Kate Farran ◽  
Priya Silverstein ◽  
Aminath A. Ameen ◽  
Iliana Misheva ◽  
Camilla Gilmore

Open research is best described as “an umbrella term used to refer to the concepts of openness, transparency, rigor, reproducibility, replicability, and accumulation of knowledge” (Crüwell et al., 2019, p. 3). Although many open research practices have commonly been discussed under the term “open science”, open research applies to all disciplines. If the concept of open research is new to you, it can be difficult to determine how to apply open research practices to your own research. The aim of this document is to provide resources and examples of open research practices that are relevant to your discipline. The document lists case studies of open research per discipline, and resources per discipline (organised as: general, open methods, open data, open output and open education).


2019 ◽  
Vol 6 (12) ◽  
pp. 190738 ◽  
Author(s):  
Jerome Olsen ◽  
Johanna Mosen ◽  
Martin Voracek ◽  
Erich Kirchler

The replicability of research findings has recently been disputed across multiple scientific disciplines. In constructive reaction, the research culture in psychology is facing fundamental changes, but investigations of research practices that led to these improvements have almost exclusively focused on academic researchers. By contrast, we investigated the statistical reporting quality and selected indicators of questionable research practices (QRPs) in psychology students' master's theses. In a total of 250 theses, we investigated utilization and magnitude of standardized effect sizes, along with statistical power, the consistency and completeness of reported results, and possible indications of p-hacking and further testing. Effect sizes were reported for 36% of focal tests (median r = 0.19), and only a single formal power analysis was reported for sample size determination (median observed power 1 − β = 0.67). Statcheck revealed inconsistent p-values in 18% of cases, while 2% led to decision errors. There were no clear indications of p-hacking or further testing. We discuss our findings in the light of promoting open science standards in teaching and student supervision.
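Statcheck's consistency check recomputes the p-value from the reported test statistic and compares it with the reported p. A simplified sketch of the idea, restricted to z statistics so it needs only the standard library (statcheck itself also handles t, F, r and χ² results; the tolerance below is an illustrative choice, not statcheck's rounding logic):

```python
import math

def two_tailed_p_from_z(z):
    # Two-tailed p-value for a reported z statistic, via the
    # complementary error function: p = erfc(|z| / sqrt(2)).
    return math.erfc(abs(z) / math.sqrt(2))

def is_consistent(z, reported_p, tol=0.005):
    # Flag a reported p-value that does not match the recomputed one,
    # in the spirit of statcheck's consistency check.
    return abs(two_tailed_p_from_z(z) - reported_p) <= tol
```

A reported "z = 1.96, p = .05" passes, while "z = 1.96, p = .01" would be flagged as inconsistent, the kind of mismatch found in 18% of the theses here.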


Author(s):  
Bonnie MacKellar ◽  
Christina Schweikert ◽  
Soon Ae Chun

Patients often want to participate in relevant clinical trials for new or more effective alternative treatments. The clinical trial search system made available by the NIH is a step forward in supporting patients' decision making, but it is difficult to use and requires the patient to sift through lengthy text descriptions for relevant information. In addition, patients deciding whether to pursue a given trial often want more information, such as drug information. The authors' overall aim is to develop an intelligent patient-centered clinical trial decision support system. Their approach is to integrate Open Data sources related to clinical trials using the Semantic Web's Linked Data framework. The Linked Data representation, in terms of RDF triples, allows the development of a clinical trial knowledge base that includes entities from different open data sources and relationships among those entities. The authors consider Open Data sources such as the clinical trials provided by the NIH as well as the drug side effects dataset SIDER. They use UMLS (Unified Medical Language System) to provide consistent semantics and ontological knowledge for clinical trial related entities and terms. This semantic approach is a step toward a cognitive system that provides not only patient-centered integrated data search but also automated reasoning in search, analysis and decision making using the semantic relationships embedded in the Linked Data. The authors present the development of their integrated clinical trial knowledge base and a prototype patient-centered Clinical Trial Decision Support System that includes semantic search and query with reasoning ability, as well as semantic-link browsing, where exploring one concept leads easily to related concepts via links that provide visual search for end users.
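The Linked Data integration described here comes down to representing each fact as a subject-predicate-object triple and traversing links across sources. A toy sketch in plain Python (the trial and drug identifiers are invented for illustration; a real system would use RDF IRIs, a triple store such as rdflib, and SPARQL queries):

```python
# Hypothetical triples linking a ClinicalTrials.gov-style trial record
# to a drug, and the drug to a SIDER-style side effect.
triples = [
    ("trial:NCT001", "tests_drug", "drug:ExampleDrug"),
    ("drug:ExampleDrug", "has_side_effect", "Nausea"),
    ("trial:NCT001", "studies_condition", "condition:X"),
]

def query(subject=None, predicate=None, obj=None):
    # Pattern-match over the triples; None acts as a wildcard,
    # playing the role of a SPARQL variable.
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

def side_effects_for_trial(trial):
    # Two-hop traversal: trial -> drug -> side effects, the kind of
    # semantic-link browsing the authors describe.
    effects = []
    for _, _, drug in query(trial, "tests_drug"):
        effects += [o for _, _, o in query(drug, "has_side_effect")]
    return effects
```

The two-hop query is the point: once trial and drug data share identifiers, a question like "what side effects might this trial's drug have?" becomes a graph traversal rather than a manual search across two websites.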


2020 ◽  
Vol 32 (4) ◽  
pp. 183-190
Author(s):  
William Newton Suter

This article focuses on questionable research practices (QRPs) that bias findings and conclusions. QRPs cast doubt on the credibility of research findings in home health and nursing science in general. To the extent that they are permitted to persist, they undermine the research integrity of all researchers. Each QRP is defined via bundles of specific research behaviors with unifying labels, including “deceptive mirages” and “phantom sharpshooters” among others. These questionable behaviors are described in ways that enhance research understanding and enable careful home health nurse researchers to avoid QRPs by applying higher standards of scientific rigor. QRPs impede scientific progress by generating false conclusions. They threaten the validity and dependability of scientific research and confuse other researchers who practice rigorous science and maintain integrity. QRPs also clog the literature with studies that cannot be replicated. When researchers engage in QRPs at the expense of rigor, overall trust in the scientific knowledge base erodes.


2020 ◽  
Vol 54 (22) ◽  
pp. 1365-1371
Author(s):  
Fionn Büttner ◽  
Elaine Toomey ◽  
Shane McClean ◽  
Mark Roe ◽  
Eamonn Delahunt

Questionable research practices (QRPs) are intentional and unintentional practices that can occur when designing, conducting, analysing, and reporting research, producing biased study results. Sport and exercise medicine (SEM) research is vulnerable to the same QRPs that pervade the biomedical and psychological sciences, producing false-positive results and inflated effect sizes. Approximately 90% of biomedical research reports supported study hypotheses, provoking suspicion about the field-wide presence of systematic biases to facilitate study findings that confirm researchers’ expectations. In this education review, we introduce three common QRPs (ie, HARKing, P-hacking and Cherry-picking), perform a cross-sectional study to assess the proportion of original SEM research that reports supported study hypotheses, and draw attention to existing solutions and resources to overcome QRPs that manifest in exploratory research. We hypothesised that ≥ 85% of original SEM research studies would report supported study hypotheses. Two independent assessors systematically identified, screened, included, and extracted study data from original research articles published between 1 January 2019 and 31 May 2019 in the British Journal of Sports Medicine, Sports Medicine, the American Journal of Sports Medicine, and the Journal of Orthopaedic & Sports Physical Therapy. We extracted data relating to whether studies reported that the primary hypothesis was supported or rejected by the results. Study hypotheses, methodologies, and analysis plans were preregistered at the Open Science Framework. One hundred and twenty-nine original research studies reported at least one study hypothesis, of which 106 (82.2%) reported hypotheses that were supported by study results. Of 106 studies reporting that primary hypotheses were supported by study results, 75 (70.8%) studies reported that the primary hypothesis was fully supported by study results. 
The primary study hypothesis was partially supported by study results in 28 (26.4%) studies. We detail open science practices and resources that aim to safeguard against QRPs that belie the credibility and replicability of original research findings.
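The headline proportion (106 of 129 studies, 82.2%) can be given an uncertainty interval; the abstract does not report one, so the Wilson score interval below is an illustrative add-on, not part of the authors' analysis:

```python
import math

def wilson_ci(successes, n, z=1.96):
    # 95% Wilson score interval for a binomial proportion; better
    # behaved than the naive Wald interval for proportions near 0 or 1.
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half
```

For 106/129 this gives roughly 75% to 88%, an interval that still straddles the ≥85% rate the authors hypothesised, which is worth keeping in mind when reading the point estimate of 82.2%.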


Author(s):  
Toby Prike

Recent years have seen large changes to research practices within psychology and a variety of other empirical fields in response to the discovery (or rediscovery) of the pervasiveness and potential impact of questionable research practices, coupled with well-publicised failures to replicate published findings. In response to this, and as part of a broader open science movement, a variety of changes to research practice have started to be implemented, such as publicly sharing data, analysis code, and study materials, as well as the preregistration of research questions, study designs, and analysis plans. This chapter outlines the relevance and applicability of these issues to computational modelling, highlighting the importance of good research practices for modelling endeavours, as well as the potential of provenance modelling standards, such as PROV, to help discover and minimise the extent to which modelling is impacted by unreliable research findings from other disciplines.


2019 ◽  
Author(s):  
Bryan G. Cook ◽  
John Wills Lloyd ◽  
William Therrien

Students with emotional and behavioral disorders (EBD) present some of the greatest challenges faced by educators, and experience some of the most problematic outcomes. To increase the likelihood that students with EBD will be successful in school and in life, practitioners should implement effective interventions. Trustworthy research is the primary means to identify effective practices. Open science can be used to help verify research findings as trustworthy, as well as improve their accessibility. In this article, we discuss the open science movement and describe five open-science practices (i.e., preregistration, Registered Reports, open data and materials, open access and preprints, and open review) that may help increase the trustworthiness, efficiency, and impact of EBD research. We argue that the implementation of these practices may increase the field’s capacity to identify and verify truly effective practices, and facilitate broad accessibility of the research for all stakeholders; thereby improving policies and instructional practice for students with EBD.

