Improving ClinicalTrials.gov compliance: A coordinated effort for success

2018 ◽  
Vol 2 (S1) ◽  
pp. 83-83
Author(s):  
Scott Patton ◽  
Elaine Basaca ◽  
Jennifer S. Brown

OBJECTIVES/SPECIFIC AIMS: ClinicalTrials.gov (CTgov) compliance has received much international attention as a significant regulatory, scientific, and ethical responsibility. Compliance rates for both industry and academia are held up for scrutiny by transparency advocates, but solutions for achieving compliance in academia, with its focus on innovation and its many disciplines, have proven significantly more complex than those employed by industry. Added challenges for academic medical centers (AMCs) include both increased researcher responsibilities under the new NIH Policy on Clinical Trial Dissemination and system-wide changes to requirements for “clinical trial only” Funding Opportunity Announcements. At Stanford University, a multifaceted approach to improving CTgov outreach, education, and reporting led to a dramatic turnaround in compliance over a 17-month period. METHODS/STUDY POPULATION: Stanford University School of Medicine’s Senior Associate Dean for Research, who is also PI of Stanford’s CTSA, applied a 3-part strategy to address unacceptable rates of results reporting. The strategy included (1) regular compliance reports to department chairs, (2) establishment of a central office, Clinical Research Quality (CRQ), to provide consistent training and support, and (3) interdepartmental cooperation across the school and university. Compliance reports identifying all studies late for results reporting were sent monthly to all department chairs, with heightened focus on the departments that conduct the most clinical trials. Senior leadership described the process in executive meetings and set improvement goals. Reports included multiple data points to help departments mobilize resources and identify trends; halfway through the period, soon-to-be-late study records were added. CRQ hired 2 full-time employees tasked with all aspects of managing the CTgov process and designed a portfolio of activities including: (1) a master list of all Stanford studies in the CTgov system; (2) a process for generating and distributing monthly reports; (3) an education program; and (4) support services, including an administrator working group. RESULTS/ANTICIPATED RESULTS: Since December 2015, Stanford has had the second-highest compliance rate improvement (+62%) among the 20 schools of medicine that receive the most NIH funding. DISCUSSION/SIGNIFICANCE OF IMPACT: Managing ClinicalTrials.gov compliance requires a high degree of technical knowledge of regulations, NIH policy, and the CTgov system, but without an equally high degree of engagement from senior leadership, these results would not have been achieved. Central resources are critical for setting policy and establishing consistent processes, but without regular and repeated interactions among faculty and a multitude of administrators and staff, more central resources would have been required. By working simultaneously “down from the top” and “up from the bottom,” communication and education expanded rapidly, ineffective efforts were quickly transformed, and what began as an irritating and cumbersome problem became an occasion for collaboration and celebration of increased transparency.
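The monthly report generation described in the methods can be sketched in code. The following is a minimal illustration only, assuming a spreadsheet-style export of an institution's registered studies and the general FDAAA convention that results are due within 12 months of the primary completion date; the column names, the example records, and the 60-day "due soon" window are hypothetical, not Stanford's actual process.

```python
from datetime import date, timedelta

import pandas as pd

# Hypothetical export of an institution's ClinicalTrials.gov records;
# NCT numbers, departments, and dates are made up for illustration.
studies = pd.DataFrame({
    "nct_id": ["NCT00000001", "NCT00000002", "NCT00000003"],
    "department": ["Medicine", "Surgery", "Medicine"],
    "primary_completion_date": pd.to_datetime(
        ["2017-01-15", "2017-10-01", "2016-06-30"]),
    "results_submitted": [False, False, True],
})

today = pd.Timestamp(date.today())
# Under FDAAA, results are generally due 12 months after the primary
# completion date; the 60-day early-warning window is an assumption.
due_date = studies["primary_completion_date"] + pd.DateOffset(months=12)

studies["status"] = "compliant"
studies.loc[~studies["results_submitted"] & (due_date < today), "status"] = "late"
studies.loc[
    ~studies["results_submitted"]
    & (due_date >= today)
    & (due_date < today + timedelta(days=60)),
    "status",
] = "due soon"

# One report per department chair, mirroring the monthly distribution step.
for dept, report in studies[studies["status"] != "compliant"].groupby("department"):
    print(f"--- {dept} ---")
    print(report[["nct_id", "primary_completion_date", "status"]]
          .to_string(index=False))
```

Flagging soon-to-be-late records alongside late ones corresponds to the change made halfway through the reporting period, giving departments a window to act before a record becomes non-compliant.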

BMJ Open ◽  
2017 ◽  
Vol 7 (9) ◽  
pp. e015110 ◽  
Author(s):  
Scott M Lassman ◽  
Olivia M Shopshear ◽  
Ina Jazic ◽  
Jocelyn Ulrich ◽  
Jeffrey Francer

Objective To evaluate the accuracy of a 2015 cross-sectional analysis published in BMJ Open, which reported that pharmaceutical industry compliance with clinical trial registration and results reporting requirements under US law was suboptimal and varied widely among companies. Design We performed a reassessment of the data reported in Miller et al to evaluate whether the statutory compliance analyses and conclusions were valid. Data sources Information from the Dryad Digital Repository, ClinicalTrials.gov, Drugs@FDA and direct communications with sponsors. Main outcome measures Compliance with the clinical trial registration and results reporting requirements under the Food and Drug Administration Amendments Act (FDAAA). Results Industry compliance with FDAAA disclosure requirements was notably higher than reported by Miller et al. Among trials subject to FDAAA, Miller et al reported that, per drug, a median of 67% (middle 50% range: 0%–100%) of trials fully complied with registration and results reporting requirements. On reanalysis of the data, we found that a median of 100% (middle 50% range: 93%–100%) of clinical trials for a particular drug fully complied with the law. When looking at overall compliance at the trial level, our reassessment yields 94% timely registration and 90% timely results reporting among the 49 eligible trials, and an overall FDAAA compliance rate of 86%. Conclusions The claim by Miller et al that industry compliance is below legal standards is based on an analysis that relies on an incomplete dataset and an interpretation of FDAAA that requires disclosure of study results for drugs that have not yet been approved for any indication. On reanalysis using a different interpretation of FDAAA that focuses on whether results were disclosed within 30 days of drug approval, we found that industry compliance with US statutory disclosure requirements for the 15 reviewed drugs was consistently high.
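The trial-level and per-drug summary statistics in the reanalysis are simple aggregations of per-trial compliance flags. The sketch below shows that arithmetic; the trial records are placeholders for illustration and are not the Lassman et al or Miller et al data.

```python
import statistics

# Placeholder per-trial records: (drug, registered on time, results on time).
# These values are illustrative only; they are not the reviewed dataset.
trials = [
    ("drug_a", True, True),
    ("drug_a", True, True),
    ("drug_b", True, False),
    ("drug_b", True, True),
    ("drug_c", False, True),
]

# Trial-level rates, analogous to the 49-trial reassessment figures.
n = len(trials)
timely_registration = sum(reg for _, reg, _ in trials) / n
timely_results = sum(res for _, _, res in trials) / n
fully_compliant = sum(reg and res for _, reg, res in trials) / n
print(f"timely registration: {timely_registration:.0%}")
print(f"timely results:      {timely_results:.0%}")
print(f"overall compliance:  {fully_compliant:.0%}")

# Per-drug compliance, summarized as the median across drugs
# (the statistic both analyses report).
by_drug = {}
for drug, reg, res in trials:
    by_drug.setdefault(drug, []).append(reg and res)
per_drug = [sum(flags) / len(flags) for flags in by_drug.values()]
print(f"median per-drug compliance: {statistics.median(per_drug):.0%}")
```

The per-drug median can look very different from the pooled trial-level rate when drugs contribute unequal numbers of trials, which is one reason the two papers summarize compliance differently.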


2021 ◽  
Vol 39 (3_suppl) ◽  
pp. 468-468
Author(s):  
Nirosha Perera ◽  
Marija Kamceva ◽  
Jolie Z. Shen ◽  
Siyou Song ◽  
Jessica Steinberg ◽  
...  

468 Background: The burden of gastrointestinal (GI) disease is high, costing over $97 billion annually in the United States (U.S.) alone. Yet the methodological rigor and characteristics of trials leading to guideline development are rarely explored. In 2007, the U.S. mandated that all interventional studies (Phase II-IV) register with ClinicalTrials.gov, the largest international clinical trial database. We characterized registered GI trials to identify features associated with early discontinuation, results reporting, and increased methodological rigor. Methods: We employed a cross-sectional study design with descriptive, logistic regression, Cox regression, time series, and survival analyses. We downloaded data for 327 075 studies from the Aggregate Analysis of ClinicalTrials.gov (AACT) database, covering October 1, 2007 to December 31, 2019. Trials were excluded if registered prior to 2007 (n=38 111) or for non-interventional study design (n=69 233). After applying GI-specific Medical Subject Heading (MeSH) terms to the remaining 219 731 trials, 22 339 trials were identified for manual review; 20 548 were found to contain true GI content, representing over seven million patients. Primary exposure variables were trial focus (disease process, anatomical location) and funding (industry, U.S. government, academic). Results: Of the 20 548 GI trials, 6.1% were funded by the U.S. government, 35.6% by industry, and 58.3% by academic institutions. The most studied disease process was neoplasia (42.6% of trials), followed by viral hepatitis (10.8%). The majority of neoplasia trials were funded by academic institutions (60.3%) and studied colorectal neoplasms (31.5%), followed by hepatic (17.9%), pancreatic (15.5%), gastric (12.8%), esophageal (10.6%) and biliary tract (4.9%) neoplasms. U.S. government-funded trials had the lowest risk of early discontinuation (adjusted hazard ratio 0.63, 95% CI: 0.48-0.83, p<0.001) and the highest rates of results reporting (25%, χ2 p<0.001). Among all trials, the majority did not report Data Monitoring Committee (DMC) oversight (58.6%). Only 12% of phase III trials employed a rigorous methodology, which we defined as being randomized, double-blinded, multi-site, overseen by a DMC, and having enrolled ≥50 patients. Government-sponsored trials had the highest proportion of trials meeting this definition (19%). Academically sponsored trials, constituting the majority of trials overall, had the lowest proportion (5.3%), in part due to not meeting the multi-site criterion. Conclusions: Despite constituting the minority of trials overall, U.S. government-funded trials displayed the highest methodological rigor. Stakeholders can look to U.S. government-funded trials as a model for improvement, but must nevertheless commit to increasing methodological rigor and results dissemination to strengthen trial findings that guide GI clinical recommendations.
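The cohort construction described in the methods reduces to a sequence of filters over an AACT download. The sketch below illustrates those exclusion steps; the column names loosely follow the public AACT schema but should be treated as assumptions, and the keyword screen is a toy stand-in for the full MeSH-term application plus manual review used in the study.

```python
import pandas as pd

# Hypothetical slice of an AACT-style "studies" export; the column names
# loosely follow the AACT schema and are assumptions for illustration.
studies = pd.DataFrame({
    "nct_id": ["NCT01", "NCT02", "NCT03", "NCT04"],
    "study_type": ["Interventional", "Observational",
                   "Interventional", "Interventional"],
    "study_first_submitted_date": pd.to_datetime(
        ["2006-05-01", "2012-03-10", "2015-07-22", "2018-11-02"]),
    "mesh_terms": ["Colorectal Neoplasms", "Hepatitis C",
                   "Asthma", "Pancreatic Neoplasms"],
})

# Exclusions mirroring the abstract: registrations before the 2007 mandate
# took effect and non-interventional study designs.
cohort = studies[
    (studies["study_first_submitted_date"] >= "2007-10-01")
    & (studies["study_type"] == "Interventional")
]

# Toy stand-in for the GI-specific MeSH screen; the real analysis applied
# MeSH terms and then manually reviewed the matching trials.
gi_terms = ["Colorectal", "Hepatic", "Pancreatic", "Gastric", "Esophageal",
            "Biliary", "Hepatitis", "Gastrointestinal"]
gi_cohort = cohort[cohort["mesh_terms"].str.contains(
    "|".join(gi_terms), case=False)]

print(gi_cohort[["nct_id", "mesh_terms"]])
```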


2020 ◽  
Vol 18 ◽  
pp. 100557
Author(s):  
Sarah H. Snider ◽  
Patrick A. Flume ◽  
Stephanie L. Gentilin ◽  
Whitney A. Lesch ◽  
Royce R. Sampson ◽  
...  

BMJ ◽  
2019 ◽  
pp. l4217 ◽  
Author(s):  
Jennifer Miller ◽  
Joseph S Ross ◽  
Marc Wilenzick ◽  
Michelle M Mello

Abstract Objectives To develop and pilot a tool to measure and improve pharmaceutical companies’ clinical trial data sharing policies and practices. Design Cross sectional descriptive analysis. Setting Large pharmaceutical companies with novel drugs approved by the US Food and Drug Administration in 2015. Data sources Data sharing measures were adapted from 10 prominent data sharing guidelines from expert bodies and refined through a multi-stakeholder deliberative process engaging patients, industry, academics, regulators, and others. Data sharing practices and policies were assessed using data from ClinicalTrials.gov, Drugs@FDA, corporate websites, data sharing platforms and registries (eg, the Yale Open Data Access (YODA) Project and Clinical Study Data Request (CSDR)), and personal communication with drug companies. Main outcome measures Company level, multicomponent measure of accessibility of participant level clinical trial data (eg, analysis ready dataset and metadata); drug and trial level measures of registration, results reporting, and publication; company level overall transparency rankings; and feasibility of the measures and ranking tool to improve company data sharing policies and practices. Results Only 25% of large pharmaceutical companies fully met the data sharing measure. The median company data sharing score was 63% (interquartile range 58-85%). Given feedback and a chance to improve their policies to meet this measure, three companies made amendments, raising the percentage of companies in full compliance to 33% and the median company data sharing score to 80% (73-100%). The most common reasons companies did not initially satisfy the data sharing measure were failure to share data by the specified deadline (75%) and failure to report the number and outcome of their data requests. Across new drug applications, a median of 100% (interquartile range 91-100%) of trials in patients were registered, 65% (36-96%) reported results, 45% (30-84%) were published, and 95% (69-100%) were publicly available in some form by six months after FDA drug approval. When examining results on the drug level, less than half (42%) of reviewed drugs had results for all their new drug applications trials in patients publicly available in some form by six months after FDA approval. Conclusions It was feasible to develop a tool to measure data sharing policies and practices among large companies and have an impact in improving company practices. Among large companies, 25% made participant level trial data accessible to external investigators for new drug approvals in accordance with the current study’s measures; this proportion improved to 33% after applying the ranking tool. Other measures of trial transparency were higher. Some companies, however, have substantial room for improvement on transparency and data sharing of clinical trials.
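Two of the summary measures reported here, the company-level median score with interquartile range and the drug-level "all trials publicly available within six months" criterion, are straightforward to compute once per-company scores and per-trial flags are in hand. The sketch below shows the calculations with placeholder values that are not the study's data.

```python
import statistics

# Placeholder company-level data sharing scores (fraction of the measure
# met); these numbers are illustrative, not the scores assigned in the study.
company_scores = {"company_a": 0.58, "company_b": 0.63,
                  "company_c": 0.85, "company_d": 1.00}

q1, median, q3 = statistics.quantiles(company_scores.values(), n=4)
print(f"median score {median:.0%} (interquartile range {q1:.0%}-{q3:.0%})")

# Drug-level criterion from the abstract: a drug counts only if every one
# of its trials in patients was publicly available within six months of
# FDA approval. The per-trial flags below are placeholders.
trials_public_within_6_months = {
    "drug_x": [True, True, True],
    "drug_y": [True, False, True],
    "drug_z": [True, True],
}
passing = [d for d, flags in trials_public_within_6_months.items() if all(flags)]
print(f"{len(passing)}/{len(trials_public_within_6_months)} drugs had all "
      "trials publicly available within six months")
```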


Author(s):  
M.J. Ball

Abstract: After organizational involvement in a clinicopathological investigation of Alzheimer's Disease for a decade, the present appears an appropriate time to reflect upon both the major challenges encountered as well as the exciting opportunities presented by such a longitudinal study. Problematic areas have included: (a) brevity of research grant intervals (generally one- or two-year); (b) turnover of support personnel, as a consequence; (c) limited biostatistical and data management expertise dedicated to the Study objectives; (d) limited neuropsychological manpower in this specialized sphere; (e) “distillate” effect of postmortem retrieval, by which only some of the many clinical cases expire during any grant period, only some of those receive autopsy permission, only some of those demonstrate (pure) Alzheimer's Disease neuropathologically, and only some are harvested quickly enough for specialized (e.g. biochemical) analyses; (f) ensuring the scientific optimization of available tissue samples; and (g) paucity of cases dying in the early stages of the illness. Significant achievements include: (a) demonstration of the opportunities for young researchers committed to careers in behavioral neurology, psychogeriatrics or neurodegenerative pathology; (b) development of improved testing protocols for psychometric, electroencephalographic and neuroradiological evaluation of the demented elderly; (c) ethical enrolment both of a large cohort of Alzheimer patients and a sizeable normative (control) population; (d) public cooperation permitting a postmortem compliance rate exceeding 75%; (e) rapid autopsy retrieval times (50% < 6 hours); (f) utilization of human postmortem synaptosomal preparations for neurochemical investigations; (g) availability of fresh autopsy tissues for other specialized techniques (e.g. magnetic resonance spectroscopy, in situ hybridization); and (h) a collegial forum for the regular exchange of scientific data. While the challenges to be met are certainly not unique to our Study, the interdisciplinary and longitudinal nature of this approach could magnify their potentially retardatory effect upon research quality. By contrast, however, surmounting these hurdles enables the participant scientists to share in an incomparable opportunity for observational insights into the cellular and pathogenetic mechanisms underlying the cognitive decline of Alzheimer patients. The vigour with which my numerous collaborators at the University of Western Ontario meet such challenges may serve as a model for other Alzheimer centres where a similar research system is likewise expected to justify the anticipation of its supportive funding agencies, and of the patients whom we are pledged to comfort.


2019 ◽  
Vol 3 (s1) ◽  
pp. 126-126
Author(s):  
Oswald Tetteh ◽  
Aliya Lalji ◽  
Prince Samuel Nuamah ◽  
Anthony Keyes

OBJECTIVES/SPECIFIC AIMS: The Johns Hopkins University ClinicalTrials.gov (CT.gov) Program has previously reported on a study showing reduction of “Late Results – per FDAAA” records from 111 to 0. Here we focus on problem records other than late results. Over the years, some institutions have spent their efforts solely on late results in order to avoid penalties from the Food and Drug Administration (FDA). However, there are a number of issues that cause the Protocol Registration and Results System (PRS) to label records as “problem records,” and these records are also subject to penalties. Our goal has been to minimize problem records and establish processes to improve and maintain our institutional compliance with the regulations governing clinical trial registration and results reporting. METHODS/STUDY POPULATION: The Johns Hopkins University implemented a ClinicalTrials.gov program solely mandated to assist Principal Investigators (PIs) and other study team members with clinical trial registration and results reporting. The program has developed processes aimed at reducing problem records in the PRS. Full-time staff have been assigned to assist research teams with registration and results reporting while ensuring compliance with all relevant regulations. Several methods have been used to track metrics, such as monthly reports and internal databases. Features within the PRS have also been used to draw attention to newly identified problem records on a daily basis so that these issues can be rectified with the study team promptly. To ensure compliance, our office communicates with study teams regarding the problems within their CT.gov record that require attention. In challenging cases, our program also collaborates with the CT.gov PRS team at the NIH to facilitate the process and avoid multiple review cycles, which can delay registration or the posting of results. Our program has also formed an internal collaboration with the Institutional Review Board (IRB), which allows us to verify study status and view active study team members. This is especially useful in cases where the study team members listed on the CT.gov record cannot be reached or the contact information is outdated (a common occurrence with older studies). With access to the IRB system, we can contact the current study team members who may not be listed in CT.gov and assist them in resolving any outstanding issues of non-compliance within their CT.gov record. RESULTS/ANTICIPATED RESULTS: From September 2015 (before our program was established) to September 2016 (three months after the institution of our program), the proportion of problem records increased from 44% (339/774) to 45% (383/852). Since then, the processes we developed have resulted in a decline in problem records to 30% (282/955) in September 2017, and a further decline to 8% (83/1075) as of September 2018. The brief rise observed in 2016 suggests that, had our program not been instituted, it would have been more difficult to maintain compliance. DISCUSSION/SIGNIFICANCE OF IMPACT: According to the FDA Draft Guidance released in September 2018 on Civil Money Penalties Relating to the ClinicalTrials.gov Data Bank, there are a number of ways to violate the FDA regulations and incur potential monetary penalties, including “failing to submit required clinical trial information or submitting clinical trial information that is false or misleading”. These regulations apply to results as well as registration and study status updates. By paying attention to all problems identified by the PRS, institutions can rectify errors and remain compliant with all regulations that govern clinical trial registration and results reporting.


2016 ◽  
Vol 77 ◽  
pp. 78-83 ◽  
Author(s):  
Bethany Withycombe ◽  
Mac Ovenell ◽  
Amanda Meeker ◽  
Sharia M. Ahmed ◽  
Daniel M. Hartung

2018 ◽  
Vol 2 (S1) ◽  
pp. 84-84
Author(s):  
Anthony Keyes ◽  
Nidhi M. Atri ◽  
Prince S. Nuamah

OBJECTIVES/SPECIFIC AIMS: Educate the general public, investigators, and institutional leadership on the importance of clinical trial registration and results reporting. Share successes as a means to develop national best practices. METHODS/STUDY POPULATION: Developed a project charter; spoke to several peer institutions; updated institutional policy. RESULTS/ANTICIPATED RESULTS: Since launching the Program in June 2016, the number of records submitted to ClinicalTrials.gov has increased by 14% (852 to 971). Over the same period, the number of records with late results has decreased by 92% (111 to 9). DISCUSSION/SIGNIFICANCE OF IMPACT: Clinical trial registration and results reporting are sub-par at many institutions. We have established a successful program that others can emulate. Institutions can increase the transparency of clinical trials as well as prevent civil monetary penalties ($11,569 per day per study) and loss of grant funding.

