Current Status and Future Expectations of Using Remote Source Data Verification for Improving the Efficiency of Clinical Trials

Author(s): Akimasa YAMATANI, Kazuki INOUE, Kyoko MOCHIZUKI, Namiko MORI, Kazuhide SASANAMI, ...
2015, Vol 79 (4), pp. 660-668
Author(s): Jeppe Ragnar Andersen, Inger Byrjalsen, Asger Bihlet, Faidra Kalakou, Hans Christian Hoeck, ...
2020, pp. 174077452097125
Author(s): Osamu Yamada, Shih-Wei Chiu, Munenori Takata, Michiaki Abe, Mutsumi Shoji, ...

Background/Aims: Traditional on-site monitoring of clinical trials, with frequent site visits and 100% source data verification, is costly and still cannot effectively guarantee data quality. Depending on the type and design of a clinical trial, an alternative is to combine several monitoring methods, such as risk-based monitoring and remote monitoring; however, evidence for the effectiveness of this approach is insufficient. This study compared the effectiveness of risk-based monitoring using a remote monitoring system with that of traditional on-site monitoring. Methods: Using a cloud-based remote monitoring system called beagle View®, we developed a remote risk-based monitoring methodology focused only on critical data and processes. We selected a randomized controlled trial conducted at Tohoku University Hospital and randomly sampled 11 subjects whose case report forms had already been reviewed by data managers. Critical data and processes were verified retrospectively by remote risk-based monitoring; afterwards, all data and processes were confirmed by on-site monitoring. We compared the ability of remote risk-based monitoring to detect critical data and process errors with that of on-site monitoring with 100% source data verification, and also examined clinical trial staff workload and potential cost savings. Results: Of the total data points (n = 5617), 19.7% (n = 1105, 95% confidence interval (CI) = 18.7–20.7) were identified as critical. The error rates of critical data detected by on-site monitoring, remote risk-based monitoring, and data review by data managers were 7.6% (n = 84, 95% CI = 6.2–9.3), 7.6% (n = 84, 95% CI = 6.2–9.3), and 3.9% (n = 43, 95% CI = 2.9–5.2), respectively. The total number of critical process errors detected by on-site monitoring was 14. Of these, 92.9% (n = 13, 95% CI = 68.5–98.7) were also detected by remote risk-based monitoring and 42.9% (n = 6, 95% CI = 21.4–67.4) by data review by data managers. The mean time clinical trial staff spent dealing with remote risk-based monitoring was 9.9 ± 5.3 (mean ± SD) min per visit per subject. Our calculations show that remote risk-based monitoring saved between 9 and 41 on-site monitoring visits, corresponding to a cost saving of between US$13,500 and US$61,500 per trial site. Conclusion: Remote risk-based monitoring detected critical data and process errors as well as on-site monitoring with 100% source data verification did, while saving travel time and monitoring costs. Remote risk-based monitoring offers an effective alternative to traditional on-site monitoring of clinical trials.
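The proportions and confidence intervals reported above can be reproduced with a standard binomial interval; the quoted bounds are consistent with Wilson score intervals. The short Python sketch below recomputes the critical-data error rates and the cost-saving range; note that the per-visit cost of roughly US$1,500 is an inference from the reported US$13,500–US$61,500 range over 9–41 visits, not a figure stated in the abstract.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, centre - half, centre + half

# Error rates on the 1105 critical data points, as reported in the abstract
for label, errors, n in [
    ("on-site monitoring",           84, 1105),
    ("remote risk-based monitoring", 84, 1105),
    ("data review by data managers", 43, 1105),
]:
    p, lo, hi = wilson_ci(errors, n)
    print(f"{label}: {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")

# Cost saving: 9-41 on-site visits avoided. The abstract's US$13,500-US$61,500
# range implies roughly US$1,500 per on-site visit (an inference, not stated).
cost_per_visit = 1500
print(f"saving: US${9 * cost_per_visit:,} to US${41 * cost_per_visit:,}")
```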


2014, Vol 48 (6), pp. 671-680
Author(s): Nicole Sheetz, Brett Wilson, Joanne Benedict, Esther Huffman, Andy Lawton, ...

Trials, 2013, Vol 14 (S1)
Author(s): J Athene Lane, Michael Davis, Elizabeth Down, Rhiannon Macefield, David Neal, ...

2019
Author(s): Jasper Frese, Annalice Gode, Gerhard Heinrichs, Armin Will, Arndt-Peter Schulz

Abstract Aim: Subsequent to a three-month pilot phase of recruiting patients for the newly established BFCC (Baltic Fracture Competence Centre) transnational fracture registry, the registry's data quality needed to be validated using a standardized method. Method: During the literature review, the method of "adaptive monitoring" was found to fulfil the registry's requirements and was applied. It consists of a three-step audit process: first, scoring of the overall data quality; second, source data verification of a sample whose size is chosen relative to the scoring result; and finally, feedback to the registry on measures to improve data quality. Statistical methods for scoring data quality and visualising discrepancies between registry data and source data were developed and applied. Results: Initially, the data quality of the registry scored as medium. During source data verification, items missing in the registry, which had caused the medium rating, turned out to be absent in the source data as well. A subsequent adaptation of the score rated the registry's data quality as good. It was suggested that variables be added to some items in order to improve the accuracy of the registry. Discussion: The adaptive monitoring method has so far been published only by Jacke et al., who reported a similar improvement in the scoring result following the audit process. Displaying registry data in graphs helped to find missing items and to uncover issues with data formats. Graphically comparing the degree of agreement between registry and source data made it possible to discover systematic faults. Conclusions: The adaptive monitoring method provides a well-founded guideline for systematically evaluating and monitoring a registry's data quality and is currently second to none. The resulting transparency about the registry's data quality could be useful for annual reports, as published by most major registries. As the method has rarely been applied, further applications in established registries would be desirable.
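As a rough illustration of the adaptive-monitoring idea described above, the sketch below scores a registry's completeness and scales the source data verification (SDV) sample to that score. The completeness metric, quality bands, and sampling fractions are hypothetical placeholders; the actual scoring and sample-size rules follow the published adaptive-monitoring method of Jacke et al. and are not detailed in the abstract.

```python
import random

def completeness_score(records, required_fields):
    """Fraction of required fields that are filled across all registry records."""
    filled = sum(1 for r in records for f in required_fields
                 if r.get(f) not in (None, ""))
    return filled / (len(records) * len(required_fields))

def sdv_sample(records, score):
    """Sample more records for SDV when the quality score is lower (hypothetical bands)."""
    if score >= 0.95:      # "good"   -> verify 5% of records
        fraction = 0.05
    elif score >= 0.85:    # "medium" -> verify 15% of records
        fraction = 0.15
    else:                  # "poor"   -> verify 30% of records
        fraction = 0.30
    k = max(1, round(fraction * len(records)))
    return random.sample(records, k)

# Toy registry with one missing outcome value
records = [{"fracture_type": "A1", "treatment": "ORIF", "outcome": None},
           {"fracture_type": "B2", "treatment": "nail", "outcome": "healed"}]
score = completeness_score(records, ["fracture_type", "treatment", "outcome"])
print(f"quality score {score:.2f}; records selected for SDV: {len(sdv_sample(records, score))}")
```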


2021, pp. 1-6
Author(s): Joelle A. Pettus, Amy L. Pajk, Andrew C. Glatz, Christopher J. Petit, Bryan H. Goldstein, ...

Abstract Background: Multicentre research databases can provide insights into healthcare processes to improve outcomes and make practice recommendations for novel approaches. Effective audits can establish a framework for reporting research efforts, ensuring accurate reporting, and spearheading quality improvement. Although a variety of data auditing models and standards exist, barriers to effective auditing, including costs, regulatory requirements, travel, and design complexity, must be considered. Materials and methods: The Congenital Cardiac Research Collaborative (CCRC) conducted a virtual data training initiative and remote source data verification audit on a retrospective multicentre dataset. CCRC investigators across nine institutions were trained to extract and enter data into a robust dataset on patients with tetralogy of Fallot who required neonatal intervention. Centres provided de-identified source files for a randomised 10% patient sample audit. Key auditing variables, discrepancy types, and severity levels were analysed across two study groups, primary repair and staged repair. Results: Of the 572 study patients, data from 58 patients (31 staged repairs and 27 primary repairs) were source data verified. Amongst the 1790 variables audited, 45 discrepancies were discovered, giving an overall accuracy rate of 97.5%. High accuracy rates were consistent across all CCRC institutions, ranging from 94.6% to 99.4%, for both minor (1.5%) and major (1.1%) discrepancy classifications. Conclusion: These findings indicate that implementing a virtual multicentre training initiative and remote source data verification audit can identify data quality concerns and produce a reliable, high-quality dataset. Remote auditing capacity is especially important during the current COVID-19 pandemic.
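The overall accuracy figure above follows directly from the audited counts: 1 − 45/1790 ≈ 97.5%. A minimal Python sketch of that calculation is shown below; the per-site counts are hypothetical placeholders used only to illustrate how the reported 94.6%–99.4% range across institutions could arise.

```python
def accuracy(discrepancies: int, variables_audited: int) -> float:
    """Accuracy = share of audited variables with no discrepancy against source data."""
    return 1.0 - discrepancies / variables_audited

# Overall result reported in the abstract: 45 discrepancies in 1790 audited variables
print(f"overall accuracy: {accuracy(45, 1790):.1%}")   # -> 97.5%

# Hypothetical per-site breakdown (discrepancies, variables audited) for illustration
sites = {"site_A": (11, 205), "site_B": (1, 180), "site_C": (6, 210)}
for name, (disc, audited) in sites.items():
    print(f"{name}: {accuracy(disc, audited):.1%}")
```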

