A Source Data Verification-Based Data Quality Analysis Within the Network of a German Comprehensive Cancer Center

Author(s):  
Martina Borner ◽  
Diana Schweizer ◽  
Theres Fey ◽  
Daniel Nasseh ◽  
Robert Dengler
2019 ◽  
Vol 19 (1) ◽  
Author(s):  
Jasper Frese ◽  
Annalice Gode ◽  
Gerhard Heinrichs ◽  
Armin Will ◽  
Arndt-Peter Schulz

Abstract
Aim Following a three-month pilot phase recruiting patients for the newly established BFCC (Baltic Fracture Competence Centre) transnational fracture registry, the registry's data quality had to be validated using a standardised method.
Method A literature search identified "adaptive monitoring" as the method that fulfilled the registry's requirements, and it was applied. It consists of a three-step audit process: first, scoring of the overall data quality; second, source data verification of a sample whose size depends on the scoring result; and finally, feedback to the registry on measures to improve data quality. Statistical methods for scoring data quality and for visualising discrepancies between registry data and source data were developed and applied.
Results Initially, the registry's data quality scored as medium. During source data verification, the missing registry items responsible for the medium score turned out to be absent in the source documents as well. After the score was adapted accordingly, the registry's data quality was rated as good. Adding variables to some items was suggested in order to improve the registry's accuracy.
Discussion To date, the application of adaptive monitoring has only been published by Jacke et al., who reported a similar improvement of the scoring result after the audit process. Plotting registry data in graphs helped to find missing items and to uncover data-format issues. Graphically comparing the degree of agreement between registry and source data made it possible to detect systematic faults.
Conclusions The method of adaptive monitoring provides a substantiated guideline for systematically evaluating and monitoring a registry's data quality and is currently second to none. The resulting transparency about the registry's data quality could be helpful in annual reports, as published by most major registries. As the method has rarely been applied, further applications in established registries would be desirable.
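The three audit steps of adaptive monitoring described above can be sketched in code. This is an illustrative outline only: the score thresholds, sampling fractions, and field names are our assumptions for the example, not values taken from the paper.

```python
import random

def score_data_quality(records, required_fields):
    """Step 1: score overall data quality as the share of filled required fields.
    The grade cut-offs (0.95 / 0.80) are assumed for illustration."""
    filled = sum(1 for r in records for f in required_fields if r.get(f) is not None)
    total = len(records) * len(required_fields)
    completeness = filled / total if total else 0.0
    if completeness >= 0.95:
        return completeness, "good"
    if completeness >= 0.80:
        return completeness, "medium"
    return completeness, "poor"

def draw_sdv_sample(records, grade, seed=0):
    """Step 2: the source data verification sample size depends on the score;
    a worse grade triggers a larger sample (fractions assumed)."""
    fraction = {"good": 0.05, "medium": 0.10, "poor": 0.20}[grade]
    n = max(1, round(len(records) * fraction))
    return random.Random(seed).sample(records, n)

def verify_against_source(sample, source_by_id, required_fields):
    """Step 3: compare registry entries with source documents and report
    discrepancies as feedback to the registry."""
    findings = []
    for rec in sample:
        src = source_by_id[rec["id"]]
        for f in required_fields:
            if rec.get(f) != src.get(f):
                findings.append((rec["id"], f, rec.get(f), src.get(f)))
    return findings
```

An item missing in both registry and source would not appear as a discrepancy in step 3, which mirrors how the score was adapted upward in the study once shared absences were identified.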


2015 ◽  
Vol 79 (4) ◽  
pp. 660-668 ◽  
Author(s):  
Jeppe Ragnar Andersen ◽  
Inger Byrjalsen ◽  
Asger Bihlet ◽  
Faidra Kalakou ◽  
Hans Christian Hoeck ◽  
...  

2021 ◽  
pp. 1-6
Author(s):  
Joelle A. Pettus ◽  
Amy L. Pajk ◽  
Andrew C. Glatz ◽  
Christopher J. Petit ◽  
Bryan H. Goldstein ◽  
...  

Abstract
Background: Multicentre research databases can provide insights into healthcare processes to improve outcomes and support practice recommendations for novel approaches. Effective audits can establish a framework for reporting research efforts, ensuring accurate reporting, and spearheading quality improvement. Although a variety of data auditing models and standards exist, barriers to effective auditing, including costs, regulatory requirements, travel, and design complexity, must be considered.
Materials and methods: The Congenital Cardiac Research Collaborative (CCRC) conducted a virtual data training initiative and a remote source data verification audit on a retrospective multicentre dataset. CCRC investigators across nine institutions were trained to extract and enter data into a robust dataset on patients with tetralogy of Fallot who required neonatal intervention. Centres provided de-identified source files for a randomised 10% patient-sample audit. Key auditing variables, discrepancy types, and severity levels were analysed across two study groups, primary repair and staged repair.
Results: Of the 572 study patients, data from 58 patients (31 staged repairs and 27 primary repairs) were source data verified. Amongst the 1790 variables audited, 45 discrepancies were discovered, yielding an overall accuracy rate of 97.5%. High accuracy rates were consistent across all CCRC institutions, ranging from 94.6% to 99.4%, and were reported for both minor (1.5%) and major (1.1%) discrepancy classifications.
Conclusion: The findings indicate that implementing a virtual multicentre training initiative and a remote source data verification audit can identify data quality concerns and produce a reliable, high-quality dataset. Remote auditing capacity is especially important during the current COVID-19 pandemic.
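The overall accuracy rate reported above follows directly from the audited counts; a quick arithmetic check (the variable names are ours):

```python
# Counts reported in the CCRC audit results above.
variables_audited = 1790
discrepancies = 45

# Accuracy = share of audited variables with no registry/source discrepancy.
accuracy = 1 - discrepancies / variables_audited
print(f"overall accuracy: {accuracy:.1%}")  # → overall accuracy: 97.5%
```

45 of 1790 variables corresponds to a 2.5% discrepancy rate, consistent with the reported 97.5% accuracy.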



