Canadian Approaches to Optimizing Quality of Administrative Data for Health System Use, Research, and Linkage

Author(s):  
Catherine Eastwood ◽  
Keith Denny ◽  
Maureen Kelly ◽  
Hude Quan

Theme: Data and Linkage Quality

Objectives:
- To define health data quality from clinical, data science, and health system perspectives.
- To describe some of the international best practices related to quality and how they are being applied to Canada's administrative health data.
- To compare methods for health data quality assessment and improvement in Canada (automated logical checks, chart quality indicators, reabstraction studies, coding manager perspectives).
- To highlight how data linkage can be used to provide new insights into the quality of original data sources.
- To highlight current international initiatives for improving coded data quality, including results from current ICD-11 field trials.

Panellists:
Dr. Keith Denny: Director of Clinical Data Standards and Quality, Canadian Institute for Health Information (CIHI), and Adjunct Research Professor, Carleton University, Ottawa, ON. He provides leadership for CIHI's information quality initiatives and for the development and application of clinical classifications and terminology standards.
Maureen Kelly: Manager of Information Quality at CIHI, Ottawa, ON. She leads CIHI's corporate quality program, which is focused on enhancing the quality of CIHI's data sources and information products and on fostering CIHI's quality culture.
Dr. Cathy Eastwood: Scientific Manager and Associate Director of the Alberta SPOR Methods & Development Platform, Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, AB. She has expertise in clinical data collection, evaluation of local and systemic data quality issues, and disease classification coding with ICD-10 and ICD-11.
Dr. Hude Quan: Professor, Community Health Sciences, Cumming School of Medicine, University of Calgary; Director of the Alberta SPOR Methods Platform; Co-Chair of Hypertension Canada; and Co-Chair of the Person to Population Health Collaborative of the Libin Cardiovascular Institute, Calgary, AB. He has expertise in assessing, validating, and linking administrative data sources for data science research, including artificial intelligence methods for evaluating and improving data quality.

Intended Outcomes:
"What is quality health data?" The panel of experts will address this common question by discussing how to define high-quality health data and the measures being taken to ensure that such data are available in Canada. Optimizing the quality of clinical-administrative data, and their use-value, first requires an understanding of the processes used to create the data. Subsequently, we can address the limitations in data collection and use these data for diverse applications. Current advances in digital data collection are providing more solutions for improving health data quality at lower cost. This panel will describe a number of quality assessment and improvement initiatives aimed at ensuring that health data are fit for a range of secondary uses, including data linkage. It will also discuss how the need for linkage and integration of data sources can influence views of a data source's fitness for use.

CIHI content will include:
- Methods for optimizing the value of clinical-administrative data
- CIHI Information Quality Framework
- Reabstraction studies (e.g., physician documentation and coders' experiences)
- Linkage analytics for data quality

University of Calgary content will include:
- Defining and measuring health data quality
- Automated methods for quality assessment and improvement (a sketch of one such check follows below)
- ICD-11 features and coding practices
- Electronic health record initiatives
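To make the "automated logical checks" mentioned above concrete, the following is a minimal Python sketch of the kind of rule-based validation commonly applied to discharge abstracts. The field names and the specific rules are illustrative assumptions, not CIHI's actual edit specifications.

```python
# Minimal sketch of automated logical checks on a hypothetical discharge
# abstract; real abstract schemas and edit rules are far richer than this.
from datetime import date

def logical_check(record: dict) -> list[str]:
    """Return a list of data quality flags for one record."""
    flags = []
    if not (0 <= record["age"] <= 120):
        flags.append("age out of plausible range")
    if record["discharge_date"] < record["admission_date"]:
        flags.append("discharge precedes admission")
    # Sex-specific diagnosis check: ICD-10 chapter O (pregnancy, childbirth)
    # should not appear on a record coded as male.
    if record["sex"] == "M" and record["diagnosis_code"].startswith("O"):
        flags.append("pregnancy-related code on male record")
    return flags

record = {"age": 34, "sex": "M", "diagnosis_code": "O26.8",
          "admission_date": date(2023, 1, 5), "discharge_date": date(2023, 1, 2)}
print(logical_check(record))
# ['discharge precedes admission', 'pregnancy-related code on male record']
```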

2015 ◽  
Vol 31 (2) ◽  
pp. 231-247 ◽  
Author(s):  
Matthias Schnetzer ◽  
Franz Astleithner ◽  
Predrag Cetkovic ◽  
Stefan Humer ◽  
Manuela Lenk ◽  
...  

Abstract This article contributes a framework for the quality assessment of imputations within a broader structure for evaluating the quality of register-based data. Four quality-related hyperdimensions cover data processing from the raw-data level to the final statistics. Our focus lies on the quality assessment of different imputation steps and their influence on overall data quality. We suggest classification rates as a measure of imputation accuracy and derive several computational approaches.
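As a rough illustration of the classification-rate idea, the sketch below masks known categorical values, imputes the gaps, and measures the share of imputed entries that match the truth. The simulation setup and the mode-imputation rule are assumptions for illustration, not the authors' method.

```python
# Illustrative classification rate for categorical imputation: mask 20% of
# known labels, impute each gap with the observed mode, and score agreement.
import random

random.seed(1)
true_values = [random.choice(["employed", "unemployed", "inactive"]) for _ in range(1000)]

masked = [v if random.random() > 0.2 else None for v in true_values]
observed = [v for v in masked if v is not None]
mode = max(set(observed), key=observed.count)
imputed = [v if v is not None else mode for v in masked]

# Classification rate: share of imputed entries that match the true value.
gaps = [i for i, v in enumerate(masked) if v is None]
rate = sum(imputed[i] == true_values[i] for i in gaps) / len(gaps)
print(f"classification rate over {len(gaps)} imputed values: {rate:.2f}")
```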


2016 ◽  
Vol 39 (4) ◽  
Author(s):  
Christopher Berka ◽  
Stefan Humer ◽  
Manuela Lenk ◽  
Mathias Moser ◽  
Henrik Rechta ◽  
...  

Along with the implementation of a register-based census, we develop a methodological framework to assess administrative data sources for statistical use. Key aspects of the quality of these data are identified in the context of hyperdimensions and embedded into a process flow. Based on this approach, we develop a structural quality framework and suggest a concept for quality assessment together with several quality measures.


2012 ◽  
Vol 51 (1) ◽  
pp. 5-16
Author(s):  
James J. Brown ◽  
Oksana Honchar

National Statistics Institutes (NSIs) have increasingly sought to replace or enhance traditional survey-based data sources with administrative data sources, with the aim of improving overall quality in the absence of a definitive register of the population. The Beyond 2011 Census Programme in England and Wales is an example of looking to replace a traditional census with administrative data collected for another purpose by a different organisation, when there is no definitive register as a starting point. There are also similar projects across NSIs in the area of business surveys looking to use administrative sources to reduce cost and burden. In this paper we start by considering all aspects of a quality framework for administrative data and then focus on the elements relevant to data quality, such as accuracy and coherence. We fit these concepts into the framework for total survey error, highlighting the components an NSI needs to measure to produce estimates based on the administrative data. We then explore the use of both dependent and independent quality surveys to adjust the administrative data for 'measurement' and 'coverage' aspects to improve the quality of estimates produced from the administrative data.
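One standard way an independent coverage survey can feed into such a coverage adjustment is dual-system (capture-recapture) estimation. The sketch below is a hedged illustration with invented counts; it is not the Beyond 2011 methodology itself, which is considerably more elaborate.

```python
# Dual-system (Lincoln-Petersen) estimation from an administrative register
# and an independent coverage survey; all counts are invented.
admin_count = 95_000   # people on the administrative register in an area
survey_count = 5_000   # people enumerated by the independent coverage survey
matched = 4_650        # survey people successfully linked to the register

# Under independence of the two lists, the estimated true population is
#   N_hat = admin_count * survey_count / matched.
n_hat = admin_count * survey_count / matched
coverage = admin_count / n_hat
print(f"estimated population: {n_hat:,.0f}, register coverage: {coverage:.1%}")
```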


2019 ◽  
Vol 181 ◽  
pp. 104824 ◽  
Author(s):  
Roberto Álvarez Sánchez ◽  
Andoni Beristain Iraola ◽  
Gorka Epelde Unanue ◽  
Paul Carlin

2022 ◽  
Vol 80 (1) ◽  
Author(s):  
Brigid Unim ◽  
Eugenio Mattei ◽  
Flavia Carle ◽  
Hanna Tolonen ◽  
Enrique Bernal-Delgado ◽  
...  

Abstract Background Health-related data are collected from a variety of sources for different purposes, including secondary use for population health monitoring (HM) and health system performance assessment (HSPA). Most of these data sources are not included in the databases of international organizations (e.g., WHO, OECD, Eurostat), limiting their use for research activities and policy making. This study aims at identifying and describing the collection methods, quality assessment procedures, availability, and accessibility of health data across EU Member States (MS) for HM and HSPA. Methods A structured questionnaire was developed and administered through an online platform to partners of the InfAct consortium from EU MS to investigate the data collections used in HM and HSPA projects, as well as their methods and procedures. A descriptive analysis of the questionnaire results was performed. Results Information on 91 projects from 18 EU MS was collected. In these projects, data were mainly collected through administrative sources, population health interview or health examination surveys, and electronic medical records. Tools and methods used for data collection were mostly mandatory reports, self-administered questionnaires, or record linkage of various data sources. One-third of the projects shared data with EU research networks, and less than one-third performed quality assessment of their data collection procedures using international standardized criteria. Macrodata were accessible via open access and reusable in 22 projects. Microdata were accessible upon specific request and reusable in 15 projects, based on data usage licenses. Metadata were available for the majority of the projects but followed reporting standards in only 29 projects. Overall, compliance with the FAIR Data principles (Findable, Accessible, Interoperable, and Reusable) was not optimal across the EU projects. Conclusions Data collection and exchange procedures differ across EU MS, and research data are not always available, accessible, comparable, or reusable for further research and evidence-based policy making. There is a need for an EU-level health information infrastructure and governance to promote and facilitate the sharing and dissemination of standardized, comparable health data across the EU, following the FAIR Data principles.


2017 ◽  
Vol 4 (1) ◽  
pp. 25-31 ◽  
Author(s):  
Diana Effendi

The Information Product Approach (IP Approach) is an information management approach that can be used to manage product information and to analyse data quality. An IP-Map helps organizations manage knowledge about how data are collected, stored, maintained, and used in an organized way. The data management process for academic activities at X University has not yet used the IP Approach: the university has paid little attention to managing the quality of its information, concentrating instead on the application systems that automate data management in its academic processes. The IP-Map constructed in this paper can be used as a basis for analysing the quality of data and information. With the IP-Map, X University is expected to identify which parts of the process need improvement in data and information quality management.

Index terms: IP Approach, IP-Map, information quality, data quality.
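An IP-Map is essentially a typed directed graph tracing an information product from its sources through processing to its consumers. Below is a toy sketch under that reading; the node types, the academic-records example, and the idea of attaching quality checks at process nodes are illustrative assumptions, not the paper's model.

```python
# Toy IP-Map as a directed graph: nodes typed by their role in producing an
# information product. Names and structure are hypothetical.
ip_map = {
    "nodes": {
        "student_form": "source",
        "validate":     "process",
        "academic_db":  "storage",
        "transcript":   "information product",
    },
    "edges": [("student_form", "validate"),
              ("validate", "academic_db"),
              ("academic_db", "transcript")],
}

# Walk the map to list where quality checks could attach (process nodes).
checks = [n for n, kind in ip_map["nodes"].items() if kind == "process"]
print("attach data quality checks at:", checks)
```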


2021 ◽  
pp. 1-22
Author(s):  
Emily Berg ◽  
Johgho Im ◽  
Zhengyuan Zhu ◽  
Colin Lewis-Beck ◽  
Jie Li

Statistical and administrative agencies often collect information on related parameters. Discrepancies between estimates from distinct data sources can arise due to differences in definitions, reference periods, and data collection protocols. Integrating statistical data with administrative data is appealing for saving data collection costs, reducing respondent burden, and improving the coherence of estimates produced by statistical and administrative agencies. Model-based techniques for combining multiple data sources, such as small area estimation and measurement error models, have the benefits of transparency, reproducibility, and the ability to provide an estimate of uncertainty. Issues associated with integrating statistical data with administrative data are discussed in the context of data from Namibia. The national statistical agency in Namibia produces estimates of crop area using data from probability samples. Simultaneously, the Namibia Ministry of Agriculture, Water, and Forestry obtains crop area estimates through extension programs. We illustrate the use of a structural measurement error model for synthesizing the administrative and survey data into a unified estimate of crop area. Limitations of the available data preclude a genuine, thorough application; nonetheless, our illustration of the methodology holds potential use for the general practitioner.
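The sketch below is a drastically simplified stand-in for the paper's structural measurement error model: if both sources give unbiased estimates with known error variances, the minimum-variance combination is inverse-variance weighting. All numbers are invented, and the real model additionally accommodates bias in one source.

```python
# Inverse-variance weighted combination of a survey estimate and an
# administrative estimate of crop area; figures are invented.
survey_est, survey_var = 120.0, 9.0   # '000 ha, from the probability sample
admin_est, admin_var = 132.0, 25.0    # '000 ha, from extension-program reports

w = (1 / survey_var) / (1 / survey_var + 1 / admin_var)
combined = w * survey_est + (1 - w) * admin_est
combined_var = 1 / (1 / survey_var + 1 / admin_var)
print(f"combined estimate: {combined:.1f} ('000 ha), variance: {combined_var:.2f}")
```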


Author(s):  
Christopher D O’Connor ◽  
John Ng ◽  
Dallas Hill ◽  
Tyler Frederick

Policing is increasingly being shaped by data collection and analysis. However, we still know little about the quality of the data police services acquire and utilize. Drawing on a survey of analysts from across Canada, this article examines several data collection, analysis, and quality issues. We argue that, as we move towards an era of big data policing, it is imperative that police services pay more attention to the quality of the data they collect. We conclude by discussing the implications of ignoring data quality issues and the need to develop a more robust research culture in policing.


Author(s):  
Syed Mustafa Ali ◽  
Farah Naureen ◽  
Arif Noor ◽  
Maged Kamel N. Boulos ◽  
Javariya Aamir ◽  
...  

Background Increasingly, healthcare organizations are using technology for the efficient management of data. The aim of this study was to compare the quality of digital records with that of the corresponding paper-based records using a data quality assessment framework. Methodology We conducted a desk review of paper-based and digital records from April 2016 to July 2016 at six enrolled TB clinics. We entered all data fields of the patient treatment (TB01) card into a spreadsheet-based template to undertake a field-to-field comparison of the fields shared between the TB01 card and the digital data. Findings A total of 117 TB01 cards were prepared at the six enrolled sites, of which only 50% (n=59) had been digitized. There were 1,239 comparable data fields, of which 65% (n=803) matched correctly between the paper-based and digital records; the remaining 35% (n=436) had anomalies, either in the paper-based or in the digital records. On average there were 1.9 data quality issues per digital patient record versus 2.1 per paper-based record. An analysis of valid data quality issues found more issues in paper-based records (n=123) than in digital records (n=110). Conclusion There were fewer data quality issues in digital records than in the corresponding paper-based records. Greater use of mobile data capture and continued use of the data quality assessment framework can deliver more meaningful information for decision making.
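A minimal sketch of the field-to-field comparison described above: for each shared field, count how often the paper-based and digital values agree. The field names and records are hypothetical stand-ins for the TB01 card fields.

```python
# Field-to-field comparison of paper-based vs. digital records; data are
# hypothetical. Each disagreement counts as one anomaly.
paper = {"p1": {"age": "34", "sex": "F", "regimen": "HRZE"},
         "p2": {"age": "51", "sex": "M", "regimen": "HRZ"}}
digital = {"p1": {"age": "34", "sex": "F", "regimen": "HRZE"},
           "p2": {"age": "15", "sex": "M", "regimen": "HRZE"}}

matches = mismatches = 0
for pid, pfields in paper.items():
    for field, pvalue in pfields.items():
        if digital.get(pid, {}).get(field) == pvalue:
            matches += 1
        else:
            mismatches += 1

total = matches + mismatches
print(f"{matches}/{total} fields match ({matches / total:.0%}); {mismatches} anomalies")
```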

