Data quality in healthcare: A report of practical experience with the Canadian Primary Care Sentinel Surveillance Network data

2019, Vol 50 (1-2), pp. 88-92
Author(s): Behrouz Ehsani-Moghaddam, Ken Martin, John A Queenan

Data quality (DQ) is the degree to which a given dataset meets a user’s requirements. In the primary healthcare setting, poor quality data can lead to poor patient care, negatively affect the validity and reproducibility of research results and limit the value that such data may have for public health surveillance. To extract reliable and useful information from a large quantity of data and to make more effective and informed decisions, data should be as clean and free of errors as possible. Moreover, because DQ is defined within the context of different user requirements that often change, DQ should be considered to be an emergent construct. As such, we cannot expect that a sufficient level of DQ will last forever. Therefore, the quality of clinical data should be constantly assessed and reassessed in an iterative fashion to ensure that appropriate levels of quality are sustained in an acceptable and transparent manner. This document is based on our hands-on experiences dealing with DQ improvement for the Canadian Primary Care Sentinel Surveillance Network database. The DQ dimensions that are discussed here are accuracy and precision, completeness and comprehensiveness, consistency, timeliness, uniqueness, data cleaning and coherence.
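To make the listed dimensions concrete, the sketch below shows how simple completeness, uniqueness, consistency and timeliness checks might be computed on a patient-level extract. It is an illustrative Python/pandas example, not the CPCSSN cleaning pipeline itself; the column names (patient_id, birth_date, encounter_date, last_updated) and the one-year timeliness cutoff are assumptions made for the sketch.

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame) -> dict:
    """Compute simple data-quality indicators for a patient-level table."""
    report = {}
    # Completeness: fraction of non-missing values per column
    report["completeness"] = (1 - df.isna().mean()).to_dict()
    # Uniqueness: duplicated patient identifiers suggest duplicate records
    report["duplicate_patient_ids"] = int(df["patient_id"].duplicated().sum())
    # Consistency: encounter dates should not precede the date of birth
    report["encounters_before_birth"] = int((df["encounter_date"] < df["birth_date"]).sum())
    # Timeliness: share of records updated within the last year (assumed cutoff)
    cutoff = pd.Timestamp.today() - pd.DateOffset(years=1)
    report["updated_within_1y"] = float((df["last_updated"] >= cutoff).mean())
    return report

# Toy extract for illustration only
df = pd.DataFrame({
    "patient_id": [1, 2, 2, 3],
    "birth_date": pd.to_datetime(["1980-05-01", "1975-02-10", "1975-02-10", "1990-09-30"]),
    "encounter_date": pd.to_datetime(["2018-03-12", "2019-07-01", "1970-01-01", "2020-11-15"]),
    "last_updated": pd.to_datetime(["2021-01-05", "2018-06-20", "2018-06-20", "2021-03-01"]),
})
print(data_quality_report(df))
```

Because DQ is an emergent construct, checks like these would be rerun each time the data or the user requirements change, rather than applied once.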

Electronics, 2021, Vol 10 (17), pp. 2049
Author(s): Kennedy Edemacu, Jong Wook Kim

Nowadays, the internet of things (IoT) is used to generate data in several application domains. Logistic regression, a standard machine learning algorithm with a wide range of applications, is often built on such data. Nevertheless, building a powerful and effective logistic regression model requires large amounts of data, so collaboration between multiple IoT participants has often been the go-to approach. However, privacy concerns and poor data quality are two challenges that threaten the success of such a setting. Several studies have proposed methods to address the privacy concern, but to the best of our knowledge, little attention has been paid to the poor data quality problem in the multi-party logistic regression setting. In this study, we therefore propose a multi-party privacy-preserving logistic regression framework with poor-quality-data filtering for IoT data contributors that addresses both problems. Specifically, we propose a new metric, gradient similarity, in a distributed setting, which we employ to filter out parameters from data contributors with poor quality data. To solve the privacy challenge, we employ homomorphic encryption. Theoretical analysis and experimental evaluations using real-world datasets demonstrate that our proposed framework is privacy-preserving and robust against poor quality data.
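The gradient-similarity idea can be sketched as follows: each contributor's local gradient is compared with a reference update, and contributions that diverge too strongly are excluded before aggregation. The snippet below is a plaintext illustration only; the paper's framework performs the comparable computation under homomorphic encryption, and the cosine-similarity measure, median reference and 0.5 threshold used here are assumptions for the sketch rather than the authors' exact construction.

```python
import numpy as np

def filter_and_aggregate(gradients, threshold=0.5):
    """Keep only contributor gradients whose direction agrees with a robust
    reference gradient, then average the surviving updates."""
    G = np.asarray(gradients, dtype=float)   # shape: (n_contributors, n_params)
    reference = np.median(G, axis=0)         # robust plaintext stand-in for the consensus update
    sims = G @ reference / (
        np.linalg.norm(G, axis=1) * np.linalg.norm(reference) + 1e-12
    )                                        # cosine similarity of each update to the reference
    keep = sims >= threshold                 # drop contributors whose updates disagree
    if not keep.any():                       # degenerate case: fall back to using everyone
        keep[:] = True
    return G[keep].mean(axis=0), sims

# Toy round: two well-behaved contributors and one with a noisy, dissimilar update
grads = [
    [0.9, -0.4, 0.2],
    [1.0, -0.5, 0.3],
    [-2.0, 3.0, -1.5],   # likely the product of poor quality data
]
aggregate, sims = filter_and_aggregate(grads)
print("similarities:", np.round(sims, 2))       # ~[1.0, 1.0, -0.84]
print("aggregate:   ", np.round(aggregate, 2))  # ~[0.95, -0.45, 0.25]
```

A robust reference (median rather than mean) is used here so that a single poor-quality contributor cannot drag the consensus toward itself before filtering.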


2015, Vol 106 (5), pp. e283-e289
Author(s): Alanna V. Rigobon, Richard Birtwhistle, Shahriar Khan, David Barber, Suzanne Biro, ...

2006, Vol 21 (1), pp. 67-70
Author(s): Brian H. Toby

Important Rietveld error indices are defined and discussed. It is shown that, while smaller error index values indicate a better fit of a model to the data, wrong models fit to poor quality data may exhibit smaller error index values than superb models fit to very high quality data.
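For context, the weighted-profile and expected R factors are conventionally defined in the Rietveld literature as below, where $y_{O,i}$ and $y_{C,i}$ are the observed and calculated intensities at point $i$, $w_i = 1/\sigma^2[y_{O,i}]$ is the weight, $N$ is the number of observations and $P$ the number of refined parameters (some texts use $N$ alone in place of $N-P$); these are the standard textbook forms, not equations reproduced verbatim from the article.

```latex
R_{wp} = \sqrt{\frac{\sum_i w_i \left(y_{C,i} - y_{O,i}\right)^2}{\sum_i w_i\, y_{O,i}^2}},
\qquad
R_{\mathrm{exp}} = \sqrt{\frac{N - P}{\sum_i w_i\, y_{O,i}^2}},
\qquad
\chi^2 = \left(\frac{R_{wp}}{R_{\mathrm{exp}}}\right)^2 .
```

Because both indices are normalised by $\sum_i w_i\, y_{O,i}^2$, fits dominated by background or noise can yield deceptively small values, which is the behaviour the article cautions against.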


2019, pp. 23-34
Author(s): Harvey Goldstein, Ruth Gilbert

This chapter addresses data linkage, which is key to using big administrative datasets to improve efficient and equitable services and policies. These benefits need to be weighed against potential harms; concerns to date have mainly focussed on privacy. In this chapter we argue that the public and researchers should also be alert to other kinds of harms. These include misuses of big administrative data through poor quality data, misleading analyses, misinterpretation or misuse of findings, and restrictions limiting what questions can be asked and by whom, resulting in research not achieved and advances not made for the public benefit. Ensuring that big administrative data are validly used for public benefit requires increased transparency about who has access and whose access is denied, how data are processed, linked and analysed, and how analyses or algorithms are used in public and private services. Public benefit, and especially trust, requires replicable analyses by many researchers, not just a few data controllers. Wider use of big data will be helped by establishing a number of safe data repositories, fully accessible to researchers and their tools, and independent of the current monopolies on data processing, linkage, enhancement and use.

