Russian Regional Politicians’ Income and Property Declarations: a Pilot Study and Quality Assessment of Administrative Data

2021 ◽  
Vol 27 (3) ◽  
pp. 8-34
Author(s):  
Tatyana Cherkashina

The article presents the experience of converting non-targeted administrative data into research data, using as an example data on the income and property of deputies of local legislative bodies of the Russian Federation for 2019, collected as part of anti-corruption operations. This empirical fragment was selected for a pilot study of administrative data, which includes assessing the possibility of integrating scattered fragments of information into a single database, assessing the quality of the data, and judging their relevance for solving research problems, particularly the analysis of high-income strata and the apparent trend towards individualization of private property. The system of indicators for assessing data quality covers timeliness, availability, interpretability, reliability, comparability, coherence, errors of representation and measurement, and relevance. In the data set in question, measurement errors are more common than representation errors. Overall, the article emphasizes that introducing new non-target data into circulation requires preliminary testing, and that data quality assessment becomes distributed both in time and between different subjects. The transition from created data to "obtained" data shifts the function of evaluating quality from the researcher-creator to the researcher-user. And though in this case data quality is partly ensured by the legal framework governing their production, the transformation of administrative data into research data involves assessing a variety of quality dimensions, from availability to uniformity and accuracy.
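As a minimal sketch of what such indicator-based screening might look like once scattered declaration records have been merged into one table, consider the following. The column names (deputy_id, region, income_2019) and the negative-income validity check are illustrative assumptions, not details taken from the article's actual database.

```python
# Hypothetical sketch: indicator-based quality screening of a combined
# declarations table. Column names are illustrative, not the article's.
import pandas as pd

def quality_indicators(df: pd.DataFrame) -> pd.Series:
    return pd.Series({
        # completeness: share of non-missing cells in the whole table
        "completeness": 1.0 - df.isna().mean().mean(),
        # uniqueness: share of rows that are not exact duplicates
        "uniqueness": 1.0 - df.duplicated().mean(),
        # a simple measurement-error proxy: negative declared income
        "valid_income_share": (df["income_2019"] >= 0).mean(),
    })

declarations = pd.DataFrame({
    "deputy_id": [1, 2, 2, 3],
    "region": ["Omsk", "Tomsk", "Tomsk", None],
    "income_2019": [1_200_000, 950_000, 950_000, -5],
})
print(quality_indicators(declarations))
```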

2017 ◽  
Vol 9 (1) ◽  
Author(s):  
Sophia Crossen

Objective: To explore the quality of data submitted once a facility is moved into an ongoing submission status and to address the importance of continuing data quality assessments.

Introduction: Once a facility meets data quality standards and is approved for production, an assumption is made that the quality of data received remains at the same level. When looking at production data quality reports from various states generated using a SAS data quality program, a need for production data quality assessment was identified. By implementing a periodic data quality update on all production facilities, data quality has improved for production data as a whole and for individual facility data. Through this activity several root causes of data quality degradation have been identified, allowing processes to be implemented in order to mitigate impact on data quality.

Methods: Many jurisdictions work with facilities during the onboarding process to improve data quality. Once a certain level of data quality is achieved, the facility is moved into production. At this point the jurisdiction generally assumes that the quality of the data being submitted will remain fairly constant. To check this assumption in Kansas, a SAS Production Report program was developed specifically to look at production data quality.

A legacy data set is downloaded from BioSense production servers by Earliest Date in order to capture all records for visits which occurred within a specified time frame. This data set is then run through a SAS data quality program which checks specific fields for completeness and validity and prints a report on counts and percentages of null and invalid values, outdated records, and timeliness of record submission, as well as examples of records from visits containing these errors. A report is created for the state as a whole, and for each facility, EHR vendor, and HIE sending data to the production servers, with examples provided only by facility. The facility, vendor, and HIE reports include state percentages of errors for comparison.

The Production Report was initially run on Kansas data for the first quarter of 2016, followed by consultations with facilities on the findings. Monthly checks were made of data quality before and after facilities implemented changes. An examination of Kansas' results showed a marked decrease in data quality for many facilities. Every facility had at least one area in need of improvement.

The data quality reports and examples were sent to every facility sending production data during the first quarter, attached to an email requesting a 30-60 minute call with each to go over the report. This call was deemed crucial to the process since it had been over a year, and in a few cases over two years, since some of the facilities had looked at data quality, and they would need a review of the findings and all requirements, new and old. Ultimately, over half of all production facilities scheduled a follow-up call.

While some facilities expressed some degree of trepidation, most were open to revisiting data quality and to making requested improvements. Reasons for data quality degradation included updates to EHR products, change of EHR product, workflow issues, engine updates, new requirements, and personnel turnover.

A request was made of other jurisdictions (including Arizona, Nevada, and Illinois) to look at their production data using the same program and compare quality. Data was pulled for at least one week of July 2016 by Earliest Date.

Results: Monthly reports have been run on Kansas production data both before and after the consultation meetings, and they indicate a marked improvement in both completeness of required fields and validity of values in those fields. Data for these monthly reports was again selected by Earliest Date.

Conclusions: In order to ensure production data continues to be of value for syndromic surveillance purposes, periodic data quality assessments should continue after a facility reaches ongoing submission status. Alterations in process include a review of production data at least twice per year, with a follow-up data review one month later to confirm adjustments have been correctly implemented.
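The program described above is written in SAS; as a rough illustration of the kind of null/invalid-value report it produces, here is a minimal Python sketch. The field names (patient_id, chief_complaint, sex) and the valid-code set are assumptions for the example, not the actual BioSense requirements.

```python
# Hedged sketch of a per-facility completeness/validity report, loosely
# modeled on the SAS Production Report described in the abstract.
import pandas as pd

REQUIRED_FIELDS = ["patient_id", "visit_date", "chief_complaint"]
VALID_SEX = {"M", "F", "U"}  # illustrative code set, not an official list

def production_report(visits: pd.DataFrame) -> pd.DataFrame:
    """Percentages of null required fields and invalid codes, per facility."""
    out = []
    for facility, group in visits.groupby("facility_id"):
        for field in REQUIRED_FIELDS:
            out.append({"facility": facility, "field": field,
                        "pct_flagged": round(group[field].isna().mean() * 100, 1)})
        # validity check on one coded field, mirroring the "invalid values" counts
        out.append({"facility": facility, "field": "sex (invalid code)",
                    "pct_flagged": round((~group["sex"].isin(VALID_SEX)).mean() * 100, 1)})
    return pd.DataFrame(out)

visits = pd.DataFrame({
    "facility_id": ["A", "A", "B"],
    "patient_id": [1, None, 3],
    "visit_date": ["2016-01-02", "2016-01-03", None],
    "chief_complaint": ["fever", None, "cough"],
    "sex": ["M", "X", "F"],
})
print(production_report(visits))
```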


2015 ◽  
Vol 31 (2) ◽  
pp. 231-247 ◽  
Author(s):  
Matthias Schnetzer ◽  
Franz Astleithner ◽  
Predrag Cetkovic ◽  
Stefan Humer ◽  
Manuela Lenk ◽  
...  

Abstract This article contributes a framework for the quality assessment of imputations within a broader structure to evaluate the quality of register-based data. Four quality-related hyperdimensions examine the data processing from the raw-data level to the final statistics. Our focus lies on the quality assessment of different imputation steps and their influence on overall data quality. We suggest classification rates as a measure of accuracy of imputation and derive several computational approaches.
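One way to read "classification rates as a measure of accuracy of imputation" is a mask-and-reimpute simulation: hide values that are actually known, impute them, and report the share recovered correctly. The abstract does not specify the computation, so the sketch below is an assumption, and the mode-imputation rule is a placeholder rather than the paper's register-based procedure.

```python
# Hedged sketch: estimating the classification rate of an imputation rule
# by masking observed values and checking how many are re-imputed correctly.
import random

def classification_rate(values, impute, n_masked=100, seed=7):
    rng = random.Random(seed)
    idx = rng.sample(range(len(values)), n_masked)
    hits = 0
    for i in idx:
        truth = values[i]
        masked = values[:i] + [None] + values[i + 1:]  # hide one known value
        hits += impute(masked, i) == truth
    return hits / n_masked

def mode_impute(vals, i):
    # placeholder rule: always impute the most frequent observed category
    observed = [v for v in vals if v is not None]
    return max(set(observed), key=observed.count)

data = ["employed"] * 700 + ["unemployed"] * 300
print(classification_rate(data, mode_impute))  # roughly 0.70 for this rule
```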


Information ◽  
2020 ◽  
Vol 11 (4) ◽  
pp. 175 ◽  
Author(s):  
Tibor Koltay

This paper focuses on the characteristics of research data quality and aims to cover the most important issues related to it, giving particular attention to its attributes and to data governance. The corporate world's considerable interest in the quality of data is evident in numerous ideas and issues reported in business-related publications, even though there are apparent differences between values and approaches to data in corporate and academic (research) environments. The paper also takes into consideration that addressing data quality would be unimaginable without considering big data.


Author(s):  
Catherine Eastwood ◽  
Keith Denny ◽  
Maureen Kelly ◽  
Hude Quan

Theme: Data and Linkage Quality

Objectives:
- To define health data quality from clinical, data science, and health system perspectives.
- To describe some of the international best practices related to quality and how they are being applied to Canada's administrative health data.
- To compare methods for health data quality assessment and improvement in Canada (automated logical checks, chart quality indicators, reabstraction studies, coding manager perspectives).
- To highlight how data linkage can be used to provide new insights into the quality of original data sources.
- To highlight current international initiatives for improving coded data quality, including results from current ICD-11 field trials.

Panelists:
Dr. Keith Denny: Director of Clinical Data Standards and Quality, Canadian Institute for Health Information (CIHI); Adjunct Research Professor, Carleton University, Ottawa, ON. He provides leadership for CIHI's information quality initiatives and for the development and application of clinical classifications and terminology standards.
Maureen Kelly: Manager of Information Quality at CIHI, Ottawa, ON. She leads CIHI's corporate quality program, which is focused on enhancing the quality of CIHI's data sources and information products and on fostering CIHI's quality culture.
Dr. Cathy Eastwood: Scientific Manager, Associate Director of Alberta SPOR Methods & Development Platform, Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, AB. She has expertise in clinical data collection, evaluation of local and systemic data quality issues, and disease classification coding with ICD-10 and ICD-11.
Dr. Hude Quan: Professor, Community Health Sciences, Cumming School of Medicine, University of Calgary; Director, Alberta SPOR Methods Platform; Co-Chair of Hypertension Canada; Co-Chair of the Person to Population Health Collaborative of the Libin Cardiovascular Institute, Calgary, AB. He has expertise in assessing, validating, and linking administrative data sources for data science research, including artificial intelligence methods for evaluating and improving data quality.

Intended Outcomes:
"What is quality health data?" The panel of experts will address this common question by discussing how to define high-quality health data and the measures being taken to ensure that such data are available in Canada. Optimizing the quality of clinical-administrative data, and their use-value, first requires an understanding of the processes used to create the data. Subsequently, we can address the limitations in data collection and use these data for diverse applications. Current advances in digital data collection are providing more solutions to improve health data quality at lower cost. This panel will describe a number of quality assessment and improvement initiatives aimed at ensuring that health data are fit for a range of secondary uses, including data linkage. It will also discuss how the need for the linkage and integration of data sources can influence views of a data source's fitness for use.

CIHI content will include:
- Methods for optimizing the value of clinical-administrative data
- CIHI Information Quality Framework
- Reabstraction studies (e.g. physician documentation / coders' experiences)
- Linkage analytics for data quality

University of Calgary content will include:
- Defining and measuring health data quality
- Automated methods for quality assessment and improvement
- ICD-11 features and coding practices
- Electronic health record initiatives


2015 ◽  
Vol 17 (4) ◽  
pp. 640-661
Author(s):  
Li Chao ◽  
Zhou Hui ◽  
Zhou Xiaofeng

The hydrological data fed to hydrological decision support systems might be untimely, incomplete, inconsistent, or illogical due to network congestion, low server performance, instrument failures, human errors, and so on. It is imperative to assess, monitor, and even control the quality of hydrological data residing in or acquired from each link of a hydrological data supply chain. However, traditional quality management of hydrological data has focused mainly on intrinsic quality problems, such as outlier detection, null-value interpolation, consistency, and completeness, and cannot be used to assess the quality of data as consumed by applications, that is, along the data supply chain and at the granularity of tasks. To achieve these objectives, we first present a methodology to derive quality dimensions from a hydrological information system by questionnaire and show the cognitive differences in the importance of these dimensions; we then analyze the correlations between tasks, classify them into five categories, and construct a quality assessment model with time limits in the data supply chain. Exploratory experiments suggest the assessment system can provide data quality (DQ) indicators to DQ assessors and enable authorized consumers to monitor and even control the quality of data used in an application at the granularity of tasks.
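The abstract does not spell out the assessment model, so the following is only one plausible reading: a per-task DQ indicator that weights dimension scores and enforces the task's time limit. All names and numbers are illustrative assumptions.

```python
# Illustrative sketch only: a per-task data quality indicator combining
# weighted dimension scores with a timeliness cutoff, as one possible
# reading of "quality assessment model with time limits".
from dataclasses import dataclass

@dataclass
class TaskAssessment:
    scores: dict            # dimension -> score in [0, 1]
    weights: dict           # dimension -> weight, summing to 1
    latency_minutes: float  # how late the data arrived for this task
    time_limit_minutes: float

    def dq_indicator(self) -> float:
        if self.latency_minutes > self.time_limit_minutes:
            return 0.0      # data arrived too late to be usable for this task
        return sum(self.weights[d] * self.scores[d] for d in self.weights)

task = TaskAssessment(
    scores={"completeness": 0.98, "consistency": 0.90, "accuracy": 0.85},
    weights={"completeness": 0.4, "consistency": 0.3, "accuracy": 0.3},
    latency_minutes=12, time_limit_minutes=30,
)
print(task.dq_indicator())  # 0.917
```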


2017 ◽  
Vol 5 (1) ◽  
pp. 47-54
Author(s):  
Puguh Ika Listyorini ◽  
Mursid Raharjo ◽  
Farid Agushybana

Data are the basis for making decisions and policy, and higher-quality data produce better policy. Current quality assessment methods do not include all indicators of data quality; the more complete the indicators or assessment criteria, the stronger the assessment method. The purpose of this study is to develop a method for independent assessment of routine data quality in the Surakarta Health Department, which previously assessed data quality using the PMKDR and HMN methods. The design of this study is research and development (R&D), modified into seven steps: formulating potential problems, collecting the data, designing the product, validating the design, fixing the design, testing the product, and fixing the product. The subjects consisted of 19 respondents who manage data in the Surakarta Health Department. The data analysis method used is content analysis. The assessment results show that the pilot phase of the developed data quality assessment method was essentially successful, that is, the method can be used. According to the developed method, the quality of data collection was very adequate, data accuracy was poor, data consistency existed but was inadequate, the actuality of the data was very adequate, data periodicity was inadequate, the representation of the data was very adequate, and data sorting was very adequate. A commitment from the Surakarta Health Department is needed to take advantage of this method to assess data quality in support of information availability, decision making, and the planning of health programs. Further development of this research, carrying out all stages of the R&D steps, is also called for, so that the final method will be better.


2021 ◽  
Author(s):  
Rishabh Deo Pandey ◽  
Itu Snigdh

Abstract Data quality became significant with the emergence of data warehouse systems. While accuracy is an intrinsic aspect of data quality, validity presents a wider perspective that is more representational and contextual in nature. In this article we present a different perspective on data collection and collation. We focus on faults experienced in data sets and present validity as a function of allied parameters, such as completeness, usability, availability, and timeliness, for determining data quality. We also analyze the applicability of these metrics and modify them to suit IoT applications. Another major focus of this article is verifying these metrics on an aggregated data set instead of on separate data values. This work focuses on using the different validation parameters to determine the quality of data generated in a pervasive environment. The analysis approach presented is simple and can be employed to test the validity of collected data, isolate faults in the data set, and measure the suitability of data before applying analysis algorithms.
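The abstract names the allied parameters but not their formulas, so the sketch below uses common textbook-style definitions applied to an aggregated window of sensor readings. The field names, the ten-minute freshness limit, and the equal weighting are all assumptions for illustration.

```python
# Hedged sketch: validity of an aggregated batch of IoT readings as a
# weighted function of allied parameters (here completeness and timeliness).
from datetime import datetime, timedelta

readings = [
    {"sensor": "s1", "value": 21.4, "ts": datetime(2021, 5, 1, 10, 0)},
    {"sensor": "s2", "value": None, "ts": datetime(2021, 5, 1, 10, 0)},
    {"sensor": "s1", "value": 21.6, "ts": datetime(2021, 5, 1, 10, 5)},
]

def completeness(batch):
    # share of readings with a non-missing value
    return sum(r["value"] is not None for r in batch) / len(batch)

def timeliness(batch, now, max_age=timedelta(minutes=10)):
    # share of readings fresh enough to use
    return sum(now - r["ts"] <= max_age for r in batch) / len(batch)

def validity(batch, now, weights=(0.5, 0.5)):
    # validity modeled as a weighted combination of the allied parameters
    return weights[0] * completeness(batch) + weights[1] * timeliness(batch, now)

now = datetime(2021, 5, 1, 10, 12)
print(completeness(readings), timeliness(readings, now), validity(readings, now))
```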


Author(s):  
Juliusz L. Kulikowski

For many years it was not widely understood or accepted that, for high effectiveness of information processing systems, high quality of data is no less important than high technological performance of the systems. The way to understanding the complexity of the notion of data quality was also long, as will be shown below. However, progress in the development of modern information processing systems is not possible without improvement of data quality assessment and control methods. Data quality is closely connected both with the form of the data and with the value of the information they carry. High-quality data can be understood as data having an appropriate form and containing valuable information. Therefore, at least two aspects of data are reflected in this notion: first, the technical facility of data processing, and second, the usefulness of the information supplied by the data in education, science, decision making, etc.


10.2196/27842 ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. e27842
Author(s):  
Hannelore Aerts ◽  
Dipak Kalra ◽  
Carlos Sáez ◽  
Juan Manuel Ramírez-Anguita ◽  
Miguel-Angel Mayer ◽  
...  

Background: There is increasing recognition that health care providers need to focus attention on, and be judged against, the impact they have on the health outcomes experienced by patients. The measurement of health outcomes as a routine part of clinical documentation is probably the only scalable way of collecting outcomes evidence, since secondary data collection is expensive and error-prone. However, there is uncertainty about whether routinely collected clinical data within electronic health record (EHR) systems include the data most relevant to measuring and comparing outcomes, and whether those items are collected to a good enough data quality to be relied upon for outcomes assessment, since several studies have pointed out significant issues regarding EHR data availability and quality.

Objective: In this paper, we first describe a practical approach to data quality assessment of health outcomes, based on a literature review of existing frameworks for quality assessment of health data and multistakeholder consultation. Adopting this approach, we performed a pilot study on a subset of 21 International Consortium for Health Outcomes Measurement (ICHOM) outcomes data items from patients with congestive heart failure.

Methods: All available registries compatible with the diagnosis of heart failure within an EHR data repository of a general hospital (142,345 visits and 12,503 patients) were extracted and mapped to the ICHOM format. We focused our pilot assessment on 5 commonly used data quality dimensions: completeness, correctness, consistency, uniqueness, and temporal stability.

Results: We found high scores (>95%) for the consistency, completeness, and uniqueness dimensions. Temporal stability analyses showed some changes over time in the reported use of medication to treat heart failure, as well as in the recording of past medical conditions. Finally, the investigation of data correctness suggested several issues concerning the characterization of missing data values. Many of these issues appear to be introduced while mapping the IMASIS-2 relational database contents to the ICHOM format, as the latter requires a level of detail that is not explicitly available in the coded data of an EHR.

Conclusions: Overall, results of this pilot study revealed good data quality for the subset of heart failure outcomes collected at the Hospital del Mar. Nevertheless, some important data errors were identified that were caused by fundamentally different data collection practices in routine clinical care versus research, for which the ICHOM standard set was originally developed. To truly examine to what extent hospitals today are able to routinely collect the evidence of their success in achieving good health outcomes, future research would benefit from performing more extensive data quality assessments, including all data items from the ICHOM standard set and across multiple hospitals.
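As a rough illustration of three of the five dimensions assessed in the paper, the sketch below scores a small mapped outcomes table. This is not the study's actual pipeline, and the column names (med_ace_inhibitor, visit_date) are assumptions rather than real ICHOM item identifiers.

```python
# Illustrative sketch: completeness, uniqueness, and temporal stability
# on a toy outcomes table; not the study's actual assessment code.
import pandas as pd

def completeness(df):
    # share of non-missing cells across the table
    return 1.0 - df.isna().mean().mean()

def uniqueness(df, keys):
    # share of rows that are unique on the identifying keys
    return 1.0 - df.duplicated(subset=keys).mean()

def temporal_stability(df, col, time_col):
    # monthly proportion of recorded use; large jumps hint at changes in
    # documentation practice rather than true clinical change
    return df.groupby(df[time_col].dt.to_period("M"))[col].mean()

visits = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "visit_date": pd.to_datetime(
        ["2020-01-10", "2020-02-11", "2020-01-20", "2020-02-05"]),
    "med_ace_inhibitor": [1, 1, 0, None],
})
print(completeness(visits))
print(uniqueness(visits, keys=["patient_id", "visit_date"]))
print(temporal_stability(visits, "med_ace_inhibitor", "visit_date"))
```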

