A Data Quality Control Program for Computer-Assisted Personal Interviews

2012 ◽  
Vol 2012 ◽  
pp. 1-8 ◽  
Author(s):  
Janet E. Squires ◽  
Alison M. Hutchinson ◽  
Anne-Marie Bostrom ◽  
Kelly Deis ◽  
Peter G. Norton ◽  
...  

Researchers strive to optimize data quality in order to ensure that study findings are valid and reliable. In this paper, we describe a data quality control program designed to maximize quality of survey data collected using computer-assisted personal interviews. The quality control program comprised three phases: (1) software development, (2) an interviewer quality control protocol, and (3) a data cleaning and processing protocol. To illustrate the value of the program, we assess its use in the Translating Research in Elder Care Study. We utilize data collected annually for two years from computer-assisted personal interviews with 3004 healthcare aides. Data quality was assessed using both survey and process data. Missing data and data errors were minimal. Mean and median values and standard deviations were within acceptable limits. Process data indicated that in only 3.4% and 4.0% of cases was the interviewer unable to conduct interviews in accordance with the details of the program. Interviewers’ perceptions of interview quality also significantly improved between Years 1 and 2. While this data quality control program was demanding in terms of time and resources, we found that the benefits clearly outweighed the effort required to achieve high-quality data.
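The abstract above describes a three-phase program but does not publish its checks. As a hedged illustration only, the sketch below shows the kind of automated screening a data cleaning and processing protocol for CAPI survey data might run (missing-value rates, out-of-range responses, means and standard deviations); the column names and response ranges are hypothetical, not taken from the study.

```python
import pandas as pd

# Illustrative sketch (not the authors' actual protocol): per-item quality
# indicators of the kind a CAPI data-cleaning phase might report.
def basic_survey_checks(df: pd.DataFrame, value_ranges: dict) -> pd.DataFrame:
    """Return one row of quality indicators per survey item."""
    report = []
    for col, (lo, hi) in value_ranges.items():
        missing = df[col].isna().mean()                                   # share of missing responses
        out_of_range = ((~df[col].between(lo, hi)) & df[col].notna()).mean()  # share outside the allowed scale
        report.append({"item": col,
                       "pct_missing": round(100 * missing, 2),
                       "pct_out_of_range": round(100 * out_of_range, 2),
                       "mean": df[col].mean(),
                       "sd": df[col].std()})
    return pd.DataFrame(report)

# Example: two 5-point Likert items; column names are hypothetical.
surveys = pd.DataFrame({"job_satisfaction": [4, 5, None, 3, 9],
                        "burnout": [2, 1, 3, None, 4]})
print(basic_survey_checks(surveys, {"job_satisfaction": (1, 5), "burnout": (1, 5)}))
```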

2017 ◽  
Vol 6 (2) ◽  
pp. 505-521 ◽  
Author(s):  
Luděk Vecsey ◽  
Jaroslava Plomerová ◽  
Petr Jedlička ◽  
Helena Munzarová ◽  
Vladislav Babuška ◽  
...  

Abstract. This paper focuses on major issues related to the data reliability and network performance of 20 broadband (BB) stations of the Czech (CZ) MOBNET (MOBile NETwork) seismic pool within the AlpArray seismic experiments. Currently used high-resolution seismological applications require high-quality data recorded for a sufficiently long time interval at seismological observatories and during the entire time of operation of the temporary stations. In this paper we present new hardware and software tools we have been developing during the last two decades while analysing data from several international passive experiments. The new tools help to assure the high-quality standard of broadband seismic data and eliminate potential errors before supplying data to seismological centres. Special attention is paid to crucial issues like the detection of sensor misorientation, timing problems, interchange of record components and/or their polarity reversal, sensor mass centring, or anomalous channel amplitudes due to, for example, imperfect gain. Thorough data quality control should represent an integral constituent of seismic data recording, preprocessing, and archiving, especially for data from temporary stations in passive seismic experiments. Large international seismic experiments require enormous efforts from scientists from different countries and institutions to gather hundreds of stations to be deployed in the field during a limited time period. In this paper, we demonstrate the beneficial effects of the procedures we have developed for acquiring a reliable large set of high-quality data from each group participating in field experiments. The presented tools can be applied manually or automatically on data from any seismic network.
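As a hedged sketch of just one of the checks listed above, the snippet below flags anomalous channel amplitudes (for example, an imperfect gain) by comparing the RMS levels of the three components of a broadband record; the threshold and the synthetic traces are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hedged sketch: flag a channel whose amplitude deviates strongly from the
# other components, a symptom of a wrong gain or response. Threshold is
# illustrative; real checks would also consult station metadata.
def flag_amplitude_anomaly(z, n, e, max_ratio=10.0):
    """Return True if any component RMS deviates strongly from the others."""
    rms = np.array([np.sqrt(np.mean(np.square(c))) for c in (z, n, e)])
    ratio = rms.max() / max(rms.min(), 1e-20)   # guard against a dead channel
    return ratio > max_ratio

# Example with synthetic traces: the East component has a 100x gain error.
rng = np.random.default_rng(0)
z, n = rng.normal(size=86400), rng.normal(size=86400)
e = 100.0 * rng.normal(size=86400)
print(flag_amplitude_anomaly(z, n, e))   # True -> inspect gain / response metadata
```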


2015 ◽  
Vol 76 ◽  
pp. 96-111 ◽  
Author(s):  
A.T. Ringler ◽  
M.T. Hagerty ◽  
J. Holland ◽  
A. Gonzales ◽  
L.S. Gee ◽  
...  

2017 ◽  
Vol 46 (2) ◽  
pp. 69-77 ◽  
Author(s):  
Beth A Reid ◽  
Lee Ridoutt ◽  
Paul O’Connor ◽  
Deirdre Murphy

Introduction: This article presents some of the results of a year-long project in the Republic of Ireland to review the quality of the hospital inpatient enquiry data for its use in activity-based funding (ABF). This is the first of two papers regarding best practice in the management of clinical coding services. Methods: Four methods were used to address this aspect of the project, namely a literature review, a workshop, an assessment of the coding services in 12 Irish hospitals by structured interviews of the clinical coding managers, and a medical record audit of the clinical codes in 10 hospitals. Results: The results included here are those relating to the quality of the medical records, coding work allocation and supervision processes, data quality control measures, communication with clinicians, and the visibility of clinical coders, their managers, and the coding service. Conclusion: The project found instances of best practice in the study hospitals but also found several areas needing improvement. These included improving the structure and content of the medical record, clinician engagement with the clinical coding teams and the ABF process, and the use of data quality control measures.


2010 ◽  
Vol 2 (3) ◽  
pp. 135-141
Author(s):  
Narsito Narsito

Abstract. This paper deals with some practical problems related to the quality of analytical chemical data commonly encountered in practice. Special attention is given to quality control in analytical chemistry, since analytical data are among the primary sources of information from which important scientifically based decisions are made. The paper begins with a brief description of some fundamental aspects of analytical data quality, such as the sources of variation in analytical data, criteria for the quality of an analytical method, and quality assurance in chemical analysis. The assessment of quality parameters for analytical methods, including the use of standard materials as well as standard methods, is then given. Concerning the quality control of analytical data, the use of several techniques, such as control samples and control charts, for monitoring analytical data in a quality control program is described qualitatively. In the final part of the paper, some important remarks on the preparation of collaborative trials, including the evaluation of the accuracy and reproducibility of an analytical method, are also given. Keywords: collaborative trials, quality control, analytical data
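To make the control-chart idea concrete, here is a minimal sketch (with illustrative concentrations, not data from the paper) of Shewhart-style warning and action limits computed from repeated measurements of a control sample and then applied to later runs.

```python
import numpy as np

# Minimal sketch of a Shewhart control chart for a control sample:
# mean +/- 2s as a warning limit, mean +/- 3s as an action limit.
control_runs = np.array([10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9])  # mg/L, illustrative
mean, s = control_runs.mean(), control_runs.std(ddof=1)

def check_run(value, mean=mean, s=s):
    if abs(value - mean) > 3 * s:
        return "action: result outside +/-3s, stop and investigate"
    if abs(value - mean) > 2 * s:
        return "warning: result outside +/-2s, watch the next runs"
    return "in control"

print(check_run(10.05))   # in control
print(check_run(10.60))   # action limit exceeded
```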


2021 ◽  
Author(s):  
Francesco Battocchio ◽  
Jaijith Sreekantan ◽  
Arghad Arnaout ◽  
Abed Benaichouche ◽  
Juma Sulaiman Al Shamsi ◽  
...  

Abstract. Drilling data quality is a notorious challenge for any analytics application, due to the complexity of the real-time data acquisition system, which routinely generates (i) time-related issues caused by irregular sampling, (ii) channel-related issues in terms of non-uniform names and units and missing or wrong values, and (iii) depth-related issues caused by block position resets and depth compensation (for floating rigs). On the other hand, artificial intelligence drilling applications typically require a consistent stream of high-quality data as an input for their algorithms, as well as for visualization. In this work we present an automated workflow enhanced by data-driven techniques that resolves complex quality issues, harmonizes drilling sensor data, and reports the quality of the dataset to be used for advanced analytics. The approach proposes an automated data quality workflow which formalizes the characteristics, requirements, and constraints of sensor data within the context of drilling operations. The workflow leverages machine learning algorithms, statistics, signal processing, and rule-based engines for the detection of data quality issues including error values, outliers, bias, drifts, noise, and missing values. Further, once data quality issues are classified, they are scored and treated on a context-specific basis in order to recover the maximum volume of data while avoiding information loss. This results in a data quality and preparation engine that organizes drilling data for further advanced analytics and reports the quality of the dataset through key performance indicators. This novel data processing workflow allowed the recovery of more than 90% of a drilling dataset comprising 18 offshore wells that otherwise could not be used for analytics. This was achieved by resolving specific issues including resampling time series with gaps and different sampling rates, and smart imputation of wrong/missing data while preserving the consistency of the dataset across all channels. Additional improvements would include recovering data values that fell outside a meaningful range because of sensor drifting or depth resets. The present work automates the end-to-end workflow for data quality control of drilling sensor data leveraging advanced Artificial Intelligence (AI) algorithms. It detects and classifies patterns of wrong/missing data and recovers them through a context-driven approach that prevents information loss. As a result, the maximum amount of data is recovered for artificial intelligence drilling applications. The workflow also enables optimal time synchronization of different sensors streaming data at different frequencies, within discontinuous time intervals.
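As a hedged illustration of one step in such a workflow (not the authors' engine), the sketch below resamples an irregularly sampled drilling channel onto a uniform time grid and interpolates missing samples only up to a cap, so that long outages are not silently filled; the channel name, timestamps, and cap are hypothetical.

```python
import numpy as np
import pandas as pd

# Illustrative sketch: put an irregularly sampled channel onto a uniform 1 s
# grid, then interpolate, filling at most `max_gap_samples` consecutive
# missing samples so longer gaps remain visible as NaN.
def resample_channel(series: pd.Series, rate="1s", max_gap_samples=10):
    uniform = series.resample(rate).mean()   # uniform time grid
    return uniform.interpolate(method="time", limit=max_gap_samples, limit_area="inside")

# Hypothetical hook-load channel with irregular timestamps and a gap.
idx = pd.to_datetime(["2021-01-01 00:00:00.2", "2021-01-01 00:00:01.7",
                      "2021-01-01 00:00:03.1", "2021-01-01 00:00:20.0"])
hook_load = pd.Series([102.0, 101.5, np.nan, 99.8], index=idx, name="HKLD")
print(resample_channel(hook_load))
```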


2021 ◽  
pp. 193896552110254
Author(s):  
Lu Lu ◽  
Nathan Neale ◽  
Nathaniel D. Line ◽  
Mark Bonn

As the use of Amazon’s Mechanical Turk (MTurk) has increased among social science researchers, so, too, has research into the merits and drawbacks of the platform. However, while many endeavors have sought to address issues such as generalizability, the attentiveness of workers, and the quality of the associated data, there has been relatively less effort concentrated on integrating the various strategies that can be used to generate high-quality data using MTurk samples. Accordingly, the purpose of this research is twofold. First, existing studies are integrated into a set of strategies/best practices that can be used to maximize MTurk data quality. Second, focusing on task setup, selected platform-level strategies that have received relatively less attention in previous research are empirically tested to further enhance the contribution of the proposed best practices for MTurk usage.
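As a hedged sketch of two screening strategies commonly cited in this literature (not necessarily the specific platform-level manipulations tested in the paper), the snippet below drops respondents who fail an instructed attention-check item or who finish implausibly fast; the column names and thresholds are hypothetical.

```python
import pandas as pd

# Illustrative post-hoc screening of MTurk responses; thresholds and column
# names are assumptions for the example, not recommendations from the paper.
def screen_mturk(df: pd.DataFrame, min_seconds=120):
    passed_attention = df["attention_check"] == "strongly agree"  # instructed response item
    plausible_time = df["duration_s"] >= min_seconds              # implausibly fast completions dropped
    return df[passed_attention & plausible_time]

responses = pd.DataFrame({
    "worker_id": ["A1", "A2", "A3"],
    "attention_check": ["strongly agree", "neutral", "strongly agree"],
    "duration_s": [340, 410, 45],
})
print(screen_mturk(responses))   # keeps only worker A1
```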


Author(s):  
C. X. Chen ◽  
H. Zhang ◽  
K. Jiang ◽  
H. T. Zhao ◽  
W. Xie ◽  
...  

Abstract. In recent years, China has promulgated the "Civil Code of the People's Republic of China", the "Implementation Rules of the Provisional Regulations on Real Estate Registration", and other laws and regulations, which protect citizens' real estate rights and obligations at the level of the legal system. This shows that the quality of real estate registration data is very important. At present, however, there is no set of standards for evaluating the quality of real estate registration data. This article sorts out the production process of real estate registration data and focuses on its four production stages: digitization, field surveys and surveying and mapping, group building, and integration and association. On this basis, the main points of real estate registration data quality control are put forward and a quality evaluation model is developed. Taking the quality inspection of Beijing's integrated historical real estate registration archives as an application case, we show that the quality evaluation model has been successfully applied to actual projects, ensuring the quality of Beijing's real estate registration data. It also provides a reference for the next step of quality control in China's unified registration and confirmation of natural resources.
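A stage-weighted score is one plausible shape for such a quality evaluation model; the sketch below is purely illustrative, with stage keys named after the four production stages above but with assumed weights, sub-scores, and pass threshold that are not taken from the article.

```python
# Hedged sketch of a stage-weighted quality score; weights, scores, and the
# pass threshold are assumptions for illustration only.
STAGE_WEIGHTS = {
    "digitization": 0.25,
    "field_survey_mapping": 0.30,
    "group_building": 0.20,
    "integration_association": 0.25,
}

def overall_quality(stage_scores: dict, threshold=90.0):
    score = sum(STAGE_WEIGHTS[s] * stage_scores[s] for s in STAGE_WEIGHTS)
    return score, ("pass" if score >= threshold else "rework required")

print(overall_quality({"digitization": 95, "field_survey_mapping": 92,
                       "group_building": 88, "integration_association": 94}))
```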


Sensors ◽  
2019 ◽  
Vol 19 (9) ◽  
pp. 1978 ◽  
Author(s):  
Argyro Mavrogiorgou ◽  
Athanasios Kiourtis ◽  
Konstantinos Perakis ◽  
Stamatios Pitsios ◽  
Dimosthenis Kyriazis

It is an undeniable fact that Internet of Things (IoT) technologies have become a milestone advancement in the digital healthcare domain, since the number of IoT medical devices has grown exponentially; it is anticipated that by 2020 there will be over 161 million of them connected worldwide. In this era of continuous growth, IoT healthcare faces various challenges, such as the collection, quality estimation, interpretation, and harmonization of the data that derive from huge numbers of heterogeneous IoT medical devices. Even though various approaches have been developed for solving each of these challenges, none proposes a holistic approach for achieving interoperability of high-quality data derived from heterogeneous devices. For that reason, in this manuscript a mechanism is proposed for effectively addressing the intersection of these challenges. The mechanism first collects the different devices' datasets and then cleans them. The cleaning results are then used to estimate the overall data quality of each dataset, in combination with measurements of the availability and reliability of the device that produced it. Consequently, only the high-quality data are kept and translated into a common format for further use. The proposed mechanism is evaluated through a specific scenario, producing reliable results, achieving data interoperability with 100% accuracy and data quality with more than 90% accuracy.
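The selection step described above could be sketched as a weighted combination of the cleaning-based quality estimate, device availability, and device reliability, with only datasets above a cut-off passed on to harmonization; the weights, cut-off, and device names below are assumptions, not values from the paper.

```python
# Hedged sketch: combine per-dataset quality with device availability and
# reliability, keep only datasets scoring above a cut-off. All numbers and
# names are illustrative.
def combined_score(quality, availability, reliability, w=(0.5, 0.25, 0.25)):
    return w[0] * quality + w[1] * availability + w[2] * reliability

datasets = {
    "glucose_meter_01": (0.96, 0.99, 0.95),
    "wearable_ecg_07": (0.70, 0.92, 0.80),
}
kept = {name: scores for name, scores in datasets.items()
        if combined_score(*scores) >= 0.90}
print(kept)   # only the high-scoring dataset proceeds to format harmonization
```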


2018 ◽  
Vol 13 (2) ◽  
pp. 131-146
Author(s):  
Mirwan Rofiq Ginanjar ◽  
Sri Mulat Yuningsih

Planning and management of water resources depend on the quality of hydrological data, which plays an important role in hydrological analysis. The availability of good-quality hydrological data is one of the determinants of the results of hydrological analysis. However, much of the available data falls short of this ideal. To solve this problem, a hydrological data quality control model should be established in order to improve the quality of national hydrological data. The scope includes quality control of rainfall and discharge data. Quality control analysis of rainfall data was conducted on 58 rainfall stations spread across the island of Java. The analysis shows that 41 stations are categorized as good, 14 as moderate, and 3 as bad. Under a light improvement scenario based on these results, the good category increased to 46 stations, the moderate category decreased to 11 stations, and the bad category was reduced to 1 station. Quality control analysis of discharge data was conducted on 14 discharge stations on Java Island. Analyses were performed for QC1, QC2, and QC3, which were then combined into a final QC value. The final QC results show no stations in the good category, 2 stations in the moderate category, and 12 in the bad category. Under a light improvement scenario, 5 stations improved from the bad category to good, 7 improved from bad to moderate, and 1 station improved from moderate to good.
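The abstract does not spell out how QC1, QC2, and QC3 are combined or where the category boundaries lie, so the sketch below is only a hedged illustration of the final classification step, with an assumed simple average and assumed thresholds, and with hypothetical station names.

```python
# Hedged sketch of the final QC classification; the averaging rule, the
# category thresholds, and the station names are assumptions.
def classify_station(qc1, qc2, qc3, good_cut=0.8, moderate_cut=0.6):
    final_qc = (qc1 + qc2 + qc3) / 3.0
    if final_qc >= good_cut:
        return final_qc, "good"
    if final_qc >= moderate_cut:
        return final_qc, "moderate"
    return final_qc, "bad"

for station, scores in {"station_A": (0.9, 0.85, 0.8),
                        "station_B": (0.5, 0.6, 0.4)}.items():
    print(station, classify_station(*scores))
```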

