Data Quality Assessment

Author(s):  
Juliusz L. Kulikowski

For many years, the idea that high-quality data is no less important to the effectiveness of information processing systems than the systems' technological perfection was not widely understood or accepted. The path to understanding the complexity of the data quality notion was also long, as will be shown in this paper. However, progress in the development of modern information processing systems is not possible without improved methods of data quality assessment and control. Data quality is closely connected both with the form of the data and with the value of the information it carries. High-quality data can be understood as data having an appropriate form and containing valuable information. At least two aspects of data are therefore reflected in this notion: (1) the technical ease of processing the data, and (2) the usefulness of the information it supplies in education, science, decision making, etc.



2018 ◽  
Vol 7 (4) ◽  
pp. e000353 ◽  
Author(s):  
Luke A Turcotte ◽  
Jake Tran ◽  
Joshua Moralejo ◽  
Nancy Curtin-Telegdi ◽  
Leslie Eckel ◽  
...  

Background: Health information systems with applications in patient care planning and decision support depend on high-quality data. A postacute care hospital in Ontario, Canada, conducted data quality assessment and focus group interviews to guide the development of a cross-disciplinary training programme to reimplement the Resident Assessment Instrument–Minimum Data Set (RAI-MDS) 2.0 comprehensive health assessment into the hospital’s clinical workflows.
Methods: A hospital-level data quality assessment framework based on time series comparisons against an aggregate of Ontario postacute care hospitals was used to identify areas of concern. Focus groups were used to evaluate assessment practices and the use of health information in care planning and clinical decision support. The data quality assessment and focus groups were repeated to evaluate the effectiveness of the training programme.
Results: The initial data quality assessment and focus group indicated that knowledge, practice and cultural barriers prevented both the collection and use of high-quality clinical data. Following the implementation of the training, there was an improvement in both data quality and the culture surrounding the RAI-MDS 2.0 assessment.
Conclusions: It is important for facilities to evaluate the quality of their health information to ensure that it is suitable for decision-making purposes. This study demonstrates the use of a data quality assessment framework that can be applied for quality improvement planning.
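The time-series comparison described above can be sketched as a simple screening rule. This is an illustrative assumption, not the hospital's actual framework: a facility's quarterly rate for some indicator is compared against the provincial aggregate, and periods deviating beyond a threshold are flagged for review. The metric values, threshold, and function name are all hypothetical.

```python
def flag_outliers(facility_series, aggregate_series, threshold=0.10):
    """Return indices of periods where the facility's rate differs from
    the aggregate rate by more than `threshold` (absolute difference)."""
    flagged = []
    for i, (fac, agg) in enumerate(zip(facility_series, aggregate_series)):
        if abs(fac - agg) > threshold:
            flagged.append(i)
    return flagged

# Illustrative data: assessment-completion rate per quarter.
facility = [0.92, 0.75, 0.94, 0.60]
aggregate = [0.90, 0.91, 0.93, 0.92]   # aggregate of comparable hospitals
print(flag_outliers(facility, aggregate))  # -> [1, 3]
```

Flagged quarters would then be candidates for the kind of focus-group follow-up the study describes.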


2009 ◽  
Vol 419-420 ◽  
pp. 445-448 ◽  
Author(s):  
Jun Ting Cheng ◽  
Wei Ling Zhao ◽  
Can Zhao ◽  
Xue Dong Xie

In the field of reverse engineering, data quality assessment is an important part of inspection: its results directly or indirectly affect the quality of the inspection and of the subsequent manufacturing process. Data quality assessment can be applied to camera calibration, to comparison between a model and its reconstruction, and so on. In this paper, building on the existing method of computing the error of each point and on concepts of mathematical statistics such as the mean and the standard error, we propose a novel and simple method of error calculation. The method is applicable to comparing many groups of one-to-one ideal data and measured data; it reflects the error of the data as a whole, as well as the error distribution, more intuitively, and it determines more efficiently whether the measured data are reasonable. The quality of data points collected in reverse engineering is assessed, showing that the proposed method has advantages in the field of data point quality assessment.
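The statistical core of such a comparison can be sketched as follows. This is a generic illustration of per-point error plus mean/spread statistics, not the paper's specific method: each measured point is paired one-to-one with its ideal counterpart, the Euclidean error is computed per point, and the mean and standard deviation summarize the overall error and its distribution.

```python
import math

def point_errors(ideal, measured):
    """Euclidean error between each ideal/measured point pair (any dimension)."""
    return [math.dist(p, q) for p, q in zip(ideal, measured)]

def error_summary(errors):
    """Mean error and population standard deviation of the errors."""
    n = len(errors)
    mean = sum(errors) / n
    std = math.sqrt(sum((e - mean) ** 2 for e in errors) / n)
    return mean, std

# Illustrative 3-D point clouds: ideal CAD points vs. scanned points.
ideal = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
measured = [(0.0, 0.0, 0.1), (1.0, 0.0, 0.3)]
errors = point_errors(ideal, measured)       # [0.1, 0.3]
mean, std = error_summary(errors)            # mean 0.2, std 0.1
```

A large mean indicates a systematic offset of the whole scan, while a large standard deviation indicates uneven, locally unreasonable measurements.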


Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 2927
Author(s):  
Zihao Shao ◽  
Huiqiang Wang ◽  
Guangsheng Feng

Mobile crowdsensing (MCS) is a way to use social resources to solve high-precision environmental awareness problems in real time. Publishers hope to collect as much sensed data as possible at a relatively low cost, while users want to earn more revenue at a low cost. Low-quality data will reduce the efficiency of MCS and lead to a loss of revenue. However, existing work lacks research on the selection of user revenue under the premise of ensuring data quality. In this paper, we propose a Publisher-User Evolutionary Game Model (PUEGM) and a revenue selection method to solve the evolutionary stable equilibrium problem based on non-cooperative evolutionary game theory. Firstly, the choice of user revenue is modeled as a Publisher-User Evolutionary Game Model. Secondly, based on error-elimination decision theory, we incorporate a data quality assessment algorithm into the PUEGM, which aims to remove low-quality data and improve the overall quality of user data. Finally, the optimal user revenue strategy under different conditions is obtained from the evolutionary stability strategy (ESS) solution and stability analysis. In order to verify the efficiency of the proposed solutions, extensive experiments using real data sets were conducted. The experimental results demonstrate that the proposed method achieves high accuracy in data quality assessment and a reasonable selection of user revenue.
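The evolutionary-stability machinery behind an ESS analysis can be illustrated with standard replicator dynamics. The payoffs below are invented for illustration and are not the PUEGM's actual payoff matrix: `x` is the fraction of users submitting high-quality data, high-quality effort costs 1 but earns a reward of 3 when accepted, and low-quality data earns a flat 1. The population share of the better-paying strategy grows until a stable state is reached.

```python
def replicator(x, payoff_high, payoff_low, dt=0.01, steps=5000):
    """Euler-iterate the replicator equation dx/dt = x * (f_high - f_avg),
    where f_avg is the population-average payoff."""
    for _ in range(steps):
        f_hi = payoff_high(x)
        f_lo = payoff_low(x)
        f_avg = x * f_hi + (1 - x) * f_lo
        x += dt * x * (f_hi - f_avg)
        x = min(max(x, 0.0), 1.0)   # keep x a valid population share
    return x

# Assumed payoffs: high quality nets 3 - 1 = 2, low quality nets 1.
final = replicator(0.2, lambda x: 3.0 - 1.0, lambda x: 1.0)
# Since high quality strictly dominates here, x converges toward 1.0.
```

In the paper's setting the payoffs would themselves depend on the data quality assessment step, which is what makes the publisher-user interaction a genuine game rather than this one-sided example.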


2017 ◽  
Vol 9 (1) ◽  
Author(s):  
Sophia Crossen

Objective: To explore the quality of data submitted once a facility is moved into an ongoing submission status, and to address the importance of continuing data quality assessments.
Introduction: Once a facility meets data quality standards and is approved for production, an assumption is made that the quality of data received remains at the same level. When looking at production data quality reports from various states generated using a SAS data quality program, a need for production data quality assessment was identified. By implementing a periodic data quality update on all production facilities, data quality has improved for production data as a whole and for individual facility data. Through this activity several root causes of data quality degradation have been identified, allowing processes to be implemented in order to mitigate impact on data quality.
Methods: Many jurisdictions work with facilities during the onboarding process to improve data quality. Once a certain level of data quality is achieved, the facility is moved into production. At this point the jurisdiction generally assumes that the quality of the data being submitted will remain fairly constant. To check this assumption in Kansas, a SAS Production Report program was developed specifically to look at production data quality. A legacy data set is downloaded from BioSense production servers by Earliest Date in order to capture all records for visits which occurred within a specified time frame. This data set is then run through a SAS data quality program which checks specific fields for completeness and validity and prints a report on counts and percentages of null and invalid values, outdated records, and timeliness of record submission, as well as examples of records from visits containing these errors. A report is created for the state as a whole, and for each facility, EHR vendor, and HIE sending data to the production servers, with examples provided only by facility. The facility, vendor, and HIE reports include state percentages of errors for comparison. The Production Report was initially run on Kansas data for the first quarter of 2016, followed by consultations with facilities on the findings. Monthly checks were made of data quality before and after facilities implemented changes. An examination of Kansas’ results showed a marked decrease in data quality for many facilities; every facility had at least one area in need of improvement. The data quality reports and examples were sent to every facility sending production data during the first quarter, attached to an email requesting a 30-60 minute call with each to go over the report. This call was deemed crucial to the process since it had been over a year, and in a few cases over two years, since some of the facilities had looked at data quality, and they would need a review of the findings and all requirements, new and old. Ultimately, over half of all production facilities scheduled a follow-up call. While some facilities expressed some degree of trepidation, most were open to revisiting data quality and to making requested improvements. Reasons for data quality degradation included updates to EHR products, change of EHR product, workflow issues, engine updates, new requirements, and personnel turnover. A request was made of other jurisdictions (including Arizona, Nevada, and Illinois) to look at their production data using the same program and compare quality. Data was pulled for at least one week of July 2016 by Earliest Date.
Results: Monthly reports have been run on Kansas production data both before and after the consultation meetings, and they indicate a marked improvement in both completeness of required fields and validity of values in those fields. Data for these monthly reports was again selected by Earliest Date.
Conclusions: In order to ensure production data continues to be of value for syndromic surveillance purposes, periodic data quality assessments should continue after a facility reaches ongoing submission status. Alterations in process include a review of production data at least twice per year, with a follow-up data review one month later to confirm adjustments have been correctly implemented.
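The null/invalid checks the SAS program performs can be sketched in a few lines. The actual program is SAS; this Python analogue, with invented field names (`patient_sex`, `visit_date`) and an assumed value set, only illustrates the kind of per-field completeness and validity report described above.

```python
VALID_SEX = {"M", "F", "U"}  # assumed valid-value set for the example

def field_quality(records):
    """Percent null and percent invalid for each checked field."""
    checks = [
        ("patient_sex", lambda v: v in VALID_SEX),
        ("visit_date", lambda v: len(v) == 8 and v.isdigit()),  # YYYYMMDD
    ]
    total = len(records)
    report = {}
    for field, is_valid in checks:
        nulls = sum(1 for r in records if not r.get(field))
        invalid = sum(1 for r in records
                      if r.get(field) and not is_valid(r[field]))
        report[field] = {"pct_null": 100 * nulls / total,
                         "pct_invalid": 100 * invalid / total}
    return report

records = [{"patient_sex": "M", "visit_date": "20160105"},
           {"patient_sex": "",  "visit_date": "2016-01-05"},
           {"patient_sex": "X", "visit_date": "20160107"}]
print(field_quality(records))
```

Run per facility, per vendor, and statewide, percentages like these are what allow the side-by-side comparisons the report provides.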


2018 ◽  
Vol 12 (1) ◽  
pp. 19-32 ◽  
Author(s):  
Mehrnaz Mashoufi ◽  
Haleh Ayatollahi ◽  
Davoud Khorasani-Zavareh

Introduction: Data quality is an important issue in emergency medicine. The unique characteristics of emergency care services, such as high turnover and the speed of work, may increase the possibility of making errors in the related settings. Therefore, regular data quality assessment is necessary to avoid the consequences of low-quality data. This study aimed to identify the main dimensions of data quality that had been assessed, the assessment approaches, and, generally, the status of data quality in emergency medical services.
Methods: The review was conducted in 2016. Related articles were identified by searching databases including Scopus, ScienceDirect, PubMed and Web of Science. All of the review and research papers related to data quality assessment in emergency care services published between 2000 and 2015 (n=34) were included in the study.
Results: The findings showed that five dimensions of data quality, namely data completeness, accuracy, consistency, accessibility, and timeliness, had been investigated in the field of emergency medical services. Regarding assessment methods, quantitative research methods were used more than qualitative or mixed methods. Overall, the results of these studies showed that data completeness and data accuracy require more attention and improvement.
Conclusion: In future studies, a clear and consistent definition of data quality is required. Moreover, the use of qualitative research methods or mixed methods is suggested, as data users’ perspectives can provide a broader picture of the reasons for poor-quality data.


2017 ◽  
Vol 5 (1) ◽  
pp. 47-54
Author(s):  
Puguh Ika Listyorini ◽  
Mursid Raharjo ◽  
Farid Agushybana

Data are the basis for making decisions and policy, and higher-quality data produce better policy. Current quality assessment methods do not include all indicators of data quality; the more complete the indicators or assessment criteria, the stronger the assessment method. The purpose of this study is to develop a method for independent assessment of routine data quality in the Surakarta Health Department, which previously performed data quality assessment using the PMKDR and HMN methods.
The design of this study is research and development (R&D), modified into seven steps: formulating potential problems, collecting the data, designing the product, validating the design, fixing the design, testing the product, and fixing the product. The subjects consisted of 19 respondents who are data managers in the Surakarta Health Department. The data analysis method used is content analysis.
The pilot phase shows that the developed data quality assessment method is basically successful and usable. According to the developed method, the quality of data collection is very adequate; data accuracy is poor; data consistency exists but is inadequate; data actuality is very adequate; data periodicity is inadequate; data representation is very adequate; and data sorting is very adequate.
A commitment is needed from the Surakarta Health Department to take advantage of the developed method for assessing data quality, in support of information availability, decision making, and the planning of health programs. Further research conducting all stages of the R&D steps is also called for, so that the final developed method will be better.


High-quality data are the precondition for analyzing and using big data and for guaranteeing the value of the data. At present, comprehensive analysis and research of quality standards and quality assessment methods for big data are lacking. First, this paper summarizes reviews of data quality research. Second, it analyzes the data characteristics of the big data environment, presents quality challenges faced by big data, and formulates a hierarchical data quality framework from the perspective of data users. This framework consists of big data quality dimensions, quality characteristics, and quality indexes. Finally, on the basis of this framework, the paper constructs a dynamic assessment process for data quality. This process has good expansibility and adaptability and can meet the needs of big data quality assessment. Several studies have shown that maintaining the quality of data is often recognized as problematic, but at the same time is considered essential to effective decision making in building resource management. Big data sources are very broad and data structures are complex, so the data obtained may have quality problems such as data errors, missing data, inconsistencies, noise, and so on. The purpose of data cleaning (data scrubbing) is to detect and remove errors and inconsistencies from data in order to improve its quality. Data cleaning can be divided into four patterns based on implementation methods and scopes: manual execution, writing of special-purpose programs, data cleaning independent of specific application fields, and solving the problems of one specific application field. Of these four approaches, the third has great practical value and can be applied effectively.
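A minimal data-cleaning pass in the spirit of this passage can be sketched as follows. The record schema, field names, and valid range are invented for illustration: records with missing required fields, out-of-range (noisy or erroneous) values, or duplicate keys are detected and dropped.

```python
def clean(records, required=("id", "temp_c"), valid_range=(-50.0, 60.0)):
    """Drop records with missing fields, out-of-range values, or duplicate ids."""
    lo, hi = valid_range
    seen, cleaned = set(), []
    for r in records:
        if any(r.get(k) is None for k in required):   # missing data
            continue
        if not (lo <= r["temp_c"] <= hi):             # data error / noise
            continue
        if r["id"] in seen:                           # inconsistency: duplicate key
            continue
        seen.add(r["id"])
        cleaned.append(r)
    return cleaned

raw = [{"id": 1, "temp_c": 21.5},   # kept
       {"id": 2, "temp_c": None},   # missing value
       {"id": 3, "temp_c": 999.0},  # out of range
       {"id": 1, "temp_c": 21.5}]   # duplicate of the first record
print([r["id"] for r in clean(raw)])  # -> [1]
```

A rule-based cleaner like this corresponds to the field-independent third pattern above, which is why that pattern generalizes well across applications.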


2017 ◽  
Vol 4 (1) ◽  
pp. 25-31 ◽  
Author(s):  
Diana Effendi

The Information Product Approach (IP Approach) is an information management approach. It can be used to manage product information and to analyze data quality. IP-Maps can be used by organizations to facilitate knowledge management in collecting, storing, maintaining, and using data in an organized manner. The data management process for academic activities at X University has not yet used the IP approach; X University has not paid attention to the management of its information quality, concerning itself so far only with the system applications used to support the automation of data management in academic activities. The IP-Map made in this paper can be used as a basis for analyzing the quality of data and information. With the IP-Map, X University is expected to know which parts of the process need improvement in the quality of data and information management.
Index terms: IP Approach, IP-Map, information quality, data quality.

