An Assessment of the Quality of Data From the EPI for Knowledge Management in Healthcare in Ghana

Author(s):  
Patrick Ohemeng Gyaase ◽  
Joseph Tei Boye-Doe ◽  
Christiana Okantey

Quality data from the Expanded Immunization Programme (EPI), which is pivotal in reducing infant mortality globally, is critical for knowledge management on the EPI. This chapter assesses the quality of data on the six childhood killer diseases from the EPI tally books, the monthly reports, and the District Health Information Management System (DHIMS II), using the WHO Data Quality Self-Assessment (DQS) tool. The study found high availability and completeness of data in the EPI tally books and the monthly EPI reports. However, the accuracy and currency of the data on all antigens in the EPI tally books, compared with the reported numbers issued, were comparatively low. The composite quality index of the EPI data is thus low, an indication of poor supervision of the EPI programme in the health facilities. There is therefore a need for effective monitoring and data validation at the point of collection and entry to improve data quality for knowledge management on the EPI programme.
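The comparisons the chapter reports lend themselves to a small worked example. Below is a minimal sketch of DQS-style checks: an accuracy (verification) ratio comparing tally-book recounts with reported figures, a completeness share, and a composite index taken here as a simple unweighted mean. The field values and the averaging rule are illustrative assumptions, not the WHO DQS specification.

```python
# Minimal sketch of DQS-style checks; the numbers and the simple averaging
# used for the composite index are illustrative assumptions, not the WHO
# DQS specification.

def accuracy_ratio(tally_count: int, reported_count: int) -> float:
    """Verification ratio: doses recounted in the tally book vs. the
    number reported upward (1.0 means perfect agreement)."""
    if reported_count == 0:
        return 0.0
    return tally_count / reported_count

def completeness(received_reports: int, expected_reports: int) -> float:
    """Share of expected monthly EPI reports actually received."""
    return received_reports / expected_reports if expected_reports else 0.0

def composite_index(scores: dict[str, float]) -> float:
    """Illustrative composite: unweighted mean of the dimension scores."""
    return sum(scores.values()) / len(scores)

# Example: Penta3 recounted at 118 doses in the tally book but 130 reported.
scores = {
    "accuracy_penta3": accuracy_ratio(118, 130),
    "completeness": completeness(11, 12),
}
print(round(composite_index(scores), 2))
```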

2017 ◽  
Vol 4 (1) ◽  
pp. 25-31 ◽  
Author(s):  
Diana Effendi

The Information Product approach (IP approach) is an information management approach that can be used to manage information products and to analyse data quality. Organizations can use the IP-Map to facilitate knowledge management in collecting, storing, maintaining, and using data in an organized manner. The academic data management processes at X University have not yet used the IP approach; the university has paid little attention to managing the quality of its information, concentrating instead on the application systems that automate data management in its academic activities. The IP-Map constructed in this paper can be used as a basis for analysing the quality of data and information. With the IP-Map, X University is expected to identify which parts of the process need improvement in data and information quality management. Index terms: IP approach, IP-Map, information quality, data quality.
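To make the idea of an IP-Map concrete, here is a minimal sketch that represents map blocks (sources, processes, storage, consumers) and the quality issues observed at each, as a simple node/edge structure. The stage names and issue labels are illustrative assumptions, not taken from the paper.

```python
# Illustrative IP-Map sketch for an academic data flow; the stage names,
# types, and issues are assumptions for demonstration only.
from dataclasses import dataclass, field

@dataclass
class IPMapBlock:
    name: str
    kind: str                                   # e.g. "source", "process", "storage", "consumer"
    quality_issues: list[str] = field(default_factory=list)

blocks = [
    IPMapBlock("Student registration form", "source", ["missing fields"]),
    IPMapBlock("Enter grades into academic system", "process", ["typing errors"]),
    IPMapBlock("Academic database", "storage"),
    IPMapBlock("Transcript report", "consumer"),
]
edges = [(blocks[0], blocks[1]), (blocks[1], blocks[2]), (blocks[2], blocks[3])]

# Walking the map highlights where quality problems enter the information product.
for src, dst in edges:
    print(f"{src.name} -> {dst.name}; issues at source: {src.quality_issues or 'none'}")
```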


2020 ◽  
pp. 089443932092824 ◽  
Author(s):  
Michael J. Stern ◽  
Erin Fordyce ◽  
Rachel Carpenter ◽  
Melissa Heim Viox ◽  
Stuart Michaels ◽  
...  

Social media recruitment is no longer an uncharted avenue for survey research, and the results thus far provide evidence of an engaging means of recruiting hard-to-reach populations. Questions remain, however, regarding whether this method of recruitment yields quality data. This article assesses one aspect that may influence the quality of data gathered through nonprobability sampling using social media advertisements for a hard-to-reach sexual and gender minority youth population: recruitment design formats. The data come from the Survey of Today’s Adolescent Relationships and Transitions, which used a variety of forms of advertisements as survey recruitment tools on Facebook, Instagram, and Snapchat. Results demonstrate that design decisions such as the format of the advertisement (e.g., video or static) and the use of eligibility language in the advertisements affect the quality of the data as measured by break-off rates and the use of nonsubstantive responses. Additionally, the type of device used affected the measures of data quality.
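The two quality indicators named in the abstract, break-off rates and nonsubstantive responding, are straightforward to compute from survey paradata. The sketch below shows one way to do so; the column names, response codes, and sample data are hypothetical, not the study's own variables.

```python
# Sketch of two data-quality indicators used in the article: break-off
# rate and nonsubstantive responding. Column names and codes are hypothetical.
import pandas as pd

NONSUBSTANTIVE = {"don't know", "prefer not to answer", "refused"}

def breakoff_rate(df: pd.DataFrame) -> float:
    """Share of respondents who started but did not finish the survey."""
    return 1.0 - df["completed"].mean()

def nonsubstantive_rate(df: pd.DataFrame, item_cols: list[str]) -> float:
    """Share of item responses that are nonsubstantive (don't know / refusals)."""
    answers = df[item_cols].stack().astype(str).str.lower()
    return answers.isin(NONSUBSTANTIVE).mean()

df = pd.DataFrame({
    "completed": [True, True, False],
    "q1": ["yes", "don't know", "no"],
    "q2": ["no", "refused", "yes"],
    "device": ["phone", "tablet", "phone"],
})
print(breakoff_rate(df), nonsubstantive_rate(df, ["q1", "q2"]))
```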


Author(s):  
Benjamin Ngugi ◽  
Jafar Mana ◽  
Lydia Segal

As the nation confronts a growing tide of security breaches, the importance of having quality data breach information systems becomes paramount. Yet too little attention is paid to evaluating these systems. This article draws on data quality scholarship to develop a yardstick that assesses the quality of data breach notification systems in the U.S. at both the state and national levels from the perspective of key stakeholders, who include law enforcement agencies, consumers, shareholders, investors, researchers, and businesses that sell security products. Findings reveal major shortcomings that reduce the value of data breach information to these stakeholders. The study concludes with detailed recommendations for reform.


Author(s):  
Arun Thotapalli Sundararaman

DQ/IQ measurement in general, and in the specific context of business intelligence (BI), has always been a topic of high interest for researchers. The topic of Data Quality (DQ) in the field of Information Management has been well researched, published, and studied. Despite such research advances, there has been very little understanding, from either a theoretical or a practical perspective, of DQ/IQ measurement for BI. Assessing the quality of data for a BI system has been one of the major challenges for researchers as well as practitioners, leading to the need for frameworks to measure DQ for BI. The objective of this chapter is to provide an overview of the existing frameworks for measuring DQ for BI, analyze the gaps therein, review proposed solutions, and provide a direction for future research and practice in this area.
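To illustrate the kind of dimension-level measurement such frameworks formalize, the sketch below scores a toy BI fact table on completeness, validity, and uniqueness. The dimensions, value ranges, and column names are assumptions for the example, not a specific framework from the chapter.

```python
# Illustrative dimension-level DQ measurement for a BI fact table.
# Dimensions, thresholds, and column names are assumptions for this sketch.
import pandas as pd

def completeness(df: pd.DataFrame, col: str) -> float:
    return 1.0 - df[col].isna().mean()

def validity(df: pd.DataFrame, col: str, low: float, high: float) -> float:
    # Values outside [low, high] (and missing values) count as invalid.
    return df[col].between(low, high).mean()

def uniqueness(df: pd.DataFrame, key: str) -> float:
    return df[key].nunique() / len(df)

sales = pd.DataFrame({
    "order_id": [1, 2, 2, 4],
    "amount": [19.9, None, 250.0, -5.0],
})
scores = {
    "completeness(amount)": completeness(sales, "amount"),
    "validity(amount in 0..10000)": validity(sales, "amount", 0, 10_000),
    "uniqueness(order_id)": uniqueness(sales, "order_id"),
}
print(scores)
```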


2017 ◽  
Vol 7 (2) ◽  
pp. 88
Author(s):  
Ahmad Fahmi Karami

Organizational performance depends on the strategic decisions taken by stakeholders in the organization, and those decisions in turn depend on the quality of the data and information available to it. Data and information quality is considered good when it meets the criteria of its users; because users' needs differ according to their aims and objectives, these criteria are not universal. This research aims to improve data and information quality management by using information systems to produce good-quality data and information and thereby help improve organizational performance at a palm oil processing factory in Indonesia. The study examined how data and information quality management is implemented and how it contributes to mill performance, using interviews with those involved in implementing data and information quality management, observation, and review of documents related to factory performance. The findings show that some procedures of data and information quality management are still not carried out, so the resulting data and information do not entirely match users' wishes. Although the procedures have not been fully implemented, the data and information produced have helped users in decision making and have succeeded in lowering mill breakdown by 0.10%.


2016 ◽  
Vol 7 (4) ◽  
Author(s):  
Nahrun Hartono ◽  
Ema Utami ◽  
Armadyah Amborowati

Abstract. The Information Management System of Cokroaminoto Palopo University (SIMUNCP) is a web application implemented on a Local Area Network (LAN). SIMUNCP uses MySQL as its database. The data were migrated from the old MySQL database, as the source, to PostgreSQL, as the target, because the old database lacked features and could not meet the needs of the organization. Before the migration, the errors in the old database were evaluated, and the evaluation results were then used as a reference for designing the new database. After the migration, the quality of the data in the new database was measured on two aspects: accuracy and absence of duplicates. Finally, the queries in the SIMUNCP application source code were optimized. Keywords: Migration, Database, Optimization.
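The post-migration measurement described (accuracy and non-duplication) can be expressed as simple checks against the new PostgreSQL database. The sketch below shows one such pair of checks; the table, column names, and connection settings are hypothetical, not taken from SIMUNCP itself.

```python
# Post-migration quality checks sketched with psycopg2; table/column names
# and connection settings are hypothetical examples.
import psycopg2

conn = psycopg2.connect(dbname="simuncp", user="app", password="secret", host="localhost")
cur = conn.cursor()

# Non-duplicate aspect: how many natural keys appear more than once?
cur.execute("""
    SELECT COUNT(*) FROM (
        SELECT student_number
        FROM students
        GROUP BY student_number
        HAVING COUNT(*) > 1
    ) AS dups;
""")
duplicate_keys = cur.fetchone()[0]

# Accuracy aspect (one simple proxy): values must respect a known domain.
cur.execute("SELECT COUNT(*) FROM students WHERE gpa < 0 OR gpa > 4;")
out_of_range = cur.fetchone()[0]

print(f"duplicate keys: {duplicate_keys}, out-of-range GPA values: {out_of_range}")
cur.close()
conn.close()
```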


Author(s):  
Nishita Shewale

Abstract: Unified information systems give different establishments insight into how data-related activities take place and help ensure that their results are of assured quality. Because data accumulation, replication, missing entities, incorrect formatting, anomalies, and similar problems can surface when data are collected in different information systems, with an array of adverse effects on data quality, the subject of data quality deserves careful treatment. This paper inspects data quality problems in information systems and introduces techniques that enable organizations to improve the quality of their data. Keywords: Information Systems (IS), Data Quality, Data Cleaning, Data Profiling, Standardization, Database, Organization
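The problems the paper lists (missing entities, replication, incorrect formatting) are exactly what basic data profiling and cleaning routines target. Here is a small pandas sketch of that workflow; the sample records and rules are illustrative, not from the paper.

```python
# Small data-profiling and cleaning sketch in pandas; the sample records
# and rules are illustrative only.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "email": ["a@example.com", "B@EXAMPLE.COM", "B@EXAMPLE.COM", None],
    "joined": ["2021-01-05", "2021-02-14", "2021-02-14", "2021-02-30"],
})

# Profiling: missing values, duplicate rows, unparseable dates.
profile = {
    "missing_per_column": df.isna().sum().to_dict(),
    "duplicate_rows": int(df.duplicated().sum()),
    "bad_dates": int(pd.to_datetime(df["joined"], errors="coerce").isna().sum()),
}
print(profile)

# Cleaning and standardization: drop exact duplicates, normalize email case.
clean = df.drop_duplicates().copy()
clean["email"] = clean["email"].str.lower()
print(clean)
```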


2020 ◽  
Author(s):  
Cristina Costa-Santos ◽  
Ana Luísa Neves ◽  
Ricardo Correia ◽  
Paulo Santos ◽  
Matilde Monteiro-Soares ◽  
...  

Abstract
Background: High-quality data is crucial for guiding decision making and practicing evidence-based healthcare, especially if previous knowledge is lacking. Nevertheless, data quality frailties have been exposed worldwide during the current COVID-19 pandemic. Focusing on a major Portuguese surveillance dataset, our study aims to assess data quality issues and suggest possible solutions.
Methods: On April 27th 2020, the Portuguese Directorate-General of Health (DGS) made a dataset (DGSApril) available to researchers upon request. On August 4th, an updated dataset (DGSAugust) was also obtained. Data quality was assessed through analysis of data completeness and of consistency between the two datasets.
Results: DGSAugust did not follow the same data format and variables as DGSApril, and a significant number of missing data and inconsistencies were found (e.g. 4,075 cases from DGSApril were apparently not included in DGSAugust). Several variables also showed a low degree of completeness and/or changed their values from one dataset to the other (e.g. the variable ‘underlying conditions’ showed different information between datasets for more than half of the cases). There were also significant inconsistencies between the numbers of COVID-19 cases and deaths shown in DGSAugust and those in the DGS reports publicly provided daily.
Conclusions: The low quality of COVID-19 surveillance datasets limits their usability for informing good decisions and performing useful research. Major improvements in surveillance datasets are therefore urgently needed - e.g. simplification of data entry processes, constant monitoring of data, and increased training and awareness of health care providers - as low data quality may lead to deficient pandemic control.
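The completeness and consistency checks described in the Methods can be sketched as a comparison of the two extracts: which case identifiers from the April dataset are missing in August, and how often a shared case changed its value for a given variable. The file and column names below are hypothetical stand-ins for the DGS data.

```python
# Sketch of the completeness/consistency checks described in the study;
# file and column names are hypothetical stand-ins for the DGS extracts.
import pandas as pd

april = pd.read_csv("dgs_april.csv")
august = pd.read_csv("dgs_august.csv")

# Cases present in April but apparently absent from August.
missing_in_august = set(april["case_id"]) - set(august["case_id"])

# Consistency of one variable across the two extracts for shared cases.
merged = april.merge(august, on="case_id", suffixes=("_apr", "_aug"))
changed = (merged["underlying_conditions_apr"] != merged["underlying_conditions_aug"]).mean()

print(f"cases missing in August: {len(missing_in_august)}")
print(f"share of shared cases with changed 'underlying conditions': {changed:.1%}")
```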


2021 ◽  
pp. 004912412199553
Author(s):  
Jan-Lucas Schanze

An increasing age of respondents and cognitive impairment are the usual suspects for increasing difficulties in survey interviews and decreasing data quality. This is why survey researchers tend to label residents of retirement and nursing homes as hard to interview and exclude them from most social surveys. In this article, I examine to what extent this label is justified and whether the quality of data collected among residents of institutions for the elderly really differs from data collected within private households. For this purpose, I analyze response behavior and quality indicators in three waves of the Survey of Health, Ageing and Retirement in Europe (SHARE). To control for confounding variables, I use propensity score matching to identify respondents in private households who share similar characteristics with institutionalized residents. My results confirm that most indicators of response behavior and data quality are worse in institutions than in private households. However, when controlling for sociodemographic and health-related variables, the differences become very small. These results suggest that health matters for data quality irrespective of the housing situation.
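For readers unfamiliar with the matching step, here is a minimal 1:1 propensity-score-matching sketch: estimate the probability of being institutionalized from covariates, then pair each institutionalized respondent with the nearest private-household respondent on that score. The covariates and data are toy assumptions, not the SHARE variables used in the article.

```python
# Minimal 1:1 propensity-score-matching sketch; covariates and data are
# illustrative, not the SHARE variables used in the article.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(75, 8, n),              # age
    rng.integers(0, 2, n),             # poor self-rated health (0/1)
])
institutionalized = rng.integers(0, 2, n)  # treatment indicator (toy data)

# Step 1: estimate propensity scores P(institutionalized | covariates).
ps = LogisticRegression().fit(X, institutionalized).predict_proba(X)[:, 1]

# Step 2: match each institutionalized respondent to the nearest
# private-household respondent on the propensity score.
treated = np.where(institutionalized == 1)[0]
control = np.where(institutionalized == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_control = control[idx.ravel()]

print(f"{len(treated)} institutionalized respondents matched to "
      f"{len(set(matched_control))} unique household respondents")
```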


2020 ◽  
Vol 10 (1) ◽  
pp. 1-16
Author(s):  
Isaac Nyabisa Oteyo ◽  
Mary Esther Muyoka Toili

Abstract: Researchers in the bio-sciences are increasingly harnessing technology to improve processes that were traditionally pen-and-paper based and highly manual. The pen-and-paper approach is used mainly to record and capture data from experiment sites; it is slow and prone to errors. Bio-science research activities are also often undertaken in remote and distributed locations, where the timeliness and quality of the data collected are essential. The manual method is too slow to collect quality data and relay it in a timely manner, and capturing data manually and relaying it in real time is a daunting task. The collected data also have to be associated with their respective specimens (objects or plants). In this paper, we seek to improve specimen labelling and data collection, guided by the following questions: (1) How can data collection in bio-science research be improved? (2) How can specimen labelling be improved in bio-science research activities? We present WebLog, an application we prototyped to help researchers generate specimen labels and collect data from experiment sites. The application converts the object (specimen) identifiers into quick response (QR) codes and uses them to label the specimens. Once a specimen label is successfully scanned, the application automatically invokes the data entry form, and the collected data are immediately sent to the server in electronic form for analysis.
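The core labelling step, turning a specimen identifier into a printable QR code, can be reproduced in a few lines. WebLog's own implementation is not described here, so the library choice and the identifier format below are assumptions, using the third-party qrcode package as one possible approach.

```python
# One way to turn specimen identifiers into QR-code labels; the library
# choice and identifier format are assumptions, not WebLog's implementation.
import qrcode  # third-party package: pip install qrcode[pil]

specimen_ids = ["PLOT01-MAIZE-0007", "PLOT01-MAIZE-0008"]  # hypothetical identifiers

for sid in specimen_ids:
    img = qrcode.make(sid)    # encode the identifier as a QR code image
    img.save(f"{sid}.png")    # printable label image for the specimen
    print(f"label written: {sid}.png")
```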

