Mortality evolution in Algeria: What can we learn about data quality?

Author(s):  
Farid Flici ◽  
Nacer-Eddine Hammouda

Mortality in Algeria has declined significantly since the country declared its independence in 1962. This trend has been accompanied by improvements in data quality and changes in estimation methodology, both of which are scarcely documented and may distort the natural evolution of mortality as reported in official statistics. In this paper, we aim to detect these methodological and data quality changes by visual inspection of mortality surfaces, which represent the evolution of mortality rates, mortality improvement rates and the male-female mortality ratio over age and time. Data quality problems are clearly visible during the 1977–1982 period. The quality of mortality data improved after 1983, and even further after the population census of 1998, which coincided with the end of the civil war. Additional unexplained patterns have also been detected, such as a changing mortality age pattern during the period before 1983, and a changing pattern of excess female mortality at reproductive ages, which appears suddenly in 1983 and disappears in 1992.
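As an illustration of the kind of diagnostic described here, the sketch below (Python, with toy rates rather than the Algerian series) assembles the three surfaces the paper inspects: log death rates, year-on-year improvement rates, and the male-to-female mortality ratio, plotted over age and time. The rate matrices are synthetic assumptions made only so the example runs.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative only: m_male and m_female are age x year matrices of central
# death rates m(x, t); here they are filled with toy values.
ages = np.arange(0, 81)          # ages 0..80
years = np.arange(1977, 2015)    # calendar years
rng = np.random.default_rng(0)
base = 0.0005 * np.exp(0.09 * ages)[:, None] * np.exp(-0.01 * (years - 1977))[None, :]
m_male = base * rng.lognormal(0.0, 0.05, size=(ages.size, years.size))
m_female = 0.85 * base * rng.lognormal(0.0, 0.05, size=(ages.size, years.size))

# Mortality improvement rates: 1 - m(x, t) / m(x, t-1), by age and year.
improvement = 1.0 - m_male[:, 1:] / m_male[:, :-1]

# Male-to-female mortality ratio surface.
sex_ratio = m_male / m_female

fig, axes = plt.subplots(1, 3, figsize=(15, 4))
for ax, surface, title in zip(
    axes,
    [np.log(m_male), improvement, sex_ratio],
    ["log death rates (males)", "improvement rates (males)", "male/female ratio"],
):
    im = ax.imshow(surface, aspect="auto", origin="lower",
                   extent=[years[0], years[-1], ages[0], ages[-1]])
    ax.set(title=title, xlabel="year", ylabel="age")
    fig.colorbar(im, ax=ax)
plt.tight_layout()
plt.show()
```

Breaks in data quality or methodology show up on such surfaces as vertical bands or abrupt changes in the age pattern at particular calendar years.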

2019 ◽  
Vol 214 ◽  
pp. 01007
Author(s):  
Virginia Azzolin ◽  
Michael Andrews ◽  
Gianluca Cerminara ◽  
Nabarun Dev ◽  
Colin Jessop ◽  
...  

The Compact Muon Solenoid (CMS) experiment dedicates significant effort to assessing the quality of its data, both online and offline. A real-time data quality monitoring system is in place to spot and diagnose problems as promptly as possible and avoid data loss, while the a posteriori evaluation of processed data categorizes it in terms of usability for physics analysis. These activities produce data quality metadata. The data quality evaluation relies on visual inspection of the monitoring features. This practice has a cost in terms of human resources and is naturally subject to human arbitration. Its limitations include the difficulty of spotting a problem within the overwhelming number of quantities to monitor and an incomplete understanding of evolving detector conditions. In view of Run 3, CMS aims to integrate deep learning techniques into the online workflow to promptly recognize and identify anomalies and to improve the precision of the data quality metadata. The CMS experiment has engaged in a partnership with IBM with the objective of supporting the online operations through automation and of generating benchmarking technological results. The research goals agreed within the CERN Openlab framework, how they matured into a demonstration application, and how they are being achieved through a collaborative contribution of technologies and resources are presented.
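The abstract does not specify the model architecture used. As a hedged illustration of the general idea, the sketch below trains a small autoencoder-style network on synthetic "good" monitoring histograms and flags inputs with unusually large reconstruction error; the shapes, threshold and simulated dead region are assumptions, not the CMS DQM pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative only: each row is a normalised monitoring histogram
# (e.g. an occupancy distribution with 50 bins); real DQM inputs differ.
rng = np.random.default_rng(1)
good = rng.normal(loc=1.0, scale=0.05, size=(500, 50))
bad = good[:20].copy()
bad[:, 10:15] = 0.0                      # simulate a dead detector region

# A small autoencoder: the network is trained to reproduce its own input,
# so inputs unlike the training data reconstruct poorly.
autoencoder = MLPRegressor(hidden_layer_sizes=(16, 4, 16),
                           max_iter=2000, random_state=0)
autoencoder.fit(good, good)

def reconstruction_error(X):
    return np.mean((autoencoder.predict(X) - X) ** 2, axis=1)

threshold = np.percentile(reconstruction_error(good), 99)  # tolerance on "good" data
flags = reconstruction_error(bad) > threshold
print(f"{flags.sum()} of {len(bad)} distorted histograms flagged as anomalous")
```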


2022 ◽  
Vol 10 (01) ◽  
pp. 508-518
Author(s):  
Richmond Nsiah ◽  
Wisdom Takramah ◽  
Solomon Anum-Doku ◽  
Richard Avagu ◽  
Dominic Nyarko

Background: When stillbirths and neonatal deaths are poorly documented or collated, the quality of decisions and interventions suffers. This study sought to assess the quality of routine neonatal mortality and stillbirth records in health facilities and to propose interventions to close the data quality gaps. Method: A descriptive cross-sectional study was carried out at three (3) purposively selected health facilities in Offinso North district. Stillbirths and neonatal deaths recorded in registers from 2015 to 2017 were recounted and compared with monthly aggregated data and District Health Information Management System 2 (DHIMS 2) data using a self-developed Excel Data Quality Assessment Tool (DQS). An observational checklist was used to collect primary data on completeness and availability. The accuracy ratio (verification factor), discrepancy rate, percentage availability and completeness of stillbirth and neonatal mortality data were computed using the DQS tool. Findings: The results showed a high discrepancy rate between stillbirth data recorded in registers and monthly aggregated reports (12.5%), and between monthly aggregated reports and DHIMS 2 (13.5%). Neonatal mortality data were under-reported in monthly aggregated reports but over-reported in DHIMS 2. Overall data completeness was about 84.6%, but only 68.5% of submitted reports were supervised by facility in-charges. Availability of delivery and admission registers was 100% and 83.3%, respectively. Conclusion: The quality of stillbirth and neonatal mortality data in the district is generally encouraging, but the data are not yet reliable for decision-making. Routine data quality audits are needed to reduce the high discrepancies in stillbirth and neonatal mortality data in the district.
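For illustration, the snippet below computes a verification factor (recounted versus reported counts, a common convention for this metric) and discrepancy rates of the kind produced by an Excel-based DQS tool; the counts are invented, not the study's data.

```python
# Illustrative figures only; the study used an Excel-based DQS tool.
recounted_in_registers = 48      # stillbirths recounted from facility registers
monthly_report_total = 42        # stillbirths in monthly aggregated reports
dhims2_total = 54                # stillbirths recorded in DHIMS 2

def verification_factor(recounted, reported):
    """Accuracy ratio: recounted source value / reported value."""
    return recounted / reported

def discrepancy_rate(source, comparison):
    """Relative difference between two reporting levels, in percent."""
    return abs(source - comparison) / source * 100

print(f"register vs monthly report: "
      f"{discrepancy_rate(recounted_in_registers, monthly_report_total):.1f}% discrepancy")
print(f"monthly report vs DHIMS 2: "
      f"{discrepancy_rate(monthly_report_total, dhims2_total):.1f}% discrepancy")
print(f"verification factor (register/monthly): "
      f"{verification_factor(recounted_in_registers, monthly_report_total):.2f}")
```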


2018 ◽  
Vol 2 ◽  
pp. e26665
Author(s):  
Alan Stenhouse ◽  
Philip Roetman ◽  
Frank Grützner ◽  
Tahlia Perry ◽  
Lian Pin Koh

Field data collection by citizen scientists has been greatly assisted by the rapid development and spread of smartphones, as well as by apps that make use of the integrated technologies in these devices. We can improve data quality by making better use of the devices' built-in sensors and by improving the software user interface. Data timeliness can be improved by integrating directly with national and international biodiversity repositories, such as the Atlas of Living Australia (ALA). I will present two citizen science apps that we developed for the conservation of two of Australia's iconic species, the koala and the echidna. First is the Koala Counter app used in the Great Koala Count 2, a two-day blitz-style population census. The aim was to improve the recording of citizen science effort as well as the recording of "absence" data, both of which would improve population modelling. Our solution was to increase the transparent use of the phone sensors and to provide an easy-to-use interface. Second is the EchidnaCSI app, an observational tool for collecting sightings and samples of echidnas. From a software developer's perspective, I will provide details on multi-platform app development as well as on collaboration and integration with the Australian national biodiversity repository, the Atlas of Living Australia. Preliminary analysis of data quality will be presented, along with lessons learned and paths for future research. I also seek feedback and further ideas on enhancements or modifications that could usefully improve these techniques.


2019 ◽  
Vol 36 ◽  
pp. 1-20
Author(s):  
Andrea Fernand Jubithana ◽  
Bernardo Lanza Queiroz

The Suriname statistical office assumes that mortality data in the country are of good quality and does not perform any tests before producing life table estimates. However, poor data quality is a concern in the less developed areas of the world. The primary objectives of this article are to evaluate the quality of death count registration in the country and its main regions from 2004 to 2012 and to produce estimates of adult mortality by sex. We use population data, by age and sex, from the last censuses and death counts from the statistical office, and apply traditional demographic methods to perform the analysis. We find that the quality of death count registration in Suriname and its central regions is reasonably good, and that the population data can also be considered good. The results reveal a small difference in completeness between males and females, and show that for sub-national populations the choice of method has implications for the results. In sum, data quality in Suriname is better than in most countries in the region, but there are considerable regional differences, as observed in other locations.
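The "traditional demographic methods" are not named in the abstract; death distribution methods such as the Brass growth balance are the usual choice for estimating completeness of death registration. The sketch below applies the growth-balance idea to a synthetic stationary population in which only 80% of deaths are registered, recovering that completeness from the fitted slope; it is a simplified illustration under a stationary-population assumption, not the authors' procedure.

```python
import numpy as np

# Toy example of the growth-balance idea (not Suriname data).
# In a stationary population, the number of people reaching exact age x each
# year equals the number of deaths above age x, so
#   N(x) / N(x+) = (1/C) * D_reported(x+) / N(x+)
# when only a fraction C of deaths is registered; the fitted slope is 1/C.
B = 10_000.0                                        # annual births
mu = 0.002 * np.exp(0.07 * np.arange(0, 95, 5))     # death rates at exact ages 0,5,...,90
lx = np.concatenate([[1.0], np.exp(-5 * np.cumsum(mu[:-1]))])  # survivorship l(0)...l(90)

pop = B * 2.5 * (lx[:-1] + lx[1:])                  # person-years in 5-year groups 0-4 ... 85-89
true_deaths = B * (lx[:-1] - lx[1:])                # deaths per year in each group
reported_deaths = 0.8 * true_deaths                 # assume 80% registration completeness

N_x_plus = pop[::-1].cumsum()[::-1]                 # population aged x and over
D_x_plus = reported_deaths[::-1].cumsum()[::-1]     # reported deaths at ages x and over
entries = B * lx[1:-1]                              # persons reaching exact age 5, 10, ..., 85

partial_birth_rate = entries / N_x_plus[1:]
partial_death_rate = D_x_plus[1:] / N_x_plus[1:]
slope, intercept = np.polyfit(partial_death_rate, partial_birth_rate, 1)
print(f"estimated completeness of death registration: {1.0 / slope:.2f}")  # ~0.80 by construction
```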


2019 ◽  
Vol 22 (suppl 3) ◽  
Author(s):  
Renato Azeredo Teixeira ◽  
Mohsen Naghavi ◽  
Mark Drew Crosland Guimarães ◽  
Lenice Harumi Ishitani ◽  
Elizabeth Barboza França

ABSTRACT Introduction: reliable mortality data are essential for health assessment and planning. In Brazil, a high proportion of deaths is attributed to causes that should not be considered underlying causes of death, known as garbage codes (GC). To tackle this issue, in 2005 the Brazilian Ministry of Health (MoH) implemented the investigation of GC-R codes (codes from ICD-10 Chapter 18, "Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified") to improve the quality of cause-of-death data. This study analyzes GC causes of death, considered an indicator of data quality, in Brazil and its regions, states and municipalities in 2000 and 2015. Methods: death records from the Brazilian Mortality Information System (SIM) were used. The analysis was performed for two GC groups: R codes and non-R codes, such as J18.0-J18.9 (pneumonia, unspecified). Crude and age-standardized rates, numbers of deaths and proportions were considered. Results: an overall improvement in the quality of mortality data in 2015 was detected, with variations among regions, age groups and sizes of municipalities. The improvement in the quality of mortality data in the Northeastern and Northern regions for GC-R codes stands out. Higher GC rates were observed among older adults (60+ years). The differences among areas observed in 2015 were smaller. Conclusion: the efforts of the MoH in implementing the investigation of GC-R codes have contributed to progress in data quality. Further investment is still necessary to improve the quality of cause-of-death statistics.
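As a worked illustration of the crude versus age-standardized comparison reported here, the snippet below applies direct standardization to toy garbage-code death counts; the age groups, counts and standard weights are invented, not SIM data.

```python
import numpy as np

# Illustrative only: direct age standardisation of garbage-code (GC) death rates.
age_groups = ["0-14", "15-39", "40-59", "60+"]
gc_deaths = np.array([120, 450, 900, 4200])           # toy GC death counts
population = np.array([5.0e6, 8.0e6, 4.5e6, 1.8e6])   # toy population by age group
standard_pop = np.array([0.30, 0.40, 0.20, 0.10])     # illustrative standard weights

age_specific_rates = gc_deaths / population * 100_000  # per 100,000
crude_rate = gc_deaths.sum() / population.sum() * 100_000
standardized_rate = np.sum(age_specific_rates * standard_pop)

print(f"crude GC rate:            {crude_rate:.1f} per 100,000")
print(f"age-standardised GC rate: {standardized_rate:.1f} per 100,000")
```

Standardization removes the effect of differing age structures, which matters here because GC rates are concentrated among older adults.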


2021 ◽  
Author(s):  
Kemal N Siregar ◽  
Budi Utomo ◽  
Rico Kurniawan ◽  
Retnowati Retnowati ◽  
Tris Eryando ◽  
...  

BACKGROUND Maternal and child health (MCH) remains an important agenda item in the Sustainable Development Goals (SDGs). Unfortunately, despite strong commitments, the maternal mortality ratio in Indonesia remains high. Program performance, particularly good midwife performance, is the main factor that can reduce the maternal mortality ratio. Improving midwife performance must be supported by the availability of data or evidence. However, midwives still experience problems with data collection in the MCH program. OBJECTIVE This study aimed to strengthen the quality of maternal and child health (MCH) data produced by mHealth. METHODS Three research avenues were pursued: (a) ensuring that quality data are produced, (b) building mHealth to fit the needs of midwives so that it is acceptable to them, and (c) identifying the challenges midwives face when using mHealth. RESULTS The MCH data generated by mHealth met the data quality criteria of completeness, correctness, currentness, and consistency. Midwives in the villages showed enthusiasm for using mHealth and accepted it as a support for their daily work. One challenge of using mHealth is the lack of integration with the Community Health Center information system. CONCLUSIONS The mHealth system produces quality data that can remedy the current poor data quality, and the application can easily be used by midwives. The midwives generally accepted the application and agreed that mHealth helps with their daily work in MCH services. CLINICALTRIAL This research obtained ethical approval from the ethics commission of the Faculty of Public Health, Universitas Indonesia, register number 477/UN2.F10/PPM.00.02/2019.
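As a rough sketch of how the four quality criteria (completeness, correctness, currentness, consistency) could be checked automatically, the snippet below scores hypothetical MCH visit records; the field names, thresholds and records are illustrative, not the study's mHealth data model.

```python
from datetime import date

# Hypothetical MCH visit records; field names are illustrative only.
records = [
    {"visit_date": date(2021, 3, 1), "entry_date": date(2021, 3, 2),
     "gestational_age_weeks": 28, "systolic_bp": 118, "edd": date(2021, 5, 25)},
    {"visit_date": date(2021, 3, 5), "entry_date": date(2021, 3, 20),
     "gestational_age_weeks": None, "systolic_bp": 430, "edd": date(2021, 2, 1)},
]
REQUIRED = ["visit_date", "gestational_age_weeks", "systolic_bp", "edd"]

def assess(record):
    """Return simple pass/fail flags for four data quality dimensions."""
    return {
        # completeness: all required fields filled in
        "complete": all(record.get(f) is not None for f in REQUIRED),
        # correctness: values inside plausible clinical ranges
        "correct": record["systolic_bp"] is None or 60 <= record["systolic_bp"] <= 250,
        # currentness: data entered within 7 days of the visit
        "current": (record["entry_date"] - record["visit_date"]).days <= 7,
        # consistency: expected delivery date not before the visit date
        "consistent": record["edd"] >= record["visit_date"],
    }

for i, rec in enumerate(records, start=1):
    print(f"record {i}: {assess(rec)}")
```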


10.2196/17619 ◽  
2020 ◽  
Vol 9 (8) ◽  
pp. e17619
Author(s):  
Neha Shah ◽  
Diwakar Mohan ◽  
Jean Juste Harisson Bashingwa ◽  
Osama Ummer ◽  
Arpita Chakraborty ◽  
...  

Background Data quality is vital for ensuring the accuracy, reliability, and validity of survey findings. Strategies for ensuring survey data quality have traditionally relied on quality assurance procedures. Data analytics is an increasingly vital part of survey quality assurance, particularly in light of the increasing use of tablets and other electronic tools, which enable rapid, if not real-time, data access. Routine data analytics are most often concerned with outlier analyses that monitor a series of data quality indicators, including response rates, missing data, and reliability coefficients for test-retest interviews. Machine learning is emerging as a possible tool for enhancing real-time data monitoring by identifying trends in data collection that could compromise quality. Objective This study aimed to describe methods for the quality assessment of a household survey using both traditional methods and machine learning analytics. Methods In the end-line survey of the Kilkari impact evaluation among postpartum women (n=5095) in Madhya Pradesh, India, we plan to use both traditional and machine learning–based quality assurance procedures to improve the quality of survey data captured on maternal and child health knowledge, care-seeking, and practices. The quality assurance strategy aims to identify biases and other impediments to data quality and includes seven main components: (1) tool development, (2) enumerator recruitment and training, (3) field coordination, (4) field monitoring, (5) data analytics, (6) feedback loops for decision making, and (7) outcomes assessment. Analyses will include basic descriptive and outlier analyses using machine learning algorithms, which will involve creating features from time stamps, "don't know" rates, and skip rates. We will also obtain labeled data from self-filled surveys and build models using k-fold cross-validation on a training data set, with both supervised and unsupervised learning algorithms. Based on these models, results will be fed back to the field through various feedback loops. Results Data collection began in late October 2019 and will run through March 2020. We expect to submit quality assurance results by August 2020. Conclusions Machine learning is underutilized as a tool for improving survey data quality in low-resource settings. Study findings are anticipated to improve the overall quality of Kilkari survey data and, in turn, enhance the robustness of the impact evaluation. More broadly, the proposed quality assurance approach has implications for data capture applications used for special surveys as well as for the routine collection of health information by health workers. International Registered Report Identifier (IRRID) DERR1-10.2196/17619
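As a hedged sketch of the analytics this protocol describes, the snippet below builds interview-level features (duration from time stamps, "don't know" rate, skip rate) on synthetic data, screens them with an unsupervised isolation forest, and evaluates a supervised classifier with k-fold cross-validation. The data, labels, thresholds and model choices are illustrative assumptions, not the study's implementation.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic interview-level features of the kind described in the protocol.
rng = np.random.default_rng(7)
n = 1000
df = pd.DataFrame({
    "duration_min": rng.normal(45, 10, n).clip(5, None),
    "dont_know_rate": rng.beta(2, 50, n),
    "skip_rate": rng.beta(2, 30, n),
})
features = ["duration_min", "dont_know_rate", "skip_rate"]

# Unsupervised screening: flag interviews whose feature profile is unusual.
iso = IsolationForest(contamination=0.02, random_state=0)
df["flagged"] = iso.fit_predict(df[features]) == -1
print(f"{df['flagged'].sum()} interviews flagged for field review")

# Supervised variant: if some interviews carry verified quality labels
# (e.g. from back-checks), evaluate a classifier with k-fold cross-validation.
labels = (df["duration_min"] < 20) | (df["skip_rate"] > 0.15)   # toy labels
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, df[features], labels, cv=5)
print(f"5-fold accuracy: {scores.mean():.2f}")
```

Flags from either model would feed back to field coordinators through the decision-making loops listed above.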


2015 ◽  
Vol 7 (3) ◽  
Author(s):  
Julia Eaton ◽  
Ian Painter ◽  
Donald Olson ◽  
William Lober

Secondary use of clinical health data for near real-time public health surveillance presents challenges to its utility because of data quality issues. Data used for real-time surveillance must be timely, accurate and complete if they are to be useful; if incomplete data are used for surveillance, understanding the structure of the incompleteness is necessary. Such data are commonly aggregated because of privacy concerns. The Distribute project was a near real-time influenza-like illness (ILI) surveillance system that relied on aggregated secondary clinical health data. The goal of this work is to disseminate the data quality tools developed to gain insight into the data quality problems associated with these data. These tools apply in general to any system where aggregate data are accrued over time, and were created through the end-user-as-developer paradigm. Each tool was developed during exploratory analysis to gain insight into structural aspects of data quality. Our key finding is that the data quality of partially accruing data must be studied in the context of accrual lag: the difference between the time an event occurs and the time data for that event are received, i.e. the time at which the data become available to the surveillance system. Our visualization methods therefore revolve around visualizing the dimensions of data quality affected by accrual lag, in particular the tradeoff between timeliness and completion, and the effects of accrual lag on accuracy. Accounting for accrual lag in partially accruing data is necessary to avoid misleading or biased conclusions about trends in indicator values and data quality.
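As a minimal sketch of the central quantity here, the snippet below computes completeness as a function of accrual lag (receipt date minus event date) on toy report data; the dates and lag distribution are invented.

```python
import numpy as np
import pandas as pd

# Toy ILI reports: each record has the encounter date and the date the
# aggregate containing it reached the surveillance system (illustrative data).
rng = np.random.default_rng(3)
n = 2000
event_date = pd.to_datetime("2012-01-01") + pd.to_timedelta(
    rng.integers(0, 60, n), unit="D")
accrual_lag_days = rng.gamma(shape=2.0, scale=2.0, size=n).round().astype(int)
received_date = event_date + pd.to_timedelta(accrual_lag_days, unit="D")
reports = pd.DataFrame({"event_date": event_date, "received_date": received_date})

def completeness_by_lag(df, max_lag=14):
    """Share of eventually-received reports available within each accrual lag."""
    lag = (df["received_date"] - df["event_date"]).dt.days
    return pd.Series(
        {k: (lag <= k).mean() for k in range(max_lag + 1)}, name="completeness")

print(completeness_by_lag(reports))
```

Plotting this curve makes the timeliness-completeness tradeoff explicit: indicator values computed at short lags systematically understate counts until accrual catches up.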

