Data Quality – Whose Responsibility is it?

Author(s):  
Arthur Chapman

The quality of biodiversity data is an ongoing issue. Early efforts to improve quality go back at least four decades, but the topic has never risen to the level of importance it should have. For far too long, the push to database more and more data, regardless of its quality, has taken priority. So I pose the question: what is the use of having lots of data if 1) we don’t know what its quality is, and 2) much of it is not fit for use? When databasing of herbarium and museum collections began in the 1970s, many taxonomists saw the only use of the data as being for taxonomic purposes. But as more and more data have become digitally available, so too have the uses to which the data can be put. It has also become increasingly important that the data in our herbaria and museums be put to more uses to justify ongoing support and funding. But whose responsibility is data quality? To answer that, I point to a general data quality principle: the difficulty and cost of improving the quality of the data increase the further you move from its source. Responsibility for data quality rests with everyone: collectors of the specimens; database designers and builders; data entry operators; data curators and managers; those responsible for exchanging or exporting the data; data aggregators; data publishers; and data users. We all have responsibilities. So, what can we each do to play our part? We need to work together at all levels of the data chain, and we need to develop systems whereby feedback on quality, from wherever it comes, can be documented and fed back. It is no use continually correcting the data down the line if those corrections never get back to the data curators and custodians, and it is of little use if the information fed back goes nowhere and nothing is done with it. The TDWG Data Quality Interest Group is working on standards and tools to help make this possible: we have developed a Framework for Data Quality, a set of core tests for data quality, and assertions for feeding information back to custodians and forward to users, and we are beginning a process to deal with vocabularies of value for biodiversity data.
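The core tests and assertions mentioned above are published by TDWG as standard specifications; the sketch below is only a loose illustration of what a record-level quality test that emits an assertion might look like, assuming a simple dictionary record keyed by Darwin Core terms and hypothetical issue labels, not the actual TDWG test definitions.

```python
# Illustrative sketch only: a record-level quality test that emits an assertion,
# loosely in the spirit of the TDWG core tests (not the actual specification).
# The issue labels and assertion structure are assumptions for this example.

def validate_coordinates(record):
    """Check that decimalLatitude/decimalLongitude are present and in range."""
    issues = []
    lat, lon = record.get("decimalLatitude"), record.get("decimalLongitude")
    if lat is None or lon is None:
        issues.append("COORDINATES_MISSING")
    else:
        if not -90 <= lat <= 90:
            issues.append("LATITUDE_OUT_OF_RANGE")
        if not -180 <= lon <= 180:
            issues.append("LONGITUDE_OUT_OF_RANGE")
    # The assertion travels with the record, so it can be fed back to the
    # custodian and forward to downstream users.
    return {"recordId": record.get("occurrenceID"),
            "issues": issues,
            "status": "PASS" if not issues else "FAIL"}

example = {"occurrenceID": "occ-001", "decimalLatitude": 95.2, "decimalLongitude": 18.4}
print(validate_coordinates(example))
# {'recordId': 'occ-001', 'issues': ['LATITUDE_OUT_OF_RANGE'], 'status': 'FAIL'}
```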

Author(s):  
W.J. Becker

Abstract: The triptans represent a major advance in migraine therapy, but their cost per dose greatly exceeds that of many older treatments. There is evidence that, for a significant proportion of migraine patients, these new drugs can show a positive cost-benefit and also improve quality of life. The cost-benefit would be expected to be greatest in patients with more severe migraine attacks.


Author(s):  
Hatice Uenal ◽  
David Hampel

Registries are indispensable in medical studies and provide the basis for reliable study results. Depending on the purpose of use, high data quality is a prerequisite. However, with increasing registry quality, costs also increase accordingly. Considering these time and cost factors, this work attempts to estimate the cost advantages of applying statistical tools to existing registry data, including quality evaluation. The quality analysis showed unquestionable savings of millions in study costs from reducing the time horizon, with an average saving of €523,126 for every year saved. By additionally replacing the more than 25% of values missing in some variables, data quality was immensely improved. To conclude, our findings clearly showed the importance of data quality and statistical input in avoiding biased conclusions due to incomplete data.
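The abstract does not specify which statistical tools were applied; the sketch below is only a minimal illustration of the two steps it alludes to, quantifying missingness per variable and replacing missing values, using invented column names and naive mean imputation rather than the study's actual methods.

```python
# Minimal sketch: completeness check and simple imputation on registry-like data.
# Column names and values are invented; the study's actual tools are not specified.
import pandas as pd

registry = pd.DataFrame({
    "age":                [64, 71, None, 58, None, 69],
    "ejection_fraction":  [55, None, None, 60, 45, None],
})

# 1. Quantify missingness per variable (shares above 0.25 would be flagged).
missing_share = registry.isna().mean()
print(missing_share)

# 2. Replace missing values. Naive mean imputation shown here; model-based
#    imputation would usually be preferred for registry analyses.
imputed = registry.fillna(registry.mean(numeric_only=True))
print(imputed)
```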


Author(s):  
Shilen Shanghavi

Asthma is a chronic inflammatory lung condition characterised by variable respiratory symptoms (wheeze, shortness of breath, cough, and chest tightness) and variable expiratory airflow limitation, usually associated with airway inflammation. It affects 1 in 11 people in the UK and causes over 75,000 hospital admissions per year. Given its prevalence, and the fact that patients are mainly cared for in the community, this article aims to highlight the need for a thorough annual asthma review and to set out what that review entails. When carried out effectively, an asthma review will improve quality of life for those living with the condition, reduce their likelihood of hospital admission, and reduce the cost to the NHS as a whole.


2011 ◽  
Vol 1 ◽  
pp. 28 ◽  
Author(s):  
Amit Sura ◽  
Alexander Ho

Radiology has been the focus of efforts to reduce inefficiencies while attempting to lower medical costs. The 2010 Medicare Physician Fee Schedule reduced Centers for Medicare and Medicaid Services (CMS) reimbursements related to the technical component of imaging services: by increasing the utilization rate, the cost of equipment is spread over more studies, lowering the payment per procedure. Is it beneficial for CMS to focus on equipment utilization as a cost-cutting measure? Could greater financial and quality-of-care gains be achieved by improving metrics such as appropriateness criteria and pre-authorization? Examination of these quality metrics has yielded promising results: the development and enforcement of appropriateness criteria lowers overutilization of studies without requiring unattainable fixed rates, and pre-authorization educates ordering physicians as to when imaging is indicated.
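The utilization-rate argument is simple arithmetic: the more hours a scanner is actually used, the more studies share its fixed cost. The sketch below makes that relationship concrete; all figures are invented for illustration and are not drawn from the Fee Schedule.

```python
# Illustrative arithmetic only: how an assumed utilization rate changes the
# equipment cost allocated to each study. All figures are made up.
annual_equipment_cost = 450_000.0   # assumed annual cost of owning/operating a scanner
available_hours = 2_500.0           # assumed available scanner hours per year
hours_per_study = 0.5               # assumed scanner time per study

def cost_per_study(utilization_rate):
    studies = (available_hours * utilization_rate) / hours_per_study
    return annual_equipment_cost / studies

for rate in (0.50, 0.75, 0.90):
    print(f"utilization {rate:.0%}: equipment cost per study = ${cost_per_study(rate):,.2f}")
```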


2017 ◽  
Vol 2 (1) ◽  
pp. 135-135
Author(s):  
Neda Firouraghi ◽  
Shahrokh Ezzatzadegan Jahromi ◽  
Ashkan Sami ◽  
Roxana Sharifian

2019 ◽  
Author(s):  
Bianca Maria Maglia Orlandi ◽  
Omar Asdrubal VilcaMejia ◽  
Maxim Goncharov ◽  
Kenji Nakahara Rocha ◽  
Lucas Bassolli ◽  
...  

Abstract Background: Electronic health record databases are important sources of data for research and health practice. The aim of this study was to assess the quality of the data in REPLICCAR II, the Brazilian cardiovascular surgery database based in São Paulo State. Study Design: The REPLICCAR II database contains data from 9 institutions in São Paulo, with more than 700 variables. We audited data entry at 6 months (n=107 records) and 1 year (n=2,229 records) after the start of data collection. We present a modified Aggregate Data Quality Score (ADQ) for 30 variables in this analysis. Results: The agreement between data independently entered by a database operator and a researcher was good for categorical data (Cohen's κ = 0.70, 95% CI 0.59–0.83). For continuous data, the intraclass correlation coefficient (ICC) was high for all variables, with only 2 of 15 continuous variables having an ICC below 0.90. In an indirect audit, 74% of the selected variables (n = 23) showed a good ADQ score for completeness and reliability. Conclusions: Data entry in the REPLICCAR II database is satisfactory and can provide accurate and reliable data for research in cardiovascular surgery in Brazil.
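The audit's agreement statistic for categorical variables, Cohen's kappa, can be computed directly from two sets of independently entered values. The sketch below shows the calculation on invented double-entry data, not values from REPLICCAR II.

```python
# Sketch of the agreement statistic used for categorical variables in a
# double-entry audit (Cohen's kappa). The entries below are invented.
from sklearn.metrics import cohen_kappa_score

operator_entry   = ["yes", "no", "no", "yes", "yes", "no", "yes", "no"]
researcher_entry = ["yes", "no", "yes", "yes", "yes", "no", "yes", "no"]

kappa = cohen_kappa_score(operator_entry, researcher_entry)
print(f"Cohen's kappa = {kappa:.2f}")  # values around 0.70 indicate good agreement
```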


2021 ◽  
Author(s):  
Herbert Mauch ◽  
Jasmin Kaur ◽  
Colin Irwin ◽  
Josie Wyss

Abstract Background: Registries are powerful clinical investigational tools. More challenging, however, is an international registry conducted by industry, which requires considerable planning, clear objectives and endpoints, resources, and appropriate measurement tools. Methods: This paper summarizes our learning from ten years of running a medical device registry monitoring patient-reported benefits from hearing implants. Results: We enlisted 113 participating clinics globally, resulting in a total enrolment of more than 1,500 hearing-implant users. We identify the stages in developing a registry specific to a sensory handicap such as hearing loss, its challenges and successes in design and implementation, and recommendations for future registries. Conclusions: Data collection infrastructure needs to be kept up to date throughout the defined registry lifetime and must provide adequate oversight of data quality and completeness. Compliance at registry sites is important for data quality and needs to be weighed against the cost of site monitoring. To motivate sites to provide accurate and timely data entry, we gave them easy access to their own data, which also supported their clinical routine. Trial registration: ClinicalTrials.gov NCT02004353


2012 ◽  
Vol 3 (2) ◽  
pp. 33-49
Author(s):  
Lorena Zúñiga-Segura ◽  
Elisa Sánchez-Godínez

Organizations carry out many different processes that use information, so data quality must be considered from the moment data are captured through organizational information systems. This article describes the data quality assessment methodology proposed by Arkady Maydanchik and its application to a specific organizational database. The findings point to opportunities for improvement that will contribute to data quality and, therefore, to all processes that use these data to support decision making. The analysis was conducted in 2011 on information belonging to the Universidad Estatal a Distancia (UNED) of Costa Rica; the database assessed contains records from 1980 to 2011. Keywords: data quality assessment, information, databases.
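Maydanchik's methodology is built around defining data quality rules and cataloguing the errors they detect. The sketch below shows only the general flavour of such rule-based checks on an invented student-record table; the table, columns, and rules are assumptions for the example, not the rule catalogue applied to the UNED database.

```python
# Minimal sketch of rule-based data quality checks in the spirit of the
# methodology discussed (not the actual rules used in the study).
# Table and column names are assumptions for the example.
import pandas as pd

students = pd.DataFrame({
    "student_id": [1, 2, 2, 4],
    "birth_date": ["1985-04-12", "2030-01-01", "1990-07-30", None],
    "enrol_year": [2003, 2008, 2008, 1979],
})

errors = []
if students["student_id"].duplicated().any():
    errors.append("duplicate student_id")
if students["birth_date"].isna().any():
    errors.append("missing birth_date")
if (pd.to_datetime(students["birth_date"]) > pd.Timestamp.today()).any():
    errors.append("birth_date in the future")
if (students["enrol_year"] < 1980).any():
    errors.append("enrol_year before institutional records begin (assumed 1980)")

print(errors)
```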


2020 ◽  
Author(s):  
Cristina Costa-Santos ◽  
Ana Luísa Neves ◽  
Ricardo Correia ◽  
Paulo Santos ◽  
Matilde Monteiro-Soares ◽  
...  

Abstract Background: High-quality data is crucial for guiding decision making and practising evidence-based healthcare, especially if previous knowledge is lacking. Nevertheless, data quality frailties have been exposed worldwide during the current COVID-19 pandemic. Focusing on a major Portuguese surveillance dataset, our study aims to assess data quality issues and suggest possible solutions. Methods: On April 27th 2020, the Portuguese Directorate-General of Health (DGS) made a dataset (DGSApril) available to researchers upon request. On August 4th, an updated dataset (DGSAugust) was also obtained. Data quality was assessed through analysis of completeness and consistency between the two datasets. Results: DGSAugust did not follow the same data format and variables as DGSApril, and a significant number of missing data and inconsistencies were found (e.g. 4,075 cases from DGSApril were apparently not included in DGSAugust). Several variables also showed a low degree of completeness and/or changed their values from one dataset to the other (e.g. the variable ‘underlying conditions’ showed different information between datasets for more than half of the cases). There were also significant inconsistencies between the numbers of cases and deaths due to COVID-19 shown in DGSAugust and in the daily public DGS reports. Conclusions: The low quality of COVID-19 surveillance datasets limits their usability to inform good decisions and perform useful research. Major improvements in surveillance datasets are therefore urgently needed - e.g. simplification of data entry processes, constant monitoring of data, and increased training and awareness of health care providers - as low data quality may lead to deficient pandemic control.
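The consistency checks described, cases present in the earlier extract but missing from the later one, and values that change between extracts, can be expressed as a simple comparison of two snapshots. The sketch below uses invented case identifiers and an assumed column name, not the DGS data.

```python
# Sketch of between-snapshot consistency checks of the kind described:
# cases dropped between extracts, and values that changed for shared cases.
# Column names and values are assumptions for the example.
import pandas as pd

april = pd.DataFrame({"case_id": [1, 2, 3, 4],
                      "underlying_conditions": ["yes", "no", "yes", "no"]})
august = pd.DataFrame({"case_id": [1, 2, 4],
                       "underlying_conditions": ["yes", "yes", "no"]})

# Cases apparently missing from the later extract.
dropped = set(april["case_id"]) - set(august["case_id"])
print("cases missing from the later extract:", dropped)

# Values that changed for cases present in both extracts.
merged = april.merge(august, on="case_id", suffixes=("_april", "_august"))
changed = merged[merged["underlying_conditions_april"] != merged["underlying_conditions_august"]]
print(changed)
```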


2018 ◽  
Vol 2 ◽  
pp. e25310
Author(s):  
Fhatani Ranwashe

Georeferencing helps to fill biodiversity information gaps, allowing biodiversity data to be represented spatially so that valuable assessments can be conducted. The South African National Biodiversity Institute (SANBI) has embarked on a number of projects that have required the georeferencing of biodiversity data to support assessments for red-listing of species and for measuring species protection levels. Data quality is an important aspect of biodiversity information. Due to a lack of standardisation in collection and recording methods, historical biodiversity data collections present a challenge when it comes to ascertaining fitness for use or determining data quality. The quality of historical locality information recorded in biodiversity collections faces particular scrutiny, as this information is critical in performing assessments; a lack of descriptive locality information, or ambiguous locality information, renders most historical biodiversity records unfit for use. Georeferencing should essentially improve the quality of biodiversity data, but how do you measure the fitness for use of georeferenced data? Through the Darwin Core term coordinateUncertaintyInMeters, georeferenced data can be queried to investigate and determine the quality of the georeferences produced. My presentation will cover ascertaining georeferenced data quality through the Darwin Core term coordinateUncertaintyInMeters, the impacts of using a controlled vocabulary in representing coordinateUncertaintyInMeters, and how SANBI’s georeferencing efforts have contributed to data quality within the management of biodiversity information.
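Querying georeferenced records by coordinateUncertaintyInMeters amounts to filtering on that Darwin Core term against a use-specific threshold. The sketch below is a minimal illustration; the 2,000 m threshold and the records are invented assumptions, not SANBI data or criteria.

```python
# Sketch: filter georeferenced records by Darwin Core coordinateUncertaintyInMeters
# to judge fitness for a given use. Threshold and records are illustrative only.
import pandas as pd

occurrences = pd.DataFrame({
    "occurrenceID": ["a", "b", "c", "d"],
    "coordinateUncertaintyInMeters": [30, 2500, None, 150],
})

threshold_m = 2000  # assumed maximum acceptable uncertainty for this assessment
fit = occurrences[occurrences["coordinateUncertaintyInMeters"] <= threshold_m]
unfit_or_unknown = occurrences.drop(fit.index)  # includes records with no stated uncertainty

print(f"{len(fit)} of {len(occurrences)} records fit for use at <= {threshold_m} m")
print(unfit_or_unknown)
```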

