Mitigating imperfect data validity in administrative data PSIs: a method for estimating true adverse event rates

2021 ◽  
Vol 33 (1) ◽  
Author(s):  
Bastien Boussat ◽  
Hude Quan ◽  
Jose Labarere ◽  
Danielle Southern ◽  
Chantal M Couris ◽  
...  

Abstract Question Are there ways to mitigate the challenges associated with imperfect data validity in Patient Safety Indicator (PSI) report cards? Findings Applying a methodological framework to simulated PSI report card data, we compared the adjusted PSI rates of three hospitals with variable quality of data and coding. This framework combines (i) a measure of PSI rates using existing algorithms; (ii) a medical record review of a small random sample of charts to produce a measure of hospital-specific data validity and (iii) a simple Bayesian calculation to derive estimated true PSI rates. For example, the estimated true PSI rate for a theoretical hospital with moderately good quality of coding could be three times as high as the measured rate (for example, 1.4% rather than 0.5%). For a theoretical hospital with relatively poor quality of coding, the difference could be 50-fold (for example, 5.0% rather than 0.1%). Meaning Combining routine PSI measurement with a medical chart review of a limited number of charts at the hospital level yields an approach to producing health system report cards with estimates of true hospital-level adverse event rates.
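The adjustment described above can be sketched numerically. A minimal version, assuming the chart review yields a hospital-specific sensitivity and positive predictive value (PPV) for the PSI algorithm: the true events flagged are `observed × PPV`, and sensitivity says what fraction of all true events those represent. The exact Bayesian formulation in the paper may differ; the specific sensitivity and PPV values below are hypothetical, chosen to reproduce the abstract's "three times as high" example.

```python
def estimated_true_rate(observed_rate, sensitivity, ppv):
    """Adjust a measured PSI rate for imperfect coding validity.

    A simple correction consistent with the framework sketched above:
    observed_rate * ppv counts the correctly flagged events, and dividing
    by sensitivity scales up to the events the algorithm missed.
    """
    if not (0 < sensitivity <= 1 and 0 <= ppv <= 1):
        raise ValueError("sensitivity must be in (0, 1], ppv in [0, 1]")
    return observed_rate * ppv / sensitivity

# Hypothetical hospital with moderately good coding: measured rate 0.5%,
# chart review suggesting sensitivity 0.30 and PPV 0.85 (illustrative values)
rate = estimated_true_rate(0.005, 0.30, 0.85)
print(round(rate * 100, 2))  # → 1.42 (%), roughly 3x the measured rate
```

With poorer coding (lower sensitivity), the same arithmetic produces the much larger 50-fold gap the abstract describes.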

Author(s):  
Enes Sari ◽  
Levent Fazli Umur

BACKGROUND: The aim of this study was to evaluate the information quality of YouTube videos on hallux valgus. METHODS: A YouTube search was performed using the keyword 'hallux valgus' to identify the first 300 videos related to hallux valgus. A total of 54 videos met our inclusion criteria and were evaluated for information quality using the DISCERN, Journal of the American Medical Association (JAMA) and hallux valgus information assessment (HAVIA) scores. The number of views, time since the upload date, view rate, number of comments, number of likes, number of dislikes and video power index (VPI) values were calculated to determine video popularity. Video length (sec), video source and video content were also noted. The relation between information quality and these factors was statistically evaluated. RESULTS: The mean DISCERN score was 30.35±11.56 (poor quality) (range 14–64), the mean JAMA score was 2.28±0.96 (1–4), and the mean HAVIA score was 3.63±2.42 (moderate quality) (0.5–8.5). Although videos uploaded by physicians had higher mean DISCERN, JAMA, and HAVIA scores than videos uploaded by non-physicians, the difference was not statistically significant. Additionally, view rates and VPI values were higher for videos uploaded by health channels, but the difference did not reach statistical significance. A statistically significant positive correlation was found between video length and DISCERN (r = 0.294, p = 0.028) and HAVIA scores (r = 0.326, p = 0.015). CONCLUSIONS: This study demonstrated that the quality of the information available in YouTube videos about hallux valgus was low and insufficient. Videos containing accurate information from reliable sources are needed to educate patients on hallux valgus, especially on less frequently covered topics such as postoperative complications and the healing period.
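The length–quality correlations reported above (r = 0.294 and r = 0.326) are Pearson coefficients. A minimal self-contained sketch of that computation, using made-up video lengths and DISCERN scores (the data below are illustrative, not the study's):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples,
    the statistic used above to relate video length to quality scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: video lengths (seconds) and DISCERN scores
lengths = [60, 120, 180, 300, 420, 600]
discern = [18, 25, 22, 31, 35, 40]
print(round(pearson_r(lengths, discern), 3))  # → 0.966
```

In practice `scipy.stats.pearsonr` would also return the p-value used for the significance statements in the abstract.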


2018 ◽  
Vol 46 (6) ◽  
pp. 851-877 ◽  
Author(s):  
Abel Kinyondo ◽  
Riccardo Pelizzo

Author(s):  
Manjunath Ramachandra

The data gathered from sources are often noisy. Poor quality of data results in business losses that compound down the supply chain, and the end customer finds the data useless and misleading. Data cleansing should therefore be performed immediately and automatically after data acquisition. This chapter describes different techniques for cleansing and processing data to this end.
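A minimal sketch of the kind of automated post-acquisition cleansing pass the chapter discusses: trimming stray whitespace, dropping empty and duplicate records, and coercing numeric fields. The function name and field names are illustrative, not taken from the chapter.

```python
def cleanse(records, numeric_fields=("price", "quantity")):
    """Minimal cleansing pass over a list of dict records: trim text,
    drop fully empty rows, coerce numeric fields (invalid -> None),
    and remove exact duplicates."""
    seen, cleaned = set(), []
    for rec in records:
        # Trim surrounding whitespace from every string value
        rec = {k: v.strip() if isinstance(v, str) else v for k, v in rec.items()}
        if not any(v not in ("", None) for v in rec.values()):
            continue  # drop fully empty records
        for field in numeric_fields:
            if field in rec:
                try:
                    rec[field] = float(rec[field])
                except (TypeError, ValueError):
                    rec[field] = None  # flag unparseable values for review
        key = tuple(sorted(rec.items()))
        if key in seen:
            continue  # drop exact duplicates
        seen.add(key)
        cleaned.append(rec)
    return cleaned

rows = [
    {"sku": " a1 ", "price": "9.5"},
    {"sku": " a1 ", "price": "9.5"},   # exact duplicate
    {"sku": "", "price": ""},          # empty record
    {"sku": "b2", "price": "n/a"},     # unparseable price
]
print(len(cleanse(rows, numeric_fields=("price",))))  # → 2
```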


2019 ◽  
Vol 64 (6) ◽  
pp. 5-15
Author(s):  
Iwona Markowicz ◽  
Paweł Baran

Official statistics on trade in goods between EU member states are collected at country level and then aggregated by Eurostat. The methodology of data collection differs slightly between member states (e.g. various statistical thresholds and coverage), and further differences arise from exchange rates, undeclared or late-declared transactions, errors in the classification of goods and other mistakes. This often results in incomparability of mirror data (nominally the same transactions recorded in the statistics of both the dispatching and the receiving country). A large part of these differences can be explained by the variable quality of data resources in the Eurostat database. In this study, the quality of data on intra-EU trade in goods for 2017 was compared between Poland and neighbouring EU countries, i.e. Germany, the Czech Republic, Slovakia and Lithuania, and the other Baltic states, Latvia and Estonia. An additional aim was to indicate the directions having the greatest influence on the observed differences in mirror data. The results of the study indicate that declarations made in Estonia affect the poor quality of data on trade in goods between the countries mentioned above to the greatest extent.
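The mirror-data comparison underlying such a study can be sketched as a per-flow discrepancy measure. The index below (absolute gap over the mean of the two declarations) is one common illustrative choice, not necessarily the exact measure used by the authors, and the trade values are hypothetical.

```python
def mirror_discrepancy(dispatch_value, arrival_value):
    """Relative discrepancy between mirror records of the same trade flow:
    |dispatch - arrival| divided by the mean of the two declarations.
    (Illustrative index; mirror-statistics studies use several variants.)"""
    mean = (dispatch_value + arrival_value) / 2
    if mean == 0:
        return 0.0
    return abs(dispatch_value - arrival_value) / mean

# Hypothetical 2017 flows (EUR million): exporter's vs importer's figures
flows = {"PL->DE": (52_000, 50_500), "PL->EE": (900, 620)}
for flow, (dispatch, arrival) in flows.items():
    print(flow, round(mirror_discrepancy(dispatch, arrival), 3))
# → PL->DE 0.029, PL->EE 0.368
```

Aggregating such per-direction indices is one way to identify which partner country contributes most to the observed mirror-data gaps.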


10.28945/2584 ◽  
2002 ◽  
Author(s):  
Herna L. Viktor ◽  
Wayne Motha

Increasingly, large organizations are engaging in data warehousing projects in order to achieve a competitive advantage through exploration of the information contained therein. It is therefore paramount to ensure that the data warehouse contains high quality data. However, practitioners agree that improving the quality of data in an organization is a daunting task. This is especially evident in data warehousing projects, which are often initiated “after the fact”. The slightest suspicion of poor quality data often hinders managers from reaching decisions, as they waste hours in discussions to determine what portion of the data should be trusted. Augmenting data warehousing with data mining methods offers a mechanism to explore these vast repositories, enabling decision makers to assess the quality of their data and to unlock a wealth of new knowledge. These methods can be used effectively with the inconsistent, noisy and incomplete data that are commonplace in data warehouses.


1973 ◽  
Vol 4 (4) ◽  
pp. 323-327 ◽  

Abstract In a Danish spring, the life cycle of Sericostoma personatum Spence (syn. S. pedemontanum MacLachlan) took three years. A comparison is made with a 1½-year life cycle reported by Elliott (1969). The poor quality of the food and the lower temperature in the spring area are suggested as explanations for the difference. A slight growth retardation was found for all three year classes in winter and was attributed to lower temperatures.


2009 ◽  
Vol 11 (2) ◽  
Author(s):  
L. Marshall ◽  
R. De la Harpe

Making decisions in a business intelligence (BI) environment can become extremely challenging, and sometimes even impossible, if the data on which the decisions are based are of poor quality. Data can only be utilised effectively when they are accurate, up to date, complete and available when needed. BI decision makers and users are in the best position to determine the quality of the data available to them, and it is important to ask the right questions of them; the issues of information quality in the BI environment were therefore established through a literature study. Information-related problems may cause supplier relationships to deteriorate and may reduce internal productivity and the business' confidence in IT. Ultimately they can affect an organisation's ability to perform and remain competitive. The purpose of this article is to identify the underlying factors that prevent information from being easily and effectively utilised, and to understand how these factors can influence the decision-making process, particularly within a BI environment. An exploratory investigation was conducted at a large retail organisation in South Africa to collect empirical data from BI users through unstructured interviews. Some of the main findings indicate specific causes that impact the decisions of BI users, including accuracy, inconsistency, understandability and availability of information. Key performance measures that are directly impacted by the quality of data on decision-making include waste, availability, sales and supplier fulfilment. The time spent investigating and resolving data quality issues has a major impact on productivity. The importance of documentation was highlighted as an issue requiring further investigation. The initial results indicate the value of


2020 ◽  
Vol 30 (1) ◽  
pp. 119-129
Author(s):  
Jacob Anderson ◽  
Shailesh Shori ◽  
Esmaiel Jabbari ◽  
Harry J. Ploehn ◽  
Francis Gadala-Maria ◽  
...  

Abstract This paper examines the relationship between rheology and the qualitative appearance of dried, mica-based paint coatings used in the aerospace industry. The goal is to identify key rheological characteristics indicative of poor coating visual appearance, providing a screening tool for identifying unsatisfactory paint formulations. Four mica paints were studied, with coating visual appearances ranging from very poor to very good. Strain sweeps indicated that the poor-quality paints have a smaller % strain midpoint in the linear viscoelastic range, while the good-quality paints have a lower G′/G″ cross-over point in frequency sweeps. Thixotropy experiments using single and multiple-loop hysteresis cycles, plotting shear stress as a function of shear rate, showed that the base mica paints with good appearance had nearly constant, reversible profiles in the forward and backward directions, while the mica paints with poor appearance were irreversible, with a noticeable gradual change in shear stress as more loops were run. The difference in area between the forward and reverse curves was determined, leading to a quantifiable criterion that can differentiate good paints from poor paints with significance testing. This work establishes the first rheology model using hysteresis loops to predict the visual properties of mica-based paints.
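The loop-area criterion described above can be sketched numerically: integrate shear stress over shear rate on the up-ramp and down-ramp of one hysteresis cycle and take the difference, which is near zero for a reversible (good) paint. The trapezoidal integration and the sample data below are illustrative assumptions, not the paper's measurements.

```python
def trapezoid(xs, ys):
    """Trapezoidal integral of y over x for sampled data."""
    return sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2
               for i in range(len(xs) - 1))

def hysteresis_area(rates, stress_up, stress_down):
    """Area between the forward and reverse shear-stress curves of one
    hysteresis loop; near zero for a reversible (good-appearance) paint."""
    return trapezoid(rates, stress_up) - trapezoid(rates, stress_down)

# Hypothetical loop: shear rate (1/s) vs shear stress (Pa)
rates = [0, 10, 20, 30, 40]
up = [0.0, 12.0, 20.0, 26.0, 31.0]
down = [0.0, 9.0, 16.0, 23.0, 31.0]
print(hysteresis_area(rates, up, down))  # → 100.0 (a thixotropic, poor-quality loop)
```

Running several loops and tracking how this area drifts would mirror the multiple-loop experiments in the abstract.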


1980 ◽  
Vol 70 (5) ◽  
pp. 1833-1847
Author(s):  
Harsh K. Gupta ◽  
C. V. Rama Krishna Rao ◽  
B. K. Rastogi ◽  
S. C. Bhatia

Abstract Twelve earthquakes of Ms ≧ 4.0, together with their foreshocks and aftershocks, which occurred during the period October 1973 through December 1976 in the vicinity of the Koyna Dam, Maharashtra, have been investigated using seismograms from the Koyna seismic network, the WWSSN seismic station at Poona (POO), and the NGRI seismic station (HYB) at Hyderabad. In all, 71 hypocenters were located. Due to the paucity and poor quality of data, the locations are mainly fair to poor in quality. Inferred focal depths are less than 15 km. These hypocenter locations indicate the possible existence of a N-S trending fault at 73°45′E longitude. An empirical relation between signal duration (τ) and surface-wave magnitude (Ms), Ms = −2.44 + 2.61 log τ, is obtained for the region. This relation yields more reliable estimates of magnitudes. Composite focal mechanism solutions could be obtained for eight earthquakes with Ms ≧ 4. These solutions are mostly consistent with a N-S trending fault. Energy release patterns have been investigated for four sequences; a major portion of the energy is released through the main shock.
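The empirical duration–magnitude relation given above can be applied directly; a minimal sketch, assuming τ is the signal duration in seconds and "log" is base-10, as is conventional for duration magnitudes:

```python
import math

def surface_wave_magnitude(duration_s):
    """Ms from signal duration tau (seconds), using the empirical relation
    fitted for the Koyna region: Ms = -2.44 + 2.61 * log10(tau)."""
    if duration_s <= 0:
        raise ValueError("duration must be positive")
    return -2.44 + 2.61 * math.log10(duration_s)

# A 300 s signal duration (hypothetical event) gives:
print(round(surface_wave_magnitude(300), 2))  # → 4.03
```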


1998 ◽  
Vol 11 (2) ◽  
pp. 231-253 ◽  
Author(s):  
Jennie Macdiarmid ◽  
John Blundell

Abstract Under-reporting of food intake is one of the fundamental obstacles preventing the collection of accurate habitual dietary intake data. The prevalence of under-reporting in large nutritional surveys ranges from 18 to 54% of the whole sample, but can be as high as 70% in particular subgroups. This wide variation between studies is partly due to the different criteria used to identify under-reporters and also to non-uniformity of under-reporting across populations. The most consistent differences found are between men and women and between groups differing in body mass index. Women are more likely to under-report than men, and under-reporting is more common among overweight and obese individuals. Other associated characteristics, for which there is less consistent evidence, include age, smoking habits, level of education, social class, physical activity and dietary restraint.
Determining whether under-reporting is specific to macronutrients or foods is problematic, as most methods identify only low energy intakes. Studies that have attempted to measure under-reporting specific to macronutrients express nutrients as a percentage of energy and have tended to find carbohydrate under-reported and protein over-reported. However, care must be taken when interpreting these results, especially when data are expressed as percentages. A logical conclusion is that food items with a negative health image (e.g. cakes, sweets, confectionery) are more likely to be under-reported, whereas those with a positive health image (e.g. fruits and vegetables) are more likely to be over-reported. This also suggests that dietary fat is likely to be under-reported.
However, it is necessary to distinguish between under-reporting and genuine under-eating for the duration of data collection. The key to understanding this problem, but one that has been widely neglected, concerns the processes that cause people to under-report their food intakes. The little work that has been done has simply confirmed the complexity of this issue. The importance of obtaining accurate estimates of habitual dietary intakes, so as to assess health correlates of food consumption, can be contrasted with the poor quality of the data collected. This phenomenon should be considered a priority research area. Moreover, misreporting is not simply a nutritionist's problem, but requires a multidisciplinary approach (including psychology, sociology and physiology) to advance understanding of under-reporting in dietary intake studies.
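The "different criteria used to identify under-reporters" mentioned above typically screen on the ratio of reported energy intake to estimated basal metabolic rate (Goldberg-type cut-offs). A minimal sketch of such a screen; the cut-off value and the respondent's numbers below are illustrative assumptions, since actual cut-offs depend on sample size and activity level.

```python
def is_probable_under_reporter(energy_intake_kcal, bmr_kcal, cutoff=1.35):
    """Flag a dietary report whose energy-intake:BMR ratio falls below a
    Goldberg-type cut-off. The default 1.35 is a commonly cited
    illustrative value, not a universal standard."""
    return energy_intake_kcal / bmr_kcal < cutoff

# Hypothetical respondent: reports 1600 kcal/day, estimated BMR 1450 kcal/day
print(is_probable_under_reporter(1600, 1450))  # → True (ratio ≈ 1.10)
```

Note that, as the abstract stresses, such a screen cannot distinguish under-reporting from genuine under-eating during the recording period.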

