Measurement of the Impact of Risk Adjustment for Central Line–Days on Interpretation of Central Line–Associated Bloodstream Infection Rates

2007 ◽  
Vol 28 (9) ◽  
pp. 1025-1029 ◽  
Author(s):  
Jerome I. Tokars ◽  
R. Monina Klevens ◽  
Jonathan R. Edwards ◽  
Teresa C. Horan

Objective. To describe methods to assess the practical impact of risk adjustment for central line-days on the interpretation of central line–associated bloodstream infection (BSI) rates, because collecting these data is often burdensome.

Methods. We analyzed data from 247 hospitals that reported to the adult and pediatric intensive care unit component of the National Nosocomial Infections Surveillance System from 1995 through 2003. For each unit each year, we calculated the percentile error as the absolute value of the difference between the percentile based on a risk-adjusted or more sophisticated measure (eg, the central line–day rate) and the percentile based on a crude or less sophisticated measure (eg, the patient-day rate). Using the rate per central line–day as the "gold standard," we calculated performance characteristics (eg, sensitivity and predictive values) of the rate per patient-day for finding central line–associated BSI rates higher or lower than the mean. A greater impact of risk adjustment is indicated by higher values for percentile error and lower values for performance characteristics.

Results. The median percentile error was ±7 (ie, the percentile based on central line-days could be 7 percentile points higher or lower than the percentile based on patient-days). This error was less than 10 percentile points for 62% of the unit-years, between 10 and 19 percentile points for 22% of the unit-years, and 20 percentile points or more for 15% of the unit-years. Use of the rate based on patient-days had a sensitivity of 76% and a positive predictive value of 61% for detecting a significantly high or low central line–associated BSI rate.

Conclusions. We found that risk adjustment for central line–days has an important impact on the calculated central line–associated BSI percentile for some units. Similar methods can be used to evaluate the impact of other risk-adjustment methods. Our results support current recommendations to use central line–days for surveillance of central line–associated BSI when comparisons are made among facilities.
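As a rough illustration of the percentile-error and performance-characteristic calculations described above, here is a minimal Python sketch; the simulated counts, the top-decile cutoff used to flag outlier units, and all variable names are assumptions for illustration, not the authors' methods or data.

```python
# Illustrative sketch of percentile error and crude-rate performance
# characteristics; simulated data, not the study's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_units = 200

# Simulated unit-year data: BSI counts, central line-days, patient-days.
line_days = rng.integers(500, 5000, n_units)
patient_days = (line_days / rng.uniform(0.3, 0.9, n_units)).astype(int)
bsis = rng.poisson(line_days * 0.004)

rate_line = 1000 * bsis / line_days        # "gold standard" rate
rate_patient = 1000 * bsis / patient_days  # crude rate

# Percentile of each unit under each rate definition.
pct_line = stats.rankdata(rate_line) / n_units * 100
pct_patient = stats.rankdata(rate_patient) / n_units * 100

# Percentile error: |risk-adjusted percentile - crude percentile|.
pct_error = np.abs(pct_line - pct_patient)
print(f"median percentile error: {np.median(pct_error):.1f}")

# Flag "high" units (here: top decile, an assumed stand-in for the
# paper's significance-based flag) under each definition, then compute
# sensitivity and positive predictive value of the crude flag.
truth = pct_line >= 90
flag = pct_patient >= 90
sensitivity = (truth & flag).sum() / truth.sum()
ppv = (truth & flag).sum() / flag.sum()
print(f"sensitivity: {sensitivity:.2f}, PPV: {ppv:.2f}")
```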

2013 ◽  
Vol 34 (7) ◽  
pp. 663-670 ◽  
Author(s):  
Aditya H. Gaur ◽  
Marlene R. Miller ◽  
Cuilan Gao ◽  
Carol Rosenberg ◽  
Gloria C. Morrell ◽  
...  

Objective. To evaluate the application of the National Healthcare Safety Network (NHSN) central line–associated bloodstream infection (CLABSI) definition in pediatric intensive care units (PICUs) and pediatric hematology/oncology units (PHOUs) participating in a multicenter quality improvement collaborative to reduce CLABSIs, and to identify sources of variability in the application of the definition.

Design. Online survey using 18 standardized case scenarios. Each described a positive blood culture in a patient and required a yes-or-no answer to the question "Is this a CLABSI?" NHSN staff responses were the reference standard.

Setting. Sixty-five US PICUs and PHOUs.

Participants. Staff who routinely adjudicate CLABSIs using NHSN definitions.

Results. Sixty responses were received from 58 (89%) of 65 institutions; 78% of respondents were infection preventionists, infection control officers, or infectious disease physicians. Responses matched those of NHSN staff for 78% of questions. The mean (SE) percentage of concurring answers did not differ across scenarios evaluating application of 1 of the 3 criteria ("known pathogen," 78% [1.7%]; "skin contaminant, >1 year of age," 76% [2.5%]; "skin contaminant, ≤1 year of age," 81% [3.8%]; P = .3). The mean percentage of concurring answers was lower for scenarios requiring respondents to determine whether a CLABSI was present or incubating on admission (64% [4.6%]; P = .017) or to distinguish between primary and secondary bacteremia (65% [2.5%]; P = .021).

Conclusions. The accuracy of application of the CLABSI definition was suboptimal. Efforts are needed to reduce variability in identifying CLABSIs that are present or incubating on admission and in distinguishing primary from secondary bloodstream infection.
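A minimal sketch of the concordance calculation, assuming each response is simply scored for agreement with the NHSN reference standard; the scenario categories mirror the abstract, but the simulated responses and the data layout are illustrative assumptions.

```python
# Percent agreement with the reference standard by scenario category;
# simulated 1/0 match indicators, not the survey data.
import numpy as np

rng = np.random.default_rng(1)
# responses[category] -> array of 1/0 per (respondent, scenario) pair:
# 1 if the answer matched the NHSN reference standard.
responses = {
    "known pathogen":            rng.binomial(1, 0.78, 360),
    "skin contaminant, >1 yr":   rng.binomial(1, 0.76, 360),
    "skin contaminant, <=1 yr":  rng.binomial(1, 0.81, 360),
    "present on admission":      rng.binomial(1, 0.64, 360),
    "primary vs secondary":      rng.binomial(1, 0.65, 360),
}

for category, matches in responses.items():
    pct = matches.mean() * 100
    se = matches.std(ddof=1) / np.sqrt(matches.size) * 100
    print(f"{category:28s} {pct:5.1f}% (SE {se:.1f}%)")
```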


2020 ◽  
Vol 41 (S1) ◽  
pp. s116-s118
Author(s):  
Qunna Li ◽  
Andrea Benin ◽  
Alice Guh ◽  
Margaret A. Dudeck ◽  
Katherine Allen-Bridson ◽  
...  

Background: The NHSN has used positive laboratory tests for surveillance of Clostridioides difficile infection (CDI) LabID events since 2009. Typically, CDIs are detected using enzyme immunoassays (EIAs), nucleic acid amplification tests (NAATs), or various test combinations. The NHSN uses a risk-adjusted, standardized infection ratio (SIR) to assess healthcare facility-onset (HO) CDI. Despite the inclusion of test type in the risk adjustment, some hospital personnel and other stakeholders are concerned that NAAT use is associated with higher SIRs than EIA use. To investigate this issue, we analyzed NHSN data from acute-care hospitals for July 1, 2017 through June 30, 2018.

Methods: Calendar quarters for which the CDI test type was reported as NAAT (includes NAAT, glutamate dehydrogenase (GDH)+NAAT, and GDH+EIA followed by NAAT if discrepant) or EIA (includes EIA and GDH+EIA) were selected. HO CDI SIRs were calculated for facility-wide inpatient locations. We conducted the following analyses: (1) Among hospitals that did not switch their test type, we compared the distribution of HO incidence rates and SIRs between those reporting NAAT and those reporting EIA. (2) Among hospitals that switched their test type, we selected quarters with a stable switch pattern of 2 consecutive quarters of each of EIA and NAAT (categorized as pattern EIA-to-NAAT or NAAT-to-EIA). Pooled semiannual SIRs for EIA and NAAT were calculated, and a paired t test was used to evaluate the difference in SIRs by switch pattern.

Results: Most hospitals did not switch test types (3,242, 89%), and 2,872 (89%) of these reported sufficient data to calculate SIRs, with 2,444 (85%) using NAAT. The crude pooled HO CDI incidence rates for hospitals using EIA clustered at the lower end of the histogram versus rates for NAAT (Fig. 1). The SIR distributions of NAAT and EIA overlapped substantially and covered a similar range of values (Fig. 1). Among hospitals with a switch pattern, hospitals were equally likely to have an increase or a decrease in their SIR (Fig. 2). The mean SIR difference for the 42 hospitals switching from EIA to NAAT was 0.048 (95% CI, −0.189 to 0.284; P = .688). The mean SIR difference for the 26 hospitals switching from NAAT to EIA was 0.162 (95% CI, −0.048 to 0.371; P = .124).

Conclusions: The SIR distributions of both NAAT and EIA substantiate the soundness of the NHSN risk adjustment for CDI test types. Switching test type did not produce a consistent, statistically significant directional pattern in the SIR.

Disclosures: None. Funding: None.
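To make the switch-pattern analysis concrete, here is a hedged sketch of a paired t test on per-hospital pooled SIRs; the SIR values are simulated stand-ins, not NHSN data.

```python
# Paired comparison of pooled semiannual SIRs for hospitals that
# switched CDI test type; simulated SIRs, paired by hospital.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# One pooled SIR per hospital under each test type.
sir_eia = rng.lognormal(mean=0.0, sigma=0.3, size=42)
sir_naat = sir_eia + rng.normal(0.05, 0.4, size=42)

t_stat, p_value = stats.ttest_rel(sir_naat, sir_eia)
diff = sir_naat - sir_eia
ci = stats.t.interval(0.95, df=diff.size - 1,
                      loc=diff.mean(), scale=stats.sem(diff))
print(f"mean SIR difference: {diff.mean():.3f}, "
      f"95% CI ({ci[0]:.3f}, {ci[1]:.3f}), p = {p_value:.3f}")
```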


2016 ◽  
Vol 29 (6) ◽  
pp. 373
Author(s):  
Jorge Rodrigues ◽  
Andrea Dias ◽  
Guiomar Oliveira ◽  
José Farela Neves

Introduction: To determine the central line–associated bloodstream infection rate after implementation of central venous catheter care-practice bundles and guidelines, and to compare it with the previous central line–associated bloodstream infection rate.

Material and Methods: A prospective, longitudinal, observational descriptive study with an exploratory component was performed in a Pediatric Intensive Care Unit over five months. The study population comprised every child admitted to the Pediatric Intensive Care Unit in whom a central venous catheter was inserted. A comparative study with historical controls was performed to evaluate the result of the intervention (group 1 versus group 2).

Results: Seventy-five children were included, with a median age of 23 months: 22 (29.3%) newborns; 28 (37.3%) with recent surgery; and 32 (43.8%) with underlying illness. A total of 105 central venous catheters were inserted, the majority of children receiving a single catheter (69.3%), with a mean catheter duration of 6.8 ± 6.7 days. The most common type was the short-term, non-tunneled central venous catheter (45.7%), while the subclavian and brachial flexure veins were the most frequent insertion sites (25.7% each). No cases of central line–associated bloodstream infection were reported during this study. Compared with the historical controls (group 1), both groups were similar regarding age, gender, department of origin, and place of central venous catheter insertion. In the current study (group 2), the median length of stay was higher, while the mean duration of central venous catheterization (excluding peripherally inserted central lines) was similar in both groups. There were no statistical differences regarding catheter caliber or number of lumens. Fewer children admitted to the Pediatric Intensive Care Unit had a central venous catheter inserted in group 2, with no significant difference between single and multiple catheters.

Discussion: After implementation of the multidimensional strategy, no central line–associated bloodstream infection was reported.

Conclusions: Efforts must be made to maintain the same degree of multidimensional prevention, in order to confirm an effective reduction of the central line–associated bloodstream infection rate and to sustain it.
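For context, here is a minimal sketch of the standard CLABSI rate per 1,000 central line-days that studies such as this one track; the counts below are placeholders, not the study's data.

```python
# CLABSI rate per 1,000 central line-days (standard surveillance metric).
def clabsi_rate(infections: int, central_line_days: int) -> float:
    """Return the CLABSI rate per 1,000 central line-days."""
    if central_line_days <= 0:
        raise ValueError("central_line_days must be positive")
    return 1000 * infections / central_line_days

# Example: zero infections over an assumed number of catheter-days.
print(clabsi_rate(infections=0, central_line_days=714))  # 0.0
```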


2017 ◽  
Vol 38 (9) ◽  
pp. 1019-1024 ◽  
Author(s):  
Sarah S. Jackson ◽  
Surbhi Leekha ◽  
Laurence S. Magder ◽  
Lisa Pineles ◽  
Deverick J. Anderson ◽  
...  

BACKGROUND. Risk adjustment is needed to fairly compare central line–associated bloodstream infection (CLABSI) rates between hospitals. Until 2017, the Centers for Disease Control and Prevention (CDC) methodology adjusted CLABSI rates only by type of intensive care unit (ICU). The 2017 CDC models also adjust for hospital size and medical school affiliation. We hypothesized that risk adjustment would be improved by including patient demographics and comorbidities from electronically available hospital discharge codes.

METHODS. Using a cohort design across 22 hospitals, we analyzed data from ICU patients admitted between January 2012 and December 2013. Demographics and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) discharge codes were obtained for each patient, and CLABSIs were identified by trained infection preventionists. Models adjusting only for ICU type and for ICU type plus patient case mix were built and compared using discrimination and the standardized infection ratio (SIR). Hospitals were ranked by SIR under each model to examine and compare changes in rank.

RESULTS. Overall, 85,849 ICU patients were analyzed, and 162 (0.2%) developed CLABSI. The significant variables added to the ICU-type model were coagulopathy, paralysis, renal failure, malnutrition, and age. The C statistics were 0.55 (95% CI, 0.51–0.59) for the ICU-type model and 0.64 (95% CI, 0.60–0.69) for the ICU-type plus patient case-mix model. When the hospitals were ranked by adjusted SIRs, 10 hospitals (45%) changed rank when comorbidity was added to the ICU-type model.

CONCLUSIONS. Our risk-adjustment model for CLABSI using electronically available comorbidities demonstrated better discrimination than the CDC model. The CDC should strongly consider comorbidity-based risk adjustment to more accurately compare CLABSI rates across hospitals.
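Here is a sketch in the spirit of the comorbidity-adjusted approach: logistic regression on ICU type plus discharge-code comorbidities, discrimination measured by the C statistic, and a per-hospital SIR computed as observed over model-expected CLABSIs. The synthetic data, coefficients, and feature set are assumptions, not the study's model.

```python
# Comorbidity-based risk adjustment sketch: logistic model, C statistic,
# and SIR = observed / expected CLABSIs per hospital. Simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 20000

# Covariates: ICU type (dummies) plus electronically available comorbidities.
icu_type = rng.integers(0, 3, n)
X = np.column_stack([
    icu_type == 1, icu_type == 2,   # ICU-type dummies
    rng.binomial(1, 0.05, n),       # coagulopathy
    rng.binomial(1, 0.02, n),       # paralysis
    rng.binomial(1, 0.10, n),       # renal failure
    rng.binomial(1, 0.04, n),       # malnutrition
    rng.normal(60, 15, n) / 100,    # age (rescaled)
]).astype(float)

# Simulated true risk model generating rare CLABSI outcomes.
logit = -6.5 + X @ np.array([0.3, 0.2, 0.8, 0.6, 0.5, 0.7, 1.0])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)
pred = model.predict_proba(X)[:, 1]
print(f"C statistic: {roc_auc_score(y, pred):.2f}")

# SIR per hospital: observed CLABSIs / model-expected CLABSIs.
hospital = rng.integers(0, 22, n)
for h in range(3):  # show a few hospitals
    mask = hospital == h
    sir = y[mask].sum() / pred[mask].sum()
    print(f"hospital {h}: SIR = {sir:.2f}")
```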


2014 ◽  
Vol 20 (1) ◽  
pp. 54-57
Author(s):  
Rodrigo Dias Martins ◽  
Debora Cantergi ◽  
Jefferson Fagundes Loss

The kihap is a technique used in several oriental martial arts. It is a yell used by practitioners with the expectation of enhancing the force of a hit. However, the real effect of using the kihap is unknown. Therefore, this study aims to compare the peak acceleration of the Dolio-chagui kick in taekwondo performed with and without the use of the kihap. Twenty-two experienced taekwondo practitioners performed 30 kicks each against a punching bag, alternating in random order with and without the kihap, while the acceleration of the punching bag was measured. A t-test was used to compare the mean acceleration between the two conditions. Higher values were found with the use of the kihap (7.8 ± 2.8 g) than without it (7.1 ± 2.4 g), p < 0.01, r = 0.57. The results indicate that the kihap enhances the impact of the kick.
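A brief sketch of this comparison, assuming per-practitioner mean peak accelerations are paired across conditions and that the reported effect size r derives from the t statistic; the data below are simulated.

```python
# Paired t-test on per-practitioner mean peak acceleration, with the
# effect size r computed from t; simulated accelerations in g.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 22  # practitioners

accel_no_kihap = rng.normal(7.1, 2.4, n)
accel_kihap = accel_no_kihap + rng.normal(0.7, 0.6, n)

t_stat, p_value = stats.ttest_rel(accel_kihap, accel_no_kihap)
df = n - 1
r = np.sqrt(t_stat**2 / (t_stat**2 + df))  # effect size from t
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, r = {r:.2f}")
```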


2018 ◽  
Vol 56 ◽  
pp. 130-135
Author(s):  
M. V. Tserenyuk ◽  
O. M. Tserenyuk

In addition to assessing the absolute performance of animals, the impact of technological approaches, breeding decisions, and other external factors on the consolidation of pig groups for particular traits should be evaluated. Breeding consolidation is a desirable process, realized through deliberate consolidation of intra-breed structural units while maintaining a significant level of intergroup differentiation and variability. This is most pressing for young animals being introduced into the herd, where consolidation needs to be monitored. The purpose of the research was to determine the consolidation of the main traits of reproductive ability of gilts evaluated at different frequencies of artificial insemination before entering the main herd. The research was carried out at the Shubs`ke farm in the Bogoduhivsky District of Kharkiv Oblast, a pig farm for pure breeding of the Welsh breed (breeding-reproducer status since 2014). To evaluate the optimal insemination frequency (from single to triple), a group of 30 gilts was selected. The main indicators of reproductive ability (litter size; litter weight at birth and at weaning; and piglet survival to weaning at day 28) were evaluated. Litter weight at weaning was recalculated to litter weight at day 60 in accordance with the current instruction on pig grading. The results were processed by traditional methods of variation statistics, and consolidation of individual indicators of reproductive ability was calculated relative to the total number of evaluated animals. In a previous stage of the research, it was found that multiple insemination of gilts, compared with single insemination, is positively reflected in the level of their reproductive ability, and a decrease in the percentage of unproductive inseminations was established. Triple insemination yielded 1.14 more piglets per litter than single insemination of the same pigs (p < 0.01) and produced heavier litters at weaning (by 14.24 kg, converted to day 60; p < 0.01). At the same time, piglet survival to weaning decreased as insemination frequency increased. For litter size, the most consolidated group was the one with single insemination. Increasing the insemination frequency negatively affected the consolidation of litter size (both for coefficients of phenotypic consolidation determined from the standard deviation and for those determined from the coefficient of variability). The least consolidated for litter size were the gilts inseminated twice. It should also be noted that, overall, the differences in phenotypic consolidation of litter size between the groups were small: the difference between the most contrasting groups was 0.447 points for coefficients determined from the standard deviation and 0.397 points for those determined from the coefficient of variability. By litter weight at birth, the most consolidated group was the gilts inseminated twice; the least consolidated, those inseminated three times.
The between-group differences in phenotypic consolidation of litter weight at birth were even smaller: the difference between the most contrasting groups was 0.270 points for coefficients determined from the standard deviation and 0.260 points for those determined from the coefficient of variability. By litter weight at weaning, the most consolidated group was the gilts with triple insemination, and the least consolidated was the group with single insemination; for this trait the smallest between-group differences in consolidation were obtained, with the most contrasting groups differing by 0.173 points for coefficients determined from the standard deviation and 0.248 points for those determined from the coefficient of variation. At the different insemination frequencies, there were no significant differences between the groups in the consolidation of reproductive traits, and no clear effect of increasing insemination frequency on the consolidation of reproductive qualities was found.
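A sketch of the consolidation calculation, assuming the commonly used definitions of the phenotypic consolidation coefficients (K1 from the standard deviation and K2 from the coefficient of variation, each relative to the pooled population); these formulas and the simulated litter sizes are assumptions, not taken from the paper.

```python
# Phenotypic consolidation coefficients per insemination-frequency group,
# under the assumed definitions:
#   K1 = 1 - sd(group) / sd(population)
#   K2 = 1 - CV(group) / CV(population)
# Positive values indicate a group less variable (more consolidated)
# than the pooled population.
import numpy as np

def consolidation(group: np.ndarray, population: np.ndarray) -> tuple:
    """Return (K1, K2) for one group against the pooled population."""
    k1 = 1 - group.std(ddof=1) / population.std(ddof=1)
    cv_group = group.std(ddof=1) / group.mean()
    cv_pop = population.std(ddof=1) / population.mean()
    k2 = 1 - cv_group / cv_pop
    return k1, k2

rng = np.random.default_rng(5)
# Simulated litter sizes for three groups of 10 gilts each.
groups = [rng.normal(m, s, 10)
          for m, s in [(10.0, 1.0), (10.6, 1.6), (11.1, 1.3)]]
population = np.concatenate(groups)

for label, g in zip(["single", "double", "triple"], groups):
    k1, k2 = consolidation(g, population)
    print(f"{label}: K1 = {k1:+.3f}, K2 = {k2:+.3f}")
```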


2014 ◽  
Vol 14 (1) ◽  
Author(s):  
Hung-Jen Tang ◽  
Hsin-Lan Lin ◽  
Yu-Hsiu Lin ◽  
Pak-On Leung ◽  
Yin-Ching Chuang ◽  
...  

Author(s):  
Juhaina Abdulrahiem ◽  
Asia Sultan ◽  
Faisl Alaklobi ◽  
Hala Amer ◽  
Hind Alzoman

Central line–associated bloodstream infection (CLABSI) is a type of bloodstream infection caused by microorganisms introduced after the insertion of a central line. This research studied CLABSI in children aged 2 to 15 years in paediatric intensive care units, with the children divided into two age groups (2-5 and 5-15 years). The Royal Children's Hospital, Melbourne was chosen as a study site, alongside five other Australian hospitals. A total of 350 patients were studied, 216 of whom had central lines inserted. A bloodstream infection was identified in 49 of these 216 patients, and CLABSI occurred in 37 of them (75.51%). Associated microorganisms and underlying diseases are listed in this study to characterize the factors responsible for CLABSI.


Author(s):  
Patricio K. Chap-as

This research undertaking aimed to find out the impact of instructional technology in teaching Music. Specifically, it was framed by the following questions: 1. What is the mean performance of the control group and the experimental group after instructional technology was used in teaching Music? 2. What is the difference in the mean performance of the control group and the experimental group after instructional technology was used in teaching Music? The researcher employed the experimental research method, specifically a between-group design. The statistical computations revealed the following: the experimental group visibly performed better after instructional technology was integrated into the teaching of Music than their control-group counterparts. Findings of the study revealed a statistically significant difference in academic achievement between the average mark of the experimental-group students and that of the control-group students, in favor of the experimental group.

