Risk Adjustment Methodologies

2018 ◽  
pp. 131-151
Author(s):  
Zach Pennington ◽  
Corinna C. Zygourakis ◽  
Christopher P. Ames
2020 ◽  
Vol 41 (S1) ◽  
pp. s116-s118
Author(s):  
Qunna Li ◽  
Andrea Benin ◽  
Alice Guh ◽  
Margaret A. Dudeck ◽  
Katherine Allen-Bridson ◽  
...  

Background: The NHSN has used positive laboratory tests for surveillance of Clostridioides difficile infection (CDI) LabID events since 2009. Typically, CDIs are detected using enzyme immunoassays (EIAs), nucleic acid amplification tests (NAATs), or various test combinations. The NHSN uses a risk-adjusted standardized infection ratio (SIR) to assess healthcare facility-onset (HO) CDI. Although test type is included in the risk adjustment, some hospital personnel and other stakeholders are concerned that NAAT use is associated with higher SIRs than EIA use. To investigate this issue, we analyzed NHSN data from acute-care hospitals for July 1, 2017, through June 30, 2018.

Methods: Calendar quarters for which the CDI test type was reported as NAAT (including NAAT, glutamate dehydrogenase (GDH)+NAAT, and GDH+EIA followed by NAAT if discrepant) or EIA (including EIA and GDH+EIA) were selected. HO CDI SIRs were calculated for facility-wide inpatient locations. We conducted two analyses: (1) among hospitals that did not switch test type, we compared the distributions of HO CDI incidence rates and SIRs between those reporting NAAT and those reporting EIA; (2) among hospitals that switched test type, we selected quarters with a stable switch pattern of 2 consecutive quarters on each of EIA and NAAT (categorized as EIA-to-NAAT or NAAT-to-EIA). Pooled semiannual SIRs for EIA and NAAT were calculated, and a paired t test was used to evaluate the difference in SIRs by switch pattern.

Results: Most hospitals did not switch test types (3,242, 89%); of these, 2,872 (89%) reported sufficient data to calculate SIRs, with 2,444 (85%) using NAAT. The crude pooled HO CDI incidence rates for hospitals using EIA clustered at the lower end of the histogram relative to rates for NAAT (Fig. 1). The SIR distributions for NAAT and EIA overlapped substantially and covered a similar range of values (Fig. 1). Hospitals with a switch pattern were equally likely to have an increase or a decrease in their SIR (Fig. 2). The mean SIR difference for the 42 hospitals switching from EIA to NAAT was 0.048 (95% CI, −0.189 to 0.284; P = .688). The mean SIR difference for the 26 hospitals switching from NAAT to EIA was 0.162 (95% CI, −0.048 to 0.371; P = .124).

Conclusions: The SIR distributions for NAAT and EIA substantiate the soundness of the NHSN risk adjustment for CDI test types. Switching test type did not produce a consistent or statistically significant directional change in the SIR.

Disclosures: None
Funding: None
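The core comparison in this abstract, a per-hospital SIR and a paired t test on pooled semiannual SIRs before and after a test-type switch, can be illustrated with a short sketch. This is a minimal illustration, not the NHSN analysis: in a real SIR the predicted event counts come from NHSN's baseline risk model, which is not reproduced here, and the paired SIR values below are hypothetical.

```python
# A minimal sketch, with hypothetical numbers, of the abstract's analysis:
# each hospital's standardized infection ratio (SIR) is observed HO CDI
# events divided by the events predicted by a baseline risk model, and a
# paired t test compares pooled semiannual SIRs before and after a switch.
from scipy import stats

def sir(observed_events: int, predicted_events: float) -> float:
    """Standardized infection ratio: observed / predicted HO CDI events."""
    return observed_events / predicted_events

# Hypothetical pooled semiannual SIRs for five hospitals that switched
# from EIA to NAAT (two consecutive quarters on each test type).
sir_eia = [0.82, 1.10, 0.95, 1.30, 0.70]    # pooled SIR while using EIA
sir_naat = [0.90, 1.05, 1.12, 1.21, 0.78]   # pooled SIR while using NAAT

# Paired t test on within-hospital SIR differences (NAAT minus EIA).
t_stat, p_value = stats.ttest_rel(sir_naat, sir_eia)
mean_diff = sum(n - e for n, e in zip(sir_naat, sir_eia)) / len(sir_eia)
print(f"mean SIR difference (NAAT - EIA) = {mean_diff:.3f}, p = {p_value:.3f}")
```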


2020 ◽  
Vol 41 (S1) ◽  
pp. s40-s40
Author(s):  
Hsiu Wu ◽  
Tyler Kratzer ◽  
Liang Zhou ◽  
Minn Soe ◽  
Jonathan Edwards ◽  
...  

Background: To provide a standardized, risk-adjusted method for summarizing antimicrobial use (AU), the Centers for Disease Control and Prevention developed the standardized antimicrobial administration ratio, an observed-to-predicted use ratio in which predicted use is estimated from a statistical model accounting for patient locations and hospital characteristics. Infection burden, which could drive AU, was not available for assessment. To inform AU risk adjustment, we evaluated the relationship between the burden of drug-resistant gram-positive infections and the use of anti-MRSA agents.

Methods: We analyzed data from acute-care hospitals that reported ≥10 months of hospital-wide AU and microbiologic data to the National Healthcare Safety Network (NHSN) from January 2018 through June 2019. Hospital infection burden was estimated as the prevalence of deduplicated positive cultures per 1,000 admissions. Eligible cultures included blood and lower respiratory specimens that yielded oxacillin/cefoxitin-resistant Staphylococcus aureus (SA) or ampicillin-nonsusceptible enterococci, and cerebrospinal fluid that yielded SA. The anti-MRSA use rate is the total antimicrobial days of ceftaroline, dalbavancin, daptomycin, linezolid, oritavancin, quinupristin/dalfopristin, tedizolid, telavancin, and intravenous vancomycin per 1,000 days patients were present. AU rates were modeled using negative binomial regression to assess their association with infection burden and hospital characteristics.

Results: Among 182 hospitals, the median (interquartile range, IQR) anti-MRSA use rate was 86.3 (59.9–105.0) antimicrobial days per 1,000 days present, and the median (IQR) prevalence of drug-resistant gram-positive infections was 3.4 (2.1–4.8) per 1,000 admissions. A higher prevalence of drug-resistant gram-positive infections was associated with higher use of anti-MRSA agents after adjusting for facility type and the percentage of beds in intensive care units (Table 1). The number of hospital beds, average length of stay, and medical school affiliation were not significant.

Conclusions: The prevalence of drug-resistant gram-positive infections was independently associated with the use of anti-MRSA agents. Infection burden should be used for risk adjustment in predicting the use of anti-MRSA agents. To make this possible, we recommend that hospitals reporting to NHSN's AU Option also report microbiologic culture results.

Funding: None
Disclosures: None
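The modeling step described above, negative binomial regression of anti-MRSA antimicrobial days on infection burden and hospital characteristics with days present as the exposure, can be sketched as follows. The dataset is simulated and the column names are assumptions; the actual NHSN covariate coding is not reproduced here.

```python
# A sketch, on simulated data, of the regression described above: anti-MRSA
# antimicrobial days modeled with negative binomial regression, using
# log(days present) as an offset so the model targets the use rate per day
# present. Column names and data are assumptions, not the NHSN dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 182  # number of hospitals in the abstract's analysis

df = pd.DataFrame({
    # drug-resistant gram-positive cultures per 1,000 admissions
    "gpos_prevalence": rng.gamma(shape=3.0, scale=1.2, size=n),
    "pct_icu_beds": rng.uniform(2, 25, size=n),  # % of beds in ICUs
    "facility_type": rng.choice(["general", "critical_access"], size=n),
    "days_present": rng.integers(20_000, 200_000, size=n),
})
# Simulated anti-MRSA days, roughly 86 per 1,000 days present on average.
df["anti_mrsa_days"] = rng.poisson(0.086 * df["days_present"].to_numpy())

# Negative binomial GLM; the dispersion parameter alpha is left at the
# statsmodels default (1.0) for simplicity rather than estimated.
model = smf.glm(
    "anti_mrsa_days ~ gpos_prevalence + pct_icu_beds + C(facility_type)",
    data=df,
    family=sm.families.NegativeBinomial(),
    offset=np.log(df["days_present"]),
).fit()
print(model.summary())
```

With the log offset, the model estimates a rate per day present, so the coefficient on the prevalence term can be read (after exponentiation) as a rate ratio per unit increase in infection burden.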


2020 ◽  
Vol 4 (Supplement_1) ◽  
pp. 777-777
Author(s):  
Qian-Li Xue ◽  
Kristine Ensrud ◽  
Shari Lin

Abstract: As population aging accelerates, there is growing concern about how best to provide patient-centered care for the most vulnerable. Establishing a predictable and affordable cost structure for healthcare services is key to improving quality, accessibility, and affordability. One such effort is the "frailty" adjustment model implemented by the Centers for Medicare & Medicaid Services (CMS), which adjusts payments to a Medicare managed care organization based on the functional impairment of its beneficiaries. Earlier studies demonstrated the added value of this frailty adjuster for predicting Medicare expenditures independent of diagnosis-based risk adjustment. However, we hypothesize that further improvement is possible by implementing more rigorous frailty assessment rather than relying on the self-reported ADL difficulties used for the frailty adjuster. This is supported by the consensus and by clinical observations that neither multimorbidity nor disability alone is sufficient for frailty identification. This symposium consists of four talks that leverage data from three CMS-linked cohort studies to investigate the utility of assessing the frailty phenotype for predicting healthcare utilization and costs. Talks 1 and 2 use data from the NHATS cohort to assess healthcare utilization by frailty status in the general population and in the homebound subset. Talks 3 and 4 use data from the MrOS and SOF studies to investigate the impact of the frailty phenotype on healthcare costs. Taken together, their findings highlight the potential of incorporating phenotypic frailty assessment into CMS risk adjustment to improve the planning and management of care for frail older adults.
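The symposium contrasts phenotypic frailty assessment with the self-reported ADL measure behind the CMS frailty adjuster. As a point of reference, below is a minimal sketch of a phenotype-style classification in the spirit of the Fried physical frailty phenotype (frail with 3 or more of 5 criteria, prefrail with 1 or 2); the cut points and instruments used to operationalize each criterion vary by cohort and are not specified in the abstract.

```python
# A hedged sketch of phenotype-style frailty classification: frail with
# >= 3 of 5 criteria, prefrail with 1-2, robust with 0. How each criterion
# is measured (cut points, instruments) is cohort-specific and assumed here.
from dataclasses import dataclass, astuple

@dataclass
class PhenotypeCriteria:
    weight_loss: bool   # unintentional weight loss
    exhaustion: bool    # self-reported exhaustion
    low_activity: bool  # low physical activity
    slowness: bool      # slow gait speed
    weakness: bool      # weak grip strength

def classify_frailty(c: PhenotypeCriteria) -> str:
    score = sum(astuple(c))  # count of criteria met
    if score >= 3:
        return "frail"
    return "prefrail" if score >= 1 else "robust"

print(classify_frailty(PhenotypeCriteria(True, True, False, True, False)))  # frail
```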

