I-PASS Illness Severity Identifies Patients at Risk for Overnight Clinical Deterioration

2020 ◽  
Vol 12 (5) ◽  
pp. 578-582
Author(s):  
Chirayu Shah ◽  
Khaled Sanber ◽  
Rachael Jacobson ◽  
Bhavika Kaul ◽  
Sarah Tuthill ◽  
...  

ABSTRACT Background The I-PASS framework is increasingly being adopted for patient handoffs after a recent study reported a decrease in medical errors and preventable adverse events. A key component of the I-PASS handoff is the assignment of an illness severity category. Objective We evaluated whether illness severity categories can identify patients at higher risk of overnight clinical deterioration, as defined by activation of the rapid response team (RRT). Methods The I-PASS handoff documentation created by internal medicine residents and patient charts with overnight RRT activations from April 2016 through March 2017 were reviewed retrospectively. The RRT activations, illness severity categories, vital signs prior to resident handoff, and patient outcomes were evaluated. Results Of the 28 235 written patient handoffs reviewed, 1.3% were categorized as star (sickest patients at risk for higher level of care), 18.8% as watcher (unsure of illness trajectory), and 79.9% as stable (improving clinical status). Of the 98 RRT activations meeting the inclusion criteria, 5.1% were labeled as star, 35.7% as watcher, and 59.2% as stable. Compared with patients listed as stable, patients listed as watcher had an odds ratio of 2.6 (95% confidence interval 1.7–3.9) and patients listed as star an odds ratio of 5.2 (95% confidence interval 2.1–13.1) for an overnight RRT activation. The overall in-hospital mortality of patients with an overnight RRT activation was 29.6%. Conclusions The illness severity component of the I-PASS handoff can identify patients at higher risk of overnight clinical deterioration and has the potential to help overnight residents prioritize patient care.
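As a rough illustration of how odds ratios such as these follow from the category counts, the sketch below computes an odds ratio with a Wald 95% confidence interval from a 2×2 table. The counts are approximate reconstructions from the percentages in the abstract, not the study's raw data.

```python
import math

def odds_ratio(events_a, total_a, events_b, total_b):
    """Odds ratio of group A vs group B with a Wald 95% confidence interval."""
    a, b = events_a, total_a - events_a  # group A: events / non-events
    c, d = events_b, total_b - events_b  # group B: events / non-events
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log_or)
    hi = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, (lo, hi)

# Approximate counts: ~35 RRT activations among ~5,308 "watcher" handoffs
# vs ~58 among ~22,560 "stable" handoffs (reconstructed from the percentages).
print(odds_ratio(35, 5308, 58, 22560))  # roughly 2.6, consistent with the abstract
```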

2021 ◽  
pp. 1357633X2110394
Author(s):  
Arno Joachim Gingele ◽  
Lloyd Brandts ◽  
Kjeld Vossen ◽  
Christian Knackstedt ◽  
Josiane Boyne ◽  
...  

Introduction Heart failure is a serious burden on health care systems due to frequent hospital admissions. Early recognition of outpatients at risk of clinical deterioration could prevent hospitalization, yet the role of signs and symptoms in monitoring heart failure patients is not clear. The heart failure coach is a web-based telemonitoring application consisting of a 9-item questionnaire assessing heart failure signs and symptoms, developed to identify outpatients at risk of clinical deterioration. If deterioration was suspected, patients were contacted by a heart failure nurse for further evaluation. Methods Heart failure coach questionnaires completed between 2015 and 2018 were collected from 287 patients, who completed 18,176 questionnaires in total. Adverse events were defined as all-cause mortality, heart failure- or cardiac-related hospital admission, or emergency cardiac care visits within 30 days after completion of each questionnaire. Multilevel logistic regression analyses were performed to assess the association between the heart failure coach questionnaire items and the odds of an adverse event. Results No association between dyspnea and adverse events was observed (odds ratio 1.02, 95% confidence interval 0.79–1.30). Peripheral edema (odds ratio 2.21, 95% confidence interval 1.58–3.11), persistent chest pain (odds ratio 2.06, 95% confidence interval 1.19–3.58), anxiety about heart failure (odds ratio 2.12, 95% confidence interval 1.44–3.13), and extensive struggle to perform daily activities (odds ratio 2.23, 95% confidence interval 1.38–3.62) were significantly associated with adverse outcomes. Discussion Regular assessment of more than the classical signs and symptoms may help identify heart failure patients at risk of clinical deterioration and should be an integral part of heart failure telemonitoring programs.
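As a simplified illustration of how per-item odds ratios like these can be estimated, the sketch below fits a single-level logistic regression for one questionnaire item. The study itself used multilevel models to account for repeated questionnaires per patient, which this sketch ignores, and the column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def item_odds_ratio(df: pd.DataFrame, item: str, outcome: str = "adverse_30d"):
    """Single-level logistic regression of a 30-day adverse event on one item."""
    X = sm.add_constant(df[[item]].astype(float))
    fit = sm.Logit(df[outcome].astype(float), X).fit(disp=0)
    or_ = np.exp(fit.params[item])
    lo, hi = np.exp(fit.conf_int().loc[item])
    return or_, (lo, hi)

# Toy data: one row per completed questionnaire, 0/1 indicators (hypothetical columns).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "peripheral_edema": rng.binomial(1, 0.2, 1000),
    "adverse_30d": rng.binomial(1, 0.05, 1000),
})
print(item_odds_ratio(df, "peripheral_edema"))
```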


Author(s):  
Thang S Han ◽  
David Fluck ◽  
Christopher H Fry

Abstract The LACE index scoring tool was designed to predict hospital readmissions in adults. We aimed to evaluate the ability of the LACE index to identify children at risk of frequent readmissions. We analysed data from alive-discharge episodes (1 April 2017 to 31 March 2019) for 6546 males and 5875 females from birth to 18 years. The LACE index predicted frequent all-cause readmissions within 28 days of hospital discharge with high accuracy: the area under the curve = 86.9% (95% confidence interval = 84.3–89.5%, p < 0.001). Two-graph receiver operating characteristic curve analysis revealed the LACE index cutoff to be 4.3, where sensitivity equals specificity, for predicting frequent readmissions. Compared with those with a LACE index score of 0–4 (event rate, 0.3%), those with a score > 4 (event rate, 3.7%) were at increased risk of frequent readmissions (age- and sex-adjusted odds ratio = 12.4, 95% confidence interval = 8.0–19.2, p < 0.001) and of death within 30 days of discharge (OR = 5.0, 95% CI = 1.5–16.7). The ORs for frequent readmissions were between 6 and 14 for children in the neonate, infant, young child and adolescent age categories, except for the child category (6–12 years), where the odds ratio was 2.8. Conclusion: The LACE index can be used in healthcare services to identify children at risk of frequent readmissions. Focus should be directed at individuals with a LACE index score above 4 to help reduce the risk of readmissions. What is Known: • The LACE index scoring tool has been widely used to predict hospital readmissions in adults. What is New: • Compared with children with a LACE index score of 0–4 (event rate, 0.3%), those with a score > 4 are at a 14-fold increased risk of frequent readmissions. • A LACE index cutoff of 4 may be a useful level to identify children at increased risk of frequent readmissions.
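For reference, a minimal sketch of computing a LACE score is given below, assuming the commonly cited van Walraven point scheme; the abstract does not restate these weights, so treat them as an assumption. Scores above the cutoff of 4 reported here would flag children at higher risk.

```python
# Sketch of the standard LACE score (Length of stay, Acuity, Comorbidity,
# Emergency visits), assuming the commonly cited van Walraven point scheme.
def lace_score(length_of_stay_days: int, emergent_admission: bool,
               charlson_index: int, ed_visits_6mo: int) -> int:
    # L: length of stay
    if length_of_stay_days < 1:
        l_pts = 0
    elif length_of_stay_days <= 3:
        l_pts = length_of_stay_days       # 1, 2 or 3 points
    elif length_of_stay_days <= 6:
        l_pts = 4
    elif length_of_stay_days <= 13:
        l_pts = 5
    else:
        l_pts = 7
    # A: acuity of admission (admitted via the emergency department)
    a_pts = 3 if emergent_admission else 0
    # C: Charlson comorbidity index, capped at 5 points
    c_pts = charlson_index if charlson_index <= 3 else 5
    # E: emergency department visits in the previous 6 months, capped at 4
    e_pts = min(ed_visits_6mo, 4)
    return l_pts + a_pts + c_pts + e_pts

print(lace_score(2, True, 0, 1))  # 2 + 3 + 0 + 1 = 6 -> above the cutoff of 4
```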


2018 ◽  
Vol 25 (3) ◽  
pp. 137-145
Author(s):  
Marina Lee ◽  
David McD Taylor ◽  
Antony Ugoni

Introduction: To determine the associations of both abnormal individual vital signs and abnormal vital sign groups in the emergency department with undesirable patient outcomes: hospital admission, medical emergency team calls and death. Method: We undertook a prospective cohort study in a tertiary referral emergency department (February–May 2015). Vital signs were collected prospectively in the emergency department and undesirable outcomes from the medical records. The primary outcomes were undesirable outcomes for individual vital signs (multivariate logistic regression) and vital sign groups (univariate analyses). Results: Data from 1438 patients were analysed. Admission was associated with tachycardia, tachypnoea, fever, ≥1 abnormal vital sign on admission to the emergency department, ≥1 abnormal vital sign at any time in the emergency department, a persistently abnormal vital sign, and vital signs consistent with both sepsis (tachycardia/hypotension/abnormal temperature) and pneumonia (tachypnoea/fever) (p < 0.05). Medical emergency team calls were associated with tachycardia, tachypnoea, ≥1 abnormal vital sign on admission (odds ratio: 2.3, 95% confidence interval: 1.4–3.8), ≥2 abnormal vital signs at any time (odds ratio: 2.4, 95% confidence interval: 1.2–4.7), and a persistently abnormal vital sign (odds ratio: 2.7, 95% confidence interval: 1.6–4.6). Death was associated with Glasgow Coma Score ≤13 (odds ratio: 6.3, 95% confidence interval: 2.5–16.0), ≥1 abnormal vital sign on admission (odds ratio: 2.6, 95% confidence interval: 1.2–5.6), ≥2 abnormal vital signs at any time (odds ratio: 6.4, 95% confidence interval: 1.4–29.5), a persistently abnormal vital sign (odds ratio: 4.3, 95% confidence interval: 2.0–9.0), and vital signs consistent with pneumonia (odds ratio: 5.3, 95% confidence interval: 1.9–14.8). Conclusion: Abnormal vital sign groups are generally superior to individual vital signs in predicting undesirable outcomes. They could inform best-practice management, emergency department disposition, and communication with the patient and family.
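A minimal sketch of flagging the sepsis-like and pneumonia-like vital sign groups described above is given below. The numeric thresholds are common adult cutoffs chosen for illustration, not the study's exact definitions.

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: float    # beats/min
    resp_rate: float     # breaths/min
    systolic_bp: float   # mmHg
    temperature: float   # degrees Celsius

def sepsis_group(v: Vitals) -> bool:
    """Tachycardia + hypotension + abnormal temperature (illustrative thresholds)."""
    tachycardia = v.heart_rate > 100
    hypotension = v.systolic_bp < 90
    abnormal_temp = v.temperature < 36.0 or v.temperature > 38.0
    return tachycardia and hypotension and abnormal_temp

def pneumonia_group(v: Vitals) -> bool:
    """Tachypnoea + fever (illustrative thresholds)."""
    tachypnoea = v.resp_rate > 20
    fever = v.temperature > 38.0
    return tachypnoea and fever

v = Vitals(heart_rate=118, resp_rate=26, systolic_bp=84, temperature=38.6)
print(sepsis_group(v), pneumonia_group(v))  # True True
```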


2020 ◽  
Vol 9 (2) ◽  
pp. 343 ◽  
Author(s):  
Arash Kia ◽  
Prem Timsina ◽  
Himanshu N. Joshi ◽  
Eyal Klang ◽  
Rohit R. Gupta ◽  
...  

Early detection of patients at risk for clinical deterioration is crucial for timely intervention. Traditional detection systems rely on a limited set of variables and are unable to predict the time of decline. We describe a machine learning model called MEWS++ that enables the identification of patients at risk of escalation of care or death six hours prior to the event. A retrospective single-center cohort study was conducted from July 2011 to July 2017 of adult (age > 18) inpatients, excluding psychiatric, parturient, and hospice patients. Three machine learning models were trained and tested: random forest (RF), linear support vector machine, and logistic regression. We compared the models' performance to the traditional Modified Early Warning Score (MEWS) using sensitivity, specificity, and the areas under the receiver operating characteristic curve (AUC-ROC) and precision-recall curve (AUC-PR). The primary outcome was escalation of care from a floor bed to an intensive care or step-down unit, or death, within 6 h. A total of 96,645 patients with 157,984 hospital encounters and 244,343 bed movements were included. The overall rate of escalation or death was 3.4%. The RF model had the best performance, with sensitivity 81.6%, specificity 75.5%, AUC-ROC of 0.85, and AUC-PR of 0.37. Compared to the traditional MEWS, sensitivity increased by 37%, specificity by 11%, and AUC-ROC by 14%. This study found that, using machine learning and readily available clinical data, clinical deterioration or death can be predicted 6 h prior to the event. The model we developed can warn of patient deterioration hours before the event, thus helping make timely clinical decisions.
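The sketch below illustrates the kind of model comparison described: a random forest versus logistic regression scored by AUC-ROC and AUC-PR on an imbalanced outcome. The synthetic data and feature set are placeholders, not the MEWS++ features or the study cohort.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the clinical feature matrix; ~3.4% positive rate as reported.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.966], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = [
    ("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("logistic regression", LogisticRegression(max_iter=1000)),
]
for name, model in models:
    model.fit(X_tr, y_tr)
    p = model.predict_proba(X_te)[:, 1]
    print(name,
          "AUC-ROC", round(roc_auc_score(y_te, p), 3),
          "AUC-PR", round(average_precision_score(y_te, p), 3))
```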


2017 ◽  
Vol 83 (4) ◽  
pp. 403-413 ◽  
Author(s):  
C. Michael Dunham ◽  
Gregory S. Huang

We delineated the incidence of trauma patient pulmonary embolism (PE) and risk conditions by performing a systematic literature review of those at risk for deep vein thrombosis (DVT). The PE proportion was 1.4 per cent (95% confidence interval = 1.2–1.6) in at-risk patients. Of 10 conditions, PE was only associated with increased age (P < 0.01) or leg injury (P < 0.01; risk ratio = 1.6). As lower extremity DVT (LEDVT) proportions increased, mortality proportions (P = 0.02) and hospital stay (P = 0.0002) increased, but PE proportions did not (P = 0.13). LEDVT was lower with chemoprophylaxis (CP) (4.9%) than without CP (19.1%; P < 0.01). PE was lower with CP (1.0%) than without CP (2.2%; P = 0.0004). Mortality was lower with CP (6.6%) than without CP (11.6%; P = 0.002). PE was similar with (1.2%) and without (1.9%; P = 0.19) mechanical prophylaxis (MP). LEDVT was lower with MP (8.5%) than without MP (12.2%; P = 0.0005). PE proportions were similar with (1.3%) and without (1.5%; P = 0.24) LEDVT surveillance. Mortality was higher with LEDVT surveillance (7.9%) than without (4.8%; P < 0.01). A PE mortality of 19.7 per cent (95% confidence interval = 18–22) multiplied by a 1.4 per cent PE proportion yielded a 0.28 per cent lethal PE proportion. As PE proportions increased, mortality (P = 0.52) and hospital stay (P = 0.13) did not. Of 176 patients with PE, 76 per cent had no LEDVT. In trauma patients at risk for DVT, PE is infrequent, has a minimal impact on outcomes, and death is a black swan event. LEDVT surveillance did not improve outcomes. Because PE was not associated with LEDVT and most patients with PE had no LEDVT, preventing, diagnosing, and treating LEDVT may be ineffective PE prophylaxis.
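The lethal PE figure quoted above is simply the product of the PE case-fatality proportion and the PE incidence among at-risk patients, as in this small sketch.

```python
# Lethal PE proportion = PE case-fatality proportion x PE incidence in at-risk patients.
pe_case_fatality = 0.197   # 19.7% of patients with PE died
pe_incidence = 0.014       # 1.4% of at-risk patients developed PE
print(f"{pe_case_fatality * pe_incidence:.2%}")  # ~0.28%
```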


CJEM ◽  
2017 ◽  
Vol 20 (2) ◽  
pp. 266-274 ◽  
Author(s):  
Steven Skitch ◽  
Benjamin Tam ◽  
Michael Xu ◽  
Laura McInnis ◽  
Anthony Vu ◽  
...  

ABSTRACT Objectives Early warning scores use vital signs to identify patients at risk of critical illness. The current study examines the Hamilton Early Warning Score (HEWS) at emergency department (ED) triage among patients who experienced a critical event during their hospitalization. HEWS was also evaluated as a predictor of sepsis. Methods The study population included admissions to two hospitals over a 6-month period. Cases experienced a critical event defined by unplanned intensive care unit admission, cardiopulmonary resuscitation, or death. Controls were randomly selected from the database in a 2-to-1 ratio to match cases on the burden of comorbid illness. Receiver operating characteristic (ROC) curves were used to evaluate HEWS as a predictor of the likelihood of critical deterioration and sepsis. Results The sample included 845 patients, of whom 270 experienced a critical event; 89 patients were excluded because of missing vitals. An ROC analysis indicated that HEWS at ED triage had poor discriminative ability for predicting the likelihood of experiencing a critical event (area under the curve 0.62, 95% CI 0.58-0.66). HEWS had fair discriminative ability for identifying patients meeting criteria for sepsis (0.77, 95% CI 0.72-0.82) and good discriminative ability for predicting the occurrence of a critical event among septic patients (0.82, 95% CI 0.75-0.90). Conclusion This study indicates that HEWS at ED triage has limited utility for identifying patients at risk of experiencing a critical event. However, HEWS may allow earlier identification of septic patients. Prospective studies are needed to further delineate the utility of HEWS for identifying septic patients in the ED.
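A sketch of the kind of ROC analysis reported, with a bootstrap 95% confidence interval for the AUC, is shown below; the scores and outcomes are simulated placeholders rather than the study's triage data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
hews = rng.integers(0, 15, size=845)               # simulated triage HEWS values
event = rng.binomial(1, np.clip(hews / 30, 0, 1))  # toy outcome loosely linked to the score

boot = []
idx = np.arange(len(hews))
for _ in range(2000):
    s = rng.choice(idx, size=len(idx), replace=True)
    if event[s].min() == event[s].max():           # skip resamples with only one class
        continue
    boot.append(roc_auc_score(event[s], hews[s]))

auc = roc_auc_score(event, hews)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC {auc:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```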


Author(s):  
Heyman Luckraz ◽  
Ramesh Giri ◽  
Benjamin Wrigley ◽  
Kumaresan Nagarajan ◽  
Eshan Senanayake ◽  
...  

Abstract OBJECTIVES Our goal was to investigate the efficacy of balanced forced diuresis in reducing the rate of acute kidney injury (AKI) in cardiac surgical patients requiring cardiopulmonary bypass (CPB), using the RenalGuard® (RG) system. METHODS Patients at risk of developing AKI (history of diabetes and/or anaemia; estimated glomerular filtration rate 20–60 ml/min/1.73 m²; anticipated CPB time >120 min; log EuroSCORE >5) were randomized to the RG system group (n = 110) or managed according to current practice (control, n = 110). The primary end point was the development of AKI within the first 3 postoperative days as defined by the RIFLE (Risk, Injury, Failure, Loss of kidney function, End-stage renal disease) criteria. RESULTS There were no significant differences in preoperative and intraoperative characteristics between the 2 groups. Postoperative AKI rates were significantly lower in the RG system group compared to the control group [10% (11/110) vs 20.9% (23/110); P = 0.025]. This effect persisted even after controlling for a number of potential confounders (odds ratio 2.82, 95% confidence interval 1.20–6.60; P = 0.017) when assessed by binary logistic regression analysis. The mean volumes of urine produced during surgery and within the first 24 h postoperatively were significantly higher in the RG system group (P < 0.001). There were no significant differences in the incidence of blood transfusions, atrial fibrillation and infections or in the median duration of intensive care unit stay between the groups. The number needed to treat with the RG system to prevent AKI was 9 patients (95% confidence interval 6.0–19.2). CONCLUSIONS In patients at risk for AKI who had cardiac surgery with CPB, the RG system significantly reduced the incidence of AKI and can be used safely and reproducibly. Larger studies are required to confirm cost benefits. Clinical trial registration number: NCT02974946
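The reported number needed to treat follows from the absolute risk reduction between the two AKI rates, as in this small sketch.

```python
# Number needed to treat from the two reported AKI rates.
risk_control = 23 / 110      # 20.9% AKI in the control group
risk_renalguard = 11 / 110   # 10.0% AKI in the RenalGuard group
arr = risk_control - risk_renalguard      # absolute risk reduction, ~0.109
print(round(arr, 3), round(1 / arr, 1))   # NNT ~9.2, reported as 9 in the abstract
```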


2021 ◽  
Vol 42 (Supplement_1) ◽  
Author(s):  
C V Madsen ◽  
B Leerhoey ◽  
L Joergensen ◽  
C S Meyhoff ◽  
A Sajadieh ◽  
...  

Abstract Introduction Post-operative atrial fibrillation (POAF) is currently considered a phenomenon rather than a definite diagnosis. Nevertheless, POAF is associated with an increased rate of complications, including stroke and mortality. The incidence of POAF after acute abdominal surgery has not been reported, and prediction of patients at risk has not previously been attempted. Purpose We aim to report the incidence of POAF after acute abdominal surgery and to provide a POAF prediction model based on pre-surgery risk factors. Methods This was a prospective, single-centre cohort study of unselected adult patients referred for acute general abdominal surgery. Consecutive patients (>16 years) were included during a three-month period. No exclusion criteria were applied. Follow-up was based on chart reviews, including medical history, vital signs, blood samples and electrocardiograms. Chart reviews were performed prior to surgery, at discharge, and three months after surgery. Atrial fibrillation was diagnosed by specialists in cardiology or anaesthesiology on ECG or cardiac rhythm monitoring (≥30 seconds' duration). Multiple logistic regression with backward stepwise selection was used for model development. Receiver operating characteristic (ROC) curves, including the area under the curve (AUC), were produced. The study was approved by the Regional Ethics Committee (H-19033464) and complied with the principles of the Declaration of Helsinki of the World Medical Association. Results In total, 466 patients were included. Mean (±SD) age was 51.2 (20.5) years, 194 (41.6%) were female, and cardiovascular comorbidity was present in ≈10% of patients. The overall incidence of POAF was 5.8% (27/466), and no cases were observed in patients <60 years; the incidence was 15.7% (27/172) for patients ≥60 years. Prolonged hospitalization and death were observed in 40.7% of patients with POAF vs 8.4% of patients without POAF (p < 0.001). Significant age-adjusted risk factors were previous atrial fibrillation (odds ratio (OR) 6.84 [2.73; 17.18], p < 0.001), known diabetes mellitus (OR 3.49 [1.40; 8.69], p = 0.007), and chronic kidney disease (OR 3.03 [1.20; 7.65], p = 0.019). A prediction model based on age, previous atrial fibrillation, diabetes mellitus and chronic kidney disease was produced (Figure 1), and ROC analysis showed an AUC of 88.26% (Figure 2). Conclusions A simple risk-stratification model such as the one provided can aid clinicians in identifying patients at risk of developing POAF in relation to acute abdominal surgery. This is important, as patients developing POAF are more likely to experience complications, such as prolonged hospitalization and death. Closer monitoring of heart rhythm and vital signs should be considered in at-risk patients older than 60 years. Model validation is warranted. Funding Acknowledgement Type of funding sources: None.
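As an illustration of the modelling approach described (multiple logistic regression with backward stepwise selection, evaluated by ROC AUC), the sketch below runs the procedure on simulated data; the variable names, coefficients and threshold are hypothetical, not the study's model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def backward_stepwise(df, outcome, predictors, p_threshold=0.05):
    """Drop the least significant predictor until all remaining p-values pass the threshold."""
    preds = list(predictors)
    while preds:
        fit = sm.Logit(df[outcome], sm.add_constant(df[preds])).fit(disp=0)
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= p_threshold:
            return fit, preds
        preds.remove(worst)
    raise ValueError("no predictor met the threshold")

# Simulated cohort with hypothetical risk factors (not the study data).
rng = np.random.default_rng(1)
n = 466
df = pd.DataFrame({
    "age": rng.normal(51, 20, n),
    "prior_af": rng.binomial(1, 0.05, n),
    "diabetes": rng.binomial(1, 0.10, n),
    "ckd": rng.binomial(1, 0.08, n),
})
logit = -6 + 0.06 * df["age"] + 1.9 * df["prior_af"] + 1.2 * df["diabetes"] + 1.1 * df["ckd"]
df["poaf"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

fit, kept = backward_stepwise(df, "poaf", ["age", "prior_af", "diabetes", "ckd"])
print(kept, round(roc_auc_score(df["poaf"], fit.predict()), 3))
```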

