The REDS score: a new scoring system to risk-stratify emergency department suspected sepsis: a derivation and validation study

BMJ Open ◽  
2019 ◽  
Vol 9 (8) ◽  
pp. e030922 ◽  
Author(s):  
Narani Sivayoham ◽  
Lesley A Blake ◽  
Shafi E Tharimoopantavida ◽  
Saad Chughtai ◽  
Adil N Hussain ◽  
...  

Objective: To derive and validate a new clinical prediction rule to risk-stratify emergency department (ED) patients admitted with suspected sepsis. Design: Retrospective prognostic study of prospectively collected data. Setting: ED. Participants: Patients aged ≥18 years who met two Systemic Inflammatory Response Syndrome criteria or one Red Flag sepsis criterion on arrival, received intravenous antibiotics for a suspected infection and were admitted. Primary outcome measure: In-hospital all-cause mortality. Method: The data were divided into derivation and validation cohorts. The simplified Mortality in Severe Sepsis in the ED score, the quick-SOFA score, refractory hypotension and lactate were collectively termed the ‘component scores’ and cumulatively termed the ‘Risk-stratification of ED suspected Sepsis (REDS) score’. Each patient in the derivation cohort received a score (0–3) for each component score, so the REDS score ranged from 0 to 12. The component scores were subjected to univariate and multivariate logistic regression analyses. Receiver operating characteristic (ROC) curves for the REDS score and the component scores were constructed and their cut-off points identified; scores above the cut-off points were deemed high risk. The area under the ROC curve (AUROC) and the sensitivity for mortality of the high-risk category of the REDS score and of the component scores were compared. The REDS score was internally validated. Results: 2115 patients were included, of whom 282 (13.3%) died in hospital. Derivation cohort: 1078 patients with 140 deaths (13%). The AUROC curve (95% CI), cut-off point and sensitivity for mortality (95% CI) of the high-risk category of the REDS score were: derivation, 0.78 (0.75 to 0.80), ≥3 and 85.0% (78 to 90.5); validation, 0.74 (0.71 to 0.76), ≥3 and 84.5% (77.5 to 90.0). The AUROC curve and the sensitivity for mortality of the REDS score were better than those of the component scores. Specificity and mortality rates for REDS scores of ≥3, ≥5 and ≥7 were 54.8%, 88.8% and 96.9%, and 21.8%, 36.0% and 49.1%, respectively. Conclusion: The REDS score is a simple and objective score to risk-stratify ED patients with suspected sepsis.
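The scoring arithmetic described above is simple enough to sketch. The abstract states only that each of the four component scores contributes 0–3 points, that the total ranges from 0 to 12, and that a total of ≥3 flags high risk; the illustrative Python below assumes the component points are supplied already graded, since the per-component grading rules are not given in the abstract.

```python
# Hedged sketch of the REDS total described in the abstract: four
# component scores (simplified Mortality in Severe Sepsis in the ED
# score, quick-SOFA, refractory hypotension, lactate), each already
# graded 0-3, summed to a 0-12 total with >=3 deemed high risk. How
# each component maps to its 0-3 points is not modelled here.

def reds_score(missed_pts, qsofa_pts, hypotension_pts, lactate_pts):
    """Sum four pre-graded component scores (each 0-3) into a 0-12 total."""
    components = (missed_pts, qsofa_pts, hypotension_pts, lactate_pts)
    if any(not 0 <= p <= 3 for p in components):
        raise ValueError("each component score must be in 0-3")
    return sum(components)

def risk_category(score, cutoff=3):
    """Scores at or above the published cut-off (>=3) are high risk."""
    return "high" if score >= cutoff else "low"

print(reds_score(1, 1, 0, 1), risk_category(reds_score(1, 1, 0, 1)))
```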

CJEM ◽  
2017 ◽  
Vol 19 (S1) ◽  
pp. S58
Author(s):  
K. Votova ◽  
M. Bibok ◽  
R. Balshaw ◽  
M. Penn ◽  
M.L. Lesperance ◽  
...  

Introduction: Canadian stroke best practice guidelines recommend that patients suspected of Acute Cerebrovascular Syndrome (ACVS) receive urgent brain imaging, preferably CTA. Yet high requisition rates for non-ACVS patients overburden limited radiological resources. We hypothesize that our clinical prediction rule (CPR), previously developed for the diagnosis of ACVS in the emergency department (ED) and incorporating Canadian guidelines, could improve CTA utilization. Methods: Our data consist of records for 1978 patients referred from the ED to our TIA clinic in Victoria, BC from 2015-2016. Clinic referral forms captured all data needed for the CPR. For patients who received CTA, orders were placed in the ED or at the TIA clinic upon arrival. We use McNemar’s test to compare the sensitivity (sens) and specificity (spec) of our CPR vs. the baseline CTA orders for identifying ACVS. Results: Our sample (49.5% male, 60.6% ACVS) has a mean age of 70.9±13.6 yrs. Clinicians ordered 1190 CTAs (baseline) for these patients (60%). Where CTA was ordered, 65% of patients (n=768) were diagnosed as ACVS. To evaluate our CPR, predicted probabilities of ACVS were computed using the ED referral data. Patients with probabilities greater than the decision threshold who presented with at least one focal neurological deficit clinically symptomatic of ACVS were flagged as those who would have received a CTA. Our CPR would have ordered 1208 CTAs (vs. 1190 baseline). Where CTA would have been ordered, 74% of patients (n=893) had an ACVS diagnosis. This is a significantly improved performance over baseline (sens 74.5% vs. 64.1%, p<0.001; spec 59.6% vs. 45.9%, p<0.001). Specifically, the CPR would have ordered an additional 18 CTAs over the 2-yr period, while imaging 125 more ACVS patients and 107 fewer non-ACVS patients.
Conclusion: Using ED physician referral data, our CPR demonstrates significantly higher sensitivity and specificity for CTA imaging of ACVS patients than baseline CTA utilization. Moreover, our CPR would assist ED physicians to apply and practice the Canadian stroke best practice guidelines. ED physician use of our CPR would increase the number of ACVS patients receiving CTA imaging before ED discharge (rather than later at TIA clinics), and ultimately reduce the burden of false-positives on radiological departments.
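The paired comparison reported above (CPR vs. baseline CTA ordering on the same patients) uses McNemar's test, which depends only on the discordant pairs. A minimal Python sketch, with illustrative counts rather than the study's data:

```python
# Hedged sketch of McNemar's test for paired classifiers applied to the
# same patients. b = pairs correct under rule A only, c = correct under
# rule B only; concordant pairs drop out. Counts are illustrative.
import math

def mcnemar(b, c):
    """McNemar chi-square with continuity correction and its p-value
    (chi-square survival function with 1 degree of freedom)."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    p = math.erfc(math.sqrt(chi2 / 2))  # P(chi2_1 > chi2)
    return chi2, p

chi2, p = mcnemar(40, 10)  # illustrative discordant-pair counts
print(round(chi2, 2), p < 0.001)
```

With 40 vs. 10 discordant pairs the statistic is (|40−10|−1)²/50 = 16.82, well past the 1-df significance threshold, mirroring the p<0.001 comparisons quoted in the abstract.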


1979 ◽  
Vol 19 (3) ◽  
pp. 180-185 ◽  
Author(s):  
N. G. Flanagan ◽  
G. K. Lochridge ◽  
J.G. Henry ◽  
A. J. Hadlow ◽  
P. A. Hamer

A field study was carried out using 131 volunteers in an attempt to relate alcohol consumption at 12 social functions to actual blood alcohol levels under reasonably controlled conditions. Food, taken at 7 of these functions, caused an unpredictable delay in alcohol absorption, and some subjects had blood alcohol levels approaching recently defined ‘high risk’ levels. Better correlation was found at those functions without food intake, but again there was considerable individual variation. In 36 subjects, samples were taken on the following morning. About 12 per cent showed significantly raised levels, but all were under the legal limit for driving. The authors are concerned that factors in addition to the alcohol level should be considered before a driver is placed in the ‘high risk’ category.


Blood ◽  
2004 ◽  
Vol 104 (11) ◽  
pp. 1052-1052
Author(s):  
Carolyn J. Owen ◽  
Steve Doucette ◽  
Philip S. Wells

Background: The diagnosis of DVT can be made by determining pretest probability of disease and using this information in combination with D-dimer (DD) testing and ultrasound imaging. A number of studies have evaluated the use of clinical probability but this literature has not been summarized. Purpose: To systematically review trials that evaluated DVT prevalence using clinical prediction rules either with or without DD for the diagnosis of DVT. Data Sources: English and French language studies were identified from a MEDLINE search from 1990 to March 2004 and were supplemented by a review of all relevant bibliographies. Study Selection: Prospective management studies of symptomatic outpatients with suspected DVT in which patients were followed for a minimum of 3 months were selected. Clinical prediction rules had to be employed prior to DD and diagnostic tests. Studies were excluded if patients with a history of prior DVT were enrolled or if insufficient information was presented to calculate the prevalence of DVT for each of the 3 clinical probability estimates (low, moderate and high risk). Data Extraction: Two reviewers assessed each study for inclusion/exclusion criteria and collected data on prevalence and on sensitivity, specificity and likelihood ratios of DD in each of the 3 clinical probability estimates (low, moderate and high risk). Data Synthesis: 14 management studies involving a clinical prediction model in the diagnosis of DVT in over 8000 patients were included, of which 11 utilized DD in the diagnostic algorithm. All studies employed the same clinical prediction rule. The inverse variance weighted average prevalence of DVT in the low, moderate and high probability subgroups were 4.9% (95% CI= 4.2% to 5.7%), 17.4% (95% CI= 16.2% to 18.8%), and 53.6% (95% CI= 51.1% to 56.2%), respectively. The overall weighted prevalence was 18.3% (95% CI= 17.4% to 19.2%). The sensitivity of DD for the diagnosis of DVT in the low, moderate and high probability subgroups were 90.4% (95% CI= 84.7% to 94.2%), 92.0% (95% CI= 89.1% to 94.2%), and 93.6% (95% CI= 91.2% to 94.3%); and the specificities were 69.9% (95% CI= 68.0% to 71.8%), 52.4% (95% CI= 49.8% to 55.0%), and 43.2% (95% CI= 38.8% to 47.6%), respectively. The Mantel-Haenszel pooled estimates for diagnostic odds ratios (DOR) were 17.4 (95% CI= 10.4 to 29.1), 10.2 (95% CI= 7.1 to 14.6), and 10.1 (95% CI= 6.9 to 14.9) in the low, moderate and high groups respectively. Conclusion: Accurate estimates of the prevalence of DVT can be achieved using the same clinical prediction rule. Using this rule, it is unlikely that low probability patients have a DVT probability of more than 5%. Specificity of the DD seems to have clinically relevant differences depending on pretest probability, but the DORs (which incorporate sensitivity and specificity) are similar. The data suggest that DVT can be excluded if patients are low probability even when DDs of lower sensitivity are employed, and that DD testing has lower utility in high probability patients since false positives are common.
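The pooled prevalences above are inverse-variance weighted averages. A minimal Python sketch of that pooling, using the binomial variance p(1−p)/n as the per-study variance and illustrative (events, n) pairs rather than the included studies' data:

```python
# Hedged sketch of inverse-variance weighting for pooling prevalences
# across studies: each study's prevalence is weighted by the reciprocal
# of its binomial variance p(1-p)/n, so larger and more precise studies
# dominate the pooled estimate. The (events, n) pairs are illustrative.

def pooled_prevalence(studies):
    """studies: iterable of (events, n) pairs; returns the
    inverse-variance weighted mean prevalence."""
    weights, weighted_ps = [], []
    for events, n in studies:
        p = events / n
        variance = p * (1 - p) / n
        w = 1 / variance
        weights.append(w)
        weighted_ps.append(w * p)
    return sum(weighted_ps) / sum(weights)

print(round(pooled_prevalence([(10, 200), (25, 400), (8, 150)]), 4))
```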


Blood ◽  
2012 ◽  
Vol 120 (21) ◽  
pp. 394-394
Author(s):  
Martha L Louzada ◽  
Gauruv Bose ◽  
Andrew Cheung ◽  
Benjamin H Chin-Yee ◽  
Simon Wells ◽  
...  

Background: Long-term low molecular weight heparin (LMWH) is the current standard for treatment of venous thromboembolism (VTE) in cancer patients. Whether treatment strategies should vary according to individual risk of VTE recurrence remains unknown. We have derived a clinical prediction rule that stratifies VTE recurrence risk in patients with cancer-associated VTE. The derivation model includes 4 independent predictors (sex, primary tumour site, stage and prior VTE). The score sum ranges between −3 and +3 points. Patients with a score ≤0 had a low risk (≤4.5%) of recurrence and patients with a score above 1 had a high risk (≥19%) of VTE recurrence. Subsequently, we applied and validated the rule in an independent set of 819 patients from 2 randomized controlled trials comparing LMWH to warfarin for VTE treatment in cancer patients. In the current study we aim to externally validate our clinical prediction rule in an independent population of patients with cancer-associated VTE followed at the thrombosis clinics of two tertiary Canadian centres. Methods: We conducted a retrospective cohort study of patients with cancer and VTE diagnosed and/or followed at the Thrombosis Clinic of the Victoria Hospital (London, Canada) from January 2006 to December 2010, and the Thrombosis Unit of the Ottawa Hospital (Ottawa, Canada) from January 2009 to December 2011. We included data from adult patients with active malignancy and objectively diagnosed acute pulmonary embolism (PE) or deep venous thrombosis (DVT) of the lower extremity (above knee), upper extremity and neck veins, or unusual site thrombosis. The primary outcome measure was VTE recurrence during the first six months of anticoagulation. Results: 353 patients fulfilled our inclusion criteria and were included in the study. There were 149 males, and the overall population had a median age of 64 years (range: 18–95). One hundred and twenty-three patients had lower extremity DVT, 93 had PE and 57 had both. The remaining 80 patients had either upper extremity/neck DVT (n=55) or unusual site thrombosis (n=25). 77 patients had a prior history of VTE. The most common primary tumour site was gastrointestinal, followed by the lung. Of the 304 patients with solid tumours, 230 (75.7%) had TNM stage greater than I. Two hundred and ninety-three (83.0%) patients were treated with long-term LMWH only and 60 (17.0%) with warfarin (VKA). VTE recurrence occurred in 44 of 353 patients (12.4%). When we evaluated VTE recurrence risk per site, there was no significant difference: London 13 of 90 and Ottawa 31 of 263 [RR=1.23 (95% CI 0.671 to 2.237; p=0.507)]. In addition, there was no significant benefit of LMWH (37 of 293) over VKA (7 of 60) in the risk of recurrence [RR=0.92 (95% CI 0.433 to 1.973; p=0.8379)]. When we applied our clinical prediction rule (Table 1) to the entire study population, recurrent VTE occurred in 12 of 204 (5.8%) patients stratified as low risk and in 32 of 149 (21.4%) patients stratified as high risk (Table 2). Conclusions: Our prediction rule has now been adequately validated for use in prospective treatment trials. Future trials evaluating novel treatment strategies for high-risk patients are warranted. Disclosures: No relevant conflicts of interest to declare.
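The rule described above sums points for four predictors into a −3 to +3 score, with ≤0 classed as low risk. The point values in this Python sketch (female sex +1, lung tumour +1, prior VTE +1, breast tumour −1, TNM stage I −2) are assumptions chosen to reproduce the stated −3 to +3 range, not necessarily the published weights; the two-group cut-off follows the low/high stratification applied in this validation.

```python
# Hedged sketch of a cancer-associated VTE recurrence score built from
# the abstract's four predictors (sex, primary tumour site, stage,
# prior VTE). The point values below are illustrative assumptions that
# span the stated -3 to +3 range; consult the published rule for the
# actual weights.

def recurrence_score(female, lung, breast, stage_1, prior_vte):
    """Return a score in [-3, +3] under the assumed point values."""
    score = 0
    if female:
        score += 1   # assumed +1 for female sex
    if lung:
        score += 1   # assumed +1 for a higher-risk primary site
    if prior_vte:
        score += 1   # assumed +1 for prior VTE
    if breast:
        score -= 1   # assumed -1 for a lower-risk primary site
    if stage_1:
        score -= 2   # assumed -2 for TNM stage I
    return score

def risk_group(score):
    """Score <= 0 is low risk per the abstract; higher scores high risk."""
    return "low" if score <= 0 else "high"
```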


2012 ◽  
Vol 30 (15_suppl) ◽  
pp. 6098-6098
Author(s):  
Winston Wong ◽  
Joseph Cooper ◽  
Steve Richardson ◽  
Bruce A. Feinberg

Background: CareFirst BlueCross BlueShield (CFBCBS) insurance network partnered with Cardinal Health Specialty Solutions (CHSS) to develop a cancer care pathway for network physicians in 2008. The program included a recommendation for molecular diagnostic testing with the Oncotype DX assay for pts with early-stage estrogen receptor-positive breast cancer. Based on NCCN guidelines, the pathway suggested adjuvant chemotherapy for all pts with Oncotype DX Recurrence Scores (RS) in the high-risk category. We aimed to determine the RS risk distribution among pts who received Oncotype DX testing and to assess the patterns of care that followed. Methods: Using data from CFBCBS, CHSS proprietary claims software, and Genomic Health, we retrospectively identified a cohort of women with breast cancer who were treated on the CFBCBS clinical care pathways program from 8/2008 to 6/2011 and received Oncotype DX testing. We determined the number of pts with an RS value in the low- (RS <18), intermediate- (RS 18-30), and high-risk (RS ≥31) groups, along with the number of pts who subsequently received chemotherapy in each category. Results: Of 1174 women who received Oncotype DX testing, 53% of pts were in the low-, 35% in the intermediate-, and 12% in the high-risk groups. Five percent of low-, 41% of intermediate-, and 74% of high-risk pts were treated with chemotherapy. Twenty-six percent of pts in the high-risk group did not receive chemotherapy. Conclusions: The proportionate use of chemotherapy in the low- and intermediate-risk groups was as expected based on adjuvant chemotherapy guidelines; however, the underuse of chemotherapy in 26% of high-risk pts was an unexpected finding. Further study is needed to determine: (1) why physicians avoided chemotherapy in 26% of high-risk pts; (2) the overall number of appropriate pts who underwent Oncotype DX testing; and (3) the tumor characteristics that may have driven the underutilization of chemotherapy in the high-risk population.
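The RS bands used above are explicit in the abstract (low <18, intermediate 18–30, high ≥31), so the categorization can be sketched directly:

```python
# Direct sketch of the Recurrence Score (RS) risk bands stated in the
# abstract: low < 18, intermediate 18-30, high >= 31.

def rs_risk_group(rs):
    """Map an Oncotype DX Recurrence Score to its risk band."""
    if rs < 18:
        return "low"
    elif rs <= 30:
        return "intermediate"
    return "high"

print([rs_risk_group(rs) for rs in (10, 18, 30, 31)])
# → ['low', 'intermediate', 'intermediate', 'high']
```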


Author(s):  
Elisa Pizzolato ◽  
Marco Ulla ◽  
Claudia Galluzzo ◽  
Manuela Lucchiari ◽  
Tilde Manetta ◽  
...  

Sepsis, severe sepsis and septic shock are among the most common conditions handled in the emergency department (ED). According to the new Sepsis Guidelines, early diagnosis and treatment are the keys to improving survival. Plasma C-reactive protein (CRP) and procalcitonin (PCT) levels, when associated with documented or suspected infection, are now part of the definitions of sepsis. Blood culture is the gold standard method for detecting microorganisms, but it takes too long to yield results. Sensitive biomarkers are required for early diagnosis and as indexes of sepsis prognosis. CRP is one of the acute-phase proteins synthesized by the liver: it has great sensitivity but very poor specificity for bacterial infections. Moreover, the evolution of sepsis does not correlate with changes in plasma CRP. In recent years PCT has been widely used for the differential diagnosis of sepsis because of its close correlation with infections, but it still retains some limitations and false positivity (such as in multiple trauma and burns). Soluble CD14 subtype (sCD14-ST), also known as presepsin, is a novel and promising biomarker that has been shown to increase significantly in patients with sepsis compared with the healthy population. Studies have pointed out the capability of this biomarker for diagnosing sepsis, assessing the severity of the disease and providing a prognostic evaluation of patient outcome. In this mini-review we focus mainly on presepsin, evaluating its diagnostic and prognostic roles in patients presenting to the ED with systemic inflammatory response syndrome (SIRS), suspected sepsis or septic shock.


Author(s):  
Basavaraj S. Mannapur ◽  
Bhagyalaxmi S. Sidenur ◽  
Ashok S. Dorle

Background: Diabetes is considered a global emergency: a person dies from diabetes every 6 seconds and 1 in 11 adults has diabetes. Identification of individuals who are at risk is necessary to prevent diabetes in India. The Indian Diabetes Risk Score (IDRS) can also help detect people at risk of prediabetes. The objectives of the study were to estimate the prevalence of diabetes mellitus among those aged >20 years in the urban field practice area of S.N. Medical College, Bagalkot, and to identify high-risk subjects using the IDRS. Methods: A cross-sectional study was done in the urban field practice area of S.N. Medical College among adults >20 years of age with a sample size of 207. Systematic random sampling was used to select the subjects. Data were collected using a standardised questionnaire covering the socio-demographic profile, and a standard glucometer was used to measure random blood glucose for all participants. IDRS was used to ascertain the risk of developing diabetes. Data were analysed using Pearson’s chi-square test and Fisher’s exact test. Results: The overall prevalence of diabetes was 14.1%. Among 206 subjects, 4.8% were in the low-risk category, while 39.6% and 55.1% were in the moderate- and high-risk categories respectively. A total of 11 subjects were newly diagnosed in our study; 10 were in the high-risk category and 1 in the low-risk category. Sensitivity of IDRS was 90%, specificity 50%, positive predictive value 43.8% and negative predictive value 96.74%. Conclusions: This study demonstrates the usefulness of the simplified Indian Diabetes Risk Score for identifying high-risk diabetic subjects in the community. It can be used routinely in community-based screening to find high-risk people for diabetes so that proper intervention can be done to reduce the burden of the disease.
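The IDRS screening metrics quoted above all derive from a 2×2 confusion table of test result against diabetes status. A minimal Python sketch with illustrative counts (not the study's table):

```python
# Sketch of the standard screening metrics (sensitivity, specificity,
# PPV, NPV) computed from a 2x2 confusion table. The tp/fp/fn/tn
# counts below are illustrative, not the study's data.

def screening_metrics(tp, fp, fn, tn):
    """tp/fp/fn/tn: true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),  # diseased correctly flagged
        "specificity": tn / (tn + fp),  # healthy correctly cleared
        "ppv": tp / (tp + fp),          # flagged who are diseased
        "npv": tn / (tn + fn),          # cleared who are healthy
    }

m = screening_metrics(tp=9, fp=100, fn=1, tn=96)
print({k: round(v, 3) for k, v in m.items()})
```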


2020 ◽  
Author(s):  
Rabinder Kumar Prasad ◽  
Rosy Sarmah ◽  
Subrata Chakraborty

The novel coronavirus (COVID-19) incidence in India is currently rising exponentially, with apparent spatial variation in growth rate and doubling time. We classify the states into five clusters, ranging from low- to high-risk, and identify how the different states moved from one cluster to another from the onset of the first case on 30th January 2020 until 15th September 2020. We cluster the Indian states into 5 groups using incrementalKMN clustering [1]. We observe and comment on the changing composition of the clusters from before lockdown, through lockdown, and the various unlock phases.
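A plain k-means pass over a one-dimensional risk feature illustrates the grouping into 5 clusters. Note this is ordinary Lloyd-style k-means, not the incrementalKMN variant the authors cite, and the growth-rate values are illustrative:

```python
# Hedged sketch: Lloyd-style k-means on a scalar feature (e.g. a
# state's case growth rate), grouping into k=5 risk clusters. This is
# NOT the incrementalKMN algorithm used in the paper, only a baseline
# illustration of the clustering step; the data are illustrative.
import random

def kmeans_1d(values, k=5, iters=50, seed=0):
    random.seed(seed)
    centers = random.sample(values, k)  # initial centers drawn from the data
    for _ in range(iters):
        # assignment step: each value joins its nearest center
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[nearest].append(v)
        # update step: move each center to its cluster mean
        # (keep the old center if the cluster emptied)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

growth_rates = [0.1, 0.12, 0.5, 0.55, 1.0, 1.1, 2.0, 2.2, 3.5, 3.6]
centers, clusters = kmeans_1d(growth_rates, k=5)
print(sorted(round(c, 2) for c in centers))
```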

