Feasibility of Withholding Antibiotics in Selected Febrile Neutropenic Cancer Patients

2005 ◽  
Vol 23 (30) ◽  
pp. 7437-7444 ◽  
Author(s):  
Claudi Oude Nijhuis ◽  
Willem A. Kamps ◽  
Simon M.G. Daenen ◽  
Jourik A. Gietema ◽  
Winette T.A. van der Graaf ◽  
...  

Purpose To investigate the feasibility of withholding antibiotics and early discharge for patients with chemotherapy-induced neutropenia and fever at low risk of bacterial infection according to a new risk assessment model. Patients and Methods Outpatients with febrile neutropenia were allocated to one of three groups by a risk assessment model combining objective clinical parameters and plasma interleukin-8 level. Patients with signs of a bacterial infection and/or abnormal vital signs indicating sepsis were considered high risk. Based on their interleukin-8 level, the remaining patients were allocated to low or medium risk of bacterial infection. Medium-risk and high-risk patients received standard antibiotic therapy, whereas low-risk patients did not receive antibiotics and were discharged from hospital after 12 hours of observation. The end point was the feasibility of the treatment protocol. Results Of 196 assessable episodes, 76 (39%) were classified as high risk, 84 (43%) as medium risk, and 36 (18%) as low risk. There were no treatment failures in the low-risk group (95% CI, 0% to 10%). The sensitivity of our risk assessment model was therefore 100% (95% CI, 90% to 100%); the specificity, positive predictive value, and negative predictive value were 21%, 13%, and 100%, respectively. Median duration of hospitalization was 3 days in the low-risk group versus 7 days in the medium- and high-risk groups (P < .0001). The experimental treatment protocol yielded a net saving of €471 (US $572) for every potentially low-risk patient. Conclusion This risk assessment model appears to identify febrile neutropenic patients at low risk of bacterial infection. Antibiotics can be withheld in well-defined neutropenic patients with fever.
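The operating characteristics above follow from a standard 2×2 confusion matrix. A minimal Python sketch is shown below; the abstract does not publish the full 2×2 table, so the split into 21 infections and 139 uninfected but treated episodes is an illustrative reconstruction chosen to be consistent with the 196 episodes and the reported metrics.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)   # infections correctly flagged medium/high risk
    specificity = tn / (tn + fp)   # uninfected episodes correctly classified low risk
    ppv = tp / (tp + fp)           # flagged episodes that were truly infections
    npv = tn / (tn + fn)           # low-risk episodes that were truly uninfected
    return sensitivity, specificity, ppv, npv

# Illustrative counts (assumed, not reported): 21 bacterial infections, all
# classified medium/high risk; 36 low-risk episodes, none infected; 139
# uninfected episodes nonetheless treated.  Total: 196 episodes.
sens, spec, ppv, npv = diagnostic_metrics(tp=21, fp=139, fn=0, tn=36)
print(sens, round(spec, 2), round(ppv, 2), npv)  # 1.0 0.21 0.13 1.0
```

With these assumed counts the sketch reproduces the quoted 100% sensitivity, 21% specificity, 13% PPV, and 100% NPV.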

2018 ◽  
Vol 17 (5) ◽  
pp. 0-10
Author(s):  
Andrew J. Kruger ◽  
Fasika Aberra ◽  
Sylvester M. Black ◽  
Alice Hinton ◽  
James Hanje ◽  
...  

Introduction and aim. Hepatic encephalopathy (HE) is a common complication in patients with cirrhosis and is associated with an increased healthcare burden. Our aim was to study independent predictors of 30-day readmission and develop a readmission risk model in patients with HE. Secondary aims included studying readmission rates, cost, and the impact of readmission on mortality. Material and methods. We utilized the 2013 Nationwide Readmission Database (NRD) for hospitalized patients with HE. A risk assessment model based on index hospitalization variables for predicting 30-day readmission was developed using multivariate logistic regression and validated with the 2014 NRD. Patients were stratified into Low Risk and High Risk groups. Cox regression models were fit to identify predictors of calendar-year mortality. Results. Of 24,473 cirrhosis patients hospitalized with HE, 32.4% were readmitted within 30 days. Predictors of readmission included presence of ascites (OR: 1.19; 95% CI: 1.06-1.33), receiving paracentesis (OR: 1.43; 95% CI: 1.26-1.62), and acute kidney injury (OR: 1.11; 95% CI: 1.00-1.22). Our validated model stratified patients into Low Risk and High Risk of 30-day readmission (29% and 40%, respectively). The cost of the first readmission was higher than that of the index admission in the 30-day readmission cohort ($14,198 vs. $10,386; p-value < 0.001). Thirty-day readmission was the strongest predictor of calendar-year mortality (HR: 4.03; 95% CI: 3.49-4.65). Conclusions. Nearly one-third of patients with HE were readmitted within 30 days, and early readmission adversely impacted healthcare utilization and calendar-year mortality. With our proposed simple risk assessment model, patients at high risk for early readmission can be identified to potentially avert poor outcomes.
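To illustrate how odds ratios like those above combine into an individual risk estimate, here is a hedged Python sketch. The intercept (anchored to the Low Risk group's 29% readmission rate) and the multiplicative combination of factors on the odds scale are simplifying assumptions for illustration; they are not the published model coefficients.

```python
import math

# Reported odds ratios for 30-day readmission (from the abstract)
ODDS_RATIOS = {"ascites": 1.19, "paracentesis": 1.43, "acute_kidney_injury": 1.11}
BASELINE_PROB = 0.29   # assumed baseline: the Low Risk group's readmission rate

def readmission_probability(**factors):
    """Combine the odds ratios on the log-odds scale for the factors present."""
    log_odds = math.log(BASELINE_PROB / (1 - BASELINE_PROB))
    for name, present in factors.items():
        if present:
            log_odds += math.log(ODDS_RATIOS[name])
    return 1 / (1 + math.exp(-log_odds))

# A patient with ascites who received a paracentesis lands at roughly 41%.
p = readmission_probability(ascites=True, paracentesis=True)
```

Under these assumptions, stacking two of the reported risk factors moves a baseline 29% patient into the neighborhood of the High Risk group's 40% readmission rate.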


Blood ◽  
2019 ◽  
Vol 134 (Supplement_1) ◽  
pp. 3385-3385
Author(s):  
Mia Djulbegovic ◽  
Kevin Chen ◽  
Soundari Sureshanand ◽  
Sarwat Chaudhry

Background: Venous thromboembolism (VTE) is a common cause of morbidity and mortality in the United States. Annually, up to 1 in 120 people develop VTE, approximating the incidence of stroke. Given that hospitalization and acute medical illness increase the risk of VTE, hospital-associated VTE represents a preventable cause of morbidity and mortality. Accordingly, accreditation and regulatory agencies endorse inpatient pharmacologic VTE prophylaxis (PPX) as a quality measure. In order to raise rates of PPX prescribing, many health systems have adopted a default approach to electronic ordering, in which clinicians must "opt-out" of PPX prescription. However, this strategy may cause medical overuse and avoidable harms, which has prompted the American Society of Hematology (ASH) to recommend a risk-adapted approach to PPX. One risk model endorsed by ASH is the IMPROVE-VTE risk assessment model, which can identify patients who are at low risk for VTE and therefore may not warrant pharmacologic PPX. We therefore sought to compare the actual practice of PPX prescribing to the guideline-recommended strategy according to the IMPROVE-VTE model in a large, contemporary population of medical inpatients. Methods: In this observational study, we used electronic health record data to identify adult medical inpatients hospitalized on general medical and subspecialty services at Yale-New Haven Hospital from 1/1/14-12/31/18. We excluded patients who were pregnant, admitted for VTE, taking full dose anticoagulation on admission, admitted for bleeding, or had a platelet count of < 50,000/µL. For each patient, we calculated the IMPROVE-VTE score using the previously validated model weights: 3 points for a prior history of VTE; 2 points for known thrombophilia, lower limb paralysis, or active cancer; 1 point for immobilization, admission to the intensive care unit, or age ≥ 60 years. 
For each component other than age, we used ICD-9 and ICD-10 codes that were billed either prior to or upon admission to determine the presence of these risk factors. In order to simulate the decision to initiate PPX on hospital admission, we calculated each patient's IMPROVE-VTE score at the time of admission. In accordance with the ASH guidelines, we used an IMPROVE-VTE score of <2 to differentiate patients at low risk of hospital-associated VTE from those at high risk. We used inpatient medication order history data to determine receipt of pharmacologic PPX. We used χ2 testing to compare the relative frequency of PPX prescribing on admission between patients at low and high risk for VTE. Results: We identified 135,288 medical inpatients during the study period, of whom 99,380 met inclusion criteria. The average age was 63.5 years (standard deviation, 18 years); 51% of patients were female; 68% of patients were white. Of all the included patients, 81% received pharmacologic prophylaxis; of these patients, 78% received unfractionated heparin subcutaneously and 22% received low molecular weight heparin subcutaneously. Among all hospitalized patients, 78% had an IMPROVE-VTE score of <2 (32% had a score of 0 and 46% had a score of 1). Among these patients at low risk of hospital-associated VTE, 81% received pharmacologic PPX. Differences in prophylaxis rates between patients at low vs high risk of VTE were statistically significant (p<0.001). Conclusion: In this contemporary cohort of adult medical inpatients, >80% of patients who were at low risk of hospital-associated VTE received pharmacologic PPX, representing a group in whom PPX may be unnecessary. Using a risk-adapted approach such as the IMPROVE-VTE risk assessment model, rather than default PPX ordering, may reduce medical overuse and avoidable harms. Disclosures Chaudhry: CVS State of CT Clinical Pharmacy Program: Other: Paid Reviewer for CVS State of CT Clinical Pharmacy Program.
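The scoring described above is a simple weighted sum; a minimal Python sketch follows. The field names are illustrative, while the weights and the <2 low-risk cutoff are as stated in the text.

```python
# IMPROVE-VTE model weights as described in the Methods above.
IMPROVE_VTE_WEIGHTS = {
    "prior_vte": 3,
    "known_thrombophilia": 2,
    "lower_limb_paralysis": 2,
    "active_cancer": 2,
    "immobilization": 1,
    "icu_admission": 1,
    "age_ge_60": 1,
}

def improve_vte_score(risk_factors):
    """Sum the weights of the risk factors present (a set of factor names)."""
    return sum(IMPROVE_VTE_WEIGHTS[f] for f in risk_factors)

def is_low_risk(risk_factors):
    """Cutoff used in the study: a score < 2 marks low risk of hospital-associated VTE."""
    return improve_vte_score(risk_factors) < 2

# A 65-year-old with no other factors scores 1 (low risk);
# adding a prior history of VTE raises the score to 4 (high risk).
```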


2021 ◽  
Vol 13 (2) ◽  
pp. 826
Author(s):  
Meiling Zhou ◽  
Xiuli Feng ◽  
Kaikai Liu ◽  
Chi Zhang ◽  
Lijian Xie ◽  
...  

Influenced by climate change, extreme weather events occur frequently and bring huge impacts to urban areas, including urban waterlogging. Conducting risk assessments of urban waterlogging is a critical step to diagnose problems, improve infrastructure, and achieve sustainable development in the face of extreme weather. This study takes Ningbo, a typical coastal city in the Yangtze River Delta, as an example to conduct a risk assessment of urban waterlogging with high-resolution remote sensing images and high-precision digital elevation models, and to further analyze the spatial distribution characteristics of waterlogging risk. Results indicate that waterlogging risk in the city proper of Ningbo is mainly low risk, accounting for 36.9%. The higher-risk and medium-risk areas have the same proportion, each accounting for 18.7%. They are followed by the lower-risk and high-risk areas, accounting for 15.5% and 9.6%, respectively. In terms of space, waterlogging risk in the city proper of Ningbo is high in the south and low in the north. The high-risk area is mainly located to the west of Jiangdong district and in the middle of Haishu district. The low-risk area is mainly distributed in the north of Jiangbei district. These results are consistent with the historical record of waterlogging in Ningbo, proving the effectiveness of the risk assessment model and providing an important reference for the government to prevent and mitigate waterlogging. The optimized risk assessment model is also of importance for waterlogging risk assessments in other coastal cities. Based on this model, the waterlogging risk of coastal cities can be quickly assessed in combination with local characteristics, which will help improve a city's capability to respond to waterlogging disasters and reduce socio-economic losses.


2021 ◽  
Author(s):  
Rossella Murtas ◽  
Nuccia Morici ◽  
Chiara Cogliati ◽  
Massimo Puoti ◽  
Barbara Omazzi ◽  
...  

BACKGROUND The coronavirus disease 2019 (COVID-19) pandemic has placed a huge strain on health care systems worldwide. The metropolitan area of Milan, Italy was one of the hardest-hit areas in the world. OBJECTIVE Robust risk prediction models are needed to stratify individual patient risk for public health purposes. METHODS Two predictive algorithms were implemented to estimate the probability of being a COVID-19 patient and the risk of being hospitalized. The predictive model for COVID-19 positivity was developed in 61,956 symptomatic patients, whereas the model for COVID-19 hospitalization was developed in 36,834 COVID-19-positive patients. Exposures considered were age, gender, comorbidities, and symptoms associated with COVID-19 (vomiting, cough, fever, diarrhoea, myalgia, asthenia, headache, anosmia, ageusia, and dyspnoea). RESULTS The predictive models showed a good fit for predicting COVID-19 disease [AUC 72.6% (95% CI 71.6%-73.5%)] and hospitalization [AUC 79.8% (95% CI 78.6%-81%)]. Using these results, 118,804 patients with COVID-19 from October 25 to December 11, 2020 were stratified into low, medium, and high risk for COVID-19 severity. Among the overall population, 67,030 (56%) were classified as low risk, 43,886 (37%) as medium risk, and 7,888 (7%) as high risk; 89% of the overall population were assisted at home, 9% were hospitalized, and 2% died. Among those assisted at home, most (60%) were classified as low risk, whereas only 4% were classified as high risk. According to ordinal logistic regression, the OR of being hospitalised or dying was 5.0 (95% CI 4.6-5.4) in high-risk patients and 2.7 (95% CI 2.6-2.9) in medium-risk patients, as compared to low-risk patients. CONCLUSIONS A simple monitoring system based on primary care datasets, with linkage to COVID-19 testing results, hospital admission data, and death records, may assist in the proper planning and allocation of patients and resources during the ongoing COVID-19 pandemic.


2016 ◽  
Vol 116 (09) ◽  
pp. 530-536 ◽  
Author(s):  
David J. Rosenberg ◽  
Anne Press ◽  
Joanna Fishbein ◽  
Martin Lesser ◽  
Lauren McCullagh ◽  
...  

Summary The IMPROVE Bleed Risk Assessment Model (RAM) remains the only bleed RAM for hospitalised medical patients, using 11 clinical and laboratory factors. The aim of our study was to externally validate the IMPROVE Bleed RAM. A retrospective chart review was conducted between October 1, 2012 and July 31, 2014. We applied the point scoring system to compute risk scores for each patient in the validation sample. We then dichotomised the patients into those with a score <7 (low risk) vs ≥7 (high risk), as outlined in the original study, and compared the rates of any bleed, non-major bleed, and major bleed (MB). Among the 12,082 subjects, there was an overall 2.6% rate of any bleed within 14 days of admission. There was a 2.12% rate of any bleed in patients with a score of <7 and a 4.68% rate in those with a score of ≥7 [odds ratio (OR) 2.3 (95% CI 1.8-2.9), p<0.0001]. MB rates were 1.5% in patients with a score of <7 and 3.2% in patients with a score of ≥7 [OR 2.2 (95% CI 1.6-2.9), p<0.0001]. The area under the ROC curve was 0.63 for the validation sample. This study represents the largest external validation of a bleed RAM in a hospitalised, medically ill patient population. A cut-off score of 7 or above identified a group of patients at high risk for MB and any bleed. The IMPROVE Bleed RAM has the potential to allow more tailored approaches to thromboprophylaxis in medically ill hospitalised patients. Supplementary material for this article is available online at www.thrombosis-online.com.
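The reported odds ratios can be sanity-checked directly from the published event rates; a short Python sketch:

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def odds_ratio(p_high, p_low):
    """Odds ratio comparing the high-score group to the low-score group."""
    return odds(p_high) / odds(p_low)

or_any_bleed = odds_ratio(0.0468, 0.0212)    # any bleed: score >= 7 vs < 7
or_major_bleed = odds_ratio(0.032, 0.015)    # major bleed: score >= 7 vs < 7
print(round(or_any_bleed, 1), round(or_major_bleed, 1))  # 2.3 2.2
```

Both values match the abstract's unadjusted ORs of 2.3 and 2.2.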


Author(s):  
Jean Baptiste Ramampisendrahova ◽  
Andriamanantsialonina Andrianony ◽  
Aina Andrianina Vatosoa Rakotonarivo ◽  
Mamisoa Bodohasina Rasamoelina ◽  
Eric Andriantsoa ◽  
...  

The purpose of this research was to ascertain the prevalence of postoperative venous thromboembolism in the Department of Surgery at Anosiala University Hospital and to identify risk factors for developing postoperative venous thromboembolism using the Caprini Risk Assessment Model. This was a 22-month prospective cohort study conducted at Anosiala University Hospital from December 2017 to October 2019. It included all patients aged 18 years or older operated on, on an emergency or elective basis, by the Department of Surgery. The study included 662 participants. Within 30 days after surgery, the risk of venous thromboembolism was 0.3 percent. According to the overall Caprini score, 25.2 percent of patients were classified as having a low risk of venous thromboembolism, 25.2 percent a moderate risk, 29.5 percent a high risk, and 20.1 percent the highest risk. Patients in the highest-risk category (score ≥ 5) had a substantially increased chance of developing venous thromboembolism after surgery (p = 0.0007). Only major open surgery was associated with a statistically significant increase in postoperative venous thromboembolism (p = 0.028). Age ≥ 75 years, elective arthroplasty, and hip, pelvic, or leg fracture were not statistically significantly associated with postoperative venous thromboembolism (p > 0.05). Our findings indicate that the Caprini risk assessment model could be used successfully to prevent postoperative venous thromboembolism in surgical patients in Madagascar, since patients in the highest-risk category had a considerably increased chance of developing it.
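The Caprini banding used above can be sketched as a small lookup function. The abstract only states that the highest-risk band begins at a score of 5, so the lower band boundaries below follow the conventional Caprini categories and are an assumption:

```python
def caprini_category(score):
    """Map a Caprini score to a risk band (conventional boundaries assumed)."""
    if score <= 1:
        return "low"
    if score == 2:
        return "moderate"
    if score <= 4:
        return "high"
    return "highest"   # score >= 5: the band with significantly more VTE events

print(caprini_category(5))  # highest
```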


2012 ◽  
Vol 30 (4_suppl) ◽  
pp. 678-678 ◽  
Author(s):  
Ramon Salazar ◽  
Jan Willem de Waard ◽  
Bengt Glimelius ◽  
John Marshall ◽  
Joost Klaase ◽  
...  

678 Background: An 18-gene expression profile, ColoPrint, has been developed for identifying CC patients more likely to develop recurrent disease and who would be candidates for adjuvant chemotherapy. The gene signature was validated in in silico datasets and in independent patient cohorts of stage II and III patients. Uni- and multivariate analyses were performed on the pooled stage II patient set (n=320), which had a median follow-up of 70 months. ColoPrint identified two-thirds of the stage II patients (209/320) as low risk. The 3-year relapse-free survival was 94% for Low Risk patients and 79% for High Risk patients, with a HR of 2.74 (95% CI 1.54-4.88; p=0.006). Moreover, the profile stratified patients independently of ASCO clinical risk factors. Methods: A prospective trial using ColoPrint, PARSC (Prospective study for the Assessment of Recurrence risk in Stage II CC patients), has been initiated. The objectives are: (1) to validate the performance of ColoPrint in estimating the 3-year relapse rate in patients with stage II colon cancer; (2) to compare the risk assessment in stage II patients using the ColoPrint profile vs. a clinical risk assessment based on the investigator's assessment of risk and ASCO high-risk recommendations; (3) to investigate therapy as a potential confounding factor for ColoPrint results; and (4) to assess the performance of ColoPrint in estimating the 3-year relapse rate in patients with stage III colon cancer. Inclusion criteria: age ≥ 18 years, adenocarcinoma of the colon, stage II or III, no prior neo-adjuvant therapy, no synchronous tumors, fresh tumor sample available, and written informed consent. The treatment of the patient is at the discretion of the physician, adhering to National Comprehensive Cancer Network (NCCN)-approved regimens or a recognized alternative. Results: The trial started in September 2008, with currently 30 participating sites in 11 countries. Thus far, 288 eligible stage II and 251 stage III patients have been enrolled. 
Conclusions: The aim is to enroll 575 stage II patients to differentiate between the 3-year RFS predicted by ColoPrint and by clinical factors.


2013 ◽  
Vol 31 (15_suppl) ◽  
pp. 6596-6596
Author(s):  
Nelson Kohen ◽  
Ernesto Gil Deza ◽  
Natasha Gercovich ◽  
Eduardo L. Morgenfeld ◽  
Carlos Fernando Garcia Gerardi ◽  
...  

6596 Background: The oncological day hospital (ODH) at IOHM carries out 80 chemotherapy treatments per day with a staff of 6 certified oncology nurses. Human resources allocation in oncology has not been formally studied in relation to treatment risks. The objective of this paper is to present a risk assessment model for the rational allocation of human resources in the ODH using the KGD scale. Methods: The KGD scale was designed through a retrospective evaluation of more than 15,000 treatments (Tx). Between November 1 and December 1, 2012, the instrument was validated on all new patients (Pt) beginning Tx at IOHM. The KGD scale evaluates risk according to five Pt characteristics (elderly, polymedicated, without symptom control, neuropsychiatric problems, presence or absence of family members); four Tx characteristics (new drugs, complex protocol, high risk of acute toxicity, infrequently used); and the workplace context (new personnel, holiday absences, with or without close medical support). The KGD score was determined for each Tx and applied as follows: Low Risk (0-3 points): two nurses in the ODH, supervision at the patient's request, and the chemotherapy can be administered at the beginning or end of the workday; Intermediate Risk (4-5 points): three nurses in the ODH, mandatory supervision, and the treatment can take place at any time in the workday; High Risk (6 or more points): four nurses in the ODH, constant supervision, and the Tx must take place in the middle of the workday. Chemotherapy outcomes were observed. Results: One hundred and thirty patients were admitted. Sex: female 74 (59%), male 56 (41%); age: 49 years (range 22-87). Diagnosis: breast 40, colon 21, lung 16, ovary 11, lymphoma 11, testis 7, sarcoma 5, others 19. KGD risk assessment: Low Risk 25 pts (19%); Intermediate Risk 77 pts (59%); High Risk 28 pts (21%). There were no complications in any of the 312 chemotherapy treatments administered to this cohort. 
Conclusions: 1) The KGD scale has been shown to be a useful aid in treatment risk assessment. 2) Use of the KGD scale allows for efficient personnel allocation in the ODH according to Tx risk. 3) The academic qualifications of the nursing staff are essential to controlling this risk.
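The staffing rule in the Methods maps directly onto a small dispatch function; a sketch, with return values paraphrasing the text:

```python
def kgd_allocation(score):
    """Return (nurses on duty, supervision, allowed treatment slot) for a KGD score."""
    if score <= 3:          # Low Risk: 0-3 points
        return (2, "at patient's request", "beginning or end of workday")
    if score <= 5:          # Intermediate Risk: 4-5 points
        return (3, "mandatory", "any time in the workday")
    return (4, "constant", "middle of the workday")   # High Risk: 6+ points
```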


2019 ◽  
Vol 37 (4_suppl) ◽  
pp. 487-487 ◽  
Author(s):  
Jerome Galon ◽  
Fabienne Hermitte ◽  
Bernhard Mlecnik ◽  
Florence Marliot ◽  
Carlo Bruno Bifulco ◽  
...  

487 Background: Immunoscore Colon is an IVD test predicting the risk of relapse in early-stage colon cancer (CC) patients by measuring the host immune response at the tumor site. It is a risk-assessment tool providing independent prognostic value superior to the usual tumor risk parameters and is intended to be used as an adjunct to the TNM classification. Risk assessment is particularly important when deciding whether to propose adjuvant (adj.) treatment for stage (St) II CC patients. High-risk stage II patients, defined as those with poor prognostic features including T4, fewer than 12 lymph nodes examined, poor differentiation, VELIPI, or bowel obstruction/perforation, can be considered for adj. chemotherapy (CT). However, additional risk factors are needed to guide treatment decisions. Methods: A subgroup analysis was performed on the St II untreated patients (n = 1130) from the Immunoscore international validation study (Pagès et al., The Lancet 2018). The high-risk patients (with at least 1 clinico-pathological high-risk feature) were classified into 2 categories using pre-defined cutoffs, Low Immunoscore versus High Immunoscore, and their five-year time to recurrence (5Y TTR) was compared to the TTR of the low-risk patients (without any clinico-pathological high-risk feature). Results: Among the patients with high-risk features (n = 630), 438 (69.5%) had a High Immunoscore, with a corresponding 5Y TTR of 87.4% (95% CI 83.9-91.0), statistically similar (log-rank p > 0.42 unstratified; Wald p > 0.20 stratified by center) to the TTR of 89.1% (95% CI 86.1-92.1) observed for the 500 low-risk patients. Furthermore, the 5Y TTR for these patients was statistically similar to that of the St II patients with high-risk features and a High Immunoscore (n = 438) who received adj. CT (n = 162) (5Y TTR of 83.4% (95% CI 77.6-89.9)). Conclusions: These data show that despite the presence of high-risk features that usually trigger adj. treatment, when not treated with CT, a significant proportion of these patients (69.5%) have a recurrence risk similar to that of low-risk patients. The Immunoscore test could therefore be a useful tool for adj. treatment decisions in St II patients.


2013 ◽  
Vol 726-731 ◽  
pp. 1085-1088
Author(s):  
Xue Long Chen ◽  
Xiao Long Wang

A risk model to assess the environmental risk of wastewater from traditional Chinese medicine manufacturers was developed to cope with increasing pollution. Klebsiella planticola was selected as the indicator organism because of the sensitive response of its mass growth, its high correlation (r=0.989) with significance (P=0.001<0.01) across the range of wastewater concentrations, and the excellent goodness of fit of the fitted function (R²=1). The dose-effect relationship between the microbial indicator and the pollutants, which was analyzed and verified, was used to generate a fitting function: y = -0.945x^4 + 0.971x^3 + 0.314x^2 - 0.114x + 0.301. On this basis, risk levels were defined: No risk (0.2973 ≤ OD600 ≤ 0.3010), Low risk (0.3010 < OD600 < 0.4325, or 0.1505 < OD600 < 0.2973), Medium risk (0.4325 ≤ OD600 < 0.5640, or 0 < OD600 ≤ 0.1505), High risk (OD600 ≥ 0.5640, or OD600 ≤ 0.000). The sensitivity and precision of the risk assessment model are guaranteed by the characteristics of the microbial indicator.
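The fitted polynomial can be evaluated directly; a small Python sketch follows. Here x is the wastewater-concentration variable of the fitted function (its exact units are not specified in the abstract); as a sanity check, at x = 0 the prediction is the 0.301 baseline, which falls inside the stated no-risk band.

```python
def predicted_od600(x):
    """Evaluate the fitted dose-effect polynomial reported in the text."""
    return -0.945 * x**4 + 0.971 * x**3 + 0.314 * x**2 - 0.114 * x + 0.301

print(predicted_od600(0))  # 0.301, within the no-risk band (0.2973-0.3010)
```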

