Initial Experience in Predicting the Risk of Hospitalization of 496 Outpatients with COVID-19 Using a Telemedicine Risk Assessment Tool

Author(s):  
James B O'Keefe ◽  
Elizabeth J Tong ◽  
Thomas H Taylor ◽  
Ghazala D Datoo O'Keefe ◽  
David C Tong

Objective: To determine whether a risk prediction tool developed and implemented in March 2020 accurately predicts subsequent hospitalizations. Design: Retrospective cohort study; enrollment from March 24 to May 26, 2020, with follow-up calls until hospitalization or clinical improvement (final calls until June 19, 2020). Setting: Single-center telemedicine program managing outpatients from a large medical system in Atlanta, Georgia. Participants: 496 patients with laboratory-confirmed COVID-19 in isolation at home. Exclusion criteria included: (1) hospitalization prior to telemedicine program enrollment, and (2) immediate discharge with no follow-up calls due to resolution. Exposure: Acute COVID-19 illness. Main Outcomes and Measures: The outcome was hospitalization, measured as days to hospitalization. Survival analysis using Cox regression was used to determine factors associated with hospitalization. Results: The risk-assessment rubric assigned 496 outpatients to risk tiers as follows: Tier 1, 237 (47.8%); Tier 2, 185 (37.3%); Tier 3, 74 (14.9%). Subsequent hospitalizations numbered 3 (1.3%), 15 (8.1%), and 17 (23%) for Tiers 1-3, respectively. From a Cox regression model with age ≥ 60, gender, and self-reported obesity as covariates, the adjusted hazard ratios using Tier 1 as reference were: Tier 2, HR=3.74 (95% CI, 1.06-13.27; P=0.041); Tier 3, HR=10.87 (95% CI, 3.09-38.27; P<0.001). Tier was the strongest predictor of time to hospitalization. Conclusions and Relevance: A telemedicine risk assessment tool prospectively applied to an outpatient population with COVID-19 identified both low-risk and high-risk patients with better performance than individual risk factors alone. This approach may be appropriate for optimum allocation of resources.

2020 ◽  
Author(s):  
James B O'Keefe ◽  
Elizabeth J Tong ◽  
Thomas H Taylor Jr ◽  
Ghazala A Datoo O’Keefe ◽  
David C Tong

BACKGROUND Risk assessment of patients with acute COVID-19 in a telemedicine context is not well described. In settings of large numbers of patients, a risk assessment tool may guide resource allocation not only for patient care but also for maximum health care and public health benefit. OBJECTIVE The goal of this study was to determine whether a COVID-19 telemedicine risk assessment tool accurately predicts hospitalizations. METHODS We conducted a retrospective study of a COVID-19 telemedicine home monitoring program serving health care workers and the community in Atlanta, Georgia, with enrollment from March 24 to May 26, 2020; the final call range was from March 27 to June 19, 2020. All patients were assessed by medical providers using an institutional COVID-19 risk assessment tool designating patients as Tier 1 (low risk for hospitalization), Tier 2 (intermediate risk for hospitalization), or Tier 3 (high risk for hospitalization). Patients were followed with regular telephone calls to an endpoint of improvement or hospitalization. Using survival analysis by Cox regression with days to hospitalization as the metric, we analyzed the performance of the risk tiers and explored individual patient factors associated with risk of hospitalization. RESULTS Providers using the risk assessment rubric assigned 496 outpatients to tiers: Tier 1, 237 out of 496 (47.8%); Tier 2, 185 out of 496 (37.3%); and Tier 3, 74 out of 496 (14.9%). Subsequent hospitalizations numbered 3 out of 237 (1.3%) for Tier 1, 15 out of 185 (8.1%) for Tier 2, and 17 out of 74 (23%) for Tier 3. From a Cox regression model with age of 60 years or older, gender, and reported obesity as covariates, the adjusted hazard ratios for hospitalization using Tier 1 as reference were 3.74 (95% CI 1.06-13.27; P=.04) for Tier 2 and 10.87 (95% CI 3.09-38.27; P<.001) for Tier 3. CONCLUSIONS A telemedicine risk assessment tool prospectively applied to an outpatient population with COVID-19 identified populations with low, intermediate, and high risk of hospitalization.
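To illustrate the survival-analysis step described above, the sketch below fits a Cox proportional hazards model with tier indicators (Tier 1 as reference) plus age ≥60 years, gender, and obesity as covariates, using the lifelines package on synthetic data. The column names, data frame layout, and simulated event rates are assumptions for demonstration only, not the authors' code or data.

```python
# Illustrative sketch only (not the authors' code): Cox proportional hazards model
# for days to hospitalization with risk-tier indicators and the stated covariates.
# Column names and the synthetic data are assumptions for demonstration.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300

# Simulate tier assignment roughly matching the reported proportions.
tier = rng.choice([1, 2, 3], size=n, p=[0.48, 0.37, 0.15])
age_ge_60 = rng.integers(0, 2, n)
male = rng.integers(0, 2, n)
obesity = rng.integers(0, 2, n)

# Higher tiers are given a higher hospitalization hazard in this synthetic data.
hazard = 0.02 * np.where(tier == 1, 1, np.where(tier == 2, 3, 8))
time_to_hosp = rng.exponential(1 / hazard)
follow_up = rng.uniform(10, 30, n)            # censoring at clinical improvement
days = np.minimum(time_to_hosp, follow_up)
hospitalized = (time_to_hosp <= follow_up).astype(int)

df = pd.DataFrame({
    "days": days, "hospitalized": hospitalized,
    "tier_2": (tier == 2).astype(int), "tier_3": (tier == 3).astype(int),
    "age_ge_60": age_ge_60, "male": male, "obesity": obesity,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days", event_col="hospitalized")

# Adjusted hazard ratios (exp(coef)) with 95% CIs, analogous to the reported
# Tier 2 HR = 3.74 and Tier 3 HR = 10.87 versus Tier 1.
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])
```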


2020 ◽  
Author(s):  
Samaneh Asgari ◽  
Fatemeh Moosaie ◽  
Davood Khalili ◽  
Fereidoun Azizi ◽  
Farzad Hadaegh

Abstract Background: A high burden of chronic cardio-metabolic disease (CCD), including type 2 diabetes mellitus (T2DM), chronic kidney disease (CKD), and cardiovascular disease (CVD), has been reported in the Middle East and North Africa region. We aimed to externally validate a Europoid risk assessment tool designed by Alssema et al, comprising non-laboratory measures, for prediction of CCD in the Iranian population. Methods: The predictors included age, body mass index, waist circumference, use of antihypertensive medication, current smoking, and family history of cardiovascular disease and/or diabetes. For external validation of the model in the Tehran Lipid and Glucose Study (TLGS), the area under the curve (AUC) and the Hosmer-Lemeshow (HL) goodness-of-fit test were used to assess discrimination and calibration, respectively. Results: Among 1310 men and 1960 women aged 28-85 years, 29.5% and 47.4% experienced CCD during the 6- and 9-year follow-up, respectively. The model showed acceptable discrimination, with an AUC of 0.72 (95% CI: 0.69-0.75) for men and 0.73 (95% CI: 0.71-0.76) for women. The calibration of the model was good for both genders (minimum HL P=0.5). Considering separate outcomes, in men the AUC was highest for CKD (0.76; 95% CI: 0.72-0.79) and lowest for T2DM (0.65; 95% CI: 0.61-0.69). For women, the AUC was highest for CVD (0.82; 95% CI: 0.78-0.86) and lowest for T2DM (0.69; 95% CI: 0.66-0.73). The 9-year follow-up showed performance similar to that of the 6-year follow-up. Conclusion: This model showed acceptable discrimination and good calibration for risk prediction of CCD over short- and long-term follow-up in the Iranian population.


2020 ◽  
Author(s):  
Samaneh Asgari ◽  
Fatemeh Moosaie ◽  
Davood Khalili ◽  
Fereidoun Azizi ◽  
Farzad Hadaegh

Abstract Background: A high burden of chronic cardio-metabolic disorders, including type 2 diabetes mellitus (T2DM), chronic kidney disease (CKD), and cardiovascular disease (CVD), has been reported in the Middle East and North Africa region. We aimed to externally validate a non-laboratory risk assessment tool for prediction of chronic cardio-metabolic disorders in the Iranian population. Methods: The predictors included age, body mass index, waist circumference, use of antihypertensive medication, current smoking, and family history of cardiovascular disease and/or diabetes. For external validation of the model in the Tehran Lipid and Glucose Study (TLGS), the area under the curve (AUC) and the Hosmer-Lemeshow (HL) goodness-of-fit test were used to assess discrimination and calibration, respectively. Results: Among 1310 men and 1960 women aged 28-85 years, 29.5% and 47.4% experienced chronic cardio-metabolic disorders during the 6- and 9-year follow-up, respectively. The model showed acceptable discrimination, with an AUC of 0.72 (95% CI: 0.69-0.75) for men and 0.73 (95% CI: 0.71-0.76) for women. The calibration of the model was good for both genders (minimum HL P=0.5). Considering separate outcomes, in men the AUC was highest for CKD (0.76; 95% CI: 0.72-0.79) and lowest for T2DM (0.65; 95% CI: 0.61-0.69). For women, the AUC was highest for CVD (0.82; 95% CI: 0.78-0.86) and lowest for T2DM (0.69; 95% CI: 0.66-0.73). The 9-year follow-up showed performance similar to that of the 6-year follow-up. When Cox regression was used in place of multivariable logistic regression, the model's discrimination and calibration were reduced for prediction of chronic cardio-metabolic disorders, an effect that was most pronounced for prediction of incident CKD among women. Moreover, adding educational level and marital status did not improve the discrimination and calibration of the enhanced model. Conclusion: This model showed acceptable discrimination and good calibration for risk prediction of chronic cardio-metabolic disorders over short- and long-term follow-up in the Iranian population.
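For readers who want to see the two validation metrics used in this study in code, here is a minimal sketch on hypothetical predicted risks and outcomes rather than TLGS data: discrimination via the area under the ROC curve and calibration via a decile-based Hosmer-Lemeshow test. The hosmer_lemeshow helper is illustrative, not a library function.

```python
# Minimal sketch (assumed, not the authors' code) of the two external-validation
# metrics: discrimination via the area under the ROC curve and calibration via a
# decile-based Hosmer-Lemeshow chi-square test. Data here are hypothetical.
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

def hosmer_lemeshow(y_true, y_prob, groups=10):
    """Hosmer-Lemeshow goodness-of-fit statistic over risk deciles."""
    order = np.argsort(y_prob)
    y_true = np.asarray(y_true)[order]
    y_prob = np.asarray(y_prob)[order]
    statistic = 0.0
    for idx in np.array_split(np.arange(len(y_prob)), groups):
        observed = y_true[idx].sum()    # observed events in the decile
        expected = y_prob[idx].sum()    # expected events in the decile
        n = len(idx)
        statistic += (observed - expected) ** 2 / (expected * (1 - expected / n) + 1e-12)
    p_value = stats.chi2.sf(statistic, df=groups - 2)
    return statistic, p_value

# Hypothetical predicted risks and observed outcomes for illustration.
rng = np.random.default_rng(1)
y_prob = rng.uniform(0.05, 0.9, 500)
y_true = rng.binomial(1, y_prob)

print("AUC:", roc_auc_score(y_true, y_prob))           # discrimination
print("HL chi2, P:", hosmer_lemeshow(y_true, y_prob))  # calibration
```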


2017 ◽  
Vol 35 (15_suppl) ◽  
pp. e15135-e15135
Author(s):  
Laura W. Musselwhite ◽  
Thomas S. Redding ◽  
Kellie J. Sims ◽  
Meghan O'Leary ◽  
Elizabeth R. Hauser ◽  
...  

e15135 Background: Tailoring screening to colorectal cancer (CRC) risk may improve screening effectiveness. We applied the National Cancer Institute (NCI) CRC Risk Assessment Tool to estimate 5- and 10-year CRC risk in an average-risk Veterans cohort undergoing screening colonoscopy with follow-up. Methods: This was a prospective evaluation of predicted versus actual risk of CRC using the NCI CRC Risk Assessment Tool in male Veterans undergoing screening colonoscopy, with a median follow-up of 10 years. Family, medical, dietary, and physical activity histories were collected at enrollment and used to calculate absolute 5- and 10-year CRC risk and to compare expected with observed CRC risk by tertile. Sensitivity analyses were performed. Results: For 2,934 male Veterans with complete data (average age 62.4 years, 15% minorities), 1.3% (N=30) and 1.7% (N=50) were diagnosed with CRC within 5 and 10 years of survey completion, respectively. The area under the curve for predicting CRC was 0.69 (95% CI, 0.61-0.78) at 5 years and 0.67 (95% CI, 0.59-0.75) at 10 years. We calculated the sensitivity (0.60; 95% CI, 0.45-0.73), specificity (0.67; 95% CI, 0.65-0.69), positive predictive value (0.031; 95% CI, 0.02-0.04), and negative predictive value (0.99; 95% CI, 0.98-0.99). Conclusions: The NCI CRC Risk Assessment Tool was well calibrated at 5 years and overestimated CRC risk at 10 years, had modest discriminatory function, and had a high NPV in a cohort of ethnically diverse male Veterans. This tool reliably excludes 10-year CRC in low-scoring individuals and may inform patient-provider decision making when the benefit of screening is uncertain.
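The reported sensitivity, specificity, PPV, and NPV follow directly from a 2x2 cross-classification of predicted risk against observed CRC; the sketch below shows the arithmetic with hypothetical counts that are not the study's data.

```python
# Illustrative arithmetic only: how sensitivity, specificity, PPV, and NPV are
# derived from a 2x2 table of predicted risk (high vs low) against observed CRC.
# The counts below are hypothetical and are not the study's data.
def operating_characteristics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # cancers correctly flagged as high risk
    specificity = tn / (tn + fp)   # non-cancers correctly flagged as low risk
    ppv = tp / (tp + fp)           # probability of cancer given a high-risk score
    npv = tn / (tn + fn)           # probability of no cancer given a low-risk score
    return sensitivity, specificity, ppv, npv

# Example: 30 cancers and 2,904 non-cancers split at a hypothetical risk threshold.
print(operating_characteristics(tp=18, fp=958, fn=12, tn=1946))
```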


2021 ◽  
Vol 108 (Supplement_5) ◽  
Author(s):  
J Y Ming ◽  
M Holmes ◽  
P Pockney ◽  
J Gani

Abstract Introduction Multiple tools (NELA, P-POSSUM, ACS-NSQIP) are available to assess mortality risk in patients requiring emergency laparotomy (1–3), but they are time-consuming to perform and have had limited uptake in routine clinical practice in many countries (4). Simpler measures, including the psoas muscle to L3 vertebra (PM:L3) ratio (5,6), may be useful alternatives. This measure is quick to perform, requiring no special skills or equipment beyond basic CT viewing software. Method We analysed all patients in the Hunter Emergency Laparotomy Audit (HELA) database from January 2016 to December 2017. HELA is a retrospective review of all emergency laparotomies undertaken in a discrete area of NSW, Australia. Patients with an available abdominal CT were included (N = 500/562). A single axial CT slice at the L3 endplate level was analysed using ImageJ® software to measure the areas of L3 and the bilateral psoas muscles; this can be done with standard PACS software in routine practice. Result PM:L3 ratios in this cohort had a mean of 1.082 (95% CI 1.042–1.122; range 0.141–3.934). The PM:L3 ratio was significantly lower (P < 0.00001) in patients who did not survive beyond 30 days (mean 0.865 [95% CI 0.746–0.984]) or 90 days (mean 0.888 [95% CI 0.768–1.008]) than in patients who survived these periods (30-day mean 1.106 [95% CI 1.033–1.179]; 90-day mean 1.112 [95% CI 1.070–1.154]). These associations are similar to those obtained from established risk assessment models. Conclusion The PM:L3 ratio is a reliable, quick, and easy risk assessment tool for identifying high-risk patients undergoing emergency laparotomy. Take-home Message The PM:L3 ratio is a reliable, quick, and easy risk assessment tool for identifying high-risk patients undergoing emergency laparotomy. It is comparable to NELA, P-POSSUM, and ACS-NSQIP.
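As a rough illustration of the measurement and comparison described above, the sketch below computes a PM:L3 ratio from slice areas and compares hypothetical survivor and non-survivor groups with Welch's t-test; the numbers and the choice of test are assumptions, since the abstract does not state which statistical test produced the quoted P value.

```python
# Rough illustration (not the HELA analysis code): compute the PM:L3 ratio from
# slice areas and compare hypothetical survivor vs non-survivor groups.
# Measurements, group sizes, and the choice of Welch's t-test are assumptions.
import numpy as np
from scipy import stats

def pm_l3_ratio(left_psoas_area, right_psoas_area, l3_body_area):
    """Combined psoas muscle area divided by the L3 vertebral body area."""
    return (left_psoas_area + right_psoas_area) / l3_body_area

# Hypothetical areas (cm^2) measured on a single axial slice at the L3 endplate.
print(pm_l3_ratio(left_psoas_area=8.5, right_psoas_area=8.9, l3_body_area=15.7))

# Hypothetical cohort comparison of ratios by 30-day survival.
rng = np.random.default_rng(2)
survivors = rng.normal(1.10, 0.35, 450)
non_survivors = rng.normal(0.87, 0.30, 50)
t_stat, p_value = stats.ttest_ind(survivors, non_survivors, equal_var=False)
print(f"Welch t = {t_stat:.2f}, P = {p_value:.2g}")
```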


2019 ◽  
Vol 49 (3) ◽  
pp. 292-298 ◽  
Author(s):  
Ran Wang ◽  
Angela Simpson ◽  
Adnan Custovic ◽  
Phil Foden ◽  
Danielle Belgrave ◽  
...  

2016 ◽  
Vol 5 (1) ◽  
pp. 11-20
Author(s):  
Novia Rita ◽  
Novrianti Novrianti

Area X is one of the areas in Field Y of PT. Chevron Pacific Indonesia and comprises 563 wells. Each well in Area X is tested twice per month, so the 563 wells require 1,126 tests per month. The production well test facilities available in Area X can accommodate only 960 tests per month, so 116 wells do not receive a test slot each month. If this procedure continues unchanged, there will always be wells left untested every month. To address this problem, a tiering system was applied. The tiering system is a well-testing method in which wells are grouped from the largest to the smallest production. Wells classified as big producers are placed at the top of the well-testing order (Tier #1), followed by Tier #2, Tier #3, and Tier #4. The tiering system is a method for grouping a large set of production well-test data into smaller groups in order to help optimize well-test operations in the field (Human Resources Sumatra Operation, 2012). The number of tests per well per month is adjusted to the data requirements of each tier category, with the aim of obtaining continuously valid data for each well, so that any decline in production can be detected and followed up quickly. With the tiering system, the monthly well-testing requirement for the 563 wells in Area X is met using only 777 tests per month, leaving capacity for another 183 tests. This also yielded a production increase of 5,441 bbl per day and a profit of US$217,621.75.
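A minimal sketch of the tiering idea described above: rank wells by production, split them into Tiers #1-#4, and assign higher tiers more frequent monthly tests so the total stays within the 960-test facility capacity. The tier cutoffs and per-tier test frequencies below are assumptions for illustration; the abstract reports only the resulting total of 777 tests per month.

```python
# Minimal sketch of the tiering system: rank wells by production, split into
# Tiers 1-4, and test higher tiers more often. Tier cutoffs and per-tier test
# frequencies are assumptions; the abstract reports only the 777-test total.
import random

def assign_tiers(wells, tier_fractions=(0.25, 0.25, 0.25, 0.25)):
    """Sort wells by production (largest first) and split them into Tiers 1-4."""
    ranked = sorted(wells, key=lambda w: w["production"], reverse=True)
    tiers, start = {}, 0
    for tier, frac in enumerate(tier_fractions, start=1):
        end = start + round(frac * len(ranked))
        tiers[tier] = ranked[start:end]
        start = end
    return tiers

def monthly_tests(tiers, tests_per_tier=(2, 2, 1, 1)):
    """Total well tests required per month for a given per-tier test frequency."""
    return sum(tests_per_tier[tier - 1] * len(group) for tier, group in tiers.items())

# Hypothetical field of 563 wells with made-up production rates (bbl/day).
random.seed(0)
field = [{"name": f"well-{i}", "production": random.uniform(5, 400)} for i in range(563)]
tiers = assign_tiers(field)
print(monthly_tests(tiers), "tests per month vs a facility capacity of 960")
```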


2007 ◽  
Vol 31 (11) ◽  
pp. 418-420 ◽  
Author(s):  
Helen Smith ◽  
Tom White

AIMS AND METHOD To assess the feasibility of using a structured risk assessment tool (Historical Clinical Risk 20-Item (HCR–20) Scale) in general adult psychiatry admissions and the characteristics of ‘high-risk’ patients. A notes review and interviews were used to conduct an HCR–20 assessment of 135 patients admitted to Murray Royal Hospital, Scotland. RESULTS Patients scoring higher on the HCR–20 were discharged earlier and were more likely to have a diagnosis of personality disorder and a comorbid diagnosis. CLINICAL IMPLICATIONS It was possible to complete an HCR–20 assessment for over 80% of patients within 48 h of admission.


2020 ◽  
Vol 7 (Supplement_1) ◽  
pp. S677-S678
Author(s):  
Justin B Searns ◽  
Amy Stein ◽  
Christine MacBrayne ◽  
Tara Sarin ◽  
Taylor Lin ◽  
...  

Abstract Background Over 90% of children with reported penicillin allergy can tolerate penicillin without incident. Developing effective and safe strategies to remove inappropriate penicillin allergy labels has the potential to improve care; however, guidance on how to identify, test, and delabel patients is limited. Methods In April 2019, Children’s Hospital Colorado (CHCO) implemented a penicillin allergy clinical pathway (CP) alongside a risk assessment tool to stratify patients based on allergic history (Figure 1). Patients at “no increased risk” were educated and delabeled without testing. Low-risk patients were offered an oral amoxicillin drug challenge with close observation. A single, non-graded, treatment dose of amoxicillin (45 mg/kg, maximum dose 1000 mg) was used for low-risk patients, and no preceding allergic skin testing was performed. Patients with no signs or symptoms of allergic response 60 minutes after amoxicillin administration were delabeled. Children delabeled of penicillin allergies on the CHCO hospital medicine service were compared between the pre-CP (1/1/17-3/31/19) and post-CP (4/1/19-3/31/20) cohorts. Figure 1. Penicillin Allergy Risk Assessment. Results Pre-CP, 683/10624 (6.4%) patients reported a penicillin allergy and 18/683 (2.6%) were delabeled by discharge. Post-CP, 345/6559 (5.3%) patients reported a penicillin allergy and 47/345 (13.6%) were delabeled by discharge (P < 0.0001, Figure 2). Among the 47 post-CP patients, 11 were delabeled by history alone, 19 underwent an oral amoxicillin drug challenge per the CP, and 17 received a different treatment-dose penicillin per the treatment team. Only one penicillin-exposed patient had a reaction; this patient developed a delayed, non-progressive rash and had the penicillin allergy restored to their chart. No patient required emergency medical intervention, and none were “relabeled” penicillin allergic in the 6 months following discharge. Figure 2. Monthly Rate of Penicillin Allergic Patients Delabeled by Discharge. Conclusion A drug challenge using a single non-graded dose of oral amoxicillin is a safe and effective strategy to delabel low-risk children of inappropriate penicillin allergies when implemented alongside a risk assessment tool. Further studies are needed to evaluate the long-term benefits of delabeling inappropriate penicillin allergies and to continue monitoring for adverse events. Disclosures All Authors: No reported disclosures
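Two small calculations underpin the pathway results above: the weight-based challenge dose (45 mg/kg capped at 1000 mg) and the pre- versus post-CP comparison of delabeling rates. The sketch below reproduces both; the use of a chi-square test on the 2x2 table is an assumption, since the abstract reports only the P value.

```python
# Illustrative sketch: the pathway's weight-based challenge dose (45 mg/kg,
# capped at 1000 mg) and a two-proportion comparison of delabeling rates.
# The chi-square test is an assumption; the abstract reports only the P value.
from scipy.stats import chi2_contingency

def challenge_dose_mg(weight_kg, per_kg=45, max_mg=1000):
    """Single, non-graded oral amoxicillin challenge dose in mg."""
    return min(per_kg * weight_kg, max_mg)

print(challenge_dose_mg(12.0))   # 540 mg for a 12 kg child
print(challenge_dose_mg(30.0))   # capped at 1000 mg

# Delabeled vs not delabeled by discharge: pre-CP (18/683) and post-CP (47/345).
table = [[18, 683 - 18],
         [47, 345 - 47]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, P = {p:.2g}")
```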

