Clinical prediction of thrombectomy eligibility: A systematic review and 4-item decision tree

2018, Vol 14 (5), pp. 530-539
Author(s): Gaia T Koster, T Truc My Nguyen, Erik W van Zwet, Bjarty L Garcia, Hannah R Rowling, ...

Background A clinical large anterior vessel occlusion (LAVO)-prediction scale could reduce treatment delays by allocating intra-arterial thrombectomy (IAT)-eligible patients directly to a comprehensive stroke center. Aim To identify, validate and compare existing LAVO-prediction scales, and to develop a straightforward decision support tool for assessing IAT-eligibility. Methods We performed a systematic literature search to identify LAVO-prediction scales. Performance was compared in a prospective, multicenter validation cohort of the Dutch acute Stroke study (DUST) by calculating the area under the receiver operating characteristic curve (AUROC). With group lasso regression analysis, we constructed a prediction model incorporating patient characteristics in addition to National Institutes of Health Stroke Scale (NIHSS) items. Finally, we developed a decision tree algorithm based on dichotomized NIHSS items. Results We identified seven LAVO-prediction scales. From DUST, 1316 patients (35.8% LAVO-rate) from 14 centers were available for validation. FAST-ED and RACE had the highest AUROC (both >0.81, p < 0.01 for comparison with other scales). Group lasso analysis yielded a LAVO-prediction model containing seven NIHSS items (AUROC 0.84). With the GACE (Gaze, facial Asymmetry, level of Consciousness, Extinction/inattention) decision tree, LAVO is predicted (AUROC 0.76) for 61% of patients with assessment of only two dichotomized NIHSS items, and for all patients with four items. Conclusion External validation of seven LAVO-prediction scales showed AUROCs between 0.75 and 0.83. Most scales, however, appear too complex for Emergency Medical Services use, and prehospital validation is generally lacking. GACE is the first LAVO-prediction scale using a simple decision tree, thereby increasing feasibility while maintaining high accuracy. Prehospital prospective validation is planned.
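
The pivotal step in this comparison is scoring each candidate scale on an external cohort and computing its AUROC against CTA-confirmed LAVO. A minimal sketch of that step is given below; the column names, toy data and additive scoring rule are illustrative assumptions, not the published scale definitions or DUST data.

```python
# Sketch: external validation of a LAVO-prediction scale by AUROC.
# All data and the scoring rule are hypothetical stand-ins.
import pandas as pd
from sklearn.metrics import roc_auc_score

cohort = pd.DataFrame({
    "facial_palsy":   [0, 2, 1, 2, 0, 1],   # graded NIHSS-style items (toy values)
    "arm_weakness":   [0, 2, 2, 1, 0, 2],
    "gaze_deviation": [0, 1, 1, 1, 0, 0],
    "lavo":           [0, 1, 1, 1, 0, 0],   # reference standard: LAVO on CT angiography
})

def candidate_scale(row):
    """Hypothetical additive scale over a few NIHSS-derived items."""
    return row["facial_palsy"] + row["arm_weakness"] + 2 * row["gaze_deviation"]

scores = cohort.apply(candidate_scale, axis=1)
print("AUROC:", roc_auc_score(cohort["lavo"], scores))
```

The same call, repeated per scale on the full validation cohort, yields the AUROCs that are compared across scales.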

2019, Vol 114 (1), pp. S373-S373
Author(s): Parambir Dulai, Leonard Guizzetti, Tony Ma, Vipul Jairath, Siddharth Singh, ...

2020, Vol 93 (8), pp. 1007-1012
Author(s): Marieke F. A. van Hoffen, Giny Norder, Jos W. R. Twisk, Corné A. M. Roelen

Abstract Purpose A previously developed prediction model and decision tree were externally validated for their ability to identify occupational health survey participants at increased risk of long-term sickness absence (LTSA) due to mental disorders. Methods The study population consisted of N = 3415 employees in mobility services who were invited in 2016 for an occupational health survey, consisting of an online questionnaire measuring health status and working conditions, followed by a preventive consultation with an occupational health provider (OHP). The survey variables of the previously developed prediction model and decision tree were used for predicting mental LTSA (no = 0, yes = 1) at 1-year follow-up. Discrimination between survey participants with and without mental LTSA was investigated with the area under the receiver operating characteristic curve (AUC). Results A total of n = 1736 (51%) non-sick-listed employees participated in the survey and 51 (3%) of them had mental LTSA during follow-up. The prediction model discriminated (AUC = 0.700; 95% CI 0.628–0.773) between participants with and without mental LTSA during follow-up. Discrimination by the decision tree (AUC = 0.671; 95% CI 0.589–0.753) did not differ significantly (p = 0.62) from discrimination by the prediction model. Conclusion At external validation, the prediction model and the decision tree both poorly identified occupational health survey participants at increased risk of mental LTSA. OHPs could use the decision tree to determine whether mental LTSA risk factors should be explored in the preventive consultation that follows completion of the survey questionnaire.
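
Discrimination at external validation is summarized above as an AUC with a 95% confidence interval. One common way to obtain such an interval is a bootstrap over the validation sample; the sketch below uses simulated outcomes and risk scores, not the study's data.

```python
# Sketch: AUC with a bootstrap 95% CI on a (simulated) validation sample.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)                 # 1 = mental LTSA during follow-up (simulated)
risk = 0.8 * y + rng.normal(0, 1, 500)      # stand-in for the model's predicted risk

boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y), len(y))   # resample with replacement
    if y[idx].min() == y[idx].max():        # skip resamples containing a single class
        continue
    boot.append(roc_auc_score(y[idx], risk[idx]))

print("AUC:", round(roc_auc_score(y, risk), 3),
      "95% CI:", np.percentile(boot, [2.5, 97.5]).round(3))
```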


2018, Vol 128 (3), pp. 942-947
Author(s): Sasha Vaziri, Jacob Wilson, Joseph Abbatematteo, Paul Kubilis, Saptarshi Chakraborty, ...

OBJECTIVE The American College of Surgeons (ACS) National Surgical Quality Improvement Program (NSQIP) universal Surgical Risk Calculator is an online decision-support tool that uses patient characteristics to estimate the risk of adverse postoperative events. Further validation of this risk calculator in the neurosurgical population is needed; therefore, the object of this study was to assess the predictive performance of the ACS NSQIP Surgical Risk Calculator in neurosurgical patients treated at a tertiary care center. METHODS A single-center retrospective review of 1006 neurosurgical patients treated in the period from September 2011 through December 2014 was performed. Individual patient characteristics were entered into the NSQIP calculator. Predicted complications were compared with actual occurrences identified through chart review and administrative quality coding data. Statistical models were used to assess the predictive performance of risk scores. Traditionally, an ideal risk prediction model demonstrates good calibration and strong discrimination when comparing predicted and observed events. RESULTS The ACS NSQIP risk calculator demonstrated good calibration between predicted and observed risks of death (p = 0.102), surgical site infection (SSI; p = 0.099), and venous thromboembolism (VTE; p = 0.164). In contrast, the risk calculator demonstrated a statistically significant lack of calibration between predicted and observed risk of pneumonia (p = 0.044), urinary tract infection (UTI; p < 0.001), return to the operating room (p < 0.001), and discharge to a rehabilitation or nursing facility (p < 0.001). The discriminative performance of the risk calculator was assessed using the c-statistic. Death (c-statistic 0.93), UTI (0.846), and pneumonia (0.862) demonstrated strong discriminative performance. Discharge to a rehabilitation facility or nursing home (c-statistic 0.794) and VTE (0.767) showed adequate discrimination. Return to the operating room (c-statistic 0.452) and SSI (0.556) demonstrated poor discriminative performance. The risk prediction model was both well calibrated and discriminative only for 30-day mortality. CONCLUSIONS This study illustrates the importance of validating universal risk calculators in specialty-specific surgical populations. The ACS NSQIP Surgical Risk Calculator could be used as a decision-support tool for neurosurgical informed consent with respect to predicted mortality but was poorly predictive of other potential adverse events and clinical outcomes.
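
The abstract evaluates the calculator on two axes: calibration (agreement between predicted and observed event rates, reported as p-values) and discrimination (the c-statistic). A compact sketch of both checks, assuming a Hosmer-Lemeshow-style decile test and using simulated predictions as stand-ins for calculator output, is shown below.

```python
# Sketch: calibration (Hosmer-Lemeshow over risk deciles) and discrimination (c-statistic).
# Predictions and outcomes are simulated, not NSQIP output.
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
pred = rng.uniform(0.01, 0.30, 1006)        # predicted complication risks
obs = rng.binomial(1, pred)                 # observed events, generated from those risks

edges = np.quantile(pred, np.linspace(0, 1, 11))
group = np.digitize(pred, edges[1:-1])      # assign each patient to a risk decile (0..9)

hl = 0.0
for g in range(10):
    m = group == g
    o, e, n = obs[m].sum(), pred[m].sum(), m.sum()
    hl += (o - e) ** 2 / (e * (1 - e / n))  # Hosmer-Lemeshow chi-square contribution

print("calibration p-value:", chi2.sf(hl, df=8))   # p > 0.05: no evidence of miscalibration
print("c-statistic:", roc_auc_score(obs, pred))
```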


2020, Vol 25 (2), pp. 183-199
Author(s): Zhe Zhang, Zhi Ye Koh, Florence Ling

Purpose This study aims to develop benchmarks of the financial performance of contractors and a decision support tool for the evaluation, selection and appointment of contractors. The financial benchmarks allow contractors to know where they stand relative to the best-performing contractors, so that they can take steps to improve their own performance. The decision support tool helps clients decide which contractor should be awarded the project. Design/methodology/approach Financial data for 2013 to 2015 on 44 Singapore-based contractors were acquired from a Singaporean public agency. Benchmarks for the Z-score and financial ratios were developed. A decision tree for evaluating contractors was constructed. Findings This study found that between 57% and 64% of contractors stayed in the financially healthy zone from 2013 to 2015. Ratios related to financial liabilities are relatively weak compared with international standards. Research limitations/implications The limitation is that the data were obtained from a cross-sectional survey of contractors’ financial performance in Singapore over a three-year period. Given the finding that ratios relating to financial liabilities are weak, the implication is that contractors need to reduce their financial liabilities to achieve a good solvency profile. Contractors may use the benchmarks to check their financial performance relative to that of their competitors. To reduce financial risks, project clients may use these benchmarks to examine contractors’ financial performance. Originality/value This study provides benchmarks for contractors and clients to examine the financial performance of contractors in Singapore. A decision tree is provided to aid clients in deciding which contractors to appoint.
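
The financial-health zones above rest on Z-scores built from standard accounting ratios. A sketch using the classic Altman (1968) coefficients and cut-offs is given below; the study may use a different Z-score variant, and the figures are invented for illustration.

```python
# Sketch: classic Altman (1968) Z-score for a listed firm; all inputs are hypothetical.
def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_value_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

z = altman_z(working_capital=4.0, retained_earnings=6.0, ebit=2.5,
             market_value_equity=15.0, sales=30.0,
             total_assets=25.0, total_liabilities=10.0)
zone = "healthy" if z > 2.99 else "grey" if z > 1.81 else "distressed"
print(f"Z = {z:.2f} ({zone})")    # classic cut-offs: > 2.99 healthy, < 1.81 distressed
```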


2021, bjophthalmol-2020-318719
Author(s): Aldina Pivodic, Helena Johansson, Lois E H Smith, Anna-Lena Hård, Chatarina Löfqvist, ...

Background/Aims Prematurely born infants undergo costly, stressful eye examinations to uncover the small fraction with retinopathy of prematurity (ROP) that needs treatment to prevent blindness. The aim was to develop a prediction tool (DIGIROP-Screen) with 100% sensitivity and high specificity to safely reduce screening of those infants not needing treatment. DIGIROP-Screen was compared with four other ROP models based on longitudinal weights. Methods Data, including infants born at 24–30 weeks of gestational age (GA), for DIGIROP-Screen development (DevGroup, N=6991) originate from the Swedish National Registry for ROP. Three international cohorts comprised the external validation groups (ValGroups, N=1241). Multivariable logistic regressions, over postnatal ages (PNAs) 6–14 weeks, were validated. Predictors were birth characteristics, status and age at first diagnosed ROP, and essential interactions. Results ROP treatment was required in 287 (4.1%) of 6991 infants in DevGroup and 49 (3.9%) of 1241 in ValGroups. To allow 100% sensitivity in DevGroup, specificity at birth was 53.1% and cumulatively 60.5% at PNA 8 weeks. Applying the same cut-offs in ValGroups, specificities were similar (46.3% and 53.5%). One infant with severe malformations in ValGroups was incorrectly classified as not needing screening. For all other infants, at PNA 6–14 weeks, sensitivity was 100%. In other published models, sensitivity ranged from 88.5% to 100% and specificity ranged from 9.6% to 45.2%. Conclusions DIGIROP-Screen, a clinical decision support tool using readily available birth and ROP screening data, can safely identify infants born at GA 24–30 weeks who do not need ROP screening in the European and North American populations tested. DIGIROP-Screen had equal or higher sensitivity and specificity compared with other models. DIGIROP-Screen should be tested in any new cohort for validation, and if not validated it can be modified using the same statistical approaches applied to a specific clinical setting.
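
DIGIROP-Screen's defining constraint is 100% sensitivity: the screening cut-off is pushed only as far as no treatment-requiring infant is released from screening, and specificity is whatever remains at that cut-off. A minimal sketch of that threshold choice on simulated risk scores (not the published model) follows.

```python
# Sketch: choose the highest cut-off that keeps 100% sensitivity, then report specificity.
# Risk scores and outcomes are simulated, not DIGIROP-Screen output.
import numpy as np

rng = np.random.default_rng(2)
treated = rng.integers(0, 2, 1000).astype(bool)            # needed ROP treatment? (simulated)
risk = np.where(treated, rng.beta(5, 2, 1000), rng.beta(2, 5, 1000))

cutoff = risk[treated].min()        # every treated infant scores at or above this value
released = risk < cutoff            # infants who could safely skip further screening

sensitivity = 1.0                   # by construction: no treated infant is released
specificity = released[~treated].mean()
print(f"cut-off {cutoff:.3f}: specificity {specificity:.1%} at 100% sensitivity")
```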


2021, Vol 21 (1)
Author(s): Janusz Wojtusiak, Negin Asadzadehzanjani, Cari Levy, Farrokh Alemi, Allison E. Williams

Abstract Background Assessment of functional ability, including activities of daily living (ADLs), is a manual process completed by skilled health professionals. In the presented research, an automated decision support tool, the Computational Barthel Index Tool (CBIT), was constructed that can automatically assess and predict probabilities of current and future ADLs based on patients’ medical history. Methods The data used to construct the tool include the demographic information, inpatient and outpatient diagnosis codes, and reported disabilities of 181,213 residents of the Department of Veterans Affairs’ (VA) Community Living Centers. Supervised machine learning methods were applied to construct the CBIT. Temporal information about times from the first and the most recent occurrence of diagnoses was encoded. Ten-fold cross-validation was used to tune hyperparameters, and independent test sets were used to evaluate models using AUC, accuracy, recall and precision. Random forest achieved the best model quality. Models were calibrated using isotonic regression. Results The unabridged version of CBIT uses 578 patient characteristics and achieved average AUC of 0.94 (0.93–0.95), accuracy of 0.90 (0.89–0.91), precision of 0.91 (0.89–0.92), and recall of 0.90 (0.84–0.95) when re-evaluating patients. CBIT is also capable of predicting ADLs up to one year ahead, with accuracy decreasing over time, giving average AUC of 0.77 (0.73–0.79), accuracy of 0.73 (0.69–0.80), precision of 0.74 (0.66–0.81), and recall of 0.69 (0.34–0.96). A simplified version of CBIT with 50 top patient characteristics reached performance that does not significantly differ from full CBIT. Conclusion Discharge planners, disability application reviewers and clinicians evaluating comparative effectiveness of treatments can use CBIT to assess and predict information on functional status of patients.
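
The pipeline summarized above pairs a random forest with isotonic calibration of its predicted probabilities. A condensed sketch of that combination on synthetic data is below; the real CBIT uses 578 coded diagnosis-history features from VA data, which are not reproduced here (the sketch assumes scikit-learn ≥ 1.2 for the `estimator` keyword).

```python
# Sketch: random forest + isotonic calibration, evaluated by held-out AUC.
# Features and labels are synthetic stand-ins for CBIT's diagnosis-history inputs.
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 20))                                          # stand-in feature matrix
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 2000) > 0).astype(int)   # ADL dependence label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = CalibratedClassifierCV(
    estimator=RandomForestClassifier(n_estimators=200, random_state=0),
    method="isotonic",               # isotonic regression on out-of-fold predictions
    cv=5,
)
model.fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]
print("held-out AUC:", round(roc_auc_score(y_te, proba), 3))
```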

