Predicting Adverse Perioperative Events in Patients Undergoing Primary Cleft Palate Repair

2018, Vol 55 (4), pp. 574-581
Author(s): Marten N. Basta, John E. Fiadjoe, Albert S. Woo, Kenneth N. Peeples, Oksana A. Jackson

Objective: This study aimed to identify risk factors for adverse perioperative events (APEs) after cleft palatoplasty to develop an individualized risk assessment tool. Design: Retrospective cohort. Setting: Tertiary institutional. Patients: Patients younger than 2 years with cleft palate. Interventions: Primary Furlow palatoplasty between 2008 and 2011. Main Outcome Measure(s): Adverse perioperative event, defined as laryngo- or bronchospasm, accidental extubation, reintubation, obstruction, hypoxia, or unplanned intensive care unit admission. Results: Three hundred patients averaging 12.3 months old were included. Cleft distribution included submucous, 1%; Veau 1, 17.3%; Veau 2, 38.3%; Veau 3, 30.3%; and Veau 4, 13.0%. Pierre Robin (n = 43) was the most prevalent syndrome/anomaly. Eighty-three percent of patients received reversal of neuromuscular blockade, and total morphine equivalent narcotic dose averaged 0.19 mg/kg. Sixty-nine patients (23.0%) had an APE, most commonly hypoventilation (10%) and airway obstruction (8%). Other APEs included reintubation (4.7%) and laryngobronchospasm (3.3%). APE was associated with multiple intubation attempts (odds ratio [OR] = 6.6, P = .001), structural or functional airway anomaly (OR = 4.5, P < .001), operation >160 minutes (OR = 2.2, P = .04), narcotic dose >0.3 mg/kg (OR = 2.3, P = .03), inexperienced provider (OR = 2.1, P = .02), and no paralytic reversal administration (OR = 2.0, P = .049); weight between 9 and 13 kg was protective (OR = 0.5, P = .04). Patients were risk-stratified according to individual profiles as low, average, high, or extreme risk (APE 2.5%-91.7%) with excellent risk discrimination (C-statistic = 0.79). Conclusions: APE incidence was 23.0% after palatoplasty, with a 37-fold higher incidence in extreme-risk patients. Individualized risk assessment tools may enhance perioperative clinical decision making to mitigate complications.
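The abstract reports per-factor odds ratios and a four-tier stratification but not the exact scoring rule. Below is a minimal sketch of one way such a tool could combine the published odds ratios into an additive log-odds score; the tier cutoffs and patient fields are illustrative assumptions, not the authors' instrument.

```python
import math

# Odds ratios reported in the abstract. How the authors combined them into
# their four risk tiers is not specified here, so this additive log-odds
# score and the tier cutoffs below are illustrative assumptions, not the
# published instrument.
ODDS_RATIOS = {
    "multiple_intubation_attempts": 6.6,
    "airway_anomaly": 4.5,
    "operation_over_160_min": 2.2,
    "narcotic_dose_over_0.3_mg_per_kg": 2.3,
    "inexperienced_provider": 2.1,
    "no_paralytic_reversal": 2.0,
    "weight_9_to_13_kg": 0.5,  # protective: log(0.5) subtracts from the score
}

def risk_score(patient: dict) -> float:
    """Sum log odds ratios over the risk factors a patient presents with."""
    return sum(math.log(or_) for factor, or_ in ODDS_RATIOS.items()
               if patient.get(factor))

def risk_tier(score: float) -> str:
    """Map a score to a tier using hypothetical cutoffs."""
    for cutoff, tier in [(2.5, "extreme"), (1.5, "high"), (0.5, "average")]:
        if score >= cutoff:
            return tier
    return "low"

patient = {"airway_anomaly": True, "operation_over_160_min": True}
score = risk_score(patient)
print(f"score={score:.2f} -> {risk_tier(score)} risk")
```

Summing log odds ratios treats the factors as independent, which is the usual simplification behind point-based bedside risk scores.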

Author(s): Insook Cho, Eun-Hee Boo, Eunja Chung, David W. Bates, Patricia Dykes

BACKGROUND Electronic medical records (EMRs) contain a considerable amount of information about patients. The rapid adoption of EMRs and the integration of nursing data into clinical repositories have made large quantities of clinical data available for both clinical practice and research. OBJECTIVE In this study, we aimed to investigate whether readily available longitudinal EMR data, including nursing records, could be used to compute the risk of inpatient falls, and to assess the accuracy of these predictions against existing fall risk assessment tools. METHODS We used 2 study cohorts from 2 tertiary hospitals, located near Seoul, South Korea, with different EMR systems. The modeling cohort included 14,307 admissions (122,179 hospital days) and the validation cohort comprised 21,172 admissions (175,592 hospital days), each drawn from 6 nursing units. A probabilistic Bayesian network model was used, and patient data were divided into windows with a length of 24 hours. Data on existing fall risk assessment tools, nursing processes, Korean Patient Classification System groups, and medication administration were used as model parameters. Model evaluation metrics were averaged using 10-fold cross-validation. RESULTS The initial model showed an error rate of 11.7% and a spherical payoff of 0.91, with a c-statistic of 0.96, far superior performance compared with the existing fall risk assessment tool (c-statistic=0.69). The cross-site validation revealed an error rate of 4.87% and a spherical payoff of 0.96, with a c-statistic of 0.99 versus 0.65 for the existing fall risk assessment tool. The calibration curves for the model were more reliable than those for the fall risk assessment tools alone. In addition, nursing intervention data showed potential contributions to reducing the variance in the fall rate, as did the risk factors of individual patients. CONCLUSIONS A risk prediction model that considers longitudinal EMR data, including nursing interventions, can improve the ability to identify individual patients likely to fall.
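A minimal sketch of the evaluation pattern the abstract describes: one row per 24-hour window, with the c-statistic averaged over 10-fold cross-validation. The data are synthetic, and a naive Bayes classifier stands in for the paper's full Bayesian network.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import BernoulliNB

# Synthetic data: one row per patient per 24-hour window, with binary
# features standing in for fall-risk-tool items, nursing processes, patient
# classification groups, and medication administration. BernoulliNB is only
# a stand-in for the paper's Bayesian network; the windowing and 10-fold
# cross-validation pattern is the point here.
rng = np.random.default_rng(0)
n_windows = 5000
X = rng.integers(0, 2, size=(n_windows, 12))
y = ((X[:, 0] & X[:, 3]) | (rng.random(n_windows) < 0.02)).astype(int)

auc = cross_val_score(BernoulliNB(), X, y, cv=10, scoring="roc_auc")
print(f"c-statistic, mean over 10 folds: {auc.mean():.2f}")
```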


Author(s): Eric B. Elbogen, Robert Graziano

Research has shown that aggression toward others is a problem among a subset of military veterans. Predicting this kind of aggression would be immensely helpful in clinical settings, yet, to our knowledge, no risk assessment tools or screens have been validated to specifically evaluate acute violence among veterans. This chapter reviews what we do and do not know about violence in veterans so that clinicians making decisions about acute violence can be informed by the existing scientific knowledge base. Examining these empirically supported risk and protective factors using a systematic approach may optimize clinical decision making when assessing acute violence in veterans.


Aorta, 2016, Vol 04 (02), pp. 42-60
Author(s): T. Christian Gasser

Abstract: Abdominal aortic aneurysm (AAA) rupture is a local event in the aneurysm wall that naturally demands tools to assess the risk of local wall rupture. Global parameters like the maximum diameter and its expansion over time can give only very rough risk indications; consequently, they frequently fail to predict individual risk for AAA rupture. In contrast, the Biomechanical Rupture Risk Assessment (BRRA) method investigates the wall's risk of local rupture by quantitatively integrating many known AAA rupture risk factors, such as female sex, large relative expansion, intraluminal thrombus-related wall weakening, and high blood pressure. The BRRA method is almost 20 years old and has progressed considerably in recent years; it can now potentially enrich the diameter indication for AAA repair. The present paper reviews the current state of the BRRA method by summarizing its key underlying concepts (i.e., geometry modeling, biomechanical simulation, and result interpretation). Specifically, the validity of the underlying model assumptions is critically discussed in relation to the intended simulation objective (i.e., a clinical AAA rupture risk assessment). Next, reported clinical BRRA validation studies are summarized, and their clinical relevance is reviewed. The BRRA method is a generic, biomechanics-based approach that provides several interfaces to incorporate information from different research disciplines. As an example, the final section of this review suggests integrating growth aspects to (potentially) further improve BRRA sensitivity and specificity. Although no prospective validation studies have been reported, a significant and still growing body of validation evidence suggests integrating the BRRA method into the clinical decision-making process (i.e., enriching diameter-based decision making in AAA patient treatment).
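At its core, the BRRA method compares a simulated local wall stress field against a modeled local wall strength field; rupture risk is highest where their ratio approaches one. Below is a minimal sketch of that ratio, with synthetic arrays standing in for the finite-element stress solution and the statistical strength model (which in the clinical method incorporates factors such as female sex and intraluminal thrombus-related weakening).

```python
import numpy as np

# Illustrative sketch of the quantity at the heart of the BRRA method: a
# local rupture index comparing wall stress against wall strength, element
# by element. In the clinical method the stress field comes from a
# finite-element simulation of the reconstructed AAA geometry and the
# strength field from a statistical strength model; both are replaced by
# synthetic arrays here.
rng = np.random.default_rng(0)
n_elements = 1000
wall_stress = rng.uniform(100, 300, n_elements)      # kPa, per wall element
wall_strength = rng.uniform(400, 1200, n_elements)   # kPa, per wall element

rupture_index = wall_stress / wall_strength          # risk is highest near 1
print(f"peak rupture index: {rupture_index.max():.2f} "
      f"at element {rupture_index.argmax()}")
```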


2011, Vol 35 (11), pp. 413-418
Author(s): Matthew M. Large, Olav B. Nielssen

Summary: Risk assessment has been widely adopted in mental health settings in the hope of preventing harms such as violence to others and suicide. However, risk assessment in its current form is mainly concerned with the probability of adverse events, and does not address the other component of risk – the extent of the resulting loss. Although assessments of the probability of future harm based on actuarial instruments are generally more accurate than the categorisations made by clinicians, actuarial instruments are of little assistance in clinical decision-making because there is no instrument that can estimate the probability of all the harms associated with mental illness, or estimate the extent of the resulting losses. The inability of instruments to distinguish between the risk of common but less serious harms and comparatively rare catastrophic events is a particular limitation of the value of risk categorisations. We should admit that our ability to assess risk is severely limited, and make clinical decisions in a similar way to those in other areas of medicine – by informed consideration of the potential consequences of treatment and non-treatment.
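The two components the authors distinguish can be made explicit as an expected loss; a minimal formalization (ours, not the authors'):

```latex
\text{risk of harm } i \;=\; \underbrace{p_i}_{\text{probability}} \times \underbrace{\ell_i}_{\text{extent of loss}},
\qquad
\mathbb{E}[\text{loss}] \;=\; \sum_i p_i \,\ell_i
```

An instrument that estimates only the probabilities $p_i$ cannot rank a common harm with small $\ell_i$ against a rare catastrophe with large $\ell_i$, which is precisely the limitation of current risk categorisations described above.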


Author(s): Kim Kavanagh, Jiafeng Pan, Chris Robertson, Marion Bennie, Charis Marwick, et al.

ABSTRACT Objectives: The use of "real-time" data to support individual patient management and outcome assessment requires the development of risk assessment models. This could be delivered through a learning health system by building robust statistical analysis tools onto the existing linked data held by NHS Scotland's Infection Intelligence Platform (IIP) and developed within the Scottish Healthcare Associated Infection Prevention Institute (SHAIPI). This project will create prediction models for the risk of acquiring a healthcare associated infection (HAI), and particular outcomes, at the point of GP consultation or hospital admission, which could aid clinical decision making. Approach: We demonstrate the capability using the HAI Clostridium difficile infection (CDI) from 2010 to 2013. Using linked national individual-level data on community prescribing, hospitalisations, infections, and death records, we extracted all cases of CDI and, by comparison with matched population-based controls, examined the impact of prior hospital admissions, care home residence, comorbidities, exposure to gastric acid suppressive drugs, and antibiotic exposure on the risk of CDI acquisition, with antibiotic exposure defined both cumulatively (total defined daily dose, DDD) and temporally over the previous 6 months. Antimicrobial exposure was considered for all drugs and for the higher-risk broad-spectrum antibiotics (the "4Cs"). Associations were assessed using conditional logistic regression, and cross-validation was used to assess the ability of the model to accurately predict CDI infection. Risk scores for acquisition of CDI were estimated by combining these predictions with age- and gender-specific population incidence. Results: In the period 2010-2013 there were 1446 cases of CDI and 7964 matched controls. A significant dose-response relationship was found for exposure to any antimicrobial (1-7 DDDs OR=2.3, rising to OR=4.4 for 29+ DDDs) and, with higher risk, for the 4C group (1-7 DDDs OR=3.8, rising to OR=17.9 for 29+ DDDs). Exposure elevates CDI risk most in the month after prescription, but for 4C antimicrobials the elevated risk persists 6 months later (4C OR=12.4 within 1 month, OR=2.6 at 4-6 months). The risk of CDI was also increased with more comorbidities, previous hospitalisations, care home residency, an increased number of prescriptions, and gastric acid suppression. Conclusion: Despite limitations to current application in practice (the paucity of patient-level in-hospital prescribing data and constraints on the timeliness of the data), when fully developed this system will enable risk classification to identify the patients most at risk of HAI and adverse outcomes, to aid clinical decision making.
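A minimal sketch of the matched case-control analysis named above, using conditional logistic regression on synthetic data; the covariates and the 1:4 matched-set structure mirror the description, while the effect sizes and the statsmodels usage are illustrative assumptions, not the project's code.

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

# Synthetic 1:4 matched case-control sets standing in for the linked
# NHS Scotland records; 'strata' ties each case to its matched controls.
rng = np.random.default_rng(0)
n_strata, per_stratum = 500, 5
n = n_strata * per_stratum
df = pd.DataFrame({
    "strata": np.repeat(np.arange(n_strata), per_stratum),
    "ddd_antibiotics": rng.poisson(8, n).astype(float),   # cumulative DDDs, prior 6 months
    "prior_admissions": rng.poisson(1, n).astype(float),
    "gastric_acid_suppression": rng.integers(0, 2, n).astype(float),
})

# Exactly one case per matched set, tilted toward higher antibiotic exposure.
df["cdi_case"] = 0
for _, idx in df.groupby("strata").groups.items():
    w = np.exp(0.08 * df.loc[idx, "ddd_antibiotics"]).to_numpy()
    df.loc[rng.choice(idx, p=w / w.sum()), "cdi_case"] = 1

model = ConditionalLogit(
    df["cdi_case"],
    df[["ddd_antibiotics", "prior_admissions", "gastric_acid_suppression"]],
    groups=df["strata"],
)
print(np.exp(model.fit().params))  # odds ratios per covariate
```

Conditional logistic regression conditions on each matched set, so between-set differences (and any intercept) cancel out of the likelihood; only within-set covariate contrasts drive the estimates.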


Author(s): James B O'Keefe, Elizabeth J Tong, Thomas H Taylor, Ghazala D Datoo O'Keefe, David C Tong

Objective: To determine whether a risk prediction tool developed and implemented in March 2020 accurately predicts subsequent hospitalizations. Design: Retrospective cohort study with enrollment from March 24 to May 26, 2020, and follow-up calls until hospitalization or clinical improvement (final calls by June 19, 2020). Setting: Single-center telemedicine program managing outpatients from a large medical system in Atlanta, Georgia. Participants: 496 patients with laboratory-confirmed COVID-19 in isolation at home. Exclusion criteria were (1) hospitalization prior to telemedicine program enrollment and (2) immediate discharge with no follow-up calls due to resolution. Exposure: Acute COVID-19 illness. Main Outcome and Measures: The outcome was hospitalization, measured as days from enrollment to hospitalization; survival analysis using Cox regression was used to determine factors associated with hospitalization. Results: The risk-assessment rubric assigned the 496 outpatients to risk tiers as follows: Tier 1, 237 (47.8%); Tier 2, 185 (37.3%); Tier 3, 74 (14.9%). Subsequent hospitalizations numbered 3 (1%), 15 (7%), and 17 (23%) for Tiers 1-3, respectively. In a Cox regression model with age ≥ 60 years, gender, and self-reported obesity as covariates, the adjusted hazard ratios with Tier 1 as reference were: Tier 2, HR=3.74 (95% CI, 1.06-13.27; P=0.041); Tier 3, HR=10.87 (95% CI, 3.09-38.27; P<0.001). Tier was the strongest predictor of time to hospitalization. Conclusions and Relevance: A telemedicine risk assessment tool prospectively applied to an outpatient population with COVID-19 identified both low-risk and high-risk patients with better performance than individual risk factors alone. This approach may be appropriate for optimal allocation of resources.
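A minimal sketch of the analysis described above: a Cox proportional hazards fit with tier indicators (Tier 1 as reference) plus the stated covariates. The data are synthetic and the lifelines usage is one reasonable implementation, not the authors' code.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic stand-in for the 496-patient cohort: tier indicators (Tier 1
# as reference), age >= 60, gender, and self-reported obesity, with days
# to hospitalization or administrative censoring. Effect sizes are chosen
# only so the example runs; they are not the study's data.
rng = np.random.default_rng(0)
n = 496
tier = rng.choice([1, 2, 3], size=n, p=[0.48, 0.37, 0.15])
df = pd.DataFrame({
    "tier_2": (tier == 2).astype(int),
    "tier_3": (tier == 3).astype(int),
    "age_ge_60": rng.integers(0, 2, n),
    "male": rng.integers(0, 2, n),
    "obesity": rng.integers(0, 2, n),
})
hazard = 0.002 * np.exp(1.3 * df["tier_2"] + 2.4 * df["tier_3"])
time_to_event = rng.exponential(1 / hazard)
df["days"] = np.minimum(time_to_event, 30.0)          # follow-up capped at 30 days
df["hospitalized"] = (time_to_event <= 30.0).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="days", event_col="hospitalized")
print(cph.hazard_ratios_)   # adjusted HRs; Tier 1 is the reference level
```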


2021
Author(s): Marc Snell, Arman Dehghani, Fabian Guenkzkofer, Stefan Kaltenbrunner

Musculoskeletal disorders continue to be a leading source of lost workdays across all industries. Common ergonomics assessment tools may include criteria extraneous to the stresses present at specific companies or in specific industries; therefore, assessment tools based on scientifically validated methods but tailored to industry- or company-specific stresses may be of benefit. The BMW Group has developed the Safety and Ergonomics Risk Assessment (SERA) tool, an ergonomics assessment method that incorporates up-to-date scientific methods and international standards and is used worldwide in all BMW Group production facilities. A major advantage of SERA over conventional ergonomics tools is its focus on the ergonomics stresses common to automobile manufacturing and the consequent exclusion of irrelevant parameters, which reduces the time, effort, and training required for workplace assessments. Other advantages include the international uniformity of assessments and a web- and database-based implementation that allows for easily comparable international reporting. The implementation of this method at the BMW Group has enabled greater transparency for ergonomics across all international plants, and more effective and targeted ergonomics interventions. This publication outlines the basic motivation for SERA, highlights the relevant scientific sources and international standards, and describes the general steps of an evaluation.


2020, Vol 16 (9), pp. e868-e874
Author(s): Chris E. Holmes, Steven Ades, Susan Gilchrist, Daniel Douce, Karen Libby, et al.

PURPOSE: Guidelines recommend venous thromboembolism (VTE) risk assessment in outpatients with cancer and pharmacologic thromboprophylaxis in selected patients at high risk for VTE. Although validated risk stratification tools are available, < 10% of oncologists use a risk assessment tool, and rates of VTE prophylaxis in high-risk patients are low in practice. We hypothesized that implementation of a systems-based program that uses the electronic health record (EHR) and offers personalized VTE prophylaxis recommendations would increase VTE risk assessment rates in patients initiating outpatient chemotherapy. PATIENTS AND METHODS: Venous Thromboembolism Prevention in the Ambulatory Cancer Clinic (VTEPACC) was a multidisciplinary program implemented by nurses, oncologists, pharmacists, hematologists, advanced practice providers, and quality partners. We prospectively identified high-risk patients using the Khorana and Protecht scores (≥ 3 points) via an EHR-based risk assessment tool. Patients with a predicted high risk of VTE during treatment were offered a hematology consultation to consider VTE prophylaxis. Results of the consultation were communicated to the treating oncologist, and clinical outcomes were tracked. RESULTS: A total of 918 outpatients with cancer initiating cancer-directed therapy were evaluated. VTE monthly education rates increased from < 5% before VTEPACC to 81.6% (standard deviation [SD], 11.9; range, 63.6%-97.7%) during the implementation phase and 94.7% (SD, 4.9; range, 82.1%-100%) for the full 2-year postimplementation phase. In the postimplementation phase, 213 patients (23.2%) were identified as being at high risk for developing a VTE. Referrals to hematology were offered to 151 patients (71%), with 141 patients (93%) being assessed and 93.8% receiving VTE prophylaxis. CONCLUSION: VTEPACC is a successful model for guideline implementation to provide VTE risk assessment and prophylaxis to prevent cancer-associated thrombosis in outpatients. Methods applied can readily translate into practice and overcome the current implementation gaps between guidelines and clinical practice.
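The program flags patients scoring ≥ 3 points. Below is a minimal sketch of the Khorana score used for that cutoff, with its published components (cancer site, platelet count, hemoglobin/ESA use, leukocyte count, BMI); the EHR integration and the companion Protecht score are out of scope here, and the field names and example patient are illustrative.

```python
# A sketch of the Khorana score with its published components; thresholds
# follow the original derivation (platelets and leukocytes in 10^9/L,
# hemoglobin in g/dL, BMI in kg/m^2).
VERY_HIGH_RISK_SITES = {"stomach", "pancreas"}
HIGH_RISK_SITES = {"lung", "lymphoma", "gynecologic", "bladder", "testicular"}

def khorana_score(site: str, platelets: float, hemoglobin: float,
                  esa_use: bool, leukocytes: float, bmi: float) -> int:
    score = 2 if site in VERY_HIGH_RISK_SITES else 1 if site in HIGH_RISK_SITES else 0
    score += platelets >= 350            # pre-chemotherapy platelet count
    score += hemoglobin < 10 or esa_use  # anemia or erythropoiesis-stimulating agent
    score += leukocytes > 11             # pre-chemotherapy leukocyte count
    score += bmi >= 35
    return int(score)

s = khorana_score("pancreas", platelets=360, hemoglobin=11.2,
                  esa_use=False, leukocytes=12.4, bmi=28)
print(s, "-> high risk (>= 3)" if s >= 3 else "-> below high-risk cutoff")
```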


2010, Vol 30 (6), pp. 595-607
Author(s): Eric B. Elbogen, Sara Fuller, Sally C. Johnson, Stephanie Brooks, Patricia Kinneer, et al.
