Assessing Acute Risk of Violence in Military Veterans

Author(s):  
Eric B. Elbogen ◽  
Robert Graziano

Research has shown that aggression toward others is a problem in a subset of military veterans. Predicting this kind of aggression would be immensely helpful in clinical settings. To our knowledge, there currently are no risk assessment tools or screens that have been validated to specifically evaluate acute violence among veterans. This chapter reviews what we do and do not know about violence in veterans so that clinicians who are making decisions about acute violence can be informed by the existing scientific knowledge base. Examining these empirically supported risk and protective factors using a systematic approach may optimize clinical decision making when assessing acute violence in veterans.

2010 ◽  
Vol 30 (6) ◽  
pp. 595-607 ◽  
Author(s):  
Eric B. Elbogen ◽  
Sara Fuller ◽  
Sally C. Johnson ◽  
Stephanie Brooks ◽  
Patricia Kinneer ◽  
...  

2018 ◽  
Vol 55 (4) ◽  
pp. 574-581 ◽  
Author(s):  
Marten N. Basta ◽  
John E. Fiadjoe ◽  
Albert S. Woo ◽  
Kenneth N. Peeples ◽  
Oksana A. Jackson

Objective: This study aimed to identify risk factors for adverse perioperative events (APEs) after cleft palatoplasty to develop an individualized risk assessment tool. Design: Retrospective cohort. Setting: Tertiary institutional. Patients: Patients younger than 2 years with cleft palate. Interventions: Primary Furlow palatoplasty between 2008 and 2011. Main Outcome Measure(s): Adverse perioperative event, defined as laryngo- or bronchospasm, accidental extubation, reintubation, obstruction, hypoxia, or unplanned intensive care unit admission. Results: Three hundred patients averaging 12.3 months old were included. Cleft distribution included submucous, 1%; Veau 1, 17.3%; Veau 2, 38.3%; Veau 3, 30.3%; and Veau 4, 13.0%. Pierre Robin (n = 43) was the most prevalent syndrome/anomaly. Eighty-three percent of patients received reversal of neuromuscular blockade, and total morphine equivalent narcotic dose averaged 0.19 mg/kg. Sixty-nine patients (23.0%) had an APE, most commonly hypoventilation (10%) and airway obstruction (8%). Other APEs included reintubation (4.7%) and laryngobronchospasm (3.3%). APE was associated with multiple intubation attempts (odds ratio [OR] = 6.6, P = .001), structural or functional airway anomaly (OR = 4.5, P < .001), operation >160 minutes (OR = 2.2, P = .04), narcotic dose >0.3 mg/kg (OR = 2.3, P = .03), inexperienced provider (OR = 2.1, P = .02), and no paralytic reversal administration (OR = 2.0, P = .049); weight between 9 and 13 kg was protective (OR = 0.5, P = .04). Patients were risk-stratified according to individual profiles as low, average, high, or extreme risk (APE 2.5%-91.7%) with excellent risk discrimination (C-statistic = 0.79). Conclusions: APE incidence was 23.0% after palatoplasty, with a 37-fold higher incidence in extreme-risk patients. Individualized risk assessment tools may enhance perioperative clinical decision making to mitigate complications.
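The abstract reports per-factor odds ratios and a stratification from 2.5% (low risk) to 91.7% (extreme risk), but not the scoring formula itself. As a minimal sketch, assuming the individual risk profile is formed by multiplying the baseline odds by each applicable odds ratio (a common simplification; the study's actual model may differ), the idea can be illustrated as:

```python
# Illustrative sketch only: ORs are taken from the abstract, but the
# multiplicative-odds combination rule is an assumption, not the study's model.
ODDS_RATIOS = {
    "multiple_intubation_attempts": 6.6,
    "airway_anomaly": 4.5,
    "operation_over_160_min": 2.2,
    "narcotic_dose_over_0.3_mg_kg": 2.3,
    "inexperienced_provider": 2.1,
    "no_paralytic_reversal": 2.0,
    "weight_9_to_13_kg": 0.5,  # protective factor
}

BASELINE_RISK = 0.025  # low-risk group APE incidence from the abstract (2.5%)


def ape_risk(present_factors):
    """Estimate APE probability by applying each present factor's OR
    to the baseline odds, then converting odds back to a probability."""
    odds = BASELINE_RISK / (1 - BASELINE_RISK)
    for factor in present_factors:
        odds *= ODDS_RATIOS[factor]
    return odds / (1 + odds)
```

Under this assumption, a patient with no factors stays at the 2.5% baseline, while stacking several risk factors pushes the estimate toward the extreme-risk range reported in the abstract.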


Author(s):  
Rikito Hisamatsu ◽  
Kei Horie

Container yards tend to be located along waterfronts that are exposed to a high risk of storm surges. However, risk assessment tools such as vulnerability functions and risk maps for containers have not been sufficiently developed. In addition, damage due to storm surges is expected to increase owing to global warming. This paper aims to assess the storm surge impact of global warming on containers located at three major bays in Japan. First, we developed vulnerability functions for containers against storm surges using an engineering approach. Second, we simulated storm surges at the three major bays using the SuWAT model, taking global warming into account. Finally, we developed storm surge risk maps for containers under current and future conditions using the vulnerability functions and simulated inundation depths. As a result, we quantified the impact of global warming on storm surge risk for containers.
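The abstract describes combining a vulnerability function with simulated inundation depth to produce a risk-map value, without giving the functional form. A common engineering choice for such vulnerability (fragility) curves is a lognormal CDF in inundation depth; the sketch below uses that form with placeholder parameters, purely to show how the two pieces compose (the paper's actual functions and values are not given here):

```python
import math


def fragility(depth_m, median=1.5, beta=0.5):
    """Lognormal fragility curve: probability of container damage as a
    function of inundation depth (m). median and beta are illustrative
    placeholders, not values from the paper."""
    if depth_m <= 0:
        return 0.0
    z = (math.log(depth_m) - math.log(median)) / beta
    # Standard normal CDF evaluated via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))


def expected_loss(depth_m, container_value):
    """Risk-map cell value: expected damage for containers at one location,
    given the simulated inundation depth there."""
    return fragility(depth_m) * container_value
```

A risk map is then just this expected loss evaluated over every grid cell of the simulated inundation field, once for the current climate and once for the warmed scenario.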


2012 ◽  
Vol 61 (4) ◽  
pp. 662-663 ◽  
Author(s):  
Ian M. Thompson ◽  
Donna P. Ankerst

2021 ◽  
pp. 103985622098403
Author(s):  
Marianne Wyder ◽  
Manaan Kar Ray ◽  
Samara Russell ◽  
Kieran Kinsella ◽  
David Crompton ◽  
...  

Introduction: Risk assessment tools are routinely used to identify patients at high risk. There is increasing evidence that these tools may not be sufficiently accurate to determine a person's risk of suicide, particularly for those being treated in community mental health settings. Methods: An outcome analysis of a case series of people who died by suicide between January 2014 and December 2016 and had contact with a public mental health service within 31 days prior to their death. Results: Of the 68 people who had contact, 70.5% had a formal risk assessment. Seventy-five per cent were classified as low risk of suicide. None were identified as being at high risk. While individual risk factors were identified, these did not allow differentiation between patients classified as low or medium risk. Discussion: Risk categorisation contributes little to patient safety. Given the dynamic nature of suicide risk, a risk assessment should focus on modifiable risk factors and safety planning rather than risk prediction. Conclusion: The predictive value of suicide risk assessment tools is limited. The risk classifications of high, medium or low could become the basis of denying necessary treatment to many and delivering unnecessary treatment to some, and should not be used for care allocation.
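The limited predictive value the conclusion points to follows directly from base rates: when the outcome is rare, even a screen with good sensitivity and specificity yields a low positive predictive value. A minimal sketch, using illustrative numbers rather than figures from this study:

```python
def ppv(sensitivity, specificity, base_rate):
    """Positive predictive value of a screen: the probability that a
    positive (e.g. 'high risk') result corresponds to a true case."""
    true_positives = sensitivity * base_rate
    false_positives = (1.0 - specificity) * (1.0 - base_rate)
    return true_positives / (true_positives + false_positives)


# Illustrative: a screen with 80% sensitivity and 90% specificity,
# applied to an outcome with a 0.5% base rate, flags mostly false positives.
low_base_rate_ppv = ppv(0.8, 0.9, 0.005)
```

With these assumed numbers, fewer than one in twenty people flagged as high risk would be a true case, which is why risk categorisation alone is a poor basis for allocating care.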


2021 ◽  
Vol 28 (1) ◽  
pp. e100251
Author(s):  
Ian Scott ◽  
Stacey Carter ◽  
Enrico Coiera

Machine learning algorithms are being used to screen and diagnose disease, prognosticate, and predict therapeutic responses. Hundreds of new algorithms are being developed, but whether they improve clinical decision making and patient outcomes remains uncertain. If clinicians are to use algorithms, they need to be reassured that key issues relating to their validity, utility, feasibility, safety and ethical use have been addressed. We propose a checklist of 10 questions that clinicians can ask of those advocating for the use of a particular algorithm, but which do not expect clinicians, as non-experts, to demonstrate mastery over what can be highly complex statistical and computational concepts. The questions are: (1) What is the purpose and context of the algorithm? (2) How good were the data used to train the algorithm? (3) Were there sufficient data to train the algorithm? (4) How well does the algorithm perform? (5) Is the algorithm transferable to new clinical settings? (6) Are the outputs of the algorithm clinically intelligible? (7) How will this algorithm fit into and complement current workflows? (8) Has use of the algorithm been shown to improve patient care and outcomes? (9) Could the algorithm cause patient harm? and (10) Does use of the algorithm raise ethical, legal or social concerns? We provide examples where an algorithm may raise concerns and apply the checklist to a recent review of diagnostic imaging applications. This checklist aims to assist clinicians in assessing algorithm readiness for routine care and to identify situations where further refinement and evaluation are required prior to large-scale use.

