Abstract P547: Cardiac Arrest in Patients Who Present With Subarachnoid Hemorrhage Is Associated With Worse Outcomes

Stroke ◽  
2021 ◽  
Vol 52 (Suppl_1) ◽  
Author(s):  
Gurkamal Kaur ◽  
Jose Dominguez ◽  
Rosa Semaan ◽  
Leanne Fuentes ◽  
Jonathan Ogulnick ◽  
...  

Introduction: Subarachnoid hemorrhage (SAH) can be a devastating neurologic condition that leads to cardiac arrest (CA) and, ultimately, poor clinical outcomes. The existing literature on this subject reveals a dismal prognosis but is based on relatively small sample sizes. We aimed to further elucidate the incidence, mortality rates, and outcomes of CA in patients with SAH using large-scale population data. Methods: A retrospective cohort study was conducted using the National Inpatient Sample (NIS) database. Patients hospitalized between 2008 and 2014 were included using International Classification of Diseases (ICD), 9th and 10th edition, codes for non-traumatic SAH, CA of unspecified cause, and CA due to other underlying conditions. For all regression analyses, a p-value of <0.05 was considered statistically significant. Results: We identified 170,869 patients hospitalized for non-traumatic SAH. Among these, the incidence of CA was 3.17%. The mortality rate in CA with SAH was 82% (vs 18.4% in non-CA, p<0.001). Of the survivors of CA with SAH, 15.7% were discharged to special facilities and services (vs 37.6% of non-CA patients, p<0.0001), and the remaining 2.3% were discharged home (vs 44.0%, p<0.0001). A higher NIS SAH severity score (NIS-SSS) was a predictor of CA in SAH patients (p<0.0001). Patients treated with aneurysm clipping or coiling had lower odds of CA (p<0.0001). Conclusion: This study confirms the poor prognosis of patients with CA and SAH using large-scale population data. Patients who underwent aneurysm treatment showed a lower association with CA. The findings presented here provide useful data for clinical decision making and for guiding goals-of-care discussions with family members. Further studies may identify interventions and protocols for the treatment of these severely ill patients.
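As a rough illustration of the cohort-building step described in the Methods, the sketch below flags NIS-style records by ICD code and regresses cardiac arrest on a severity score and treatment indicators. It is a minimal sketch, not the authors' code: the file name, the column names (dx_codes, nis_sss, clipping, coiling), and the exact ICD prefixes are assumptions for illustration only.

```python
# Minimal sketch, not the authors' code: identify an NIS-style SAH cohort by
# ICD code and test severity/treatment as predictors of cardiac arrest.
# File name, column names, and ICD prefixes are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

SAH_PREFIXES = ("430", "I60")   # illustrative ICD-9/10 prefixes: non-traumatic SAH
CA_PREFIXES = ("4275", "I46")   # illustrative ICD-9/10 prefixes: cardiac arrest

def has_code(dx_list, prefixes):
    """True if any diagnosis code on the record starts with a given prefix."""
    return any(code.startswith(prefixes) for code in dx_list)

df = pd.read_parquet("nis_2008_2014.parquet")   # hypothetical extract; dx_codes = list of ICD codes
cohort = df[df["dx_codes"].apply(has_code, prefixes=SAH_PREFIXES)].copy()
cohort["ca"] = cohort["dx_codes"].apply(has_code, prefixes=CA_PREFIXES).astype(int)

print(f"CA incidence among SAH admissions: {cohort['ca'].mean():.2%}")
# Logistic regression: severity score and aneurysm treatment vs. cardiac arrest
fit = smf.logit("ca ~ nis_sss + clipping + coiling", data=cohort).fit()
print(fit.summary())
```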

2021 ◽  
Vol 28 (1) ◽  
pp. e100251
Author(s):  
Ian Scott ◽  
Stacey Carter ◽  
Enrico Coiera

Machine learning algorithms are being used to screen and diagnose disease, prognosticate and predict therapeutic responses. Hundreds of new algorithms are being developed, but whether they improve clinical decision making and patient outcomes remains uncertain. If clinicians are to use algorithms, they need to be reassured that key issues relating to their validity, utility, feasibility, safety and ethical use have been addressed. We propose a checklist of 10 questions that clinicians can ask of those advocating for the use of a particular algorithm, but which do not expect clinicians, as non-experts, to demonstrate mastery over what can be highly complex statistical and computational concepts. The questions are: (1) What is the purpose and context of the algorithm? (2) How good were the data used to train the algorithm? (3) Were there sufficient data to train the algorithm? (4) How well does the algorithm perform? (5) Is the algorithm transferable to new clinical settings? (6) Are the outputs of the algorithm clinically intelligible? (7) How will this algorithm fit into and complement current workflows? (8) Has use of the algorithm been shown to improve patient care and outcomes? (9) Could the algorithm cause patient harm? and (10) Does use of the algorithm raise ethical, legal or social concerns? We provide examples where an algorithm may raise concerns and apply the checklist to a recent review of diagnostic imaging applications. This checklist aims to assist clinicians in assessing algorithm readiness for routine care and in identifying situations where further refinement and evaluation are required prior to large-scale use.
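To make checklist item (4) concrete, here is a small, self-contained sketch of two properties a clinician might ask to see reported for a risk-prediction algorithm: discrimination (AUC) and calibration. The data below are synthetic stand-ins; in practice y_true and y_prob would come from the algorithm's held-out validation set.

```python
# Illustrative sketch for checklist item 4 ("How well does the algorithm
# perform?"): discrimination and calibration on a held-out set.
# Synthetic data; real y_true/y_prob come from the algorithm under review.
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
y_prob = rng.uniform(0, 1, 1000)      # the model's predicted risks
y_true = rng.binomial(1, y_prob)      # outcomes consistent with those risks

print(f"AUC:   {roc_auc_score(y_true, y_prob):.3f}")      # discrimination
print(f"Brier: {brier_score_loss(y_true, y_prob):.3f}")   # overall accuracy of the risks
prob_true, prob_pred = calibration_curve(y_true, y_prob, n_bins=10)
for p, f in zip(prob_pred, prob_true):                    # calibration: predicted vs. observed
    print(f"predicted {p:.2f} -> observed {f:.2f}")
```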


Author(s):  
Elizabeth A. Simpson ◽  
David A. Skoglund ◽  
Sarah E. Stone ◽  
Ashley K. Sherman

Objective This study aimed to determine the factors associated with a positive infant drug screen and to create a shortened screen and a prediction model. Study Design This is a retrospective cohort study of all infants who were tested for drugs of abuse from May 2012 through May 2014. The primary outcome was a positive infant urine or meconium drug test. Multivariable logistic regression was used to identify independent risk factors. A combined screen was created, and test characteristics were analyzed. Results Among the 3,861 live births, a total of 804 infants underwent drug tests. Variables associated with having a positive infant test were (1) a positive maternal urine test, (2) substance use during pregnancy, (3) one or fewer prenatal visits, and (4) remote substance abuse; each p-value was less than 0.0001. A model with an indicator for having at least one of these four predictors had a sensitivity of 94% and a specificity of 69%. Application of this screen to our population would have decreased drug testing by 57%. No infants had a positive urine drug test when their mother's urine drug test was negative. Conclusion This simplified screen can guide clinical decision making for determining which infants should undergo drug testing. Infant urine drug tests may not be needed when a maternal drug test result is negative.
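A minimal sketch of the combined screen's logic follows: an infant is flagged for testing if any of the four risk factors is present, and sensitivity and specificity are computed against the infant test result as the reference standard. The Case fields and the encoding of records this way are illustrative assumptions, not the study's instrument.

```python
# Minimal sketch (illustrative, not the study's instrument): flag an infant
# if any of the four risk factors is present, then compute test
# characteristics against the infant drug-test result.
from dataclasses import dataclass

@dataclass
class Case:
    maternal_urine_positive: bool
    substance_use_in_pregnancy: bool
    at_most_one_prenatal_visit: bool
    remote_substance_abuse: bool
    infant_test_positive: bool          # reference standard (urine or meconium)

def screen_positive(c: Case) -> bool:
    """Combined screen: positive if at least one of the four predictors is present."""
    return (c.maternal_urine_positive or c.substance_use_in_pregnancy
            or c.at_most_one_prenatal_visit or c.remote_substance_abuse)

def characteristics(cases: list[Case]) -> tuple[float, float]:
    """Return (sensitivity, specificity) of the screen on a set of cases."""
    tp = sum(screen_positive(c) and c.infant_test_positive for c in cases)
    fn = sum(not screen_positive(c) and c.infant_test_positive for c in cases)
    tn = sum(not screen_positive(c) and not c.infant_test_positive for c in cases)
    fp = sum(screen_positive(c) and not c.infant_test_positive for c in cases)
    return tp / (tp + fn), tn / (tn + fp)

# Tiny usage example with two made-up records:
cases = [Case(True, False, False, False, True), Case(False, False, False, False, False)]
sens, spec = characteristics(cases)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```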


Circulation ◽  
2020 ◽  
Vol 142 (Suppl_3) ◽  
Author(s):  
Justinn Tanem ◽  
John Scott ◽  
George M Hoffman ◽  
Robert A Niebler ◽  
Aoy Tomita-Mitchell ◽  
...  

Introduction: Preoperative risk stratification in congenital cardiac surgery includes patient- and procedure-related factors, which may be used in clinical decision making as well as in program performance evaluation. Despite these tools, unidentified factors contribute to wide variation in outcomes both within and between centers. Identification of latent physiologic risk factors may strengthen predictive models. Hypothesis: Total cell-free DNA (TCF) functions as a biomarker for cellular injury as well as a pro-inflammatory cytokine. We hypothesized that elevated preoperative TCF would be associated with poor outcome following pediatric cardiac surgery requiring cardiopulmonary bypass (CPB). Methods: Prospective observational study of children age <18 yr and weight >3 kg undergoing planned CPB surgery. The Children’s Wisconsin Institutional Review Board approved the protocol. A serum TCF sample was obtained after induction of anesthesia, prior to surgical incision. The primary outcome measure was a composite of postoperative cardiac arrest, ECMO, or death (CAED). The association of outcome with TCF was assessed by logistic regression, with a cutpoint chosen by ROC curve exploration. Odds ratios with 95% CI were calculated. Results: Data were available for 117 patients, median age 0.9 years (range 0–17.4), median weight 7.8 kg (range 3.2–98). The primary outcome (CAED) was met in 6/117 (5.1%). Table 1 summarizes characteristics of patients with and without CAED. Risk of CAED was 2% with TCF <20 ng/ml and 27% with TCF >20 ng/ml (OR = 18.2, 95% CI 2.2–212, p<0.01). Elevated TCF was associated with fewer hospital-free days (GLM, p<0.01). Data in the table are reported as median [IQR]. Conclusions: Preoperative TCF has an important association with postoperative cardiac arrest, ECMO, and death. Alternative or intensified treatment strategies could be considered in patients with elevated preoperative TCF.
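The analysis pattern described (an ROC-guided cutpoint, then an odds ratio with a 95% CI) can be sketched as below. This is a hedged illustration on synthetic data, not the study's analysis: the biomarker distribution and outcome model are invented for the example.

```python
# Hedged illustration on synthetic data (not the study's analysis): pick a
# biomarker cutpoint from the ROC curve (Youden's J), then estimate the odds
# ratio for the outcome above vs. below that cutpoint.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
tcf = rng.lognormal(mean=2.5, sigma=0.8, size=117)             # ng/ml, synthetic
outcome = rng.binomial(1, 1 / (1 + np.exp(-(tcf - 20) / 10)))  # synthetic composite outcome

fpr, tpr, thresholds = roc_curve(outcome, tcf)
cut = thresholds[np.argmax(tpr - fpr)]                         # Youden's J cutpoint

# 2x2 table above/below the cutpoint (a toy example; a real analysis
# should guard against empty cells before taking the ratio)
a = np.sum((tcf > cut) & (outcome == 1)); b = np.sum((tcf > cut) & (outcome == 0))
c = np.sum((tcf <= cut) & (outcome == 1)); d = np.sum((tcf <= cut) & (outcome == 0))
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)                     # Woolf SE of log(OR)
lo, hi = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"cutpoint = {cut:.1f} ng/ml, OR = {odds_ratio:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```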


2005 ◽  
Vol 28 (2) ◽  
pp. 90-96 ◽  
Author(s):  
C. Pollock

Peritoneal sclerosis is an almost invariable consequence of peritoneal dialysis. In most circumstances it is “simple” sclerosis, manifesting clinically as an increasing peritoneal transport rate and loss of ultrafiltration capacity. In contrast, encapsulating peritoneal sclerosis is a life-threatening and usually irreversible condition, associated with bowel obstruction, malnutrition and death. It is unknown whether common etiological factors underlie the development of these two clinically and pathologically distinct forms of peritoneal sclerosis. The majority of studies to date have investigated factors that contribute to “simple” sclerosis, although it remains possible that similar mechanisms are amplified in patients who develop encapsulating peritoneal sclerosis. The cellular elements that promote peritoneal sclerosis include mesothelial cells, peritoneal fibroblasts and inflammatory cells. Factors that stimulate these cells to promote peritoneal fibrosis and neoangiogenesis, both inherent in the development of peritoneal sclerosis, include cytokines induced by exposure of the peritoneal membrane to high concentrations of glucose, advanced glycation of the peritoneal membrane and oxidative stress. Cumulative exposure to bioincompatible dialysate is likely to play an etiological role, as the duration of dialysis correlates with the likelihood of developing peritoneal sclerosis. Indeed, peritoneal dialysis using more biocompatible fluids has been shown to reduce the development of peritoneal sclerosis. The individual contribution of the factors implicated in the development of peritoneal sclerosis will only be determined by large-scale peritoneal biopsy registries, which will be able to prospectively incorporate clinical and histological data and support clinical decision making.


2020 ◽  
Vol 3 (Supplement_1) ◽  
pp. 28-30
Author(s):  
A Kundra ◽  
T Ritchie ◽  
M Ropeleski

Abstract Background Fecal calprotectin (FC) is helpful in distinguishing functional from organic bowel disease. It has also proven useful in monitoring disease activity in inflammatory bowel disease (IBD). The uptake of its use in clinical practice has increased considerably, though access varies significantly. Studies exploring current practice patterns among GI specialists, and how to optimize its use, are limited. In 2017, Kingston Health Sciences Centre (KHSC) began funding FC testing at no cost to patients. Aims We aimed to better understand the practice patterns of gastroenterologists in IBD patients where there is in-house access to FC assays, and to generate hypotheses regarding its optimal use in IBD monitoring. We hypothesized that FC is not being used in a regular manner for the monitoring of IBD patients. Methods A retrospective chart audit was done on all KHSC patients who had FC testing completed from 2017 to 2018. Qualitative data were gathered from dictated reports using rigorous, predefined definitions of the indication for the test, change in clinical decision making, and frequency pattern of testing. Specifically, a change in the decision to use colonoscopy or medical therapy was coded only if the dictated note made clear that the decision hinged largely on the FC result. Frequency of testing was based on test order date. Reactive testing was coded as tests ordered to confirm a clinical flare. Variable testing was coded where monitoring intervals varied by more than 3 months and crossed over the other set frequency codes. Quantitative data regarding FC test values and dates were also collected. These data were then analyzed using descriptive statistics. Results Of the 834 patients in our study, 7 were under 18 years old and were excluded. 562 (67.34%) of these patients had a pre-existing diagnosis of IBD: 193 (34%) with ulcerative colitis (UC) and 369 (66%) with Crohn’s disease (CD). FC testing changed the clinician’s decision regarding medical therapy in 12.82% of cases and the decision to use colonoscopy in 13.06% of cases for all comers. Of the FC tests, 79.8% were sent in a variable frequency pattern and 2.68% with reactive intent. The remaining 17.5% were monitored in a regular pattern: 8.57% of patients had their FC monitored at regular intervals greater than 6 months, 7.68% every 6 months, and 1.25% at intervals of less than 6 months. The average FC level was 356.2 µg/ml in patients with UC and 330.6 µg/ml in those with CD. The mean interval from first to second test was 189.6 days. Conclusions FC testing changed clinical decisions regarding medical therapy and the use of colonoscopy about 13% of the time. FC testing was done variably 79.8% of the time, whereas 17.5% of patients had a regular FC monitoring schedule. The optimal monitoring interval for IBD flares using FC, for maximal clinical benefit, has yet to be determined; large-scale studies will be required to answer this question. Funding Agencies None
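As a rough sketch of the frequency-coding idea (the study applied its rules by chart review, not by code), the function below classifies a patient's FC tests as variable or regular from the gaps between order dates. The more-than-3-months variability rule comes from the abstract; the exact day thresholds for the regular categories are illustrative assumptions.

```python
# Rough sketch of the interval-coding idea; the study coded charts manually.
# The >3-month variability rule is from the abstract; the other thresholds
# are illustrative assumptions.
from datetime import date

def code_frequency(order_dates: list[date]) -> str:
    dates = sorted(order_dates)
    gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
    if not gaps:
        return "single test"
    if max(gaps) - min(gaps) > 90:        # intervals vary by more than 3 months
        return "variable"
    mean_gap = sum(gaps) / len(gaps)
    if mean_gap < 150:                    # clearly under 6 months
        return "regular, <6 months"
    if mean_gap <= 210:                   # roughly every 6 months
        return "regular, ~6 months"
    return "regular, >6 months"

print(code_frequency([date(2017, 1, 5), date(2017, 7, 2), date(2018, 1, 8)]))  # ~6-monthly
```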


2012 ◽  
Vol 30 (15_suppl) ◽  
pp. 4075-4075
Author(s):  
Mark Lewis ◽  
Harry H. Yoon ◽  
Qian Shi ◽  
Robert B. Diasio ◽  
Frank A. Sinicrope

4075 Background: EAC recurrence after surgery with curative intent is believed to carry a uniformly dismal prognosis that may discourage further therapy. To date, post-recurrence survival has not been examined in EAC. Our aim was to examine the site of recurrence in relation to outcome in EAC patients after surgery. Methods: Among EAC patients (N = 796) rendered margin-free at surgery performed at Mayo Clinic, most were T3-4 and lymph node (LN)-positive; none received neoadjuvant therapy. The subset of patients with documented disease recurrence (N = 401) formed the current study population. Cox models were used to examine overall survival (OS) post-recurrence. Results: Among patients with recurrence, median time to recurrence (TTR) was 11 months. Sites of recurrence included loco-regional (regional LNs, esophagogastric, anastomosis), chest, abdomen, and distant sites in 97 (27%), 144 (40%), 181 (50%), and 88 (24%) patients, respectively. Most recurrences (66%) were limited to one site. Chest-involved recurrence was significantly associated with improved OS (hazard ratio [HR] 0.78, P = .047), even after adjusting for TTR, number of recurrence sites, tumor pathology, and palliative chemotherapy. This result was confirmed when the multivariate analysis was restricted to patients who had only 1 recurrence site (Table) or who had biopsy-proven recurrence (P = .080). In separate models, abdomen-involved (HR = 1.3, P = .016) and bone-involved (HR = 1.6, P = .008) recurrences were independently associated with worse OS. Conclusions: Chest-involved recurrence of EAC independently predicts improved survival, whereas abdominal and bony sites of recurrence predict worse outcome. Primary tumor grade and node number were durable prognosticators after recurrence. These novel data provide useful prognostic information and have the potential to influence clinical decision making. [Table: see text]
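The modeling step can be illustrated with a short Cox proportional-hazards sketch. This is an assumed reconstruction, not the authors' code: the file name and column names (os_months, died, chest_site, and so on) are hypothetical placeholders for a one-row-per-recurrent-patient extract.

```python
# Assumed reconstruction, not the authors' code: a Cox proportional-hazards
# model of post-recurrence overall survival with recurrence-site indicators,
# adjusted for time to recurrence and number of sites. File and column names
# are hypothetical placeholders (one row per recurrent patient).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("eac_recurrence.csv")        # hypothetical extract, N = 401
cols = ["os_months", "died", "chest_site", "abdomen_site", "bone_site",
        "time_to_recurrence", "n_recurrence_sites"]

cph = CoxPHFitter()
cph.fit(df[cols], duration_col="os_months", event_col="died")
cph.print_summary()   # an HR < 1 on chest_site would mirror the reported finding
```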


2021 ◽  
Vol 72 ◽  
pp. 429-474
Author(s):  
Greg M. Silverman ◽  
Himanshu S. Sahoo ◽  
Nicholas E. Ingraham ◽  
Monica Lupei ◽  
Michael A. Puskarich ◽  
...  

Statistical modeling of outcomes based on a patient's presenting symptoms (symptomatology) can help deliver high-quality care and allocate essential resources, which is especially important during the COVID-19 pandemic. Patient symptoms are typically found in unstructured notes and are thus not readily available for clinical decision making. In an attempt to fill this gap, this study compared two methods for symptom extraction from Emergency Department (ED) admission notes. Both methods utilized a lexicon derived by expanding the Centers for Disease Control and Prevention's (CDC) Symptoms of Coronavirus list. The first method utilized a word2vec model to expand the lexicon using a dictionary mapping to the Unified Medical Language System (UMLS). The second method utilized the expanded lexicon as a rule-based gazetteer together with the UMLS. These methods were evaluated against a manually annotated reference (f1-score of 0.87 for the UMLS-based ensemble; 0.85 for the rule-based gazetteer with UMLS). Analyses associating the extracted symptoms, used as features, with various outcomes identified salient risks among COVID-19 patients, including an increased risk of in-hospital mortality for patients presenting with dyspnea (OR 1.85, p-value < 0.001). Disparities between English- and non-English-speaking patients were also identified, the most salient being a concerning finding of opposing risk signals between fatigue and in-hospital mortality (non-English: OR 1.95, p-value = 0.02; English: OR 0.63, p-value = 0.01). While the use of symptomatology for modeling of outcomes is not unique, unlike previous studies this study showed that models built using symptoms with the outcome of in-hospital mortality were not significantly different from models using data collected during an in-patient encounter (AUC of 0.9 with 95% CI of [0.88, 0.91] using only vital signs; AUC of 0.87 with 95% CI of [0.85, 0.88] using only symptoms). These findings indicate that prognostic models based on symptomatology could aid in extending COVID-19 patient care through telemedicine, replacing the need for in-person options. The methods presented in this study have potential for use in the development of symptomatology-based models for other diseases, including the study of Post-Acute Sequelae of COVID-19 (PASC).
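A toy sketch of the lexicon-expansion step (the first method) is below: train word2vec on note text and take each seed symptom's nearest neighbours as candidate lexicon entries, which the study would then vet through the UMLS mapping. The three mock notes are stand-ins for real ED notes, and the seed list is abbreviated.

```python
# Toy sketch of the lexicon-expansion step: train word2vec on note text and
# take nearest neighbours of each seed symptom as candidate lexicon entries
# (the study then mapped candidates through the UMLS). Mock corpus below.
from gensim.models import Word2Vec

notes = [
    "patient presents with fever cough and shortness of breath",
    "denies fever reports dry cough and dyspnea on exertion",
    "chief complaint fatigue myalgia and subjective fevers",
]
sentences = [n.split() for n in notes]
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1,
                 epochs=50, seed=0, workers=1)

seed_lexicon = ["fever", "cough", "fatigue"]        # CDC-style seed symptoms
for term in seed_lexicon:
    candidates = [w for w, _ in model.wv.most_similar(term, topn=3)]
    print(term, "->", candidates)                   # candidates to vet against the UMLS
```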


2020 ◽  
Author(s):  
Philippe Delmas ◽  
Assunta Fiorentino ◽  
Matteo Antonini ◽  
Severine Vuilleumier ◽  
Guy Stotzer ◽  
...  

Abstract Background: Patient safety is a top priority of the health professions. In emergency departments, the clinical decision making of triage nurses must be of the highest reliability. However, studies have repeatedly found that nurses over- or undertriage a considerable portion of cases, which can have major consequences for patient management. Among the factors that might explain this inaccuracy, workplace distractors have been pointed to without ever being the focus of specific investigation, owing in particular to the challenge of assessing them in care settings. Consequently, the use of a serious game reproducing a work environment comprising distractors would afford a unique opportunity to explore their impact on the quality of nurse emergency triage. Methods/Design: A factorial design will be used to test the acceptability and feasibility of a serious game created to explore the primary effects of distractors on emergency nurse triage accuracy. A sample of 80 emergency nurses will be randomised across three experimental groups exposed to different distractor conditions and one control group not exposed to distractors. Specifically, experimental group A will be exposed to noise distractors only; experimental group B to task interruptions only; and experimental group C to both types combined. Each group will engage in the serious game to complete 20 clinical vignettes in two hours. For each clinical vignette, a gold standard will be determined by experts. Pre-tests will be planned with clinicians and specialised emergency nurses to examine their interaction with the first version of the serious game. Discussion: This study will shed light on the acceptability and feasibility of a serious game in the field of emergency triage. It will also advance knowledge of the possible effects of exposure to common environmental distractors on nurse triage accuracy. Finally, this pilot study will inform planned large-scale studies of emergency nurse practice using serious games.
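The allocation described (80 nurses split evenly across three distractor conditions and a control) can be sketched in a few lines. This is a simple illustration with a fixed seed and hypothetical nurse identifiers; a real trial would use concealed allocation.

```python
# Simple sketch of the allocation described: 80 nurses randomised evenly
# across three distractor conditions and a control arm. Fixed seed for
# reproducibility; nurse IDs are hypothetical.
import random

arms = ["control", "noise", "interruptions", "noise+interruptions"]
nurse_ids = [f"nurse_{i:02d}" for i in range(1, 81)]    # hypothetical identifiers

rng = random.Random(2020)
slots = arms * (len(nurse_ids) // len(arms))            # 20 slots per arm
rng.shuffle(slots)
assignment = dict(zip(nurse_ids, slots))
print({arm: sum(v == arm for v in assignment.values()) for arm in arms})
# -> {'control': 20, 'noise': 20, 'interruptions': 20, 'noise+interruptions': 20}
```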


2018 ◽  
Vol 2 (2) ◽  

Background: Bone marrow aspiration and biopsy is one of the most important diagnostic tools for the evaluation of undifferentiated fever. The positive yield of these samples is highly specific, providing additional evidence for clinical decision making in undifferentiated febrile cases. Against this background, we evaluated the bone marrow results of undifferentiated febrile cases over the last five years at B.P. Koirala Institute of Health Sciences, Dharan, Nepal. The objective of the study was to measure the sensitivity of bone marrow investigations in an undifferentiated febrile cohort. Methods: A retrospective study was performed from January 2010 to December 2014, evaluating bone marrow reports. Completed request forms and the histopathological reports of the bone marrow specimens were reviewed. Data were analyzed using SPSS 17, and a p-value of <0.05 was considered significant. Results: Over the five-year period, 319 specimens were collected for bone marrow biopsy, of which 27% were requested for undifferentiated fever. The mean and median ages of patients undergoing biopsy were 35 and 31 years, respectively. Among all biopsy samples, 59% were adequate for evaluation; however, among the undifferentiated febrile cases, only 45% of biopsy samples were adequate for evaluation. The sensitivity of bone marrow biopsy was 34%. There were 714 bone marrow aspiration samples, of which 84% were adequate for evaluation. The most common etiological diagnosis of undifferentiated fever on marrow evaluation was visceral leishmaniasis (53%). The sensitivity of bone marrow aspiration, and of aspiration or biopsy, for visceral leishmaniasis was 95% and 98%, respectively (p = 0.03). Conclusion: Bone marrow aspiration is highly sensitive and specific for the diagnosis of visceral leishmaniasis among undifferentiated fever cases in the tropics of Nepal.
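The sensitivity figures reported here reduce to a simple calculation: true positives over all reference-positive cases. The sketch below shows the arithmetic; the counts are illustrative, not the study's raw data.

```python
# Worked sketch of the sensitivity arithmetic; the counts are illustrative,
# not the study's raw data.
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Sensitivity = TP / (TP + FN): the share of true cases the test detects."""
    return true_pos / (true_pos + false_neg)

# e.g. if 38 of 40 visceral leishmaniasis cases had a positive aspirate:
print(f"{sensitivity(38, 2):.0%}")   # -> 95%, matching the reported aspirate sensitivity
```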

