The Safety and Efficacy of Verapamil Versus Diltiazem Continuous Infusion for Acute Rate Control of Atrial Fibrillation at an Academic Medical Center

2020 ◽  
pp. 001857872092538
Author(s):  
Charlotte M. Forshay ◽  
J. Michael Boyd ◽  
Alan Rozycki ◽  
Jeffrey Pilz

Purpose: Due to critical shortages of intravenous diltiazem in 2018, the Ohio State University Wexner Medical Center (OSUWMC) adopted intravenous verapamil as an alternative. However, there is a paucity of data supporting the use of intravenous verapamil infusions for rate control in the acute treatment of atrial arrhythmias. The purpose of this study was to determine the safety and efficacy of intravenous verapamil as compared with diltiazem for the acute treatment of atrial arrhythmias. Methods: This retrospective, case-control study compared patients who received verapamil infusions between June 1 and September 30, 2018, with patients who received diltiazem infusions between June 1 and September 30, 2017, at OSUWMC. Patients were matched 1:1 based on age, sex, and the presence of comorbid heart failure with reduced ejection fraction (≤40%). Results: A total of 73 patients who received at least 1 verapamil infusion and 73 patients who received at least 1 diltiazem infusion met inclusion criteria. The composite need for inotrope or vasopressor was similar for both groups (5% with verapamil versus 4% with diltiazem, P = .999). The rate of hypotension was similar between groups (37% versus 33% experiencing a systolic blood pressure <90 mm Hg, P = .603, and 27% versus 23% experiencing a mean arterial pressure <65 mm Hg, P = .704), as was the rate of bradycardia (19% versus 18%, P = .831). The efficacy outcomes of this study were similar for both groups, with 89% of patients in the verapamil group and 90% of patients in the diltiazem group achieving a heart rate less than 110 beats per minute ( P = .785). Conclusion: Intravenous verapamil and diltiazem infusions had similar safety and efficacy outcomes when used for acute treatment of atrial arrhythmias in the institutional setting.
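The 1:1 matching on age, sex, and comorbid HFrEF described in the Methods can be sketched as a simple greedy matcher. The record fields and the 5-year age tolerance below are illustrative assumptions; the study does not describe its actual matching algorithm.

```python
# Hypothetical sketch of 1:1 case-control matching on age, sex, and
# comorbid heart failure with reduced ejection fraction (EF <= 40%).
# Field names and the age tolerance are illustrative assumptions.

def match_controls(cases, controls, age_tolerance=5):
    """Greedily pair each case with an unused control of the same sex
    and HFrEF status whose age differs by <= age_tolerance years."""
    unused = list(controls)
    pairs = []
    for case in cases:
        for ctrl in unused:
            if (ctrl["sex"] == case["sex"]
                    and ctrl["hfref"] == case["hfref"]
                    and abs(ctrl["age"] - case["age"]) <= age_tolerance):
                pairs.append((case, ctrl))
                unused.remove(ctrl)  # each control is used at most once
                break
    return pairs

cases = [{"id": 1, "age": 64, "sex": "F", "hfref": False}]
controls = [{"id": 9, "age": 66, "sex": "F", "hfref": False},
            {"id": 8, "age": 70, "sex": "M", "hfref": True}]
print(match_controls(cases, controls))  # pairs case 1 with control 9
```

Greedy matching is only one of several reasonable strategies (optimal or propensity-score matching are common alternatives) and is shown here purely for illustration.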

2021 ◽  
Author(s):  
Robert P Lennon ◽  
Theodore J Demetriou ◽  
M Fahad Khalid ◽  
Lauren Jodi Van Scoy ◽  
Erin L Miller ◽  
...  

ABSTRACT Introduction Virtually all hospitalized coronavirus disease-2019 (COVID-19) outcome data come from urban environments. The extent to which these findings are generalizable to other settings is unknown. COVID-19 data from large, urban settings may be particularly difficult to apply in military medicine, where practice environments are often semi-urban, rural, or austere. The purpose of this study is to compare presenting characteristics and outcomes of U.S. patients with COVID-19 in a nonurban setting to those of similar patients in an urban setting. Materials and Methods This is a retrospective case series of adults with laboratory-confirmed COVID-19 infection who were admitted to Hershey Medical Center (HMC), a 548-bed tertiary academic medical center in central Pennsylvania serving semi-urban and rural populations, from March 23, 2020, to April 20, 2020 (the first month of COVID-19 admissions at HMC). Patients and outcomes of this cohort were compared to published data on a cohort of similar patients from the New York City (NYC) area. Results The cohorts had similar age, gender, comorbidities, need for intensive care or mechanical ventilation, and most vital sign and laboratory studies. The NYC cohort had shorter hospital stays (4.1 versus 7.2 days, P < .001) but more African American patients (23% versus 12%, P = .02) and a higher prevalence of abnormal alanine (>60 U/L; 39.0% versus 5.9%, P < .001) and aspartate (>40 U/L; 58.4% versus 42.4%, P = .012) aminotransferase levels, oxygen saturation <90% (20.4% versus 7.2%, P = .004), and mortality (21% versus 1.4%, P < .001). Conclusions Hospitalists in nonurban environments should use caution when considering the generalizability of results from dissimilar regions. Further investigation is needed to explore the possibility of reproducible causative systemic elements that may help improve COVID-19-related outcomes.
Broader reports of these relationships across many settings will offer military medical planners greater ability to consider outcomes most relevant to their unique settings when considering COVID-19 planning.


2021 ◽  
pp. 000348942110212
Author(s):  
Nathan Kemper ◽  
Scott B. Shapiro ◽  
Allie Mains ◽  
Noga Lipschitz ◽  
Joseph Breen ◽  
...  

Objective: Examine the effects of a multi-disciplinary skull base conference (MDSBC) on the management of patients seen for skull base pathology in a neurotology clinic. Methods: Retrospective case review of patients who were seen in a neurotology clinic at a tertiary academic medical center for pathology of the lateral skull base and were discussed at an MDSBC between July 2019 and February 2020. Patient characteristics, nature of the skull base pathology, and pre- and post-MDSBC plans of care were categorized. Results: A total of 82 patients with pathology of the lateral skull base were discussed at an MDSBC during the 8-month study period. Fifty-four (65.9%) had a mass in the internal auditory canal and/or cerebellopontine angle, while 28 (34.1%) had other pathology of the lateral skull base. Forty-nine (59.8%) were new patients and 33 (40.2%) were established. The management plan changed in 11 patients (13.4%; 95% CI, 7.4-22.6) as a result of the skull base conference discussion. The planned management changed from some form of treatment to observation in 4 patients and from observation to some form of treatment in 4 patients. For 3 patients who underwent surgery, the planned approach was altered. Conclusions: For a significant proportion of patients with pathology of the lateral skull base, the management plan changed as a result of discussion at an MDSBC. Although participants of an MDSBC would agree on its importance, it is unclear how an MDSBC affects patient outcomes.
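The reported proportion with its 95% confidence interval (11 of 82, 13.4%) can be checked with a standard binomial interval. Below is a Wilson score interval sketch using only the standard library; it yields roughly 7.7%-22.4%, close to but not identical to the reported 7.4-22.6, so the authors may have used a different method (e.g., exact/Clopper-Pearson). This is illustrative only.

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - margin, center + margin

# 11 of 82 management plans changed after MDSBC discussion
low, high = wilson_ci(11, 82)
print(f"{11/82:.1%} (95% CI {low:.1%}-{high:.1%})")  # 13.4% (95% CI 7.7%-22.4%)
```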


2020 ◽  
Vol 9 (11) ◽  
Author(s):  
Mark A. Munger ◽  
Yusuf Olğar ◽  
Megan L. Koleske ◽  
Heather L. Struckman ◽  
Jessica Mandrioli ◽  
...  

Background Atrial fibrillation (AF) is a comorbidity associated with heart failure and catecholaminergic polymorphic ventricular tachycardia (CPVT). Despite the Ca2+-dependent nature of both of these pathologies, AF often responds to Na+ channel blockers. We investigated how targeting interdependent Na+/Ca2+ dysregulation might prevent focal activity and control AF. Methods and Results We studied AF in 2 models of Ca2+-dependent disorders, a murine model of CPVT and a canine model of chronic tachypacing-induced heart failure. Imaging studies revealed close association of neuronal-type Na+ channels (nNav) with ryanodine receptors and the Na+/Ca2+ exchanger. Catecholamine stimulation induced cellular and in vivo atrial arrhythmias in wild-type mice only during pharmacological augmentation of nNav activity. In contrast, catecholamine stimulation alone was sufficient to elicit atrial arrhythmias in CPVT mice and failing canine atria. Importantly, these were abolished by acute nNav inhibition (tetrodotoxin or riluzole), implicating Na+/Ca2+ dysregulation in AF. These findings were then tested in 2 nonrandomized retrospective cohorts: an amyotrophic lateral sclerosis clinic and an academic medical center. After adjustment for baseline characteristics, riluzole-treated patients had a significantly lower incidence of arrhythmias, including new-onset AF, supporting the preclinical results. Conclusions These data suggest that nNavs mediate Na+-Ca2+ crosstalk within nanodomains containing Ca2+ release machinery and thereby contribute to AF triggers. Disruption of this mechanism by nNav inhibition can effectively prevent AF arising from diverse causes.


2019 ◽  
Vol 6 (Supplement_2) ◽  
pp. S364-S365
Author(s):  
Amy P Taylor ◽  
Kelci E Coe ◽  
Kurt Stevenson ◽  
Lynn Wardlow ◽  
Zeinab El Boghdadly ◽  
...  

Abstract Background The Infectious Diseases Society of America’s guideline for implementing antibiotic (abx) stewardship recommends routine review of abx use. Several studies demonstrate that antibiotic time out (ATO) programs result in de-escalation, but there is limited evidence of improved outcomes. The aim of this study was to evaluate the clinical impact of ATO. Methods This retrospective study included hospitalized patients at The Ohio State University Wexner Medical Center receiving abx and a documented ATO from 7/1/2017 to 6/30/2018. ATO patients were matched by infection type to abx-treated patients lacking an ATO note. Patients were excluded if they were identified as a protected population, were in the ICU at the time of ATO, had an ATO within 48 hours of discharge, or had cystic fibrosis or febrile neutropenia. The primary objective was to evaluate abx optimization in patients with documented ATO vs. those without ATO. Abx optimization was defined as the selection of ideal abx based on guidelines, culture and susceptibility results, or expert opinion when undefined. Secondary outcomes included vancomycin-associated acute kidney injury (VAN-AKI), infection-related length of stay (LOS), all-cause 30-day readmission or mortality, abx days, and nosocomial C. difficile infection (CDI) rates. The Student t-test, Fisher’s exact test, and Wilcoxon rank-sum test were used as appropriate. Results One hundred ATO patients were compared with 100 non-ATO patients. Baseline characteristics and infection types were similar between groups. ATO resulted in improved optimization of abx selection (P = 0.05) and duration (P < 0.01) and reduced piperacillin/tazobactam (P/T) and vancomycin (VAN) utilization. No difference was observed in VAN-AKI (22 vs. 20%, P = 0.73), 30-day readmission (28 vs. 27%, P = 0.87), mortality (5 vs. 5%, P = 1), or CDI rates (6 vs. 5%, P = 0.76) in the ATO vs. non-ATO group. However, inpatient abx days (12 vs. 8, P = 0.004) and infection-related LOS (10 vs. 8, P = 0.0006) were shorter in the non-ATO group. Conclusion ATO improved optimization of abx selection and duration and reduced P/T and VAN use. Despite this, clinical outcomes were not improved. Disclosures All authors: No reported disclosures.
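One of the tests named in the Methods, Fisher's exact test for 2x2 tables, can be computed directly from the hypergeometric distribution. The sketch below (standard library only) uses the common two-sided definition of summing all tables no more probable than the observed one; which comparisons in the abstract used Fisher's exact versus other tests is not stated, so the example table is illustrative.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].
    Sums hypergeometric probabilities <= that of the observed table."""
    n = a + b + c + d
    r1, c1 = a + b, a + c          # fixed row and column margins
    total = comb(n, c1)
    def prob(x):                   # P(top-left cell == x) given the margins
        return comb(r1, x) * comb(n - r1, c1 - x) / total
    p_obs = prob(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))   # tolerance for float ties

# Illustrative: VAN-AKI counts from the abstract, 22/100 ATO vs. 20/100 non-ATO
print(fisher_exact_two_sided(22, 78, 20, 80))
```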


2020 ◽  
Vol 36 (3) ◽  
pp. 102-109
Author(s):  
Tahnia Alauddin ◽  
Sarah E. Petite

Background: Contraindications and precautions to metformin have limited inpatient use, and limited evidence exists evaluating metformin in hospitalized patients. Objective: This study aimed to determine the safety and efficacy of inpatient metformin use. Methods: This study was an observational, retrospective, cohort study at an academic medical center between June 1, 2016, and May 31, 2018. Hospitalized adults with type 2 diabetes mellitus receiving at least 1 metformin dose were included. The primary endpoint was to identify hospitalized patients using metformin with at least 1 contraindication or precautionary warning against use. Secondary endpoints included assessing metformin efficacy with glycemic control, characterizing adverse outcomes of inpatient metformin, and comparing the efficacy of metformin-containing regimens. Results: Two hundred patients were included. There were 126 instances of potentially unsafe use identified in 111 patients (55.5%). The most common reasons were age ≥65 years (47%), heart failure diagnosis (7.5%), and metformin within 48 hours of contrast (6%). Metformin was contraindicated in 2 patients (1%) with an estimated glomerular filtration rate ≤30 mL/min/1.73 m². The overall median daily blood glucose was 146 mg/dL (interquartile range [IQR] = 122-181). Patients were divided into 3 groups: metformin monotherapy, metformin plus oral antihyperglycemic therapy, and metformin plus insulin. The median daily blood glucoses were 129 mg/dL (IQR = 110-152), 154 mg/dL (IQR = 133-178), and 174 mg/dL (IQR = 142-203; P < .001), respectively. Two patients (1%) developed acute kidney injury, and no patients developed lactic acidosis. Conclusions: Metformin was associated with goal glycemic levels in hospitalized patients with no adverse outcomes. These results suggest the potential for metformin use in hospitalized, non–critically ill patients.
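The median-with-IQR summaries above can be reproduced with Python's statistics module. The glucose values below are illustrative, not study data.

```python
import statistics

def median_iqr(glucose_mg_dl):
    """Return the median and (Q1, Q3) interquartile range of a sample."""
    q1, q2, q3 = statistics.quantiles(glucose_mg_dl, n=4)  # quartiles
    return q2, (q1, q3)

# Illustrative daily blood glucose readings (mg/dL), not study data
daily_glucose = [110, 120, 130, 140, 150, 160, 170]
median, (q1, q3) = median_iqr(daily_glucose)
print(f"{median:.0f} mg/dL (IQR = {q1:.0f}-{q3:.0f})")  # 140 mg/dL (IQR = 120-160)
```

Note that `statistics.quantiles` defaults to the "exclusive" quantile method; other conventions (and other statistical packages) can give slightly different quartiles for small samples.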


2005 ◽  
Vol 133 (4) ◽  
pp. 551-555 ◽  
Author(s):  
Feodor Ung ◽  
Raj Sindwani ◽  
Ralph Metson

OBJECTIVES: Patients who fail endoscopic drainage procedures for chronic frontal sinusitis often require obliteration of the frontal sinus with abdominal fat. The purpose of this study was to evaluate an endoscopic technique for frontal sinus obliteration. STUDY DESIGN AND SETTING: Retrospective case-control. Thirty-five patients underwent frontal sinus obliteration using either an endoscopic (n = 10) or conventional osteoplastic flap (n = 25) technique from 1994 to 2004 at an academic medical center. RESULTS: Patients undergoing endoscopic obliteration had less blood loss (P = 0.006), decreased operative time (P = 0.016), and a shorter hospital stay (P = 0.003) compared to osteoplastic control subjects. All 3 surgical complications occurred in the control group. No patients required additional surgery for frontal sinusitis. CONCLUSIONS: The endoscopic approach to frontal sinus obliteration appears to reduce patient morbidity and should be considered in the surgical management of advanced frontal sinus disease. SIGNIFICANCE: This is the first report of a minimally invasive technique for frontal sinus obliteration.


2020 ◽  
pp. 019459982096915
Author(s):  
Jaxon W. Jordan ◽  
Christopher Spankovich ◽  
Scott P. Stringer

Objective The objective of our study was to review the current literature pertaining to perioperative opioids in sinus surgery and to determine the effects of implementing opioid stewardship recommendations in the setting of endoscopic sinonasal surgery. Study Design Single-institution retrospective case-control study. Setting Academic medical center outpatient area. Methods This retrospective review comprised 163 patients who underwent routine functional endoscopic sinus surgery, septoplasty, and/or inferior turbinate reduction before and after implementation of a standardized pain control regimen based on published opioid stewardship recommendations. The regimen consisted of an oral dose of gabapentin (400 mg) and acetaminophen (1000 mg) at least 30 minutes prior to surgery, absorbable nasal packing soaked in 0.5% tetracaine intraoperatively, and a postoperative regimen of acetaminophen and nonsteroidal anti-inflammatory medications. Tramadol tablets (50 mg) were prescribed postoperatively for breakthrough pain. The primary outcome measure for the study was the average number of hydrocodone equivalents (5 mg) prescribed before and after the new protocol. Results The average number of opioid medications prescribed, measured as hydrocodone equivalents (5 mg), decreased from 24.59 preprotocol to 18.08 after the initiation of the new perioperative regimen ( P < .001). There was no significant difference between the periods ( P > .05) in the number of postoperative phone calls regarding pain or in patient satisfaction scores. Conclusion Opioid stewardship recommendations can be instituted for sinonasal surgery, including multimodal perioperative pain management and substitution of tramadol for breakthrough pain, as a method to decrease the volume of opioids prescribed without increasing patient phone calls or affecting Press Ganey physician-recommendation scores.
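The outcome unit above, hydrocodone 5-mg equivalents, implies a conversion between opioids. A minimal sketch using the CDC morphine-milligram-equivalent (MME) conversion factors (hydrocodone 1.0, tramadol 0.1); whether the study used these exact factors is not stated, so treat this as an assumption.

```python
# Assumed CDC oral MME conversion factors; the study's exact method
# for computing hydrocodone equivalents is not described.
MME_PER_MG = {"hydrocodone": 1.0, "tramadol": 0.1, "oxycodone": 1.5}

def hydrocodone_5mg_equivalents(drug, dose_mg, n_tablets):
    """Convert a prescription to the number of 5-mg hydrocodone
    tablet equivalents (5 mg hydrocodone = 5 MME)."""
    total_mme = MME_PER_MG[drug] * dose_mg * n_tablets
    return total_mme / (MME_PER_MG["hydrocodone"] * 5)

# Twenty tramadol 50-mg tablets -> 100 MME -> 20 hydrocodone-5mg equivalents
print(hydrocodone_5mg_equivalents("tramadol", 50, 20))  # 20.0
```

Under these factors a 50-mg tramadol tablet and a 5-mg hydrocodone tablet are both 5 MME, which is one way the tramadol substitution could still lower the reported equivalent count: by prescribing fewer tablets overall.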


Author(s):  
Priya H. Dedhia ◽  
Kallie Chen ◽  
Yiqiang Song ◽  
Eric LaRose ◽  
Joseph R. Imbus ◽  
...  

Abstract Objective Natural language processing (NLP) systems convert unstructured text into analyzable data. Here, we describe the performance measures of NLP to capture granular details on nodules from thyroid ultrasound (US) reports and reveal critical issues with reporting language. Methods We iteratively developed NLP tools using the clinical Text Analysis and Knowledge Extraction System (cTAKES) and thyroid US reports from 2007 to 2013. We incorporated nine nodule features for NLP extraction. Next, we evaluated the precision, recall, and accuracy of our NLP tools using a separate set of US reports from an academic medical center (A) and a regional health care system (B) during the same period. Two physicians manually annotated each test-set report. A third physician then adjudicated discrepancies. The adjudicated “gold standard” was then used to evaluate NLP performance on the test-set. Results A total of 243 thyroid US reports contained 6,405 data elements. Inter-annotator agreement for all elements was 91.3%. Compared with the gold standard, overall recall of the NLP tool was 90%. NLP recall for thyroid lobe or isthmus characteristics was: laterality 96% and size 95%. NLP accuracy for nodule characteristics was: laterality 92%, size 92%, calcifications 76%, vascularity 65%, echogenicity 62%, contents 76%, and borders 40%. NLP recall for presence or absence of lymphadenopathy was 61%. Reporting style accounted for 18% of errors. For example, the word “heterogeneous” interchangeably referred to nodule contents or echogenicity. While nodule dimensions and laterality were often described, US reports described contents, echogenicity, vascularity, calcifications, borders, and lymphadenopathy only 46, 41, 17, 15, 9, and 41% of the time, respectively. Most nodule characteristics were equally likely to be described at hospital A compared with hospital B. Conclusions NLP can automate extraction of critical information from thyroid US reports.
However, ambiguous and incomplete reporting language hinders performance of NLP systems regardless of institutional setting. Standardized or synoptic thyroid US reports could improve NLP performance.
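The evaluation against the adjudicated gold standard reduces to precision and recall over extracted feature values. A minimal sketch, with (report, feature, value) triples and the example data entirely illustrative:

```python
# Sketch of the evaluation described above: comparing NLP-extracted
# nodule features against physician-adjudicated gold-standard
# annotations. Report IDs, feature names, and values are illustrative.

def precision_recall(extracted, gold):
    """extracted/gold: sets of (report_id, feature, value) triples."""
    tp = len(extracted & gold)                      # exact-match true positives
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

gold = {(1, "laterality", "right"), (1, "size_mm", 12),
        (2, "echogenicity", "hypoechoic")}
extracted = {(1, "laterality", "right"), (1, "size_mm", 12),
             (2, "echogenicity", "heterogeneous")}  # ambiguous wording error

p, r = precision_recall(extracted, gold)
print(round(p, 2), round(r, 2))  # 0.67 0.67
```

The mislabeled echogenicity triple illustrates the paper's point: when "heterogeneous" can describe either contents or echogenicity, an exact-match evaluation penalizes the extractor regardless of how well it parses the sentence.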


2016 ◽  
Vol 3 (suppl_1) ◽  
Author(s):  
Benjamen Pennell ◽  
Cory Hussain ◽  
Nicole Theodoropoulos ◽  
Julie Mangino ◽  
Crystal Tubbs ◽  
...  
