Journal of the American Heart Association
Latest Publications

Published by Ovid Technologies (Wolters Kluwer) - American Heart Association
ISSN 2047-9980
Updated Friday, 03 December 2021

Praloy Chakraborty, Adrian M. Suszko, Karthik Viswanathan, Kimia Sheikholeslami, Danna Spears, et al.

Background Unlike T‐wave alternans (TWA), the relation between QRS alternans (QRSA) and ventricular arrhythmia (VA) risk has not been evaluated in hypertrophic cardiomyopathy (HCM). We assessed microvolt QRSA/TWA in relation to HCM risk factors and late VA outcomes in HCM. Methods and Results Prospectively enrolled patients with HCM (n=130) with prophylactic implantable cardioverter‐defibrillators underwent digital 12‐lead ECG recordings during ventricular pacing (100–120 beats/min). QRSA/TWA was quantified using the spectral method. Patients were categorized as QRSA+ and/or TWA+ if sustained alternans was present in ≥2 precordial leads. The VA end point was appropriate implantable cardioverter‐defibrillator therapy over 5 years of follow‐up. QRSA+ and TWA+ occurred together in 28% of patients and alone in 7% and 7% of patients, respectively. QRSA magnitude increased with pacing rate (1.9±0.6 versus 6.2±2.0 µV; P=0.006). Left ventricular thickness was greater in QRSA+ than in QRSA− patients (22±7 versus 20±6 mm; P=0.035). Over 5 years of follow‐up, 17% of patients had VA. The annual VA rate was greater in QRSA+ than in QRSA− patients (5.8% versus 2.0%; P=0.006), with the QRSA+/TWA− subgroup having the greatest rate (13.3% versus 2.6%; P<0.001). In those with <2 risk factors, QRSA− patients had a low annual VA rate compared with QRSA+ patients (0.58% versus 7.1%; P=0.001). Separate Cox models revealed QRSA+ (hazard ratio [HR], 2.9 [95% CI, 1.2–7.0]; P=0.019) and QRSA+/TWA− (HR, 7.9 [95% CI, 2.9–21.7]; P<0.001) as the most significant VA predictors. TWA and HCM risk factors did not predict VA. Conclusions In HCM, microvolt QRSA is a novel, rate‐dependent phenomenon that can exist without TWA and is associated with greater left ventricular thickness. QRSA increases VA risk 3‐fold in all patients, whereas the absence of QRSA confers low VA risk in patients with <2 risk factors. Registration URL: ; Unique identifier: NCT02560844.
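The spectral method named above has a standard signal-processing core: aligned beats are stacked into a matrix, and spectral power at exactly 0.5 cycles/beat quantifies every-other-beat alternation. A minimal numpy sketch of that idea follows; the function name, the noise band, and the beat-matrix input format are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def spectral_alternans(beats, noise_band=(0.43, 0.46)):
    """Estimate microvolt alternans from a beat-aligned matrix.

    beats : (n_beats, n_samples) array, one row per aligned beat (µV).
    Returns (alternans_voltage_uV, k_score). Illustrative sketch only.
    """
    n_beats = beats.shape[0]
    # Remove the mean beat so only beat-to-beat variation remains
    series = beats - beats.mean(axis=0)
    # Power spectrum along the beat axis, amplitude-normalized,
    # averaged across intra-beat sample points
    spec = (np.abs(np.fft.rfft(series, axis=0)) / n_beats) ** 2
    freqs = np.fft.rfftfreq(n_beats, d=1.0)  # cycles/beat
    agg = spec.mean(axis=1)
    p_alt = agg[np.argmin(np.abs(freqs - 0.5))]  # alternans bin
    # Noise estimated from a reference band just below 0.5 cycles/beat
    mask = (freqs >= noise_band[0]) & (freqs <= noise_band[1])
    mu, sd = agg[mask].mean(), agg[mask].std()
    v_alt = np.sqrt(max(p_alt - mu, 0.0))
    k_score = (p_alt - mu) / sd if sd > 0 else 0.0
    return v_alt, k_score
```

With 128 aligned beats whose amplitude alternates by ±10 µV, this estimator returns an alternans voltage near 10 µV; the k score is a signal-to-noise ratio analogous to the alternans ratio conventionally used to call alternans sustained.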

Taha Sen, Jingwei Li, Brendon L. Neuen, Clare Arnott, Bruce Neal, et al.

Background Studies have suggested that sodium glucose co‐transporter 2 inhibitors exert anti‐inflammatory effects. We examined the association of baseline growth differentiation factor‐15 (GDF‐15), a marker of inflammation and cellular injury, with cardiovascular events, hospitalization for heart failure (HF), and kidney outcomes in patients with type 2 diabetes in the CANVAS (Canagliflozin Cardiovascular Assessment Study) and determined the effect of the sodium glucose co‐transporter 2 inhibitor canagliflozin on circulating GDF‐15. Methods and Results The CANVAS trial randomized 4330 people with type 2 diabetes at high cardiovascular risk to canagliflozin or placebo. The association between baseline GDF‐15 and cardiovascular (non‐fatal myocardial infarction, non‐fatal stroke, cardiovascular death), HF, and kidney (40% estimated glomerular filtration rate decline, end‐stage kidney disease, renal death) outcomes was assessed using multivariable adjusted Cox regression models. During a median follow‐up of 6.1 years (N=3549 participants with available samples), 555 cardiovascular, 129 HF, and 137 kidney outcomes occurred. Each doubling in baseline GDF‐15 was significantly associated with a higher risk of cardiovascular (hazard ratio [HR], 1.2; 95% CI, 1.0‒1.3), HF (HR, 1.5; 95% CI, 1.2‒2.0), and kidney (HR, 1.5; 95% CI, 1.2‒2.0) outcomes. Baseline GDF‐15 did not modify canagliflozin's effect on cardiovascular, HF, and kidney outcomes. Canagliflozin treatment modestly lowered GDF‐15 compared with placebo; however, GDF‐15 did not mediate the protective effect of canagliflozin on cardiovascular, HF, or kidney outcomes. Conclusions In patients with type 2 diabetes at high cardiovascular risk, higher GDF‐15 levels were associated with a higher risk of cardiovascular, HF, and kidney outcomes. Canagliflozin modestly lowered GDF‐15, but GDF‐15 reduction did not mediate the protective effect of canagliflozin.
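The "per doubling" hazard ratios above reflect a common modeling choice: the biomarker enters the Cox model on a log2 scale, so a 1-unit covariate increase corresponds to a doubling of GDF-15, and the hazard ratio for any fold-change follows by exponent rules. A small worked illustration with made-up numbers (the coefficient and concentrations are hypothetical, not CANVAS data):

```python
import numpy as np

# Entering log2(GDF-15) as the covariate makes exp(beta) the hazard
# ratio per doubling. All values below are illustrative only.
beta = np.log(1.2)                     # coefficient implied by HR = 1.2 per doubling
gdf15_low, gdf15_high = 800.0, 3200.0  # hypothetical concentrations (4-fold apart)
doublings = np.log2(gdf15_high / gdf15_low)  # 4-fold difference = 2 doublings
hr = np.exp(beta * doublings)          # relative hazard = 1.2 ** 2
```

The same log-scale trick is why a fixed hazard ratio "per doubling" composes multiplicatively across larger concentration differences.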

Anna Vögele, Michiel Jan van Veelen, Tomas Dal Cappello, Marika Falla, Giada Nicoletto, et al.

Background Helicopter emergency medical services personnel operating in mountainous terrain are frequently exposed to rapid ascents and provide cardiopulmonary resuscitation (CPR) in the field. The aim of the present trial was to investigate the quality of chest compression only (CCO)‐CPR after acute exposure to altitude under repeatable and standardized conditions. Methods and Results Forty‐eight helicopter emergency medical services personnel were divided into 12 groups of 4 participants; each group was assigned to perform 5 minutes of CCO‐CPR on manikins at 2 of 3 altitudes (200, 3000, and 5000 m) in a hypobaric chamber, in a randomized controlled single‐blind crossover design. Physiological parameters were continuously monitored; participants rated their performance and effort on visual analog scales. Generalized estimating equation models were fitted for variables of CPR quality (depth, rate, recoil, and effective chest compressions), and the effects of time, altitude, carryover, altitude sequence, sex, qualification, weight, preacclimatization, and their interactions were analyzed. Our trial showed a time‐dependent decrease in chest compression depth (P=0.036) after 20 minutes at altitude; chest compression depth fell below the recommended minimum of 50 mm after 60 to 90 seconds (49 [95% CI, 46–52] mm) of CCO‐CPR. Conclusions This trial showed a time‐dependent decrease in the quality of CCO‐CPR provided by helicopter emergency medical services personnel during acute exposure to altitude, a decrease that was not perceived by the providers themselves. Our findings suggest a reevaluation of the CPR guidelines for providers practicing at altitudes of 3000 m and higher. Mechanical CPR devices could help offset the decrease in CCO‐CPR quality during helicopter emergency medical services missions. Registration URL: ; Unique identifier: NCT04138446.

Sandeep Chandra Bollepalli, Rahul K. Sevakula, Wan‐Tai M. Au‐Yeung, Mohamad B. Kassab, Faisal M. Merchant, et al.

Background Accurate detection of arrhythmic events in the intensive care unit (ICU) is of paramount significance in providing timely care. However, traditional ICU monitors generate a high rate of false alarms, causing alarm fatigue. In this work, we develop an algorithm to improve life‐threatening arrhythmia detection in the ICU using a deep learning approach. Methods and Results This study involves a total of 953 independent life‐threatening arrhythmia alarms generated from the ICU bedside monitors of 410 patients. Specifically, we used the ECG (4 channels), arterial blood pressure, and photoplethysmograph signals to accurately detect the onset and offset of various arrhythmias, without prior knowledge of the alarm type. We used a hybrid convolutional neural network based classifier that fuses traditional handcrafted features with features automatically learned using convolutional neural networks. Further, the proposed architecture remains flexible enough to be adapted to various arrhythmic conditions as well as multiple physiological signals. Our hybrid convolutional neural network approach achieved superior performance compared with methods that used convolutional neural networks alone. We evaluated our algorithm using 5‐fold cross‐validation repeated 5 times and obtained an accuracy of 87.5%±0.5% and a score of 81%±0.9%. Independent evaluation of our algorithm on the publicly available PhysioNet 2015 Challenge database resulted in an overall classification accuracy and score of 93.9% and 84.3%, respectively, indicating its efficacy and generalizability. Conclusions Our method accurately detects multiple arrhythmic conditions. Suitable translation of our algorithm may significantly improve the quality of care in ICUs by reducing the burden of false alarms.
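The fusion idea behind the hybrid classifier (learned convolutional features concatenated with handcrafted ones before a shared classifier head) can be sketched in a few lines of numpy. This is an illustrative toy, not the authors' architecture: the filter weights are random stand-ins for trained CNN kernels, and the handcrafted features are simple amplitude and variability statistics chosen for the example.

```python
import numpy as np

def conv1d_features(x, kernels):
    """Valid-mode 1-D convolution + ReLU + global max pooling.
    x: (n,) waveform; kernels: (n_filters, k) filters (stand-ins for
    trained CNN weights). Returns (n_filters,) learned-style features."""
    k = kernels.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(x, k)  # (n-k+1, k)
    return np.maximum(windows @ kernels.T, 0.0).max(axis=0)

def handcrafted_features(x):
    """Simple engineered stand-ins: amplitude, variability, roughness."""
    return np.array([x.mean(), x.std(), np.abs(np.diff(x)).mean()])

def hybrid_classify(x, kernels, W, b):
    """Fuse both feature sets, then softmax over alarm classes."""
    feats = np.concatenate([conv1d_features(x, kernels),
                            handcrafted_features(x)])
    logits = W @ feats + b
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()
```

In a full system, separate branches of this form would process each signal (ECG channels, arterial pressure, photoplethysmograph), and the classifier head would be trained jointly with the convolutional filters rather than using fixed weights.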

Fouad Chouairi, Aidan Milner, Sounok Sen, Avirup Guha, James Stewart, et al.

Background Patients with obesity and advanced heart failure face unique challenges on the path to heart transplantation. There are limited data on waitlist and transplantation outcomes in this population. We aimed to evaluate the impact of obesity on heart transplantation outcomes and to investigate the effects of the new Organ Procurement and Transplantation Network allocation system in this population. Methods and Results This cohort study of adult patients listed for heart transplant used the United Network for Organ Sharing database from January 2006 to June 2020. Patients were stratified by body mass index (BMI) (18.5–24.9, 25–29.9, 30–34.9, 35–39.9, and 40–55 kg/m²). Recipient and donor characteristics were analyzed. Outcomes analyzed included transplantation, waitlist death, and posttransplant death. BMI 18.5 to 24.9 kg/m² was used as the reference compared with progressive BMI categories. There were 46 645 patients listed for transplantation. Patients in higher BMI categories were less likely to be transplanted. The lowest likelihood of transplantation was in the highest BMI category, 40 to 55 kg/m² (hazard ratio [HR], 0.19 [0.05–0.76]; P=0.02). Patients within the 2 highest BMI categories had a higher risk of posttransplantation death (HR, 1.29; P<0.001 and HR, 1.65; P<0.001, respectively). Use of left ventricular assist devices among patients in obese BMI categories decreased after the allocation system change (P<0.001 for all). After the change, patients with obesity were more likely to undergo transplantation (BMI 30–35 kg/m²: HR, 1.31 [1.18–1.46]; P<0.001; BMI 35–55 kg/m²: HR, 1.29 [1.06–1.58]; P=0.01). Conclusions There was an inverse relationship between BMI and likelihood of heart transplantation. Higher BMI was associated with an increased risk of posttransplant mortality. Patients with obesity were more likely to undergo transplantation under the revised allocation system.

Nertila Zylyftari, Sidsel G. Møller, Mads Wissenberg, Frederik Folke, Carlo A. Barcella, et al.

Background It remains challenging to identify patients at risk of out‐of‐hospital cardiac arrest (OHCA). We aimed to examine health care contacts in patients before OHCA compared with the general population. Methods and Results Patients with OHCA of presumed cardiac cause were identified from the Danish Cardiac Arrest Registry (2001–2014), and their health care contacts (general practitioner [GP]/hospital) were examined up to 1 year before OHCA. In a case‐control study (1:9), OHCA contacts were compared with those of an age‐ and sex‐matched background population. Separately, patients with OHCA were examined by contact type (GP/hospital/both/no contact) within 2 weeks before OHCA. We included 28 955 patients with OHCA. The weekly percentage of patients in contact with a GP in the year before OHCA was constant (25%) until 1 week before OHCA, when it markedly increased (42%). The weekly percentage of patients in contact with hospitals in the year before OHCA gradually increased during the last 6 months (3.5%–6.6%), peaking in the second week (6.8%) before OHCA, with contacts mostly attributable to cardiovascular diseases (21%). In comparison, weekly contact rates among controls were lower: 13% for GP and 2% for hospital contacts (P<0.001). Within 2 weeks before OHCA, 57.8% of patients with OHCA had a health care contact, and these patients had more contacts with GPs (odds ratio [OR], 3.17; 95% CI, 3.09–3.26) and hospitals (OR, 2.32; 95% CI, 2.21–2.43) compared with controls. Conclusions The health care contacts of patients with OHCA nearly doubled leading up to the OHCA event, with more than half of patients having health care contacts within 2 weeks before arrest. This could have implications for future preventive strategies.

Lauge Vammen, Cecilie Munch Johannsen, Andreas Magnussen, Amalie Povlsen, Søren Riis Petersen, et al.

Background Systematic reviews have disclosed a lack of clinically relevant cardiac arrest animal models. The aim of this study was to develop a cardiac arrest model in pigs encompassing relevant cardiac arrest characteristics and clinically relevant post‐resuscitation care. Methods and Results We used 2 methods of myocardial infarction in conjunction with cardiac arrest. One group (n=7) had a continuous coronary occlusion, while another group (n=11) underwent balloon deflation during arrest and resuscitation, with re‐inflation after return of spontaneous circulation. A sham group was included (n=6). All groups underwent 48 hours of intensive care, including 24 hours of targeted temperature management. Pigs underwent invasive hemodynamic monitoring. Left ventricular function was assessed by pressure‐volume measurements. The proportion of pigs with return of spontaneous circulation was 43% in the continuous infarction group and 64% in the deflation‐reinflation group. In the continuous infarction group, 29% survived the entire protocol, while 55% survived in the deflation‐reinflation group. Both cardiac arrest groups needed vasopressor and inotropic support, and pressure‐volume measurements showed cardiac dysfunction. During rewarming, systemic vascular resistance decreased in both cardiac arrest groups. Median [25th; 75th percentile] troponin‐I 48 hours after return of spontaneous circulation was 88 973 ng/L [53 124; 99 740] in the continuous infarction group, 19 661 ng/L [10 871; 23 209] in the deflation‐reinflation group, and 1973 ng/L [1117; 1995] in the sham group. Conclusions This article describes a cardiac arrest pig model with myocardial infarction, targeted temperature management, and clinically relevant post‐cardiac arrest care. We demonstrate 2 methods of inducing myocardial ischemia with cardiac arrest, resulting in post‐cardiac arrest organ injury including cardiac dysfunction and cerebral injury.

Lingling Wu, Bharat Narasimhan, Kirtipal Bhatia, Kam S. Ho, Chayakrit Krittanawong, et al.

Background Despite advances in resuscitation medicine, the burden of in‐hospital cardiac arrest (IHCA) remains substantial. The impact of these advances and changes in resuscitation guidelines on IHCA survival remains poorly defined. To better characterize evolving patient characteristics and temporal trends in the nature and outcomes of IHCA, we undertook a 20‐year analysis of a national database. Methods and Results We analyzed the National Inpatient Sample (1999–2018) using International Classification of Diseases, Ninth Revision and Tenth Revision, Clinical Modification (ICD‐9‐CM and ICD‐10‐CM) codes to identify all adult patients suffering IHCA. Subgroup analysis was performed based on the type of cardiac arrest (ie, ventricular tachycardia/ventricular fibrillation or pulseless electrical activity‐asystole). An age‐ and sex‐adjusted model and a multivariable risk‐adjusted model were used to adjust for potential confounders. Over the 20‐year study period, a steady increase in rates of IHCA was observed, predominantly driven by pulseless electrical activity‐asystole arrest. Overall, survival rates increased by over 10% after adjusting for risk factors. In recent years (2014–2018), a similar trend toward improved survival is noted, though this only achieved statistical significance in the pulseless electrical activity‐asystole cohort. Conclusions Though the ideal quality metric in IHCA is meaningful neurological recovery, survival is the first step toward this. As overall IHCA rates rise, overall survival rates are improving in tandem. However, in more recent years, these improvements have plateaued, especially in the realm of ventricular tachycardia/ventricular fibrillation‐related survival. Future work is needed to better identify characteristics of IHCA nonsurvivors to improve resource allocation and health care policy in this area.

Maria Batsis, Lazaros Kochilas, Alvin J. Chin, Michael Kelleman, Eric Ferguson, et al.

Background For patients with hypoplastic left heart syndrome, digoxin has been associated with reduced interstage mortality after the Norwood operation, but the mechanism of this benefit remains unclear. Preservation of right ventricular (RV) echocardiographic indices has been associated with better outcomes in hypoplastic left heart syndrome. Therefore, we sought to determine whether digoxin use is associated with preservation of the RV indices in the interstage period. Methods and Results We conducted a retrospective cohort study of prospectively collected data using the public use data set from the Pediatric Heart Network Single Ventricle Reconstruction trial, conducted in 15 North American centers between 2005 and 2008. We included all patients who survived the interstage period and had echocardiographic data post‐Norwood and pre‐Glenn operations. We used multivariable linear regression to compare changes in RV parameters, adjusting for relevant covariates. Of 289 patients, 94 received digoxin at discharge post‐Norwood. There were no significant differences in baseline clinical characteristics or post‐Norwood echocardiographic RV indices (RV end‐diastolic volume indexed, RV end‐systolic volume indexed, ejection fraction) between the digoxin and no‐digoxin groups. At the end of the interstage period and after adjustment for relevant covariates, patients on digoxin had better preserved RV indices compared with those not on digoxin for the ΔRV end‐diastolic volume (11 versus 15 mL; P=0.026) and the ΔRV end‐systolic volume (6 versus 9 mL; P=0.009), as well as the indexed ΔRV end‐systolic volume (11 versus 20 mL/BSA^1.3; P=0.034). The change in the RV ejection fraction during the interstage period between the 2 groups did not reach statistical significance (−2 versus −5; P=0.056), although the trend continued to favor the digoxin group. Conclusions Digoxin use during the interstage period is associated with better preservation of the RV volume and tricuspid valve measurements, leading to less adverse remodeling of the single ventricle. These findings suggest a possible mechanism of action explaining digoxin's survival benefit during the interstage period.

Neil Keshvani, Benjamin Willis, David Leonard, Ang Gao, Laura DeFina, et al.

Background Data are sparse on the prospective associations between physical activity and incidence of lower extremity peripheral artery disease (PAD). Methods and Results Linking participant data from the CCLS (Cooper Center Longitudinal Study) to Medicare claims files, we studied 19 023 participants with objectively measured midlife cardiorespiratory fitness through maximal effort on the Balke protocol who survived to receive Medicare coverage between 1999 and 2009. The study aimed to determine the association between midlife cardiorespiratory fitness and incident PAD by fitting proportional hazards intensity models, adjusted for age, sex, body mass index, and other covariates, to the PAD failure time data. During 121 288 person‐years of Medicare follow‐up, we observed 805 PAD‐related hospitalizations/procedures among 19 023 participants (21% women, median age 50 years). Lower midlife fitness was associated with a higher rate of incident PAD in patients aged 65 years and older (low fit [quintile 1]: 11.4, moderate fit [quintiles 2 to 3]: 7.8, and high fit [quintiles 4 to 5]: 5.7 per 1000 person‐years). These findings persisted after multivariable adjustment for common predictors of incident PAD such as age, body mass index, hypertension, and diabetes. A lower risk of PAD was observed per greater metabolic equivalent task of fitness (hazard ratio [HR], 0.93 [95% CI, 0.90–0.97]; P<0.001). Among a subset of patients with an additional fitness assessment, each 1 metabolic equivalent task increase from baseline fitness was associated with a decreased risk of incident PAD (HR, 0.90 [95% CI, 0.82–0.99]; P=0.03). Conclusions Cardiorespiratory fitness in healthy, middle‐aged adults is associated with lower risk of incident PAD in later life, independent of other predictors of incident PAD.
