The Admit-AF risk score: a clinical risk score for predicting hospital admissions in patients with atrial fibrillation

2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
P Meyre ◽  
S Aeschbacher ◽  
S Blum ◽  
M Coslovsky ◽  
J.H Beer ◽  
...  

Abstract Background Patients with atrial fibrillation (AF) have a high risk of hospital admissions, but there is no validated prediction tool to identify those at highest risk. Purpose To develop and externally validate a risk score for all-cause hospital admissions in patients with AF. Methods We used a prospective cohort of 2387 patients with established AF as the derivation cohort. Independent risk factors were selected from a broad range of variables using the least absolute shrinkage and selection operator (LASSO) method fit to a Cox regression model. The developed risk score was externally validated in a separate prospective, multicenter cohort of 1300 AF patients. Results In the derivation cohort, 891 patients (37.3%) were admitted to the hospital over a median follow-up of 2.0 years. In the validation cohort, hospital admissions occurred in 719 patients (55.3%) during a median follow-up of 1.9 years. The most important predictors of admission were age (75–79 years: adjusted hazard ratio [aHR], 1.33; 95% confidence interval [95% CI], 1.00–1.77; 80–84 years: aHR, 1.51; 95% CI, 1.12–2.03; ≥85 years: aHR, 1.88; 95% CI, 1.35–2.61), prior pulmonary vein isolation (aHR, 0.74; 95% CI, 0.60–0.90), hypertension (aHR, 1.16; 95% CI, 0.99–1.36), diabetes (aHR, 1.38; 95% CI, 1.17–1.62), coronary heart disease (aHR, 1.18; 95% CI, 1.02–1.37), prior stroke/TIA (aHR, 1.28; 95% CI, 1.10–1.50), heart failure (aHR, 1.21; 95% CI, 1.04–1.41), peripheral artery disease (aHR, 1.31; 95% CI, 1.06–1.63), cancer (aHR, 1.33; 95% CI, 1.13–1.57), renal failure (aHR, 1.18; 95% CI, 1.01–1.38), and previous falls (aHR, 1.44; 95% CI, 1.16–1.78). A risk score with these variables was well calibrated and achieved a C-index of 0.64 in the derivation cohort and 0.59 in the validation cohort. Conclusions Multiple risk factors were associated with hospital admissions in AF patients. This prediction tool identifies high-risk patients who may benefit from preventive interventions. Funding Acknowledgement Type of funding source: Public grant(s) – National budget only. Main funding source(s): The Swiss National Science Foundation (Grant numbers 33CS30_1148474 and 33CS30_177520), the Foundation for Cardiovascular Research Basel, and the University of Basel.
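As an illustration of the selection step described above, a minimal sketch of an L1-penalised (LASSO) Cox fit is shown below in Python, using the lifelines library. The file name, column names (`followup_years`, `admitted`), and penalty strength are assumptions for demonstration, not details from the study.

```python
# A minimal sketch of LASSO variable selection inside a Cox model, as in
# the abstract's methods. File name, column names, and penalty strength
# are illustrative assumptions, not study details.
import pandas as pd
from lifelines import CoxPHFitter

# hypothetical table: follow-up time, admission indicator, candidate predictors
df = pd.read_csv("af_cohort.csv")

# l1_ratio=1.0 makes the elastic-net penalty pure LASSO; the penalizer
# would normally be tuned by cross-validation rather than fixed here.
cph = CoxPHFitter(penalizer=0.05, l1_ratio=1.0)
cph.fit(df, duration_col="followup_years", event_col="admitted")

# predictors whose coefficients survive shrinkage form the risk score
selected = cph.params_[cph.params_.abs() > 1e-4]
print(selected)
print("C-index:", round(cph.concordance_index_, 2))
```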

2018 ◽  
Vol 118 (09) ◽  
pp. 1556-1563 ◽  
Author(s):  
Doron Aronson ◽  
Varda Shalev ◽  
Rachel Katz ◽  
Gabriel Chodick ◽  
Diab Mutlak

Purpose We used large real-world data from community settings to develop and validate a 10-year risk score for new-onset atrial fibrillation (AF) and to calculate its net benefit performance. Methods A multivariable Cox proportional hazards model was used to estimate the effects of risk factors in the derivation cohort (n = 96,778) and to derive a risk equation. Measures of calibration and discrimination were calculated in the validation cohort (n = 48,404). Results Cumulative AF incidence rates for both the derivation and validation cohorts were 5.8% at 10 years. The final models included the following variables: age, sex, body mass index, history of treated hypertension, systolic blood pressure ≥ 160 mm Hg, chronic lung disease, history of myocardial infarction, history of peripheral arterial disease, heart failure, and history of an inflammatory disease. There was a 27-fold difference (1.0% vs. 27.2%) in AF risk between the lowest (–1) and the highest (9) sum score. The c-statistic was 0.743 (95% confidence interval [CI], 0.737–0.749) for the derivation cohort and 0.749 (95% CI, 0.741–0.759) in the validation cohort. The risk equation was well calibrated, with predicted risks closely matching observed risks. Decision curve analysis displayed a consistent positive net benefit of using the AF risk score for decision thresholds between 1% and 25% 10-year AF risk. Conclusion We provide a simple score for the prediction of 10-year risk of AF. The score can be used to select patients at highest risk for treatment of modifiable risk factors, monitoring for subclinical AF detection, or clinical trials of primary prevention of AF.
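The mapping from an integer sum score to a 10-year risk follows the usual Cox form, risk(10y) = 1 − S0(10)^exp(β·score). The sketch below illustrates this with made-up values for the baseline survival and per-point log-hazard; it is not the paper's published risk equation.

```python
# Hedged sketch: converting an integer sum score into a 10-year AF risk
# via the Cox baseline survival function. S0_10 and BETA_PER_POINT are
# invented values for illustration only.
import math

S0_10 = 0.97           # assumed baseline 10-year survival at score == 0
BETA_PER_POINT = 0.35  # assumed log-hazard increment per score point

def ten_year_af_risk(score: int) -> float:
    """10-year AF probability for an integer sum score."""
    return 1.0 - S0_10 ** math.exp(BETA_PER_POINT * score)

for s in (-1, 0, 4, 9):
    print(s, round(ten_year_af_risk(s), 3))
```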


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
J.M Leerink ◽  
H.J.H Van Der Pal ◽  
E.A.M Feijen ◽  
P.G Meregalli ◽  
M.S Pourier ◽  
...  

Abstract Background Childhood cancer survivors (CCS) treated with anthracyclines and/or chest-directed radiotherapy receive life-long echocardiographic surveillance to detect cardiomyopathy early. Current risk stratification and surveillance frequency recommendations are based on anthracycline and chest-directed radiotherapy dose. We assessed the added prognostic value of an initial left ventricular ejection fraction (EF) measurement at >5 years after cancer diagnosis. Patients and methods Echocardiographic follow-up was performed in asymptomatic CCS from the Emma Children's Hospital (derivation; n=299; median time after diagnosis, 16.7 years [interquartile range (IQR), 11.8–23.15]) and from the Radboud University Medical Center (validation; n=218; median time after diagnosis, 17.0 years [IQR, 13.0–21.7]) in the Netherlands. CCS with cardiomyopathy at baseline were excluded (n=16). The endpoint was cardiomyopathy, defined as a clinically significant decreased EF (EF<40%). The predictive value of the initial EF at >5 years after cancer diagnosis was analyzed with multivariable Cox regression models in the derivation cohort, and the model was validated in the validation cohort. Results The median follow-up after the initial EF was 10.9 years and 8.9 years in the derivation and validation cohorts, respectively, with cardiomyopathy developing in 11/299 (3.7%) and 7/218 (3.2%), respectively. Addition of the initial EF on top of anthracycline and chest radiotherapy dose increased the C-index from 0.75 to 0.85 in the derivation cohort and from 0.71 to 0.92 in the validation cohort (p<0.01). The model was well calibrated at 10-year predicted probabilities up to 5%. An initial EF between 40% and 49% was associated with a hazard ratio of 6.8 (95% CI 1.8–25) for development of cardiomyopathy during follow-up. For those with a predicted 10-year cardiomyopathy probability <3% (76.9% of the derivation cohort and 74.3% of the validation cohort), the negative predictive value was >99% in both cohorts. Conclusion The addition of the initial EF at >5 years after cancer diagnosis to anthracycline and chest-directed radiotherapy dose improves 10-year cardiomyopathy prediction in CCS. Our validated prediction model identifies low-risk survivors in whom the surveillance frequency may be reduced to every 10 years. Figure: Calibration in both cohorts. Funding Acknowledgement Type of funding source: Foundation. Main funding source(s): Dutch Heart Foundation
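The added prognostic value of the initial EF can be quantified exactly as the abstract describes: fit nested Cox models with and without the EF term and compare C-indices. The sketch below assumes a hypothetical dataset with the named columns.

```python
# Sketch: compare the discrimination of two nested Cox models, with and
# without baseline EF. File and column names are assumptions.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("ccs_cohort.csv")  # hypothetical survivor dataset

base = CoxPHFitter().fit(
    df[["followup_years", "cardiomyopathy", "anthracycline_dose", "chest_rt_dose"]],
    duration_col="followup_years", event_col="cardiomyopathy")
ext = CoxPHFitter().fit(
    df[["followup_years", "cardiomyopathy", "anthracycline_dose",
        "chest_rt_dose", "initial_ef"]],
    duration_col="followup_years", event_col="cardiomyopathy")

print("C-index without EF:", round(base.concordance_index_, 2))
print("C-index with EF:   ", round(ext.concordance_index_, 2))
```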


2020 ◽  
Vol 41 (35) ◽  
pp. 3325-3333 ◽  
Author(s):  
Taavi Tillmann ◽  
Kristi Läll ◽  
Oliver Dukes ◽  
Giovanni Veronesi ◽  
Hynek Pikhart ◽  
...  

Abstract Aims Cardiovascular disease (CVD) risk prediction models are used in Western European countries, but less so in Eastern European countries where rates of CVD can be two to four times higher. We recalibrated the SCORE prediction model for three Eastern European countries and evaluated the impact of adding seven behavioural and psychosocial risk factors to the model. Methods and results We developed and validated models using data from the prospective HAPIEE cohort study with 14 598 participants from Russia, Poland, and the Czech Republic (derivation cohort, median follow-up 7.2 years, 338 fatal CVD cases) and Estonian Biobank data with 4632 participants (validation cohort, median follow-up 8.3 years, 91 fatal CVD cases). The first model (recalibrated SCORE) used the same risk factors as in the SCORE model. The second model (HAPIEE SCORE) added education, employment, marital status, depression, body mass index, physical inactivity, and antihypertensive use. Discrimination of the original SCORE model (C-statistic 0.78 in the derivation and 0.83 in the validation cohorts) was improved in recalibrated SCORE (0.82 and 0.85) and HAPIEE SCORE (0.84 and 0.87) models. After dichotomizing risk at the clinically meaningful threshold of 5%, and when comparing the final HAPIEE SCORE model against the original SCORE model, the net reclassification improvement was 0.07 [95% confidence interval (CI) 0.02–0.11] in the derivation cohort and 0.14 (95% CI 0.04–0.25) in the validation cohort. Conclusion Our recalibrated SCORE may be more appropriate than the conventional SCORE for some Eastern European populations. The addition of seven quick, non-invasive, and cheap predictors further improved prediction accuracy.
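The two-category net reclassification improvement at the 5% threshold reported above can be computed with a short helper. The sketch below uses toy binary-outcome data and ignores censoring, so it illustrates the metric rather than reproducing the paper's survival-based calculation.

```python
# Minimal sketch of the two-category NRI at a 5% risk threshold:
# (net upward reclassification among events) +
# (net downward reclassification among non-events).
import numpy as np

def nri_at_threshold(risk_old, risk_new, event, thr=0.05):
    old_hi, new_hi = risk_old >= thr, risk_new >= thr
    ev, nev = event == 1, event == 0
    # events: proportion correctly moved up minus wrongly moved down
    nri_events = np.mean(new_hi[ev]) - np.mean(old_hi[ev])
    # non-events: proportion correctly moved down minus wrongly moved up
    nri_nonevents = np.mean(~new_hi[nev]) - np.mean(~old_hi[nev])
    return nri_events + nri_nonevents

rng = np.random.default_rng(0)  # toy data for demonstration only
event = rng.integers(0, 2, 1000)
risk_old = np.clip(rng.normal(0.04 + 0.02 * event, 0.02), 0, 1)
risk_new = np.clip(risk_old + rng.normal(0.005 * event, 0.01), 0, 1)
print(round(nri_at_threshold(risk_old, risk_new, event), 3))
```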


2020 ◽  
Vol 41 (21) ◽  
pp. 1988-1999 ◽  
Author(s):  
Neal A Chatterjee ◽  
Jani T Tikkanen ◽  
Gopi K Panicker ◽  
Dhiraj Narula ◽  
Daniel C Lee ◽  
...  

Abstract Aims To determine whether the combination of standard electrocardiographic (ECG) markers reflecting domains of arrhythmic risk improves sudden and/or arrhythmic death (SAD) risk stratification in patients with coronary heart disease (CHD). Methods and results The association between ECG markers and SAD was examined in a derivation cohort (PREDETERMINE; N = 5462) with adjustment for clinical risk factors, left ventricular ejection fraction (LVEF), and competing risk. Competing outcome models assessed the differential association of ECG markers with SAD and competing mortality. The predictive value of a derived ECG score was then validated (ARTEMIS; N = 1900). In the derivation cohort, the 5-year cumulative incidence of SAD was 1.5% [95% confidence interval (CI) 1.1–1.9] and 6.2% (95% CI 4.5–8.3) in those with a low- and high-risk ECG score, respectively (P for Δ < 0.001). A high-risk ECG score was more strongly associated with SAD than with non-SAD mortality (adjusted hazard ratios 2.87 vs. 1.38, respectively; P for Δ = 0.003), and the proportion of deaths due to SAD was greater in the high- vs. low-risk groups (24.9% vs. 16.5%, P for Δ = 0.03). Similar findings were observed in the validation cohort. The addition of ECG markers to a clinical risk factor model inclusive of LVEF improved indices of discrimination and reclassification in both derivation and validation cohorts, including correct reclassification of 28% of patients in the validation cohort [net reclassification improvement 28% (7–49%), P = 0.009]. Conclusion For patients with CHD, an externally validated ECG score was enriched for both absolute and proportional SAD risk and significantly improved risk stratification compared with standard clinical risk factors including LVEF. Clinical Trial Registration https://clinicaltrials.gov/ct2/show/NCT01114269 (ClinicalTrials.gov ID NCT01114269).
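Because SAD competes with non-SAD mortality, the cumulative incidence function is the relevant quantity here rather than one minus the Kaplan-Meier estimate. A minimal sketch with the Aalen-Johansen estimator from lifelines follows; the event codes and toy values are illustrative only, not study data.

```python
# Sketch of a competing-risk cumulative incidence estimate: SAD (code 1)
# with non-SAD death (code 2) as a competing event, via Aalen-Johansen.
import numpy as np
from lifelines import AalenJohansenFitter

durations = np.array([1.2, 2.5, 3.1, 4.0, 4.8, 5.0, 5.6, 3.3])
# 0 = censored, 1 = SAD, 2 = competing (non-SAD) death
events = np.array([1, 0, 2, 1, 0, 0, 2, 1])

ajf = AalenJohansenFitter()
ajf.fit(durations, events, event_of_interest=1)
print(ajf.cumulative_density_)  # cumulative incidence of SAD over time
```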


Blood ◽  
2007 ◽  
Vol 110 (11) ◽  
pp. 697-697 ◽  
Author(s):  
Roopen Arya ◽  
Shankaranarayana Paneesha ◽  
Aidan McManus ◽  
Nick Parsons ◽  
Nicholas Scriven ◽  
...  

Abstract Accurate estimation of risk for venous thromboembolism (VTE) may help clinicians assess prophylaxis needs. Only empirical algorithms and risk scores have been described; an empirical risk score ('Kucher') based on 8 VTE risk factors (cancer, prior VTE, hypercoagulability, surgery, age>75 yrs, BMI>29, bed rest, hormonal factor), deployed via electronic alerts, improved outcomes in hospitalized patients (N Engl J Med 2005;352:969–77). We wished to develop a multivariate regression model for VTE risk, based on Kucher, and validate its performance. The initial derivation cohort consisted of patients enrolled in 'VERITY', a multicentre VTE treatment registry, for whom the endpoint of VTE and all 8 risk factors were known. Initial univariate analysis (n=5928; 32.4% with a diagnosis of VTE) suggested VTE risk was not fully accounted for by the 8 factors; an additional 3 were added (leg paralysis, smoking, IV drug use [IVD]). The final derivation cohort was 5241 patients (32.0% with VTE) with complete risk data. The validation cohort (n=915) was derived from a database of 928 consecutively enrolled patients at a single DVT clinic. Model parameters were estimated in the statistical package 'R', using a stepwise selection procedure to choose the optimal number of main effects and pair-wise interactions. This showed that advanced age (estimated odds ratio [OR]=2.8, p<0.001), inpatient status (OR=3.0, p<0.001), surgery (OR=3.1, p<0.001), prior VTE (OR=2.9, p<0.001), leg paralysis (OR=3.8, p<0.001), cancer (OR=5.3, p<0.001), IVD (OR=14.3, p<0.001), smoking (OR=1.2, p=0.009), and thrombophilia (OR=2.8, p<0.001) increased the risk of VTE. Obesity (OR=0.7, p<0.001) increased the VTE risk only in patients with a hormonal factor (interaction OR=2.0, p=0.007). Backward stepwise regression showed prior VTE as the most important factor, followed by cancer, IVD, surgery, inpatient status, age, leg paralysis, hormonal factor, obesity, thrombophilia, and smoking. Expressing the parameter estimates in terms of probabilities defines a risk score model for VTE. Using the model, the area under the receiver operating characteristic (ROC) curve (AUC) was estimated as 0.720 (95% CI, 0.705–0.735), indicating a good diagnostic test that performed significantly better (p<0.001) than Kucher (AUC=0.617; 95% CI, 0.599–0.634). For the validation cohort, the AUC was estimated as 0.678 (95% CI, 0.635–0.721) for the model, which was not significantly different from the AUC for the full dataset used for model development, and 0.587 (95% CI, 0.542–0.632) for Kucher. This model to predict individual patient risk of VTE may contribute to decision making regarding prophylaxis in clinical practice.
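The modelling strategy described above, multivariable logistic regression with an obesity-by-hormonal-factor interaction evaluated by ROC AUC, can be sketched as follows. Variable names and the data file are assumptions, and the stepwise selection step is omitted for brevity.

```python
# Hedged sketch of a multivariable logistic model with an interaction
# term, scored by ROC AUC. Column names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

df = pd.read_csv("verity_registry.csv")  # hypothetical registry extract

# 'obesity * hormonal_factor' expands to both main effects plus their
# interaction, mirroring the obesity-by-hormonal-factor finding above.
model = smf.logit(
    "vte ~ age_over_75 + inpatient + surgery + prior_vte + leg_paralysis"
    " + cancer + ivdu + smoking + thrombophilia + obesity * hormonal_factor",
    data=df).fit()

print(model.summary())
print("AUC:", round(roc_auc_score(df["vte"], model.predict(df)), 3))
```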


2019 ◽  
Vol 75 (5) ◽  
pp. 980-986 ◽  
Author(s):  
Ming-Tuen Lam ◽  
Chor-Wing Sing ◽  
Gloria H Y Li ◽  
Annie W C Kung ◽  
Kathryn C B Tan ◽  
...  

Abstract Background To evaluate whether the common risk factors and risk scores (FRAX, QFracture, and Garvan) can predict hip fracture in the oldest old (defined as people aged 80 and older) and to develop an oldest-old-specific 10-year hip fracture prediction risk algorithm. Methods Subjects aged 80 years and older without history of hip fracture were studied. For the derivation cohort (N = 251, mean age = 83), participants were enrolled with a median follow-up time of 8.9 years. For the validation cohort (N = 599, mean age = 85), outpatients were enrolled with a median follow-up of 2.6 years. A five-factor risk score (the Hong Kong Osteoporosis Study [HKOS] score) for incident hip fracture was derived and validated, and its predictive accuracy was evaluated and compared with other risk scores. Results In the derivation cohort, the C-statistics were .65, .61, .65, .76, and .78 for FRAX with bone mineral density (BMD), FRAX without BMD, QFracture, Garvan, and the HKOS score, respectively. The category-less net reclassification index and integrated discrimination improvement of the HKOS score showed a better reclassification of hip fracture than FRAX and QFracture (all p < .001) but not Garvan, while Garvan, but not the HKOS score, showed a significant over-estimation of fracture risk (Hosmer–Lemeshow test p < .001). In the validation cohort, the HKOS score had a C-statistic of .81 and considerable agreement between expected and observed fracture risk in calibration. Conclusion The HKOS score can predict 10-year incident hip fracture among the oldest old in Hong Kong. The score may be useful in identifying the oldest old patients at risk of hip fracture in both community-dwelling and hospital settings.
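The Hosmer-Lemeshow test used above to flag the Garvan score's over-estimation groups subjects by predicted risk and compares observed with expected event counts. A minimal sketch, assuming simple binary outcomes and decile grouping:

```python
# Hosmer-Lemeshow-style calibration check: bin by predicted-risk deciles,
# then chi-square on observed vs expected event counts per bin.
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(pred, obs, g=10):
    order = np.argsort(pred)
    stat = 0.0
    for idx in np.array_split(order, g):
        e = pred[idx].sum()  # expected events in the bin
        o = obs[idx].sum()   # observed events in the bin
        n = len(idx)
        stat += (o - e) ** 2 / (e * (1 - e / n))
    return stat, chi2.sf(stat, g - 2)  # statistic and p-value

rng = np.random.default_rng(1)  # toy, well-calibrated data for illustration
p = rng.uniform(0.01, 0.4, 500)
y = rng.binomial(1, p)
print(hosmer_lemeshow(p, y))
```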


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
P Apenteng ◽  
D.A Fitzmaurice ◽  
S Virdone ◽  
A.J Camm ◽  
K.A.A Fox ◽  
...  

Abstract Introduction Atrial fibrillation (AF) remains a common cause of stroke, and anticoagulation (AC) treatment reduces the risk of stroke. Reasons for patients with AF not receiving anticoagulation are generally attributed to clinician decisions; in reality, however, a proportion of patients refuse anticoagulation. The aim of our study was to investigate the clinical outcomes of patients with AF who refused anticoagulation. Methods The Global Anticoagulant Registry in the FIELD (GARFIELD-AF) was an international prospective observational study of patients ≥18 years with newly diagnosed AF and ≥1 investigator-determined risk factor for stroke. We analysed two-year outcomes (unadjusted) of non-haemorrhagic stroke/systemic embolism (stroke/SE), major bleeding, and all-cause mortality in patients at high risk of stroke (men with CHA2DS2-VASc ≥2 and women with CHA2DS2-VASc ≥3) who did not receive anticoagulation due to patient refusal, patients at high risk of stroke who received anticoagulation, and patients who were not on anticoagulation for reasons other than patient refusal. Results Of 43,154 patients, 13,283 (30.8%) were at high risk of stroke and did not receive anticoagulation at baseline. The reason for not receiving anticoagulation was unavailable for 38.7% (5146/13,283); of the patients with a known reason for not receiving anticoagulation, 12.5% (1014/8137) refused anticoagulation. Overall, the study participants had a mean (SD) age of 72.2 (9.9) years and 50% were female. The median (Q1; Q3) CHA2DS2-VASc score was 3.0 (3.0; 5.0) in patients who refused anticoagulation and 4.0 (3.0; 4.0) in patients who received anticoagulation. The median (Q1; Q3) HAS-BLED score was 1.0 (1.0; 2.0) in both groups. Of the patients who received anticoagulants, 59.7% received VKA and 40.3% received non-VKA oral anticoagulants. 79.4% of patients who refused anticoagulation were on antiplatelets. At two-year follow-up, the rates of events per 100 person-years (AC refused vs AC received) were: stroke/SE 1.42 vs 0.95 (p=0.04), major bleeding 0.62 vs 1.20 (p=0.02), and all-cause mortality 2.28 vs 3.90 (p=0.0004) (Figure). The event rates in patients who were not on anticoagulation for reasons other than patient refusal were stroke/SE 1.56, major bleeding 0.91, and all-cause mortality 5.49. Conclusion In this global real-world prospective study of patients with newly diagnosed AF, patients who refused anticoagulation had a higher rate of stroke/SE but lower rates of all-cause mortality and major bleeding than patients who received anticoagulation. While patient refusal of anticoagulation is an acceptable outcome of shared decision-making, clinically it is a missed opportunity to prevent AF-related stroke. Patients' beliefs about AF-related stroke and anticoagulation need to be explored. The difference in all-cause mortality warrants further investigation; further analysis will include adjusted results. Figure: Event rates at two years of follow-up. Funding Acknowledgement Type of funding source: Private grant(s) and/or Sponsorship. Main funding source(s): The GARFIELD-AF registry is funded by an unrestricted research grant from Bayer AG.
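The event rates above are incidence rates per 100 person-years, i.e. events divided by total follow-up time and scaled by 100. A trivial sketch with invented numbers:

```python
# Incidence rate per 100 person-years from an event count and the
# cohort's total follow-up. The figures below are invented examples.
def rate_per_100py(n_events: int, person_years: float) -> float:
    return 100.0 * n_events / person_years

# e.g. a hypothetical group with 99 strokes over 7,000 person-years
print(round(rate_per_100py(99, 7000), 2))  # ~1.41 per 100 person-years
```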


Author(s):  
Tze‐Fan Chao ◽  
Chern‐En Chiang ◽  
Tzeng‐Ji Chen ◽  
Jo‐Nan Liao ◽  
Ta‐Chuan Tuan ◽  
...  

Background Although several risk schemes have been proposed to predict new‐onset atrial fibrillation (AF), clinical prediction models specific to Asian patients are limited. In the present study, we aimed to develop a clinical risk score (Taiwan AF score) for AF prediction using the whole Taiwan population database with a long‐term follow‐up. Methods and Results Among 7 220 654 individuals aged ≥40 years without a past history of cardiac arrhythmia identified from the Taiwan Health Insurance Research Database, 438 930 incident AF cases occurred over a 16‐year follow‐up. Clinical risk factors of AF were identified using Cox regression analysis and then combined into a clinical risk score (Taiwan AF score). The Taiwan AF score included age, male sex, and important comorbidities (hypertension, heart failure, coronary artery disease, end‐stage renal disease, and alcoholism) and ranged from −2 to 15. The areas under the receiver operating characteristic curve of the Taiwan AF score for predicting AF were 0.857 at the 1‐year follow‐up, 0.825 at the 5‐year follow‐up, 0.797 at the 10‐year follow‐up, and 0.756 at the 16‐year follow‐up. The annual risks of incident AF were 0.21%/year, 1.31%/year, and 3.37%/year for the low‐risk (score −2 to 3), intermediate‐risk (score 4 to 9), and high‐risk (score ≥10) groups, respectively. Compared with low‐risk patients, the hazard ratios for incident AF were 5.78 (95% CI, 3.76–7.75) for the intermediate‐risk group and 8.94 (95% CI, 6.47–10.80) for the high‐risk group. Conclusions We developed a clinical AF prediction model, the Taiwan AF score, in a large‐scale Asian cohort. The new score could help physicians identify Asian patients at high risk of AF in whom more aggressive and frequent detection and screening may be considered.
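For illustration, an integer comorbidity score of this kind can be assembled as below. The point values and age banding are invented placeholders, since the abstract reports only the score's components and its range of −2 to 15.

```python
# Hedged sketch of assembling a Taiwan-AF-like integer score. All point
# values and the age banding are invented placeholders, not the
# published weights.
POINTS = {
    "male": 1, "hypertension": 2, "heart_failure": 2,
    "coronary_artery_disease": 1, "esrd": 2, "alcoholism": 1,
}

def taiwan_af_like_score(age: int, factors: set) -> int:
    score = max(-2, min(6, (age - 50) // 5))  # invented age banding
    return score + sum(POINTS.get(f, 0) for f in factors)

print(taiwan_af_like_score(72, {"hypertension", "heart_failure"}))
```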


Blood ◽  
2013 ◽  
Vol 122 (21) ◽  
pp. 1323-1323
Author(s):  
Anna Hecht ◽  
Florian Nolte ◽  
Daniel Nowak ◽  
Verena Nowak ◽  
Benjamin Hanfstein ◽  
...  

Abstract Introduction With current therapy regimens, over 75% of patients with de novo acute promyelocytic leukemia (APL) can be cured. Approaches to further improve patient outcome by stratifying patients at the time of initial diagnosis according to their individual risk, and adjusting therapy accordingly, have so far been based on clinical features only. Molecular markers have not yet been established for risk stratification. Recently, we have shown that high expression levels of the genes brain and acute leukemia, cytoplasmic (BAALC) and ETS-related gene (ERG) are associated with inferior outcome in APL patients. In addition, data indicate that aberrant expression of the gene Wilms' tumor 1 (WT1) is a negative prognostic factor with regard to overall survival (OS) after complete remission (CR) and relapse-free survival (RFS) in APL. In this study we evaluated the prognostic relevance of a combined score integrating the expression levels of the above-mentioned genes to further improve risk stratification in APL patients. Methods Expression levels of BAALC, ERG, and WT1 of 62 patients with newly diagnosed APL were retrospectively analyzed in bone marrow mononuclear cells using multiplex reverse transcriptase quantitative real-time PCR (qRT-PCR). Median age of patients was 47 years (range, 19 to 82 years). All patients gave informed consent. Patients were diagnosed and treated in the German AML Cooperative Group (AMLCG) study with simultaneous ATRA and double induction chemotherapy including high-dose ara-C, followed by consolidation and maintenance chemotherapy. The following gene expression levels were identified as negative risk factors in preceding studies: BAALC expression ≥25th percentile (BAALChigh), ERG expression >75th percentile (ERGhigh), and WT1 expression ≤25th percentile or ≥75th percentile (WT1low/high). A risk score was developed as follows: for the presence of each of these risk factors, one scoring point was assigned to the respective patient, i.e. a maximum of 3 points (one point each for BAALChigh, ERGhigh, and WT1low/high) and a minimum of 0 points (i.e. presenting with none of the aforementioned risk factors) could be allocated to one patient. Accordingly, patients were divided into four risk groups: 7 patients scored 0 points (= low risk), 27 patients scored 1 point (= intermediate-1 risk), 19 patients scored 2 points (= intermediate-2 risk), and 9 patients scored 3 points (= high risk). Subsequently, OS, RFS, and relapse-free interval (RFI) were calculated using the Kaplan-Meier method, and a log-rank test was used to compare differences between the four risk groups (p<0.05). Results The integrative risk score divided patients into four groups with significantly different outcome. The low-risk group showed a RFS of 100% at 10 years of follow-up, compared with 81% in the intermediate-1 risk group, 58% in the intermediate-2 risk group, and only 42% in the high-risk group (median survival: 4.6 years) (p=0.02). In accordance, the RFI differed significantly between the four groups: low risk 100%, intermediate-1 risk 100%, intermediate-2 risk 89%, and high risk 71% (p=0.049). There was no statistically significant difference between the four groups with regard to OS in the entire patient cohort. However, there was a clear trend towards a difference in OS in patients who achieved a CR after induction therapy: low risk 100%, intermediate-1 risk 81%, intermediate-2 risk 68%, and high risk 53% survival at 10 years of follow-up (p=0.09).
Conclusion Integration of the expression levels of the genes BAALC, ERG, and WT1 into a scoring system identifies four risk groups with significantly different outcome with regard to RFS and RFI. It might be a promising approach to guide therapeutic decisions in patients with APL. However, multivariate analyses and validation of these data in an independent patient cohort are warranted. Disclosures: No relevant conflicts of interest to declare.
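The percentile-based scoring and log-rank comparison described above can be prototyped as follows. Expression values here are randomly generated stand-ins for the qRT-PCR measurements, and the group comparison uses lifelines' multivariate log-rank test.

```python
# Sketch of the percentile-based gene score: one point each for
# BAALC >= 25th percentile, ERG > 75th percentile, and WT1 outside the
# 25th-75th percentile band, then a log-rank test across score groups.
# Toy random values stand in for the qRT-PCR expression data.
import numpy as np
import pandas as pd
from lifelines.statistics import multivariate_logrank_test

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "baalc": rng.lognormal(size=62), "erg": rng.lognormal(size=62),
    "wt1": rng.lognormal(size=62),
    "rfs_years": rng.uniform(0.5, 10, 62), "relapse": rng.integers(0, 2, 62),
})

score = ((df.baalc >= df.baalc.quantile(0.25)).astype(int)
         + (df.erg > df.erg.quantile(0.75)).astype(int)
         + ((df.wt1 <= df.wt1.quantile(0.25))
            | (df.wt1 >= df.wt1.quantile(0.75))).astype(int))

res = multivariate_logrank_test(df.rfs_years, score, df.relapse)
print("log-rank p =", round(res.p_value, 3))
```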


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
A Von Ende ◽  
B Casadei ◽  
J.C Hopewell

Abstract Background Previous studies have suggested only modest benefits of adding genetic information to conventional risk factors for prediction of atrial fibrillation (AF). However, these studies have been based on limited numbers of AF cases and pre-date recent AF genetic discoveries. Purpose To examine the independent relevance of common genetic risk factors over and above established non-genetic risk factors for predicting AF among 270,000 participants from UK Biobank, and to determine potential clinical utility. Methods UK Biobank (UKB) is a large prospective study of over 500,000 British individuals aged 40 to 69 years at recruitment. Incident AF was ascertained using hospital episode statistics and death registry data. The CHARGE-AF score, which combines the relevance of age, height, weight, blood pressure, use of antihypertensives, diabetes, heart failure, and myocardial infarction (MI), was used to estimate 5-year risk of AF at baseline. A polygenic risk score (PRS) was constructed based on 142 independent variants previously associated with AF in a genome-wide meta-analysis of 60,620 AF cases from the AFGen Consortium, weighted by their published effect sizes. A total of 270,254 individuals were analysed after exclusions for genetic QC, non-White-British ancestry, and prevalent AF. Cox proportional hazards models were used to estimate associations between risk scores (in standard deviation [SD] units) and incident AF. Standard methods were used to assess predictive value. Results During a median follow-up of 8.1 years, 12,407 incident AF cases were identified. The CHARGE-AF risk score strongly predicted incident AF in UK Biobank and was associated with a ∼3-fold higher risk of AF per SD (hazard ratio [HR]=2.88; 95% CI: 2.82–2.94). The PRS was associated with a 54% higher risk of AF per SD (HR=1.54; 95% CI: 1.51–1.57). The independent impact of the PRS, after adjusting for the CHARGE-AF score, was unchanged and remained strongly predictive (HR=1.57, 95% CI: 1.54–1.60), with participants in the upper tertile of the PRS having more than a 2.5-fold higher risk (HR=2.59, 95% CI: 2.47–2.71) compared with those in the lower tertile. The addition of the PRS improved the C-statistic from 0.758 (CHARGE-AF alone) to 0.783 (Δ=0.025) and correctly reclassified 8.7% of cases and 2.6% of controls at 5 years. Both non-genetic and genetic risk scores were well calibrated in the UK Biobank participants, and the sensitivity of the results to alternative PRS selection approaches and age at risk was also examined. Conclusion In a large prospective cohort, genetic determinants of AF were independent of conventional risk factors and significantly improved prediction over a well-validated clinical risk algorithm. This illustrates the potential added benefit of genetic information in identifying higher-risk individuals who may benefit from earlier monitoring and personalised risk management strategies. Funding Acknowledgement Type of funding source: Foundation. Main funding source(s): British Heart Foundation
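A weighted PRS of the kind described is simply the dosage-weighted sum of effect alleles, standardised to SD units. The sketch below uses placeholder variant IDs and weights, not the 142 published AFGen variants.

```python
# Minimal sketch of a weighted polygenic risk score: per-variant effect
# sizes multiply imputed allele dosages (0, 1, 2) and are summed.
# Variant IDs and weights are placeholders, not the published variants.
import numpy as np
import pandas as pd

weights = pd.Series({"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.20})

# rows = individuals, columns = variants, values = allele dosages
dosages = pd.DataFrame(
    np.random.default_rng(3).integers(0, 3, size=(5, 3)).astype(float),
    columns=weights.index)

prs = dosages.mul(weights, axis=1).sum(axis=1)
prs_std = (prs - prs.mean()) / prs.std()  # per-SD units, as in the abstract
print(prs_std)
```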

