Leukocytosis and Risk Stratification Assessment in Essential Thrombocythemia.

Blood ◽  
2007 ◽  
Vol 110 (11) ◽  
pp. 681-681
Author(s):  
Alessandra Carobbio ◽  
Elisabetta Antonioli ◽  
Alessandro M. Vannucchi ◽  
Federica Delaini ◽  
Vittoria Guerini ◽  
...  

Abstract Established risk factors for thrombosis in Essential Thrombocythemia (ET), including age and previous vascular events, have been incorporated into algorithms for risk assessment in clinical trials. Our aim is now to refine this risk stratification by adding leukocytosis, recently found to be a new risk factor for these events, to this established predictive model. For this purpose we used the C statistic, which estimates, from a Cox multivariable model, the probability of concordance between leukocytosis and events among comparable patients over time. C statistic values range from 0.5 (no discriminatory value) to 1 (perfect discrimination) and allow evaluation of the specificity and sensitivity of the test (leukocytosis), analogous to the area under the curve (AUC) used to assess the accuracy of diagnostic tests. This analysis provides an assessment of the incremental role of leukocytosis, in addition to conventional risk factors, in discriminating ET patients with or without thrombosis. Finally, using a receiver operating characteristic (ROC) curve, we looked for the leukocyte cut-off that best stratifies patients into risk categories. During follow-up (median 4.5 years), 657 ET patients (PVSG and WHO diagnostic criteria; 212 males, 445 females; median age 52 years, range 8–93), seen in two academic Italian institutions, had 72 major thrombotic events (28 venous, 44 arterial). A Cox proportional hazards model was fitted to analyse thrombotic risk, adjusted for the following baseline variables: centre, sex, standard risk factors (age ≥60 years and/or prior thrombosis), hemoglobin, hematocrit, platelet count, leukocytes and JAK2V617F allele burden. Results confirmed that age, prior events and leukocytosis are independent risk factors for thrombosis. Interestingly, a gradient between white blood cell (WBC) count and venous and arterial events was documented [reference category: WBC <7.2 x109/L; WBC 7.2−8.7x109/L: RR=2.4; WBC 8.7−10.4x109/L: RR=2.7; WBC >10.4x109/L: RR=3.0; all p-values <0.05]. In contrast, no significant (p>0.05) association was found either for JAK2V617F allele burden [reference category: JAK2V617F 0%; JAK2V617F 1–25%: RR=1.2; JAK2V617F 26–50%: RR=1.5; JAK2V617F >50%: RR=1.8] or for the other laboratory parameters. No centre-confounding effect was found. C statistics were then calculated on two Cox models for the prediction of major thrombosis during follow-up of individual patients. The first model, including age ≥60 years and/or prior thrombosis, showed a C statistic of 0.63. In the second, adding leukocytes at diagnosis significantly improved the C statistic (0.67). The leukocyte cut-off values that best predicted events (ROC curve) ranged from 9.0 to 9.5 (x109/L), corresponding to the highest sensitivity and specificity. In conclusion, we confirmed in this large retrospective cohort of ET patients that leukocytosis is an independent risk factor for thrombosis. Moreover, we demonstrated by C statistic that leukocytosis has an incremental prognostic role in addition to conventional risk factors, and we found the leukocyte cut-off best able to discriminate between the group that will have thrombosis and the group that will not. These findings constitute a solid background for stratifying patients in future clinical trials in ET.
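
The comparison of the two Cox models above can be reproduced in outline with standard survival-analysis tooling. The sketch below is not the authors' code: it assumes a hypothetical cohort file and column names (time_years, thrombosis, high_risk, wbc) and simply fits the conventional-risk-factor model and the model with leukocytes added, reporting Harrell's C statistic for each.

```python
# Minimal sketch (not the authors' code): fit two Cox models and compare
# their concordance (C statistic). File and column names are assumptions.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("et_cohort.csv")  # hypothetical cohort file

# Model 1: conventional risk factors only (age >=60 years and/or prior thrombosis)
m1 = CoxPHFitter()
m1.fit(df[["time_years", "thrombosis", "high_risk"]],
       duration_col="time_years", event_col="thrombosis")

# Model 2: conventional risk factors plus leukocyte count at diagnosis
m2 = CoxPHFitter()
m2.fit(df[["time_years", "thrombosis", "high_risk", "wbc"]],
       duration_col="time_years", event_col="thrombosis")

# Harrell's C statistic (0.5 = chance-level discrimination, 1 = perfect)
print(f"C (conventional factors): {m1.concordance_index_:.2f}")  # ~0.63 in the abstract
print(f"C (+ leukocytes):         {m2.concordance_index_:.2f}")  # ~0.67 in the abstract
```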

2021 ◽  
Vol 42 (Supplement_1) ◽  
Author(s):  
K Kearney ◽  
J Anderson ◽  
R Cordina ◽  
M Lavender ◽  
D Celermajer ◽  
...  

Abstract Background Contemporary registries have documented a change in the epidemiology of PAH, with patients displaying increasing comorbidities associated with left heart disease (LHD). These patients are often excluded from randomized clinical trials. It is unclear whether the presence of LHD comorbidities adversely affects the accuracy of risk stratification and the response to PAH therapy. Method Data were extracted from the Pulmonary Hypertension Society of Australia and New Zealand registry for incident patients with a diagnosis of idiopathic/heritable/toxin-induced (I/H/D)-PAH or connective tissue disease (CTD)-associated PAH from 2011 to 2020. Patients without available medication and follow-up data were excluded. We used the AMBITION trial exclusion criteria to define the subpopulation with LHD risk factors and haemodynamic phenotype (PAH-rLHD). Results 489 patients (I/H/D-PAH=251, CTD-PAH=238) were included in our analysis, with 103 (21.0%) fulfilling the definition of PAH-rLHD (34 had ≥3 risk factors for left heart disease (rLHD: hypertension, diabetes, obesity or ischaemic heart disease) and 76 had borderline haemodynamics (mean pulmonary capillary wedge pressure 13–15 mmHg with pulmonary vascular resistance <500 dyn·s/cm5), including 7 who met both criteria). Compared with classical PAH, patients with PAH-rLHD were older at diagnosis (66±13 vs 58±19 years, p<0.001) and had lower pulmonary vascular resistance (PVR: 393±266 vs 708±391, p=0.031) but worse exercise capacity (6MWD: 286±130 m vs 327±136 m, p=0.005). PAH-rLHD patients were more likely to be started on initial monotherapy than “classical” PAH patients (73% vs 56%, p=0.002). In the monotherapy groups, endothelin receptor antagonists (ERA) were used in 73% of PAH-rLHD patients, compared with 66% in the classical PAH group. Both groups exhibited a similar response to both mono- and combination therapy, with commensurate improvements in WHO functional class (mean change 0.3±0.6 vs 0.3±0.8, p=0.443) and 6-minute walk distance (mean change 44±82 vs 48±101 m, p=0.723). There was no difference in survival between classical PAH and PAH-rLHD (log rank, p=0.29). The REVEAL 2.0 risk score effectively discriminated risk in both populations at baseline and first follow-up (classical PAH: baseline C statistic 0.750, follow-up 0.774; PAH-rLHD: baseline C statistic 0.756, follow-up 0.791). Conclusion Despite lower PVR at diagnosis, PAH-rLHD patients and “classical” PAH patients demonstrate a similar response to first-line therapy and similar long-term survival. The REVEAL 2.0 risk score can be effectively applied to the PAH-rLHD subpopulation in real-life clinical practice. Funding Acknowledgement Type of funding sources: None.


2008 ◽  
Vol 26 (16) ◽  
pp. 2732-2736 ◽  
Author(s):  
Alessandra Carobbio ◽  
Elisabetta Antonioli ◽  
Paola Guglielmelli ◽  
Alessandro M. Vannucchi ◽  
Federica Delaini ◽  
...  

Purpose Established risk factors for thrombosis in essential thrombocythemia (ET) include age and previous vascular events. We aimed to refine this risk stratification by adding baseline leukocytosis. Patients and Methods We enrolled 657 patients with ET followed for a median of 4.5 years, in whom 72 major thromboses occurred. A Cox proportional hazards model was used to analyze thrombotic risk, and the multivariable C statistic was used to discriminate ET patients with or without thrombosis. We searched for the leukocyte cutoff with the best sensitivity and specificity using a receiver operating characteristic curve. Results Results confirmed that age and prior events are independent risk factors for thrombosis and showed a gradient between baseline leukocytosis and thrombosis. In contrast, no significant association was found for either the JAK2V617F allele burden or the other laboratory parameters, including platelet count. In the model with conventional risk factors alone, the C statistic for total thrombosis was 0.63, and when leukocytosis was added, the change was small (C = 0.67). In contrast, in younger and asymptomatic patients (low-risk category), the C statistic indicated a high risk for thrombosis in patients with leukocytosis, similar to that calculated in the conventionally defined high-risk group (C = 0.65). The leukocyte cutoff value that best predicted events was 9.4 (× 109/L). Conclusion We suggest including baseline leukocytosis in the risk stratification of ET patients enrolled in clinical trials.
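
The 9.4 × 109/L cutoff reported above comes from an ROC analysis; a common way to pick such a cutoff is to maximize Youden's J (sensitivity + specificity − 1). The sketch below illustrates that approach only; the file and column names are assumptions, not the published analysis.

```python
# Illustrative only: choose the leukocyte cutoff that maximizes Youden's J.
# 'wbc' and 'thrombosis' are hypothetical column names.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_curve

df = pd.read_csv("et_cohort.csv")                    # hypothetical cohort file
fpr, tpr, thresholds = roc_curve(df["thrombosis"], df["wbc"])

j = tpr - fpr                                        # Youden's J = sensitivity + specificity - 1
best = np.argmax(j)
print(f"Best cutoff: {thresholds[best]:.1f} x10^9/L "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```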


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
J Caro Codon ◽  
T Lopez-Fernandez ◽  
C Alvarez-Ortega ◽  
P Zamora Aunon ◽  
I Rodriguez Rodriguez ◽  
...  

Abstract Background The actual usefulness of CV risk factor assessment in the prognostic evaluation of cancer patients receiving cardiotoxic treatment remains largely unknown. Design Prospective multicenter study in patients scheduled to receive anticancer therapy associated with moderate/high cardiotoxic risk. Methods A total of 1324 patients underwent follow-up in a dedicated cardio-oncology clinic from April 2012 to October 2017. Special care was given to the identification and control of CV risk factors. Clinical data, blood samples and echocardiographic parameters were prospectively collected according to protocol at baseline, before cancer therapy, and then at 3 weeks, 3 months, 6 months, 1 year, 1.5 years and 2 years after initiation of cancer therapy. Results At baseline, 893 patients (67.4%) presented with at least 1 risk factor, and a significant number of patients were newly diagnosed during follow-up. Individual risk factors were not associated with worse prognosis during the 2-year follow-up. However, a higher Systematic Coronary Risk Evaluation (SCORE) was significantly associated with higher rates of severe cardiotoxicity and all-cause mortality [HR 1.79 (95% CI 1.16–2.76) for SCORE 5–9 and HR 4.90 (95% CI 2.44–9.82) for SCORE ≥10, compared with patients with lower SCORE (0–4)]. Conclusions This large cohort of patients treated with potentially cardiotoxic regimens showed a significant prevalence of CV risk factors at baseline and a significant incidence of new risk factors during follow-up. Baseline cardiovascular risk assessment using SCORE predicted severe cardiotoxicity and all-cause mortality; therefore, its use should be recommended in the evaluation of cancer patients. Funding Acknowledgement Type of funding source: Public grant(s) – National budget only. Main funding source(s): This study was partially funded by the Fondo Investigaciones Sanitarias (Spain), Centro de Investigación Biomédica en Red Cardiovascular CIBER-CV (Spain)


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
H Wienbergen ◽  
A Fach ◽  
S Meyer ◽  
J Schmucker ◽  
R Osteresch ◽  
...  

Abstract Background The effects of an intensive prevention program (IPP) for 12 months following 3-week rehabilitation after myocardial infarction (MI) have been demonstrated in the randomized IPP trial. The present study investigates whether the effects of IPP persist one year after termination of the program and whether a reintervention after >24 months (“prevention boost”) is effective. Methods In the IPP trial, patients were recruited during hospitalization for acute MI and randomly assigned to IPP versus usual care (UC) one month after discharge (after 3-week rehabilitation). IPP was coordinated by non-physician prevention assistants and included intensive group education sessions, telephone calls, and telemetric and clinical control of risk factors. The primary study endpoint was the IPP Prevention Score, a sum score evaluating six major risk factors. The score ranges from 0 to 15 points, with 15 points indicating the best risk factor control. In the present study, the effects of IPP were investigated after 24 months – one year after termination of the program. Thereafter, patients in the IPP study arm with at least one insufficiently controlled risk factor were randomly assigned to a 2-month reintervention (“prevention boost”) vs. no reintervention. Results At long-term follow-up after 24 months, 129 patients of the IPP study arm were compared with 136 patients of the UC study arm. IPP was associated with significantly better risk factor control than UC at 24 months (IPP Prevention Score 10.9±2.3 points in the IPP group vs. 9.4±2.3 points in the UC group, p<0.01). However, in the IPP group a decrease in risk factor control was observed at the 24-month visit compared with the 12-month visit at the end of the prevention program (IPP Prevention Score 10.9±2.3 points at 24 months vs. 11.6±2.2 points at 12 months, p<0.05, Figure 1). A 2-month reintervention (“prevention boost”) was effective in improving risk factor control during the long-term course: the IPP Prevention Score increased from 10.5±2.1 points to 10.7±1.9 points in the reintervention group, while it decreased from 10.5±2.1 points to 9.7±2.1 points in the group without reintervention (p<0.05 between groups, Figure 1). Conclusions IPP was associated with better risk factor control than UC over 24 months; however, the deterioration of risk factors after termination of IPP suggests that even a 12-month prevention program is not long enough. The effects of a short reintervention after >24 months (“prevention boost”) indicate the need for prevention concepts based on repeated personal contacts during the long-term course after coronary events. Figure 1 Funding Acknowledgement Type of funding source: Foundation. Main funding source(s): Stiftung Bremer Herzen (Bremen Heart Foundation)
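
The abstract defines the IPP Prevention Score only as a sum score over six major risk factors ranging from 0 to 15 points; the per-factor point allocation is not given. The sketch below is therefore purely illustrative, with hypothetical factors and thresholds chosen only so that the maximum sums to 15.

```python
# Purely hypothetical point allocation; the real IPP Prevention Score is not
# specified in the abstract beyond "six risk factors, 0-15 points, 15 = best".
def ipp_prevention_score(patient: dict) -> int:
    """Return a hypothetical 0-15 sum score; higher = better risk factor control."""
    points = 0
    points += 3 if patient["ldl_mg_dl"] < 70 else (1 if patient["ldl_mg_dl"] < 100 else 0)
    points += 3 if patient["systolic_bp"] < 130 else (1 if patient["systolic_bp"] < 140 else 0)
    points += 3 if not patient["smoker"] else 0
    points += 2 if patient["bmi"] < 25 else (1 if patient["bmi"] < 30 else 0)
    points += 2 if patient["exercise_min_week"] >= 150 else 0
    points += 2 if patient["hba1c_pct"] < 7.0 else 0
    return points  # maximum 15 under these hypothetical thresholds

print(ipp_prevention_score({"ldl_mg_dl": 65, "systolic_bp": 125, "smoker": False,
                            "bmi": 24, "exercise_min_week": 180, "hba1c_pct": 6.2}))  # 15
```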


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
A Malik ◽  
H Chen ◽  
A Cooper ◽  
M Gomes ◽  
V Hejjaji ◽  
...  

Abstract Background In patients with type 2 diabetes (T2D), optimal management of cardiovascular (CV) risk factors is critical for primary prevention of CV disease. Purpose To describe the association of country income and patient socioeconomic factors with risk factor control in patients with T2D. Methods DISCOVER is a 37-country, prospective, observational study of 15,983 patients with T2D enrolled between January 2016 and December 2018 at initiation of second-line glucose-lowering therapy and followed for 3 years. In patients without known CV disease and with sub-optimally controlled risk factors at baseline, we examined achievement of risk factor control (HbA1c <7%, BP <140/90 mmHg, appropriate statin) at the 3-year follow-up. Countries were stratified by gross national income (GNI)/capita, per the World Bank report. We explored variability across countries in achievement of risk factor control using hierarchical logistic regression models and examined the association of country- and patient-level economic factors with risk factor control. Results Among 9,613 patients with T2D but without CV disease (mean age 57.2 years, 47.9% women), 83.1%, 37.5%, and 66.3% did not have optimal control of glucose, BP, and statins, respectively, at baseline. Of these, 40.8%, 55.5%, and 28.6% achieved optimal control at 3 years of follow-up. There was substantial variability in achievement of risk factor control across countries (Figure) but no association of country GNI/capita with achievement of risk factor control (Table). Insurance status, which differed substantially by GNI group, was strongly associated with glycemic control, with no insurance and public insurance associated with lower odds of achieving HbA1c <7%. Conclusions In a global cohort of patients with T2D, a substantial proportion did not achieve risk factor control even after 3 years of follow-up. The variability across countries in risk factor control is not explained by the GNI/capita of the country. Figure: Proportion of patients at goal. Funding Acknowledgement Type of funding source: Private company. Main funding source(s): The DISCOVER study is funded by AstraZeneca
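
The hierarchical logistic regression described above can be sketched with a country-level random intercept and patient-level covariates. The example below is an assumption-laden outline (hypothetical file and column names such as hba1c_at_goal, insurance, gni_group, country), not the DISCOVER analysis code.

```python
# Minimal sketch (assumptions, not the DISCOVER analysis): probability of
# reaching HbA1c < 7% with a random intercept per country plus
# patient- and country-level economic covariates.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("discover_cohort.csv")              # hypothetical analysis file

model = BinomialBayesMixedGLM.from_formula(
    "hba1c_at_goal ~ C(insurance) + C(gni_group)",   # fixed effects
    {"country": "0 + C(country)"},                   # random intercept by country
    data=df,
)
result = model.fit_vb()                              # variational Bayes fit
print(result.summary())
```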


Circulation ◽  
2020 ◽  
Vol 142 (Suppl_3) ◽  
Author(s):  
Lars Grosse-Wortmann ◽  
Laurine van der Wal ◽  
Aswathy Vaikom House ◽  
Lee Benson ◽  
Raymond Chan

Introduction: Cardiovascular magnetic resonance (CMR) with late gadolinium enhancement (LGE) has been shown to be an independent predictor of sudden cardiac death (SCD) in adults with hypertrophic cardiomyopathy (HCM). The clinical significance of LGE in pediatric HCM patients is unknown. Hypothesis: LGE improves SCD risk prediction in children with HCM. Methods: We retrospectively analyzed the CMR images and reviewed the outcomes of pediatric HCM patients. Results: Among the 720 patients from 30 centers, 73% were male, with a mean age of 14.2±4.8 years. During a mean follow-up of 2.6±2.7 years (range 0–14.8 years), 34 patients experienced an episode of SCD or an equivalent event. LGE (Figure 1A) was present in 34%, with a mean burden of 14±21 g, or 2.5±8.2 g/m2 (6.2±7.7% of LV myocardium). The presence of ≥1 traditional adult risk factor (family history of SCD, syncope, LV thickness >30 mm, non-sustained ventricular tachycardia on Holter) was associated with an increased risk of SCD (HR=4.6, p<0.0001). The HCM Risk-Kids score predicted SCD (p=0.002). The presence of LGE was strongly associated with an increased risk (HR=3.8, p=0.0003), even after adjusting for traditional risk factors (HRadj=3.2, p=0.003) or the HCM Risk-Kids score (HRadj=3.5, p=0.003). Furthermore, the burden of LGE was associated with increased risk (HR=2.1 per 10% LGE, p<0.0001). LGE burden remained independently associated with an increased risk of SCD after adjusting for traditional risk factors (HRadj=1.5 per 10% LGE, p=0.04) or HCM Risk-Kids (HRadj=1.9 per 10% LGE, p=0.0018, Figure 1B). The addition of LGE burden improved the predictive model using traditional risk markers (C statistic 0.67 vs 0.77, p=0.003) and HCM Risk-Kids (C statistic 0.68 vs 0.74, p=0.045). Conclusions: Quantitative LGE is an independent risk factor for SCD in pediatric patients with HCM and improves the performance of traditional risk markers and the HCM Risk-Kids score for SCD risk stratification in this population.


Author(s):  
Clara García-Carro ◽  
Mónica Bolufer ◽  
Roxana Bury ◽  
Zaira Catañeda ◽  
Eva Muñoz ◽  
...  

Abstract Background Checkpoint inhibitors (CPI) have drastically improved metastatic cancer outcomes. However, immunotherapy is associated with multiple toxicities, including acute kidney injury (AKI). Data on CPI-related AKI are limited. Our aim was to determine risk factors for CPI-related AKI, as well as its clinical characteristics and its impact on mortality in patients undergoing immunotherapy. Methods All patients receiving CPI at our center between March 2018 and May 2019, with follow-up until April 2020, were included. Demographic and clinical data and laboratory results were collected. AKI was defined according to KDIGO guidelines. We fitted a logistic regression model to identify independent risk factors for AKI and performed actuarial survival analysis to establish risk factors for mortality in this population. Results 759 patients were included, with a median age of 64 years; 59% were men, and baseline median creatinine was 0.80 mg/dL. The most frequent malignancy was lung cancer, and 56% were receiving anti-PD-1 therapy. 15.5% developed AKI during follow-up. Age and baseline kidney function were identified as independent risk factors for CPI-related AKI. At the end of follow-up, 52.3% of patients had died. Type of cancer (other than melanoma, lung or urogenital malignancy), type of CPI (other than CTLA-4, PD-1, PD-L1 or their combination) and the presence of an episode of AKI were identified as risk factors for mortality. Conclusions 15.5% of patients receiving immunotherapy developed AKI. A single AKI episode was identified as an independent risk factor for mortality in these patients, and age and baseline renal function were risk factors for the development of AKI.
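
A minimal sketch of the logistic regression step described above, using age and baseline creatinine as candidate risk factors for AKI. The file and column names are assumptions, not the authors' dataset.

```python
# Minimal sketch (not the authors' code): logistic regression for CPI-related AKI,
# reporting odds ratios with 95% confidence intervals.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cpi_cohort.csv")                       # hypothetical cohort file

model = smf.logit("aki ~ age + baseline_creatinine", data=df).fit()
odds_ratios = np.exp(model.params)                       # ORs for each candidate risk factor
ci = np.exp(model.conf_int())                            # 95% confidence intervals
print(pd.concat([odds_ratios.rename("OR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```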


Stroke ◽  
2012 ◽  
Vol 43 (suppl_1) ◽  
Author(s):  
Tanya N Turan ◽  
Azhar Nizam ◽  
Michael J Lynn ◽  
Colin P Derdeyn ◽  
David Fiorella ◽  
...  

Purpose: SAMMPRIS is the first stroke prevention trial to include protocol-driven aggressive management of multiple vascular risk factors. We sought to determine the impact of this protocol on early risk factor control in the trial. Materials and Methods: SAMMPRIS randomized 451 patients with symptomatic 70%–99% intracranial stenosis to aggressive medical management or stenting plus aggressive medical management at 50 US sites. For the primary risk factor targets (SBP <140 mmHg (<130 if diabetic) and LDL <70 mg/dL), the study neurologists follow medication titration algorithms, and risk factor medications are provided to the patients. Secondary risk factors (diabetes, non-HDL cholesterol, weight, exercise, and smoking cessation) are managed with assistance from the patient’s primary care physician and a lifestyle modification program (provided). Sites receive patient-specific recommendations and feedback to improve performance. Follow-up continues, but the 30-day data are final. We compared baseline and 30-day risk factor measures using paired t-tests for means and McNemar tests for percentages. Results: The differences in risk factor measures between baseline and 30 days are shown in Table 1. Conclusions: The SAMMPRIS protocol resulted in major improvements in control of most risk factors within 30 days of enrollment, which may have contributed to the lower-than-expected 30-day stroke rate in the medical group (5.8%). However, the durability of this approach over time will be determined by additional follow-up.
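
The baseline-versus-30-day comparisons described above (paired t-tests for means, McNemar tests for percentages) can be sketched as follows; the file and column names are assumptions, not SAMMPRIS study code.

```python
# Minimal sketch (assumed per-patient file and column names):
# paired t-test for a continuous measure and McNemar test for an at-target percentage.
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.contingency_tables import mcnemar

df = pd.read_csv("sammpris_rf.csv")              # hypothetical per-patient file

# Paired t-test: mean SBP at baseline vs 30 days
t, p = ttest_rel(df["sbp_baseline"], df["sbp_30d"])
print(f"SBP change: t={t:.2f}, p={p:.4f}")

# McNemar test: proportion at LDL target (<70 mg/dL) at baseline vs 30 days
at_goal_0 = df["ldl_baseline"] < 70
at_goal_30 = df["ldl_30d"] < 70
table = pd.crosstab(at_goal_0, at_goal_30)       # 2x2 paired contingency table
print(mcnemar(table, exact=True))
```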


Stroke ◽  
2012 ◽  
Vol 43 (suppl_1) ◽  
Author(s):  
Audrey L Austin ◽  
Michael G Crowe ◽  
Martha R Crowther ◽  
Virginia J Howard ◽  
Abraham J Letter ◽  
...  

Background and Purpose: Research suggests that depression may contribute to stroke risk independently of other known risk factors. Most studies examining the impact of depression on stroke have been conducted with predominantly white cohorts, though blacks are known to have higher stroke incidence than whites. The purpose of this study was to examine depressive symptoms as a risk factor for incident stroke in blacks and whites, and to determine whether depressive symptomatology was differentially predictive of stroke among blacks and whites. Methods: The REasons for Geographic and Racial Differences in Stroke (REGARDS) study is a national, population-based longitudinal study designed to examine risk factors associated with black-white and regional disparities in stroke incidence. Among 30,239 participants (42% black) accrued from 2003–2007, excluding those lacking follow-up or data on depressive symptoms, 27,557 were stroke-free at baseline. As of the January 2011 data closure, over an average follow-up of 4.6 years, 548 incident stroke cases were verified by study physicians based on medical record review. The association between baseline depressive symptoms (assessed via the 4-item version of the Center for Epidemiological Studies Depression scale) and incident stroke was analyzed with Cox proportional hazards models adjusted for demographic factors (age, race, and sex), stroke risk factors (hypertension, diabetes, smoking, atrial fibrillation, and history of heart disease), and social factors (education, income, and social network). Results: For the total sample, depressive symptoms were predictive of incident stroke. The association between depressive symptoms and stroke did not differ significantly by race (Wald χ2 = 2.38, p = 0.1229). However, race-stratified analyses indicated that the association between depressive symptoms and stroke was stronger among whites and non-significant among blacks. Conclusions: Depressive symptoms were an independent risk factor for incident stroke in a national sample of blacks and whites. These findings suggest that assessment of depressive symptoms may warrant inclusion in stroke risk scales. The potentially stronger association in whites than blacks requires further study.


2020 ◽  
Author(s):  
Guang Fu ◽  
Xi-si He ◽  
Hao-li Li ◽  
Hai-chao Zhan ◽  
Jun-fu Lu ◽  
...  

Abstract Background Disseminated intravascular coagulation (DIC) is a determinant of prognosis in patients with septic shock. Procalcitonin (PCT) has been advocated as a marker of bacterial sepsis. The purpose of this study was to evaluate the relationship between serum PCT levels and DIC in septic shock. Methods A cohort study was designed that included patients admitted to the intensive care unit (ICU) between January 1, 2015 and December 31, 2018, with follow-up until discharge. 164 septic shock patients were divided into DIC and non-DIC groups according to the International Society on Thrombosis and Haemostasis (ISTH) criteria. PCT was measured at ICU admission, and all participants subsequently received routine biochemical and coagulation testing. Results PCT levels were considerably higher in septic shock patients who developed DIC than in those who did not (54.6 [13.6–200] vs 12.6 [2.4–53.3] ng/ml, P < 0.001). A multivariable logistic regression model revealed that the PCT level was significantly associated with the risk of DIC independent of conventional risk factors. In addition, curve fitting showed a linear relationship between PCT and the DIC score. The receiver operating characteristic (ROC) curve suggested that the optimal cut-off point for PCT for predicting DIC induced by septic shock was 42.0 ng/ml, with an area under the curve (AUC) of 0.701 (95% CI [0.619–0.784], P < 0.001). More importantly, incorporating PCT with other risk factors into the prediction model significantly increased the AUC for prediction of DIC induced by septic shock (0.801 vs 0.706; P = 0.012). Conclusions Our study suggests that the PCT level on admission is significantly and independently associated with subsequent DIC development in septic shock, and that combining PCT levels with other risk factors could significantly improve the prediction of DIC induced by septic shock.
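
The final comparison above (AUC with versus without PCT added to other risk factors) can be outlined as below. This is a hedged sketch with assumed column names and assumed conventional risk factors; the abstract's significance test for the AUC difference (e.g. DeLong's method) is not reproduced here.

```python
# Minimal sketch (not the authors' code): in-sample AUC for a DIC prediction model
# with and without PCT. Column names and 'conventional' covariates are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("septic_shock_cohort.csv")          # hypothetical cohort file
y = df["dic"]

base_vars = ["platelets", "lactate", "apache_ii"]    # assumed conventional risk factors
base = LogisticRegression(max_iter=1000).fit(df[base_vars], y)
full = LogisticRegression(max_iter=1000).fit(df[base_vars + ["pct"]], y)

auc_base = roc_auc_score(y, base.predict_proba(df[base_vars])[:, 1])
auc_full = roc_auc_score(y, full.predict_proba(df[base_vars + ["pct"]])[:, 1])
print(f"AUC without PCT: {auc_base:.3f}, with PCT: {auc_full:.3f}")
```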

