Statistical Design of Phase II/III Clinical Trials for Testing Therapeutic Interventions in COVID-19 Patients

2020 ◽  
Author(s):  
Shesh Rai ◽  
Chen Qian ◽  
Jianmin Pan ◽  
Anand Seth ◽  
Deo K Srivast ◽  
...  

Abstract Background Due to unknown features of the COVID-19 disease and the complexity of the patient population, standard clinical trial designs for treatments may not be optimal in such patients. We propose two independent clinical trial designs based on careful grouping of patients and outcome measures. Methods Using the World Health Organization ordinal scale on patient status, we classify treatable patients (Stages 3-7) into two risk groups. Patients in Stages 3, 4 and 5 are categorized as the intermediate-risk group, while patients in Stages 6 and 7 are categorized as the high-risk group. To ensure that an intervention, if deemed efficacious, is promptly made available to vulnerable patients, we propose a group sequential design incorporating four-factor stratification, two interim analyses, and a toxicity monitoring rule for the intermediate-risk group. The primary response variable (binary) is based on the proportion of patients discharged from hospital by the 15th day, and the goal is to detect a meaningful improvement in this response rate. For the high-risk group, we propose a group sequential design incorporating three-factor stratification and two interim analyses, without toxicity monitoring. The primary response variable for this design is 30-day mortality, and the goal is to detect a meaningful reduction in the mortality rate. Results Required sample sizes and toxicity boundaries are calculated for each scenario. Sample size requirements for the designs with interim analyses are only marginally greater than for those without. In addition, for both the intermediate-risk and the high-risk groups, conducting two interim analyses requires an almost identical sample size compared with just one interim analysis.
Conclusions We recommend using a binary outcome with composite endpoints for those in Stages 3, 4 and 5, with 90% power to detect a 20% improvement in response rate, and a 30-day mortality outcome for those in Stages 6 and 7, with 90% power to detect a 15% (effect size) reduction in mortality rate. For the intermediate-risk group, two interim analyses for efficacy evaluation along with toxicity monitoring are encouraged. For the high-risk group, two interim analyses without toxicity monitoring are advised.
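As a rough check on what such a design implies, the fixed-design (no interim look) sample size for comparing two proportions can be sketched with the standard pooled normal approximation. The 50% control response rate below is a hypothetical placeholder; the abstract specifies only the 20% improvement and 90% power:

```python
from statistics import NormalDist

def n_per_arm(p_control, p_treat, alpha=0.05, power=0.90):
    """Fixed-design sample size per arm for comparing two proportions
    (pooled normal approximation, two-sided alpha)."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)
    z_b = nd.inv_cdf(power)
    p_bar = (p_control + p_treat) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p_control * (1 - p_control)
                          + p_treat * (1 - p_treat)) ** 0.5) ** 2
    return numerator / (p_control - p_treat) ** 2

# Hypothetical 50% day-15 discharge rate on control, improved to 70%:
print(round(n_per_arm(0.50, 0.70)))  # ~124 per arm
```

A group sequential version adds only a small premium on top of this fixed-design number, consistent with the Results.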


2020 ◽  
Author(s):  
Shesh Rai ◽  
Chen Qian ◽  
Jianmin Pan ◽  
Anand Seth ◽  
Deo Kumar Srivast ◽  
...  

Abstract Background Researchers around the world are urgently conducting clinical trials to develop new treatments for reducing mortality and morbidity related to COVID-19. However, due to unknown features of the disease and the complexity of the patient population, traditional trial designs may not be optimal in such patients. We propose two independent clinical trial designs based on careful grouping of the expected characteristics of the patient population. This could serve as a useful guide for researchers designing COVID-19 related Phase II/III trials. Methods Using the commonly utilized World Health Organization ordinal scale on patient status, we classify treatable patients into two risk groups. In this approach, patients in Stages 3, 4 and 5 are categorized as the intermediate-risk group, while patients in Stages 6 and 7 are categorized as the high-risk group. To ensure that an intervention, if deemed efficacious, is promptly made available to vulnerable patients, we propose a group sequential design with two interim analyses, a final analysis, and a toxicity monitoring rule for the intermediate-risk group. For the high-risk group, we propose a group sequential design with two interim analyses and without toxicity monitoring. Results Based on different response rates, effect sizes, and power, required sample sizes and toxicity boundaries are calculated for each scenario. Sample size requirements for the designs with interim analyses are only marginally greater than for those without. In addition, for both the intermediate-risk group and the high-risk group, conducting two interim analyses requires an essentially identical sample size compared with just one interim analysis. Additional issues that could potentially impact the trial are discussed.
Conclusions We recommend using composite endpoints, with a binary outcome for those in Stages 3, 4 and 5, with 90% power to detect a 20% improvement in response rate, and a 30-day mortality outcome for those in Stages 6 and 7, with 90% power to detect a 15% (effect size) reduction in mortality rate, in the COVID-19 trial design. For the intermediate-risk group, two interim analyses for efficacy evaluation along with toxicity monitoring are encouraged. For the high-risk group, two interim analyses without toxicity monitoring are advised.
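The error-spending behind such two-interim-look designs can be illustrated with the Lan-DeMets O'Brien-Fleming-type spending function; this boundary family is an illustrative assumption, as the abstract does not name one:

```python
from statistics import NormalDist

def obf_spent_alpha(t, alpha=0.05):
    """Cumulative type I error spent by information fraction t (0 < t <= 1)
    under the Lan-DeMets O'Brien-Fleming-type spending function."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)
    return 2 * (1 - nd.cdf(z / t ** 0.5))

# Two interim analyses at 1/3 and 2/3 of the information, final look at 1:
for t in (1 / 3, 2 / 3, 1.0):
    print(f"t={t:.2f}: alpha spent = {obf_spent_alpha(t):.5f}")
```

Very little alpha is spent at the early looks, which is why the sample size penalty relative to a fixed design is only marginal, as the Results report.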


Author(s):  
Johannes Korth ◽  
Benjamin Wilde ◽  
Sebastian Dolff ◽  
Jasmin Frisch ◽  
Michael Jahn ◽  
...  

SARS-CoV-2 is a worldwide challenge for the medical sector. Healthcare workers (HCW) are a cohort vulnerable to SARS-CoV-2 infection due to frequent and close contact with COVID-19 patients; however, they are also well trained and equipped with protective gear. The SARS-CoV-2 IgG antibody status was assessed at three different time points in 450 HCW of the University Hospital Essen in Germany. HCW were stratified according to contact frequency with COVID-19 patients into (I) a high-risk group with daily contact with known COVID-19 patients (n = 338), (II) an intermediate-risk group with daily contact with non-COVID-19 patients (n = 78), and (III) a low-risk group without patient contact (n = 34). The overall seroprevalence increased from 2.2% in March–May to 4.0% in June–July to 5.1% in October–December. The SARS-CoV-2 IgG detection rate was not significantly different between the high-risk group (1.8%; 3.8%; 5.5%), the intermediate-risk group (5.1%; 6.3%; 6.1%), and the low-risk group (0%; 0%; 0%). The overall SARS-CoV-2 seroprevalence remained low in HCW in western Germany one year after the outbreak of COVID-19 in Germany, and hygiene standards appeared effective in preventing patient-to-staff virus transmission.
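The reported seroprevalences carry sampling uncertainty that the abstract does not quantify; a Wilson score interval gives a sense of it. The count of 23 positives below is an approximate back-calculation from the reported 5.1% of 450 HCW:

```python
from math import sqrt
from statistics import NormalDist

def wilson_ci(k, n, conf=0.95):
    """Wilson score confidence interval for a binomial proportion k/n."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    p = k / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# ~23/450 seropositive HCW corresponds to the reported 5.1% overall:
lo, hi = wilson_ci(23, 450)
print(f"5.1% (95% CI {lo:.1%}-{hi:.1%})")
```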


2021 ◽  
Vol 39 (15_suppl) ◽  
pp. 6006-6006
Author(s):  
Trisha Michel Wise-Draper ◽  
Vinita Takiar ◽  
Michelle Lynn Mierzwa ◽  
Keith Casper ◽  
Sarah Palackdharry ◽  
...  

6006 Background: Patients with resected HNSCC with high-risk (positive margins, extracapsular spread [ECE]) or intermediate-risk pathological features have an estimated 1-year DFS of 65% and 69%, respectively. Immune checkpoint blockade improved survival of patients with recurrent/metastatic HNSCC, and preclinical models indicate radiotherapy (RT) synergizes with anti-PD-1. Therefore, we administered the PD-1 inhibitor pembrolizumab (pembro) pre- and post-surgery with adjuvant RT +/- cisplatin in patients with resectable, locoregionally advanced (clinical T3/4 and/or ≥2 nodal metastases) HNSCC (NCT02641093). Methods: Eligible patients received pembro (200 mg I.V. x 1) 1-3 weeks before resection. Adjuvant pembro (q3 wks x 6 doses) was administered with RT (60-66 Gy) with or without weekly cisplatin (40 mg/m2 x 6) for patients with high-risk and intermediate-risk features, respectively. The primary endpoint was 1-year DFS estimated by Kaplan-Meier curves. Safety was evaluated by CTCAE v5.0. Pathological response (PR) to neoadjuvant pembro was evaluated by comparing pre- and post-surgical tumor specimens for treatment effect (TE), defined as tumor necrosis and/or histiocytic inflammation and giant cell reaction to keratinaceous debris. PR was classified as no (NPR, < 20%), partial (PPR, ≥20% and < 90%) and major (MPR, ≥90%). Tumor PD-L1 immunohistochemistry was performed with 22c3 antibody and reported as combined positive score (CPS). Results: Ninety-two patients were enrolled. Seventy-six patients received adjuvant pembro and were evaluable for DFS. Patient characteristics included: median age 58 (range 27 – 80) years; 32% female; 88% oral cavity, 8% larynx, and 3% human papillomavirus negative oropharynx; 86% clinical T3/4 and 65% ≥2N; 49 (53%) high-risk (positive margins, 45%; ECE, 78%); 64% (44/69 available) had PD-L1 CPS ≥1. 
At a median follow-up of 20 months, 1-year DFS was 67% (95% CI 0.52-0.85) in the high-risk group and 93% (95% CI 0.84-1) in the intermediate-risk group. Among 80 patients evaluable for PR, TE scoring yielded 48 NPR, 26 PPR and 6 MPR. Patients with PPR/MPR had significantly improved 1-year DFS compared with those with NPR (100% versus 68%, p = 0.01; HR = 0.23). PD-L1 CPS ≥1 was not independently associated with 1-year DFS but was highly associated with MPR/PPR (p = 0.0007). PPR/MPR rates for PD-L1 CPS <1, ≥1 and ≥20 were estimated at 20%, 55% and 90%, respectively. Grade ≥3 adverse events occurred in 62% of patients, the most common being dysphagia (15%), neutropenia (15%), skin/wound infections (10%), and mucositis (9%). Conclusions: PR to neoadjuvant pembro is associated with PD-L1 CPS ≥1 and high DFS in patients with resectable, locoregionally advanced HNSCC. Clinical trial information: NCT02641093.
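The pathological response categories defined in the Methods can be written directly as a small classifier, a sketch of the cutoffs as stated (NPR < 20%, PPR ≥20% and < 90%, MPR ≥90% treatment effect):

```python
def classify_pr(treatment_effect_pct):
    """Pathological response class from percent treatment effect (TE),
    per the trial's stated cutoffs."""
    if treatment_effect_pct >= 90:
        return "MPR"  # major pathological response
    if treatment_effect_pct >= 20:
        return "PPR"  # partial pathological response
    return "NPR"      # no pathological response

print(classify_pr(95), classify_pr(40), classify_pr(5))  # MPR PPR NPR
```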


2019 ◽  
Vol 40 (Supplement_1) ◽  
Author(s):  
T Grinberg ◽  
T Bental ◽  
Y Hammer ◽  
A R Assali ◽  
H Vaknin-Assa ◽  
...  

Abstract Background Following myocardial infarction (MI), patients are at increased risk for recurrent cardiovascular events, particularly during the immediate period. Yet some patients are at higher risk than others owing to their clinical characteristics and comorbidities, and these high-risk patients are less often treated with guideline-recommended therapies. Aim To examine temporal trends in treatment and outcomes of patients with MI according to the TIMI risk score for secondary prevention (TRS2°P), a recently validated risk stratification tool. Methods A retrospective cohort study of patients with an acute MI who underwent percutaneous coronary intervention and were discharged alive between 2004–2016. Temporal trends were examined in the early (2004–2010) and late (2011–2016) periods. Patients were stratified by the TRS2°P into a low- (≤1), intermediate- (2) or high-risk group (≥3). Clinical outcomes included 30-day MACE (death, MI, target vessel revascularization, coronary artery bypass grafting, unstable angina or stroke) and 1-year mortality. Results Among 4921 patients, 31% were low-risk, 27% intermediate-risk and 42% high-risk. Compared to low- and intermediate-risk patients, high-risk patients were older, more commonly female, and had more comorbidities such as hypertension, diabetes, peripheral vascular disease, and chronic kidney disease. They presented more often with non-ST-elevation MI and 3-vessel disease. High-risk patients were less likely to receive drug-eluting stents and potent antiplatelet drugs, among other guideline-recommended therapies. Evidently, they experienced higher 30-day MACE (8.1% vs. 3.9% and 2.1% in intermediate- and low-risk, respectively, P<0.001) and 1-year mortality (10.4% vs. 3.9% and 1.1% in intermediate- and low-risk, respectively, P<0.001). Over time, comparing the early to the late period, the use of potent antiplatelets and statins increased across the entire cohort (P<0.001). 
However, only the high-risk group demonstrated a significantly lower 30-day MACE (P=0.001). Over time, there were no differences in 1-year mortality rate among the risk categories (Figure: temporal trends in 30-day MACE by TRS2°P). Conclusion Despite better application of guideline-recommended therapies, high-risk patients after MI are still relatively undertreated. Nevertheless, they demonstrated the most notable improvement in outcomes over time.


2020 ◽  
Vol 9 (7) ◽  
pp. 2057
Author(s):  
Vanja Ristovic ◽  
Sophie de Roock ◽  
Thierry G. Mesana ◽  
Sean van Diepen ◽  
Louise Y. Sun

Background: Despite steady improvements in cardiac surgery-related outcomes, our understanding of the physiologic mechanisms leading to perioperative mortality remains incomplete. Intraoperative hypotension is an important risk factor for mortality after noncardiac surgery but remains relatively unexplored in the context of cardiac surgery. We examined whether the association between intraoperative hypotension and in-hospital mortality varied by patient and procedure characteristics, as defined by the validated Cardiac Anesthesia Risk Evaluation (CARE) mortality risk score. Methods: We conducted a retrospective cohort study of consecutive adult patients who underwent cardiac surgery requiring cardiopulmonary bypass (CPB) from November 2009 to March 2015. Those who underwent off-pump, thoracic aorta, transplant and ventricular assist device procedures were excluded. The primary outcome was in-hospital mortality. Hypotension was categorized by mean arterial pressure (MAP) <55 mmHg and 55–64 mmHg before, during and after CPB. The relationship between hypotension and death was modeled using multivariable logistic regression in the intermediate- and high-risk groups. Results: Among 6627 included patients, 131 (2%) died in hospital. In-hospital mortality in patients with CARE scores of 1, 2, 3, 4 and 5 was 0 (0%), 7 (0.3%), 35 (1.3%), 41 (4.6%) and 48 (13.6%), respectively. In the intermediate-risk group (CARE = 3–4), MAP < 65 mmHg post-CPB was associated with increased odds of death in a dose-dependent fashion (adjusted OR 1.30, 95% CI 1.13–1.49, per 10 min of exposure to MAP < 55 mmHg, p = 0.002; adjusted OR 1.18 [1.07–1.30] per 10 min of exposure to MAP 55–64 mmHg, p = 0.001). We did not observe an association between hypotension and mortality in the high-risk group (CARE = 5). Conclusions: Post-CPB hypotension is a potentially modifiable risk factor for mortality in intermediate-risk patients. 
Our findings provide impetus for clinical trials to determine if hemodynamic goal-directed therapies could improve survival in these patients.
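Because the reported odds ratios are per 10 minutes of exposure, the dose-dependence is multiplicative on the odds scale: 30 minutes below 55 mmHg corresponds to roughly 1.30³ ≈ 2.2-fold odds, assuming the log-linear model extrapolates over the longer exposure. A minimal sketch of that arithmetic:

```python
def cumulative_or(or_per_10min, minutes):
    """Odds ratio implied over a longer exposure when the reported OR is
    per 10 minutes and log-odds accumulate linearly (an assumption)."""
    return or_per_10min ** (minutes / 10)

# 30 min of MAP < 55 mmHg post-CPB at the adjusted OR of 1.30 per 10 min:
print(round(cumulative_or(1.30, 30), 2))  # 1.30**3, ~2.2
```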


Blood ◽  
2016 ◽  
Vol 128 (22) ◽  
pp. 534-534
Author(s):  
Natasha Catherine Edwin ◽  
Jesse Keller ◽  
Suhong Luo ◽  
Kenneth R Carson ◽  
Brian F. Gage ◽  
...  

Abstract Background Patients with multiple myeloma (MM) have a 9-fold increased risk of developing venous thromboembolism (VTE). Current guidelines recommend pharmacologic thromboprophylaxis in patients with MM receiving an immunomodulatory agent in the presence of additional VTE risk factors (NCCN 2015, ASCO 2014, ACCP 2012). However, putative risk factors vary across guidelines and no validated VTE risk tool exists for MM. Khorana et al. developed a VTE risk score in patients with solid organ malignancies and lymphoma (Blood, 2008). We sought to apply the Khorana et al. score in a population with MM. Methods We identified patients diagnosed with MM within the Veterans Health Administration (VHA) between September 1, 1999 and December 31, 2009 using International Classification of Diseases for Oncology, third edition (ICD-O-3) code 9732/3. We followed the cohort through October 2014. To eliminate patients with monoclonal gammopathy of undetermined significance and smoldering myeloma, we excluded patients who did not receive MM-directed therapy within 6 months of diagnosis. We also excluded patients who did not have data for hemoglobin (HGB), platelet (PLT) count, white blood count (WBC), height and weight, as these are all variables included in the Khorana et al. risk model. Height and weight were assessed within one month of diagnosis and used to calculate body mass index (BMI). We measured HGB, PLT count, and WBC count prior to treatment initiation, within two months of MM diagnosis. A previously validated algorithm, using a combination of ICD-9 code for VTE plus pharmacologic treatment for VTE or IVC filter placement, identified patients with incident VTE after MM diagnosis (Thromb Res, 2015). The study was approved by the Saint Louis VHA Medical Center and Washington University School of Medicine institutional review boards. We calculated VTE risk using the Khorana et al. score: we assigned 1 point each for PLT ≥ 350,000/μl, HGB < 10 g/dl, WBC > 11,000/μl, and BMI ≥ 35 kg/m2. 
Patients with 0 points were low-risk, those with 1-2 points intermediate-risk, and those with ≥3 points high-risk for VTE. We assessed the relationship between risk group and development of VTE using logistic regression at 3 and 6 months. We tested model discrimination using the area under the receiver operating characteristic curve (concordance statistic, c), which ranges from 0.5 (no discriminative ability) to 1.0 (perfect discriminative ability). Results We identified 1,520 patients with MM: 16 were high-risk, 802 intermediate-risk, and 702 low-risk for VTE using the Khorana et al. scoring system. At 3 months of follow-up, a total of 76 patients had developed VTE: 27 in the low-risk group, 48 in the intermediate-risk group, and 1 in the high-risk group. At 6 months of follow-up there were 103 incident VTEs: 41 in the low-risk group, 61 in the intermediate-risk group, and 1 in the high-risk group. There was no significant difference in VTE risk between the high- or intermediate-risk groups and the low-risk group (Table 1). The c-statistic was 0.56 at 3 months and 0.53 at 6 months (Figure 1). Conclusion The Khorana score was previously developed and validated to predict VTE in patients with solid tumors. It was not a strong predictor of VTE risk in MM. There is a need for a risk prediction model developed in patients with MM. Figure 1. Disclosures Carson: American Cancer Society: Research Funding. Gage: National Heart, Lung and Blood Institute: Research Funding. Kuderer: Janssen Scientific Affairs, LLC: Consultancy, Honoraria. Sanfilippo: National Heart, Lung and Blood Institute: Research Funding.
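The scoring rule as applied in this study (the four laboratory/BMI components described in the Methods) can be sketched directly:

```python
def khorana_points(plt_per_ul, hgb_g_dl, wbc_per_ul, bmi):
    """Points as assigned in this study: 1 each for PLT >= 350,000/ul,
    HGB < 10 g/dl, WBC > 11,000/ul, BMI >= 35 kg/m2."""
    return (int(plt_per_ul >= 350_000) + int(hgb_g_dl < 10)
            + int(wbc_per_ul > 11_000) + int(bmi >= 35))

def vte_risk_group(points):
    """0 points low, 1-2 intermediate, >=3 high, per the study's grouping."""
    if points == 0:
        return "low"
    return "intermediate" if points <= 2 else "high"

print(vte_risk_group(khorana_points(400_000, 9.2, 12_500, 36)))  # high
```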


2007 ◽  
Vol 25 (18_suppl) ◽  
pp. 11067-11067 ◽  
Author(s):  
H. Patel ◽  
K. Hook ◽  
C. Kaplan ◽  
R. Davidson ◽  
A. DeMichele ◽  
...  

11067 Background: The 21-gene RT-PCR assay Oncotype DX (Genomic Health, CA) stratifies patients into low, intermediate and high risk for systemic recurrence. The objective of this study was to examine the patterns of use of Oncotype DX in a single institution. Methods: All patients who had ODX testing requested by the University of Pennsylvania were identified and recurrence scores (RS) obtained. Patient and tumor characteristics, as well as treatment administered, were obtained by chart review for analysis. Results: 100 ODX tests were ordered between 1/1/05 and 11/30/06. RS results classified 51% of breast cancers as low risk, 38% intermediate risk, and 11% high risk. Characteristics of the tumors of the overall population and by RS group are shown in the Table. 99% of patients received hormonal therapy. Of the low-risk patients, only one was treated with chemotherapy (2%), while 34% of the intermediate-risk group and 80% of the high-risk group received chemotherapy. Notably, only 4/100 patients with ODX were under age 35 and 17/100 had tumors over 2 cm. Conclusions: In this series, ODX use is accelerating. The results of the ODX tests appear to be used clinically, as demonstrated by the very low use of chemotherapy in the low-risk group. Comparison to the overall population of ER-positive, node-negative patients seen at this institution is underway. [Table: see text] No significant financial relationships to disclose.
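The RS risk groups referenced here can be sketched as a classifier; the abstract does not state the thresholds, so the cutoffs below (<18 low, 18-30 intermediate, ≥31 high) are an assumption based on the traditional Oncotype DX groupings in use during the study period:

```python
def rs_risk_group(recurrence_score):
    """Risk group from the 21-gene recurrence score, using the
    traditional cutoffs assumed here: <18 low, 18-30 intermediate,
    >=31 high (not stated in the abstract)."""
    if recurrence_score < 18:
        return "low"
    return "intermediate" if recurrence_score <= 30 else "high"

print(rs_risk_group(12), rs_risk_group(24), rs_risk_group(35))
```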


Stroke ◽  
2021 ◽  
Vol 52 (Suppl_1) ◽  
Author(s):  
Michael Omoniyi Ayanbadejo ◽  
Nancy M Stoll ◽  
Syntyia Taylor ◽  
Lee R Guterman

Background: Minorities in the United States have a disproportionately higher risk of stroke, with earlier-onset and more severe strokes than non-Hispanic Whites. Hypertension (HTN) is an independent and modifiable risk factor for stroke. Recent prevalence estimates of HTN among minorities in the Niagara Region are not available. Barbershop-based health campaigns are well established as effective for HTN management in Black men. This pilot study screened for HTN in barbershops to determine the prevalence of HTN in Black men in the Niagara Region. Methods: Barbershops were selected by convenience sampling, and patrons' participation (age ≥18 years) was voluntary. Blood pressure (BP) screening and a stroke education campaign were conducted concurrently in partnership with 7 barbershop owners in the Niagara Region from September 13, 2019 to February 10, 2020. Participants' age, race, gender, and BP (measured with an automated BP machine) were recorded. BP readings were stratified into 3 groups by severity: high risk (BP ≥ 140/90 mmHg), intermediate risk/caution (BP 120-139/80-89 mmHg) and low risk (BP ≤ 120/< 80 mmHg). Hypertension was defined as BP ≥ 140/90 mmHg. Data were stored in Excel and analyzed with SPSS (Statistical Package for the Social Sciences). Results: Of the 57 participants in this study, approximately 75.4% (n=43) were male; 89.4% (n=51) were Black, 5.3% (n=3) were Hispanic and 5.3% (n=3) were of other race/ethnicity. Participants' ages ranged from 18 to 71 years with a mean age of 36.4 years (95% CI [32.9, 39.8]). Mean systolic BP was 132.14 mmHg (95% CI [128.00, 136.28]) and mean diastolic BP was 86.35 mmHg (95% CI [81.21, 91.50]). Approximately 70.0% of participants fell in the high- or intermediate-risk categories, with younger participants (age ≤ 40 years) accounting for 73.0% of the high-risk group. 
Conclusion: The prevalence of high blood pressure among minorities in the Niagara Region is high and above previous estimates reported in the 2017 ACC/AHA guideline (41% to 55%). Barbershops may provide future opportunities for screening and for recruiting subjects into interventions that reduce BP and its risk factors. Further studies should be conducted in larger populations to reduce the uncertainty around the prevalence estimate of HTN.
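The BP stratification used in the Methods maps onto a small classifier. Note the stated low-risk cutoff (≤120 systolic) overlaps the intermediate band at exactly 120 mmHg; the sketch below treats a reading of exactly 120 as intermediate:

```python
def bp_risk_group(systolic, diastolic):
    """Stratify one BP reading by the study's cutoffs: high >= 140/90,
    intermediate 120-139/80-89, low below that. A reading of exactly
    120 mmHg systolic is treated as intermediate here."""
    if systolic >= 140 or diastolic >= 90:
        return "high"
    if systolic >= 120 or diastolic >= 80:
        return "intermediate"
    return "low"

print(bp_risk_group(132, 86))  # intermediate (the cohort's mean BP)
```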


Blood ◽  
2009 ◽  
Vol 114 (22) ◽  
pp. 2776-2776
Author(s):  
Andrea Kuendgen ◽  
Corinna Strupp ◽  
Kathrin Nachtkamp ◽  
Barbara Hildebrandt ◽  
Rainer Haas ◽  
...  

Abstract 2776 Poster Board II-752 Introduction: We wondered whether prognostic factors have similar relevance in different subpopulations of MDS patients. Methods: Our analysis was based on patients with primary, untreated MDS, including 181 RA, 169 RARS, 649 RCMD, 322 RSCMD, 79 5q-syndromes, 290 RAEB I, 324 RAEB II, 266 CMML I, 64 CMML II, and 209 RAEB-T. The impact of prognostic variables in univariate analysis was compared in subpopulations of patients defined by medullary blast count, namely <5%, ≥5% (table), ≥10%, and ≥20% (not shown), as well as 3 subpopulations defined by the cytogenetic risk groups according to IPSS (table). Multivariate analysis of prognostic factors was performed for cytogenetically defined subgroups and WHO subtypes. Results: Strong prognostic factors in all blast-defined subgroups were hemoglobin, transfusion dependency, increased WBC, age, and LDH. However, all variables became less important in patients with ≥20% blasts (RAEB-T), and increased WBC was rare. Platelet count and cytogenetic risk groups were relevant in patients with <5%, ≥5%, and ≥10% marrow blasts, but not in RAEB-T. Marrow fibrosis was important in patients with <5% or ≥5% blasts, but not ≥10%. Gender and ANC <1000/μl were significant only in patients with a normal blast count. Furthermore, we examined the effect of the karyotypes relevant for IPSS scoring (-Y, del5q, del20q, others, del7q/-7, complex) and found a comparable influence on survival, irrespective of whether patients had <5% or ≥5% marrow blasts. In subpopulations defined by cytogenetic risk groups, several prognostic factors were highly significant in univariate analysis if patients had a good-risk karyotype. These included hemoglobin, sex, age, LDH, increased WBC, transfusion need, and blast count (cut-offs 5%, 10%, and 20%). 
In the intermediate-risk group, only LDH, platelets, WBC, and blasts were significant prognostic factors, while in the high-risk group only platelets and blast count remained significant. Multivariate analysis was performed for the cytogenetic risk groups and for subgroups defined by WHO subtypes. The analysis considered blast count (</≥5%), hemoglobin, platelets, ANC, cytogenetic risk group, transfusion need, sex, and age. In the subgroup comprising RA, RARS, and 5q-syndrome, LDH, transfusion need, and age (in descending order) were independent prognostic parameters. In the RCMD+RSCMD group, karyotype, age, transfusion need, and platelets were relevant factors. In the RAEB I+II subgroup, the order was hemoglobin, karyotype, age, and platelets, while in CMML I+II only hemoglobin had independent influence. In RAEB-T, none of the factors examined was of independent significance. Looking at cytogenetic risk groups, in the favorable group several variables independently influenced survival, namely transfusion need, blasts, age, sex, and LDH (in this order). Interestingly, in the intermediate- and high-risk groups, only blast count and platelets retained a significant impact. Conclusion: Univariate analysis showed that the prognostic factors (except ANC) included in the IPSS and WPSS are relevant in most subgroups defined by marrow blast percentage. However, they all lose their impact if the blast count exceeds 20%. Regarding cytogenetic risk groups, several prognostic factors already lose their influence in the intermediate-risk group. This underscores the prognostic importance of MDS cytogenetics. Multivariate analysis showed that MDS subpopulations defined by WHO types also differ with regard to prognostic factors. In particular, CMML and RAEB-T stand out against the other MDS types. Disclosures: Kuendgen: Celgene: Honoraria. Hildebrandt: Celgene: Research Funding. Gattermann: Novartis: Honoraria, Participation in Advisory Boards on deferasirox clinical trials. Germing: Novartis, Celgene: Honoraria, Research Funding.

