Developing a composite outcome measure for frailty prevention trials – rationale, derivation and sample size comparison with other candidate measures

2020 ◽  
Author(s):  
Miles D. Witham ◽  
James Wason ◽  
Richard M Dodds ◽  
Avan A Sayer

Abstract
Introduction: Frailty is the loss of ability to withstand a physiological stressor and is associated with multiple adverse outcomes in older people. Trials to prevent or ameliorate frailty are in their infancy. A range of outcome measures has been proposed, but current measures require large sample sizes or long follow-up, or do not directly measure the construct of frailty.
Methods: We propose a composite outcome for frailty prevention trials, comprising progression to the frail state, death, or being too unwell to continue in a trial. To determine likely event rates, we used data from the English Longitudinal Study of Ageing, collected 4 years apart. We calculated transition rates between the non-frail, prefrail and frail states and loss to follow-up due to death or illness. We used Markov state transition models to interpolate one- and two-year transition rates, and performed sample size calculations for a range of differences in transition rates using simple and composite outcomes.
Results: The frailty category was calculable for 4650 individuals at baseline (2226 non-frail, 1907 prefrail, 517 frail); at follow-up, 1282 were non-frail, 1108 were prefrail, 318 were frail and 1936 had dropped out or were unable to complete all tests for frailty. Transition probabilities for those prefrail at baseline, measured at wave 4, were 0.176, 0.286, 0.096 and 0.442 to non-frail, prefrail, frail and dead/dropped out, respectively. Interpolated transition probabilities were 0.159, 0.494, 0.113 and 0.234 at two years, and 0.108, 0.688, 0.087 and 0.117 at one year. Required sample sizes for a two-year outcome were between 1000 and 7200 for transition from prefrailty to frailty alone, 250 to 1600 for the composite measure, and 75 to 350 using the composite measure with an ordinal logistic regression approach.
Conclusion: Use of a composite outcome for frailty trials offers reduced sample sizes and could ameliorate the effect of the high loss to follow-up, due to death and illness, inherent in such trials.
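The sample-size advantage reported above can be illustrated with a standard two-proportion calculation. The sketch below is our own minimal illustration, not the authors' method: the control-arm event rates (0.113 for progression from prefrailty to frailty alone, 0.113 + 0.234 = 0.347 for the composite of frailty or death/dropout) are the two-year interpolated probabilities from the abstract, while the 30% relative risk reduction is an assumed, purely illustrative treatment effect.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p_control: float, p_treat: float,
              alpha: float = 0.05, power: float = 0.80) -> int:
    """Normal-approximation sample size per arm for comparing two
    proportions (pooled-variance form, two-sided test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p_control + p_treat) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p_control * (1 - p_control)
                    + p_treat * (1 - p_treat)) ** 0.5) ** 2
    return ceil(num / (p_control - p_treat) ** 2)

# Two-year control-arm rates from the abstract; a 30% relative risk
# reduction is an illustrative assumption, not a published effect size.
rrr = 0.30
n_frailty_only = n_per_arm(0.113, 0.113 * (1 - rrr))
n_composite = n_per_arm(0.347, 0.347 * (1 - rrr))
```

Because the composite accrues roughly three times as many events, it needs several-fold fewer participants per arm than the frailty-only outcome, consistent with the abstract's 1000–7200 versus 250–1600 ranges.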



2020 ◽  
Author(s):  
Miles D. Witham ◽  
James Wason ◽  
Richard M Dodds ◽  
Avan A Sayer

Abstract
Background: Frailty is the loss of ability to withstand a physiological stressor and is associated with multiple adverse outcomes in older people. Trials to prevent or ameliorate frailty are in their infancy. A range of outcome measures has been proposed, but current measures require large sample sizes or long follow-up, or do not directly measure the construct of frailty.
Methods: We propose a composite outcome for frailty prevention trials, comprising progression to the frail state, death, or being too unwell to continue in a trial. To determine likely event rates, we used data from the English Longitudinal Study of Ageing, collected 4 years apart. We calculated transition rates between the non-frail, prefrail and frail states and loss to follow-up due to death or illness. We used Markov state transition models to interpolate one- and two-year transition rates and performed sample size calculations for a range of differences in transition rates using simple and composite outcomes.
Results: The frailty category was calculable for 4650 individuals at baseline (2226 non-frail, 1907 prefrail, 517 frail); at follow-up, 1282 were non-frail, 1108 were prefrail, 318 were frail and 1936 had dropped out or were unable to complete all tests for frailty. Transition probabilities for those prefrail at baseline, measured at wave 4, were 0.176, 0.286, 0.096 and 0.442 to non-frail, prefrail, frail and dead/dropped out, respectively. Interpolated transition probabilities were 0.159, 0.494, 0.113 and 0.234 at two years, and 0.108, 0.688, 0.087 and 0.117 at one year. Required sample sizes for a two-year outcome in a two-arm trial were between 1040 and 7242 for transition from prefrailty to frailty alone, 246 to 1630 for the composite measure, and 76 to 354 using the composite measure with an ordinal logistic regression approach.
Conclusion: Use of a composite outcome for frailty trials offers reduced sample sizes and could ameliorate the effect of the high loss to follow-up, due to death and illness, inherent in such trials.
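The one- and two-year interpolation step can be sketched as taking a root of the four-year transition matrix. The snippet below is a minimal illustration, not the paper's fitted model: only the prefrail row of the matrix comes from the abstract, the other rows are invented for the example, and the eigendecomposition shortcut assumes the matrix is diagonalisable with eigenvalues whose principal roots yield a real result.

```python
import numpy as np

# States: non-frail, prefrail, frail, dead/dropped out (absorbing).
# Only the prefrail row is from the abstract; the others are hypothetical.
P4 = np.array([
    [0.60,  0.25,  0.05,  0.10],
    [0.176, 0.286, 0.096, 0.442],
    [0.05,  0.15,  0.45,  0.35],
    [0.0,   0.0,   0.0,   1.0],
])

def matrix_root(P: np.ndarray, k: int) -> np.ndarray:
    """k-th root of a transition matrix via eigendecomposition, giving a
    per-period matrix whose k-th power reproduces P."""
    w, V = np.linalg.eig(P.astype(complex))
    root = V @ np.diag(w ** (1.0 / k)) @ np.linalg.inv(V)
    return root.real

P1 = matrix_root(P4, 4)              # interpolated one-year matrix
P2 = np.linalg.matrix_power(P1, 2)   # interpolated two-year matrix
```

Not every empirical four-year matrix admits a valid stochastic root (entries of the computed root can go negative), which is one reason to interpolate through a fitted Markov model rather than raw matrix algebra.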


2019 ◽  
Vol 16 (2) ◽  
pp. 111-119 ◽  
Author(s):  
Ron Brookmeyer ◽  
Nada Abdalla

Background/Aims: Clinical trials for Alzheimer’s disease have been aimed primarily at persons who have cognitive symptoms at enrollment. However, researchers now recognize that the pathophysiological process of Alzheimer’s disease begins years, if not decades, before the onset of clinical symptoms. Successful intervention may require intervening early in the disease process. Critical issues arise in designing clinical trials for primary and secondary prevention of Alzheimer’s disease, including determination of sample sizes and follow-up duration. We address a number of these issues through application of a unifying multistate model for the preclinical course of Alzheimer’s disease. A multistate model allows us to specify at which points during the long disease process the intervention exerts its effects.
Methods: We used a nonhomogeneous Markov multistate model for the progression of Alzheimer’s disease through preclinical disease states defined by biomarkers, mild cognitive impairment and Alzheimer’s disease dementia. We used transition probabilities based on several published cohort studies. Sample size methods were developed that account for factors including the initial preclinical disease state of trial participants, the primary endpoint, age-dependent transition and mortality rates, and specification of which transition rates are the targets of the intervention.
Results: We find that Alzheimer’s disease prevention trials with a clinical primary endpoint of mild cognitive impairment or Alzheimer’s disease dementia will require sample sizes on the order of many thousands of individuals, with at least 5 years of follow-up, which is larger than most Alzheimer’s disease therapeutic trials conducted to date. The reasons for the large trial sizes include the long and variable preclinical period that spans decades; high rates of attrition among elderly populations due to mortality and losses to follow-up; and potential selection effects, whereby healthier subjects enroll in prevention trials. A web application is available to perform sample size calculations using the methods reported here.
Conclusion: Sample sizes based on multistate models can account for the points in the disease process at which interventions exert their effects and may lead to more accurate sample size determinations. We will need innovative strategies to help design Alzheimer’s disease prevention trials with feasible sample size requirements and durations of follow-up.
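A minimal Monte Carlo sketch shows why event counts, and hence sample sizes, end up so large. All transition probabilities below are invented for illustration, and the chain is homogeneous apart from age-dependent mortality, unlike the paper's nonhomogeneous model fitted to published cohorts.

```python
import random

def endpoint_rate(n, years=5, p_mci=0.05, p_ad=0.15,
                  base_mort=0.01, start_age=70, seed=0):
    """Monte Carlo fraction of participants reaching the clinical
    endpoint (MCI or AD dementia) within `years`, in a simplified
    preclinical -> MCI -> AD chain with mortality rising with age.
    All rates are hypothetical."""
    rng = random.Random(seed)
    events = 0
    for _ in range(n):
        state, age = "preclinical", start_age
        for _ in range(years):
            mort = base_mort * 1.09 ** (age - start_age)  # age-dependent
            u = rng.random()
            if u < mort:
                state = "dead"
                break
            if state == "preclinical" and u < mort + p_mci:
                state = "mci"
            elif state == "mci" and u < mort + p_ad:
                state = "ad"
            age += 1
        if state in ("mci", "ad"):
            events += 1
    return events / n
```

With these toy rates only a modest fraction of participants reach the clinical endpoint within 5 years, so detecting a treatment effect on that endpoint requires enrolling thousands.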


BMJ Open ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. e033510 ◽  
Author(s):  
Ayako Okuyama ◽  
Matthew Barclay ◽  
Cong Chen ◽  
Takahiro Higashi

Objectives: The accuracy of the ascertainment of vital status impacts the validity of cancer survival estimates. This study assesses the potential impact of loss-to-follow-up on survival in Japan, both nationally and in the samples seen at individual hospitals.
Design: Simulation study.
Setting and participants: Data of patients diagnosed in 2007, provided by the Hospital-Based Cancer Registries of 177 hospitals throughout Japan.
Primary and secondary outcome measures: We performed simulations for each cancer site, for sample sizes of 100, 1000 and 8000 patients, and for loss-to-follow-up ranging from 1% to 5%. We estimated the average bias and the variation in bias in survival due to loss-to-follow-up.
Results: The expected bias was not associated with the sample size (about 2.1% with 5% loss-to-follow-up for the cohort including all cancers), but a smaller sample size led to more variable bias. Sample sizes of around 100 patients, as may be seen at individual hospitals, had very variable bias: with 5% loss-to-follow-up for all cancers, 25% of samples had a bias of <1.02% and 25% of samples had a bias of >3.06%.
Conclusion: Survival should be interpreted with caution when loss-to-follow-up is a concern, especially for poor-prognosis cancers and for small-area estimates.
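The structure of such a simulation can be sketched as follows; this is our own minimal rendering, not the authors' code. Lost-to-follow-up patients are optimistically counted as alive, and the 60% true survival and 5% loss rate are illustrative choices.

```python
import random
from statistics import mean, pstdev

def bias_distribution(n, loss_rate=0.05, true_surv=0.60,
                      reps=500, seed=1):
    """Distribution of bias in an estimated survival proportion when
    lost-to-follow-up patients are assumed alive, across `reps`
    simulated samples of size n. Lost patients share the same true
    survival probability as everyone else."""
    rng = random.Random(seed)
    biases = []
    for _ in range(reps):
        est = true = 0
        for _ in range(n):
            alive = rng.random() < true_surv
            true += alive
            if rng.random() < loss_rate:
                est += 1       # assumed alive regardless of truth
            else:
                est += alive
        biases.append((est - true) / n)
    return biases
```

The mean bias is roughly loss_rate * (1 - true_surv) regardless of n, while the spread shrinks with sample size, mirroring the finding that small hospital-level samples show highly variable bias.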


2020 ◽  
Vol 3 ◽  
pp. 82
Author(s):  
Robert Murphy ◽  
Emer McGrath ◽  
Aoife Nolan ◽  
Andrew Smyth ◽  
Michelle Canavan ◽  
...  

Background: A run-in period is often employed in randomised controlled trials to increase adherence to the intervention and reduce participant loss to follow-up in the trial population. However, it is uncertain whether use of a run-in period affects the magnitude of the treatment effect.
Methods: We will conduct a sensitive search for systematic reviews of cardiovascular prevention trials and a complete meta-analysis of treatment effects, comparing cardiovascular prevention trials that used a run-in period (“run-in trials”) with matched cardiovascular prevention trials that did not (“non-run-in trials”). We describe a comprehensive matching process that matches run-in trials with non-run-in trials by patient population, intervention, and outcomes. For each pair of run-in trial and matched non-run-in trial(s), we will estimate the ratio of relative risks and its 95% confidence interval. We will evaluate differences in treatment effect between run-in and non-run-in trials. Our primary outcome will be the ratio of relative risks for matched run-in and non-run-in trials for their reported cardiovascular composite outcome. Our secondary outcomes are comparisons of mortality, loss to follow-up, frequency of adverse events, and methodological quality of trials.
Conclusions: This study will answer a key question about the influence of a run-in period on the magnitude of treatment effects in randomised controlled trials of cardiovascular prevention therapies.
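The ratio of relative risks for a matched pair can be computed by recovering each trial's log-RR standard error from its reported confidence interval and combining the two on the log scale, assuming the matched trials are independent. The function below is a generic sketch of that calculation; the numbers in the example usage are hypothetical.

```python
from math import exp, log, sqrt

def ratio_of_relative_risks(rr1, ci1, rr2, ci2, z=1.96):
    """Ratio of two relative risks with a 95% CI, assuming the two
    estimates are independent. Each standard error is recovered from
    the width of the reported 95% CI on the log scale."""
    se1 = (log(ci1[1]) - log(ci1[0])) / (2 * z)
    se2 = (log(ci2[1]) - log(ci2[0])) / (2 * z)
    rrr = rr1 / rr2
    se = sqrt(se1 ** 2 + se2 ** 2)
    return rrr, (rrr * exp(-z * se), rrr * exp(z * se))

# Hypothetical matched pair: run-in trial RR 0.80 (0.70-0.92),
# non-run-in trial RR 0.90 (0.78-1.04).
rrr, ci = ratio_of_relative_risks(0.80, (0.70, 0.92), 0.90, (0.78, 1.04))
```

A ratio below 1 would suggest the run-in trial reported a stronger treatment effect than its matched non-run-in trial.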


Author(s):  
Patrick Royston ◽  
Abdel Babiker

We present a menu-driven Stata program for the calculation of sample size or power for complex clinical trials with a survival time or a binary outcome. The features supported include up to six treatment arms, an arbitrary time-to-event distribution, fixed or time-varying hazard ratios, unequal patient allocation, loss to follow-up, staggered patient entry, and crossover of patients from their allocated treatment to an alternative treatment. The computations of sample size and power are based on the logrank test and are done according to the asymptotic distribution of the logrank test statistic, adjusted appropriately for the design features.
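The asymptotic log-rank computations the program implements reduce, in the simplest case, to Schoenfeld's events formula. The sketch below is that textbook special case, not the Stata program itself: it ignores the staggered entry, time-varying hazard ratios, crossover and loss to follow-up that the menu-driven program supports.

```python
from math import ceil, log
from statistics import NormalDist

def logrank_sample_size(hr, p_event_control, p_event_treat,
                        alloc=0.5, alpha=0.05, power=0.80):
    """Schoenfeld's formula for a two-arm log-rank test under
    proportional hazards: required total events, then total patients
    given each arm's overall event probability."""
    z = NormalDist().inv_cdf
    events = (z(1 - alpha / 2) + z(power)) ** 2 / (
        alloc * (1 - alloc) * log(hr) ** 2)
    p_event = alloc * p_event_treat + (1 - alloc) * p_event_control
    return ceil(events), ceil(events / p_event)
```

For a hazard ratio of 0.7 with 50% and 40% event probabilities, 5% two-sided alpha and 80% power, this gives 247 events across both arms.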


2012 ◽  
Vol 2012 ◽  
pp. 1-9 ◽  
Author(s):  
April J. Bell ◽  
Kara Wools-Kaloustian ◽  
Sylvester Kimaiyo ◽  
Hai Liu ◽  
Adrian Katschke ◽  
...  

Background: There was a 6-month shortage of combination antiretroviral therapy (cART) in Kenya.
Methods: We assessed morbidity, mortality, and loss to follow-up (LTFU) in this retrospective analysis of adults who were enrolled during the six-month period with restricted cART (cap) or the six months prior (pre-cap), and who were eligible for cART at enrollment by the pre-cap standard. Cox models were used to adjust for potential confounders.
Results: 9009 adults were eligible for analysis: 4,714 pre-cap and 4,295 during the cap. The median number of days from enrollment to cART initiation was 42 pre-cap and 56 during the cap (P<0.001). After adjustment, individuals in the cap period were at higher risk of mortality (HR = 1.21; 95% CI: 1.06–1.39) and LTFU (HR = 1.12; 95% CI: 1.04–1.22). There was no difference between the groups in the risk of developing a new AIDS-defining illness (HR = 0.92; 95% CI: 0.82–1.03).
Conclusions: Rationing of cART, even for a relatively short period of six months, led to clinically adverse outcomes.


2021 ◽  
Vol 2 (4) ◽  
Author(s):  
R Mukherjee ◽  
N Muehlemann ◽  
A Bhingare ◽  
G W Stone ◽  
C Mehta

Abstract
Background: Cardiovascular trials increasingly require large sample sizes and long follow-up periods. Several approaches have been developed to optimize sample size, such as adaptive group sequential trials, sample size re-estimation based on the promising zone, and the win ratio. Traditionally, the log-rank test or the Cox proportional hazards model is used to test for treatment effects, based on a constant hazard rate and proportional hazards alternatives, which, however, may not always hold. Large sample sizes and/or long follow-up periods are especially challenging for trials evaluating the efficacy of acute care interventions.
Purpose: We propose an adaptive design wherein, using interim data, Bayesian computation of predictive power guides the increase in sample size and/or the minimum follow-up duration. These computations do not depend on the constant hazard rate and proportional hazards assumptions, thus yielding more robust interim decision making for the future course of the trial.
Methods: PROTECT IV is designed to evaluate mechanical circulatory support with the Impella CP device vs. standard of care during high-risk PCI. The primary endpoint is a composite of all-cause death, stroke, MI or hospitalization for cardiovascular causes, with an initial minimum follow-up of 12 months and initial enrolment of 1252 patients, with expected recruitment over 24 months. The study will employ an adaptive increase in sample size and/or minimum follow-up at the interim analysis, when ∼80% of patients have been enrolled. The adaptations utilize extensive simulations to choose a new sample size up to 2500 and a new minimum follow-up time up to 36 months that provide a Bayesian predictive power of 85%. Bayesian calculations are based on patient-level information rather than summary statistics, enabling more reliable interim decisions. Constant or proportional hazard assumptions are not required for this approach because two separate piecewise constant hazard models with Gamma priors are fitted to the interim data. Bayesian predictive power is then calculated using Monte Carlo methodology. Via extensive simulations, we have examined the utility of the proposed design for situations with time-varying hazards and non-proportional hazard ratios, such as a delayed treatment effect (Figure) and crossing of survival curves. The heat map of Bayesian predictive power obtained when the interim Kaplan-Meier curves reflected delayed response shows that, for this scenario, an optimal combination of increased sample size and increased follow-up time would be needed to attain 85% predictive power.
Conclusion: The proposed adaptive design, with sample size and minimum follow-up adaptation based on Bayesian predictive power at interim looks, allows de-risking the trial against uncertainties in control arm outcome rate, hazard ratio, and recruitment rate.
Funding Acknowledgement: Type of funding source: Private company. Main funding source(s): Abiomed, Inc.
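The Gamma-prior, piecewise-exponential machinery can be sketched in a simplified form. The toy below collapses the piecewise model to a single interval per arm and scores significance with a Schoenfeld-style normal approximation for the final-analysis log-rank statistic; every number (prior, event counts, exposures, future event target) is hypothetical and not from PROTECT IV.

```python
import math
import random
from statistics import NormalDist

def predictive_power(events_c, exposure_c, events_t, exposure_t,
                     n_future_events=200, alpha=0.05,
                     sims=2000, seed=7):
    """Monte Carlo Bayesian predictive power for a time-to-event trial.
    Each arm's hazard gets a conjugate Gamma posterior; the final
    log-rank z is approximated as Normal(log(HR) * sqrt(d / 4), 1)."""
    rng = random.Random(seed)
    a0, b0 = 0.1, 0.1  # weak Gamma(shape, rate) prior on each hazard
    crit = NormalDist().inv_cdf(alpha / 2)  # about -1.96 for alpha=0.05
    hits = 0
    for _ in range(sims):
        # gammavariate takes (shape, scale); scale = 1 / posterior rate
        lam_c = rng.gammavariate(a0 + events_c, 1 / (b0 + exposure_c))
        lam_t = rng.gammavariate(a0 + events_t, 1 / (b0 + exposure_t))
        z = rng.gauss(math.log(lam_t / lam_c)
                      * math.sqrt(n_future_events / 4), 1.0)
        if z < crit:  # benefit = hazard reduction in the treatment arm
            hits += 1
    return hits / sims
```

A strongly favorable interim (treatment hazard about half the control's) should give high predictive power, while a null interim should give little, which is what interim decision rules key on.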


2021 ◽  
Vol 39 (15_suppl) ◽  
pp. 9099-9099
Author(s):  
Carissa Jones ◽  
Rebecca Lachs ◽  
Emma Sturgill ◽  
Amanda Misch ◽  
Caressa Lietman ◽  
...  

9099 Background: The development of checkpoint inhibitors (CPIs) and driver-targeted tyrosine kinase inhibitors (TKIs) has transformed the treatment of non-small cell lung cancer (NSCLC) and increased survival rates. However, the role of CPIs in patients with oncogenic-driven NSCLC remains an area of investigation. We sought to examine the impact of CPI sequence on treatment response among patients with oncogenic-driver mutation-positive NSCLC.
Methods: Patients with NSCLC being treated within the Sarah Cannon Research Institute network were identified through Genospace, Sarah Cannon’s clinico-genomic analytics platform. Advanced-stage oncogenic-driven tumors (driver+) were defined as those with a record of receiving an FDA-approved TKI targeting EGFR, ALK, RET, ROS1, NTRK, MET, or BRAF. Kaplan-Meier estimates were used to examine TTF (defined as time from therapy start to start of next therapy, death, or loss to follow-up) and overall survival (OS).
Results: We identified 12,352 patients with lung cancer and available therapy data (2005-2020), including 2,270 (18%) driver+ patients. Eleven percent (N=245) of driver+ patients received a CPI, including 120 (49%) with CPI prior to TKI, 122 (50%) with CPI post TKI, and 3 (1%) who received CPI both pre and post TKI. The CPI TTF was significantly longer for those who received CPI post TKI compared to those who received it prior (Table). EGFR+ tumors accounted for 82% (N=1,867) of driver+ patients, 10% of whom (N=188) received a CPI. Of the EGFR+/CPI+ patients, 78 (41%) received CPI prior to TKI, 107 (57%) received CPI post TKI, and 3 (2%) received CPI both pre and post TKI. EGFR+ tumors exposed to a CPI post TKI had a longer CPI TTF compared to those that received it prior (Table). In contrast, there was no difference in length of benefit from TKI whether it was received pre or post CPI (Table). There was also no difference in OS based on sequence of TKI and CPI (p=0.88). Larger sample sizes are needed for analysis of additional driver-stratified cohorts.
Conclusions: Patients with oncogenic-driven NSCLC benefited from CPI longer when it was administered after TKI compared to before. Importantly, therapy sequence affected only the length of benefit from CPIs and did not affect the length of benefit from TKIs. This effect was present in EGFR+ NSCLC, but sample sizes were too small to determine whether the same is true for other oncogenic drivers. Therapy sequence had no impact on OS, indicating the presence of additional clinical, therapeutic, and/or genomic factors contributing to disease progression. Continued research is needed to better understand markers of CPI response in driver+ NSCLC. [Table: see text]
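TTF as defined in this abstract counts start of next therapy, death, or loss to follow-up as the event, with censoring only at the end of data availability. A Kaplan-Meier estimate of such a curve can be sketched as below; this is our own minimal estimator, not the Genospace implementation.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve as (time, survival) pairs.
    events[i] is True when the composite TTF event occurred at
    times[i], False for censoring (e.g. still on therapy at data
    cut-off). At tied times, events are processed before censorings,
    per the usual convention."""
    data = sorted(zip(times, events), key=lambda te: (te[0], not te[1]))
    at_risk, surv, curve = len(data), 1.0, []
    for t, d in data:
        if d:
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1
    return curve
```

Comparing the resulting curves (and e.g. their medians) between CPI-before-TKI and CPI-after-TKI groups is the kind of analysis summarized in the abstract's table.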

