SHARP BOUNDS ON THE DISTRIBUTION OF TREATMENT EFFECTS AND THEIR STATISTICAL INFERENCE

2009 ◽  
Vol 26 (3) ◽  
pp. 931-951 ◽  
Author(s):  
Yanqin Fan ◽  
Sang Soo Park

In this paper, we propose nonparametric estimators of sharp bounds on the distribution of treatment effects of a binary treatment and establish their asymptotic distributions. We note the possible failure of the standard bootstrap with the same sample size and apply the fewer-than-n bootstrap to make inferences on these bounds. The finite-sample performance of confidence intervals for the bounds based on normal critical values, the standard bootstrap, and the fewer-than-n bootstrap is investigated via a simulation study. Finally, we establish sharp bounds on the treatment effect distribution when covariates are available.
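The sharp bounds in question are Makarov-type bounds on the distribution of the treatment effect Δ = Y1 − Y0, built from the marginal outcome distributions. Below is a minimal sketch of plug-in estimation of these bounds from two independent samples together with a fewer-than-n (m-out-of-n) bootstrap interval for the lower bound; the function names, evaluation grid and resample size m are illustrative assumptions, not the estimators or tuning used in the paper.

```python
# Hedged sketch: plug-in Makarov bounds on P(Y1 - Y0 <= delta) and an
# m-out-of-n ("fewer-than-n") bootstrap percentile interval for the lower
# bound. Grid choice and m are illustrative, not the paper's recommendations.
import numpy as np

def ecdf(sample, points):
    """Empirical CDF of `sample` evaluated at `points`."""
    return np.searchsorted(np.sort(sample), points, side="right") / len(sample)

def makarov_bounds(y1, y0, delta, grid=None):
    """Plug-in sharp bounds on P(Y1 - Y0 <= delta) from two samples."""
    if grid is None:
        grid = np.union1d(y1, y0 + delta)
    d = ecdf(y1, grid) - ecdf(y0, grid - delta)
    lower = max(d.max(), 0.0)          # sup_y max{F1(y) - F0(y - delta), 0}
    upper = 1.0 + min(d.min(), 0.0)    # 1 + inf_y min{F1(y) - F0(y - delta), 0}
    return lower, upper

def m_out_of_n_ci(y1, y0, delta, m, n_boot=999, alpha=0.05, seed=0):
    """Percentile interval for the lower bound using resamples of size m < n."""
    rng = np.random.default_rng(seed)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        r1 = rng.choice(y1, size=m, replace=True)
        r0 = rng.choice(y0, size=m, replace=True)
        stats[b], _ = makarov_bounds(r1, r0, delta)
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])
```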

Biometrika ◽  
2020 ◽  
Author(s):  
Oliver Dukes ◽  
Stijn Vansteelandt

Summary: Eliminating the effect of confounding in observational studies typically involves fitting a model for an outcome adjusted for covariates. When, as often, these covariates are high-dimensional, this necessitates the use of sparse estimators, such as the lasso, or other regularization approaches. Naïve use of such estimators yields confidence intervals for the conditional treatment effect parameter that are not uniformly valid. Moreover, as the number of covariates grows with the sample size, correctly specifying a model for the outcome is nontrivial. In this article we deal with both of these concerns simultaneously, obtaining confidence intervals for conditional treatment effects that are uniformly valid, regardless of whether the outcome model is correct. This is done by incorporating an additional model for the treatment selection mechanism. When both models are correctly specified, we can weaken the standard conditions on model sparsity. Our procedure extends to multivariate treatment effect parameters and complex longitudinal settings.
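To illustrate why adding a model for the treatment mechanism helps, here is a hedged sketch of a generic lasso-based partialling-out (residual-on-residual) estimator with a Wald-type interval. It is in the same spirit as the approach described, but it is not the authors' procedure; the helper name, cross-validated lasso fits and sandwich standard error are assumptions made for illustration only.

```python
# Hedged sketch: both the outcome and the treatment are residualised on
# high-dimensional covariates with the lasso, and the effect is estimated by
# regressing residuals on residuals. Not the authors' exact estimator.
import numpy as np
from sklearn.linear_model import LassoCV
from scipy import stats

def partialling_out_ci(y, a, X, alpha=0.05):
    # Residualise outcome and treatment on the covariates.
    ry = y - LassoCV(cv=5).fit(X, y).predict(X)
    ra = a - LassoCV(cv=5).fit(X, a).predict(X)
    # Effect estimate from the residual-on-residual regression.
    beta = np.sum(ra * ry) / np.sum(ra ** 2)
    # Sandwich-type standard error for a Wald interval.
    psi = ra * (ry - beta * ra)
    se = np.sqrt(np.sum(psi ** 2)) / np.sum(ra ** 2)
    z = stats.norm.ppf(1 - alpha / 2)
    return beta, (beta - z * se, beta + z * se)
```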


2017 ◽  
Vol 6 (4) ◽  
pp. 135
Author(s):  
Hamza Dhaker ◽  
Papa Ngom ◽  
Malick Mbodj

This article is devoted to the study of overlap measures between the densities of two exponential populations. Several overlap coefficients are considered, namely Matusita's measure ρ, Morisita's measure λ, and Weitzman's measure Δ. A new overlap measure Λ based on the Kullback-Leibler divergence is proposed. The invariance property of these coefficients and a method of statistical inference for them are also presented. Taylor series approximations are used to construct confidence intervals for the overlap measures. The bias and mean squared error properties of the estimators are studied through a simulation study.
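For concreteness, the three classical coefficients can be evaluated numerically for two exponential densities as sketched below; the rates are illustrative, and the article's new KL-based measure Λ is not reproduced here since its definition is not given in this listing.

```python
# Hedged sketch: numerical evaluation of Matusita's rho, Morisita's lambda
# and Weitzman's Delta for two exponential densities with illustrative rates.
import numpy as np
from scipy import integrate
from scipy.stats import expon

def overlap_coefficients(rate1, rate2, upper=200.0):
    f = lambda x: expon.pdf(x, scale=1 / rate1)
    g = lambda x: expon.pdf(x, scale=1 / rate2)
    rho, _ = integrate.quad(lambda x: np.sqrt(f(x) * g(x)), 0, upper)      # Matusita
    fg, _ = integrate.quad(lambda x: f(x) * g(x), 0, upper)
    ff, _ = integrate.quad(lambda x: f(x) ** 2, 0, upper)
    gg, _ = integrate.quad(lambda x: g(x) ** 2, 0, upper)
    lam = 2 * fg / (ff + gg)                                               # Morisita
    delta, _ = integrate.quad(lambda x: np.minimum(f(x), g(x)), 0, upper)  # Weitzman
    return rho, lam, delta

print(overlap_coefficients(1.0, 2.0))
```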


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Kim Jachno ◽  
Stephane Heritier ◽  
Rory Wolfe

Abstract
Background: Non-proportional hazards are common with time-to-event data, but the majority of randomised clinical trials (RCTs) are designed and analysed using approaches that assume the treatment effect follows proportional hazards (PH). Recent advances in oncology treatments have identified two forms of non-PH of particular importance: a time lag until treatment becomes effective, and an early effect of treatment that ceases after a period of time. In sample size calculations for treatment effects on time-to-event outcomes, where information is based on the number of events rather than the number of participants, correct specification of the baseline hazard rate is of crucial importance, amongst other considerations. Under PH, the shape of the baseline hazard has no effect on the resultant power and magnitude of treatment effects using standard analytical approaches. In a non-PH context, however, the appropriateness of analytical approaches can depend on the shape of the underlying hazard.

Methods: A simulation study was undertaken to assess the impact of clinically plausible non-constant baseline hazard rates on the power, magnitude and coverage of commonly utilised regression-based measures of treatment effect and tests of survival curve difference for these two forms of non-PH used in RCTs with time-to-event outcomes.

Results: In the presence of even mild departures from PH, the power, average treatment effect size and coverage were adversely affected. Depending on the nature of the non-proportionality, non-constant event rates could further exacerbate or somewhat ameliorate the losses in power, treatment effect magnitude and coverage observed. No single summary measure of treatment effect was able to adequately describe the full extent of a potentially time-limited treatment benefit whilst maintaining power at nominal levels.

Conclusions: Our results show the increased importance of considering plausible, potentially non-constant event rates when non-proportionality of treatment effects could be anticipated. In planning clinical trials with the potential for non-PH, even modest departures from an assumed constant baseline hazard could appreciably impact the power to detect treatment effects, depending on the nature of the non-PH. Comprehensive analysis plans may be required to accommodate the description of time-dependent treatment effects.
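One of the two non-PH patterns, a delayed treatment effect, can be simulated with a piecewise-exponential hazard as sketched below; the hazard rate, hazard ratio and lag length are illustrative values, not those used in the paper's simulation study.

```python
# Hedged sketch: event times under a delayed treatment effect. The control
# hazard is h0 throughout; the treated hazard is h0 until t_lag and h0*hr
# afterwards (inverse cumulative-hazard transform). Parameters are illustrative.
import numpy as np

def sim_delayed_effect(n, h0=0.1, hr=0.6, t_lag=6.0, seed=0):
    rng = np.random.default_rng(seed)
    t_control = rng.exponential(1 / h0, n)
    u = -np.log(rng.uniform(size=n))             # target cumulative hazard
    t_treated = np.where(
        u <= h0 * t_lag,
        u / h0,                                  # event occurs before the lag ends
        t_lag + (u - h0 * t_lag) / (h0 * hr),    # event after treatment takes effect
    )
    return t_control, t_treated

t0, t1 = sim_delayed_effect(500)
# Power for a chosen test (log-rank, weighted log-rank, RMST difference, ...)
# can then be estimated by repeating the simulation and applying the test.
```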


2007 ◽  
Vol 25 (18_suppl) ◽  
pp. 6513-6513
Author(s):  
R. A. Wilcox ◽  
G. H. Guyatt ◽  
V. M. Montori

Background: Investigators finding a large treatment effect in an interim analysis may terminate a randomized trial (RCT) earlier than planned. A systematic review (Montori et al., JAMA 2005; 294: 2203–2209) found that RCTs stopped early for benefit are poorly reported and may overestimate the true treatment effect. The extent to which RCTs in oncology stopped early for benefit share similar concerns remains unclear.

Methods: We selected the oncology RCTs reported in the original systematic review and reviewed their study characteristics, features related to the decision to monitor and stop the study early (sample size, interim analyses, monitoring and stopping rules), and the number of events and estimated treatment effects.

Results: We found 29 RCTs in malignant hematology (n=6) and oncology (n=23), 52% published in 2000–2004 and 41% in 3 high-impact medical journals (New England Journal of Medicine, Lancet, JAMA). The majority (79%) of trials reported a planned sample size and, on average, recruited 67% of the planned sample size (SD 31%). RCTs reported (1) the planned sample size (n=20), (2) the interim analysis at which the study was terminated (n=16), and (3) whether the decision to stop the study prematurely was informed by a stopping rule (n=16); only 13 reported all three. There was a highly significant correlation between the number of events and the treatment effect (r=0.68, p=0.0007). The odds of finding a large treatment effect (a relative risk below the median of 0.54, IQR 0.3–0.7) when studies stopped after a few events (number of events below the median of 54, IQR 22–125) were 6.2 times greater than when studies stopped later.

Conclusions: RCTs in oncology stopped early for benefit tend to report large treatment effects that may overestimate the true treatment effect, particularly when the number of events driving study termination is small. Also, information pertinent to the decision to stop early was inconsistently reported. Clinicians and policymakers should interpret such studies with caution, especially when information about the decision to stop early is not provided and few events occurred. No significant financial relationships to disclose.


1993 ◽  
Vol 9 (2) ◽  
pp. 263-282 ◽  
Author(s):  
In Choi

Using the asymptotic normality of the least-squares estimates for the autoregressive (AR) process with real, positive unit roots and at least one stable root, we consider the asymptotic distributions of the Wald and t ratio tests on AR coefficients. In addition, we propose a method of constructing confidence intervals for the sum of AR coefficients possibly in the presence of a unit root. Using simulation methods, we compare the finite-sample cumulative distributions of the t ratios for individual autoregressive coefficients with those of standard normal distributions, and investigate the finite-sample performance of our confidence intervals and t ratios. Our simulation results show that the t ratios for nonstationary processes converge to a standard normal distribution more slowly than those for stationary processes. Further, the confidence intervals are shown to work reasonably well in moderately large samples, but they display unsatisfactory performance at small sample sizes.
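A minimal simulation of the setting described, assuming an AR(2) process with one unit root and one stable root (φ1 = 1.5, φ2 = −0.5), is sketched below; the sample size, number of replications and choice of coefficient are illustrative assumptions, not the paper's design.

```python
# Hedged sketch: finite-sample distribution of the OLS t-ratio for an
# individual AR coefficient when the AR(2) process has one unit root and
# one stable root. Parameter values and replication counts are illustrative.
import numpy as np

def ar2_t_ratio(n=200, phi=(1.5, -0.5), seed=None):
    rng = np.random.default_rng(seed)
    y = np.zeros(n + 2)
    for t in range(2, n + 2):
        y[t] = phi[0] * y[t - 1] + phi[1] * y[t - 2] + rng.standard_normal()
    X = np.column_stack([y[1:-1], y[:-2]])        # lags y_{t-1}, y_{t-2}
    yt = y[2:]
    beta, *_ = np.linalg.lstsq(X, yt, rcond=None)
    resid = yt - X @ beta
    sigma2 = resid @ resid / (len(yt) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return (beta[0] - phi[0]) / np.sqrt(cov[0, 0])  # t-ratio for phi1

t_ratios = [ar2_t_ratio(seed=s) for s in range(2000)]
# Comparing np.quantile(t_ratios, [0.05, 0.5, 0.95]) with standard normal
# quantiles illustrates the slower convergence reported for nonstationary cases.
```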


2012 ◽  
Vol 60 (1) ◽  
pp. 109-113 ◽  
Author(s):  
M Ershadul Haque ◽  
Jafar A Khan

Classical inference considers sampling variability to be the only source of uncertainty, and does not address the issue of bias caused by contamination. Naive robust intervals replace the classical estimates by their robust counterparts without considering the possible bias of the robust point estimates. Consequently, the asymptotic coverage proportion of these intervals, at any nominal level, will tend to zero for any proportion of contamination. In this study, we attempt to achieve reasonable coverage percentages by constructing globally robust confidence intervals that adjust for the bias of the robust point estimates. We improve these globally robust intervals by considering the direction of the bias of the robust estimates used. We compare the proposed intervals with the existing ones through an extensive simulation study. The proposed methods have reasonable coverage percentages, while the existing methods show very poor coverage as the sample size increases.
DOI: http://dx.doi.org/10.3329/dujs.v60i1.10347 Dhaka Univ. J. Sci. 60(1): 109-113, 2012 (January)
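As a very rough illustration of the general idea of adjusting an interval for contamination bias (not the interval construction proposed in the paper), one can widen a naive robust interval for a location parameter by the maximal asymptotic bias of the median under ε-contamination of a normal model; the function name and contamination level below are assumptions.

```python
# Hedged illustration only: a naive interval around the sample median is
# widened by the worst-case asymptotic bias of the median under
# eps-contamination of a normal model. Not the authors' construction.
import numpy as np
from scipy import stats

def bias_adjusted_interval(x, eps=0.05, alpha=0.05):
    n = len(x)
    med = np.median(x)
    scale = stats.median_abs_deviation(x, scale="normal")
    se = scale * np.sqrt(np.pi / (2 * n))                    # asymptotic SE of the median
    z = stats.norm.ppf(1 - alpha / 2)
    max_bias = scale * stats.norm.ppf(1 / (2 * (1 - eps)))   # worst-case shift
    return med - z * se - max_bias, med + z * se + max_bias
```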


Author(s):  
Yuping Song ◽  
Weijie Hou ◽  
Shengyi Zhou

Abstract: This paper discusses Nadaraya-Watson estimators for the unknown coefficients in a second-order diffusion model with jumps, constructed with Gamma asymmetric kernels. Compared with existing nonparametric estimators constructed with Gaussian symmetric kernels, local constant smoothing using Gamma asymmetric kernels possesses some extra advantages, such as boundary bias correction, variance reduction and resistance to sparse design points, which is validated through theoretical details and a finite-sample simulation study. Under regularity conditions, the weak consistency and the asymptotic normality of these estimators are presented. Finally, the statistical advantages of the nonparametric estimators are illustrated through 5-minute high-frequency data from the Shenzhen Stock Exchange in China.
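A minimal sketch of local-constant smoothing with a Gamma asymmetric kernel in the style of Chen (2000) is given below: the kernel at design point x is a Gamma density with shape x/b + 1 and scale b, so its support matches a nonnegative state variable. The paper's estimators for the second-order diffusion with jumps involve further ingredients not shown here, and the bandwidth and data are illustrative.

```python
# Hedged sketch: Nadaraya-Watson (local constant) regression with a Gamma
# asymmetric kernel. Toy data and bandwidth are illustrative assumptions.
import numpy as np
from scipy.stats import gamma

def nw_gamma(x_eval, X, Y, b):
    """Nadaraya-Watson regression of Y on X >= 0 with a Gamma kernel."""
    w = gamma.pdf(X, a=x_eval / b + 1.0, scale=b)   # asymmetric kernel weights
    return np.sum(w * Y) / np.sum(w)

# Toy usage with simulated nonnegative design points.
rng = np.random.default_rng(1)
X = rng.gamma(2.0, 1.0, 500)
Y = np.sin(X) + 0.1 * rng.standard_normal(500)
print(nw_gamma(1.5, X, Y, b=0.2))
```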


2015 ◽  
Vol 26 (6) ◽  
pp. 2543-2551 ◽  
Author(s):  
Hong Zhu ◽  
Song Zhang ◽  
Chul Ahn

Split-mouth designs are frequently used in dental clinical research, where a mouth is divided into two or more experimental segments that are randomly assigned to different treatments. This design has the distinct advantage of removing much of the inter-subject variability from the estimated treatment effect. Methods of statistical analysis for split-mouth designs have been well developed. However, little work is available on sample size considerations at the design phase of a split-mouth trial, although many researchers have pointed out that the split-mouth design can be more efficient than a parallel-group design only when the within-subject correlation coefficient is substantial. In this paper, we propose to use the generalized estimating equation (GEE) approach to assess treatment effects in split-mouth trials, accounting for correlations among observations. Closed-form sample size formulas are introduced for the split-mouth design with continuous and binary outcomes, assuming exchangeable and "nested exchangeable" correlation structures for outcomes from the same subject. The statistical inference is based on the large-sample approximation under the GEE approach. Simulation studies are conducted to investigate the finite-sample performance of the GEE sample size formulas. A dental clinical trial example is presented for illustration.
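In the simplest case of two treatments per mouth, a continuous outcome and an exchangeable within-subject correlation ρ, the within-subject difference has variance 2σ²(1 − ρ), which gives the paired-design sample size sketched below; the paper's GEE formulas cover binary outcomes and "nested exchangeable" structures, which are not reproduced here, and the inputs shown are illustrative.

```python
# Hedged sketch: number of subjects (mouths) to detect a mean difference
# `delta` with a continuous outcome, SD `sigma`, and within-subject
# correlation `rho`, using the paired-design variance 2*sigma^2*(1 - rho).
import math
from scipy.stats import norm

def split_mouth_n(delta, sigma, rho, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(z ** 2 * 2 * sigma ** 2 * (1 - rho) / delta ** 2)

# e.g. detect a difference of 0.5 with sigma = 1 and rho = 0.6
print(split_mouth_n(delta=0.5, sigma=1.0, rho=0.6))
```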


2010 ◽  
Vol 37 (2) ◽  
pp. 907-920 ◽  
Author(s):  
Ted W. Way ◽  
Berkman Sahiner ◽  
Lubomir M. Hadjiiski ◽  
Heang-Ping Chan
