nominal coverage
Recently Published Documents


TOTAL DOCUMENTS: 14 (FIVE YEARS: 5)
H-INDEX: 5 (FIVE YEARS: 1)

2021 ◽ Vol 21 (1)
Author(s): Daniele Bottigliengo, Ileana Baldi, Corrado Lanera, Giulia Lorenzoni, Jonida Bejko, ...

Abstract

Background: Propensity score matching is a statistical method that is often used to make inferences about treatment effects in observational studies. In recent years, the technique has been widely used in the cardiothoracic surgery literature to evaluate the potential benefits of new surgical therapies or procedures. However, the small sample sizes and the strong dependence of treatment assignment on baseline covariates that often characterize these studies make such an evaluation statistically challenging. In such settings, propensity score matching in combination with oversampling and replacement may address these issues by increasing the initial sample size of the study and thus improving the statistical power needed to detect the effect of interest. In this study, we review the use of propensity score matching in combination with oversampling and replacement in small sample size settings.

Methods: We performed a series of Monte Carlo simulations to evaluate how the sample size, the proportion of treated units, and the assignment mechanism affect the performance of the proposed approaches. We assessed performance in terms of overall balance, relative bias, root mean squared error, and nominal coverage. Moreover, we illustrate the methods using a real case study from the cardiac surgery literature.

Results: Matching without replacement produced estimates with lower bias and better nominal coverage than matching with replacement when 1:1 matching was considered. In contrast, matching with replacement showed better balance, relative bias, and root mean squared error than matching without replacement as the level of oversampling increased. The best nominal coverage was obtained with the estimator that accounts for uncertainty in the matching procedure, applied to sets of units obtained after matching with replacement.

Conclusions: The use of replacement provides the most reliable treatment effect estimates, and no more than 1 or 2 units from the control group should be matched to each treated observation. Moreover, the variance estimator that accounts for the uncertainty in the matching procedure should be used to estimate the treatment effect.
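The contrast between matching with and without replacement can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' simulation design): a single confounder stands in for an estimated propensity score, and greedy 1:1 nearest-neighbour matching is run both ways on the same simulated data, where the true treatment effect is 2.0.

```python
import math
import random

random.seed(1)

# Hypothetical observational study: one confounder x drives both
# treatment assignment and outcome; the simulated treatment effect is 2.0.
n = 200
units = []
for _ in range(n):
    x = random.gauss(0.0, 1.0)
    treated = random.random() < 1.0 / (1.0 + math.exp(-1.5 * x))
    y = x + (2.0 if treated else 0.0) + random.gauss(0.0, 0.5)
    units.append((x, treated, y))

treated_units = [u for u in units if u[1]]
control_units = [u for u in units if not u[1]]

def matched_att(treated_units, control_units, replacement):
    """Greedy 1:1 nearest-neighbour matching on the confounder and the
    resulting average treatment effect on the treated (ATT)."""
    pool = list(control_units)
    diffs = []
    for x_t, _, y_t in treated_units:
        if not pool:
            break
        best = min(pool, key=lambda c: abs(c[0] - x_t))
        diffs.append(y_t - best[2])
        if not replacement:
            pool.remove(best)  # each control can be used at most once
    return sum(diffs) / len(diffs)

att_without = matched_att(treated_units, control_units, replacement=False)
att_with = matched_att(treated_units, control_units, replacement=True)
print(round(att_without, 2), round(att_with, 2))
```

Because treatment assignment depends strongly on x, matching without replacement exhausts the well-matched controls and forces poor late matches, which is the small-sample problem the abstract describes.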


2021 ◽ Vol 21 (1)
Author(s): Brennan C. Kahan, Ian R. White, Sandra Eldridge, Richard Hooper

Abstract

Background: Re-randomisation trials involve re-enrolling and re-randomising patients for each new treatment episode they experience. They are often used when interest lies in the average effect of an intervention across all the episodes for which it would be used in practice. Re-randomisation trials are often analysed with independence estimators, which use a working independence correlation structure. However, research into independence estimators in the context of re-randomisation has been limited.

Methods: We performed a simulation study to evaluate the use of independence estimators in re-randomisation trials. We focussed on a continuous outcome and the setting where treatment allocation does not affect the occurrence of subsequent episodes. We evaluated different treatment effect mechanisms (e.g. allowing the treatment effect to vary across episodes or to become less effective on re-use) and different non-enrolment mechanisms (e.g. patients who experience a poor outcome being less likely to re-enrol for their second episode). We evaluated four different independence estimators, each corresponding to a different estimand (per-episode and per-patient approaches, and added-benefit and policy-benefit approaches).

Results: Independence estimators were unbiased for the per-episode added-benefit estimand in all scenarios we considered. Independence estimators targeting the other estimands (per-patient or policy-benefit) were unbiased except when there was differential non-enrolment between treatment groups (i.e. when different types of patients from each treatment group decide to re-enrol for subsequent episodes). The use of robust standard errors provided close to nominal coverage in all settings where the estimator was unbiased.

Conclusions: Careful choice of estimand can ensure re-randomisation trials address clinically relevant questions. Independence estimators are a useful approach, and should be considered the default estimator until the statistical properties of alternative estimators are thoroughly evaluated.
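A minimal sketch of how two of these estimands differ, under an assumed, simplified data-generating process (not the authors' simulation study): the per-episode estimator weights every episode equally, while a per-patient-style estimator gives each patient equal total weight, so the two diverge when the treatment effect varies across episodes.

```python
import random

random.seed(7)

# Hypothetical re-randomisation trial: each patient has one or two
# treatment episodes, re-randomised each time; the simulated effect is
# 1.0 in a patient's first episode and 0.5 in the second.
episodes = []  # (patient_id, episode_no, treated, outcome)
for pid in range(500):
    n_ep = 2 if random.random() < 0.5 else 1
    for ep in range(1, n_ep + 1):
        treated = random.random() < 0.5
        effect = 1.0 if ep == 1 else 0.5
        y = (effect if treated else 0.0) + random.gauss(0.0, 1.0)
        episodes.append((pid, ep, treated, y))

def per_episode_estimate(episodes):
    """Independence estimator in which every episode counts equally."""
    t = [y for _, _, tr, y in episodes if tr]
    c = [y for _, _, tr, y in episodes if not tr]
    return sum(t) / len(t) - sum(c) / len(c)

def per_patient_estimate(episodes):
    """Each patient carries equal total weight: an episode from a
    two-episode patient is down-weighted to 1/2."""
    n_ep = {}
    for pid, _, _, _ in episodes:
        n_ep[pid] = n_ep.get(pid, 0) + 1
    sums = {True: 0.0, False: 0.0}
    weights = {True: 0.0, False: 0.0}
    for pid, _, tr, y in episodes:
        w = 1.0 / n_ep[pid]
        sums[tr] += w * y
        weights[tr] += w
    return sums[True] / weights[True] - sums[False] / weights[False]

pe = per_episode_estimate(episodes)
pp = per_patient_estimate(episodes)
print(round(pe, 2), round(pp, 2))
```

Here the per-episode estimand is pulled toward the first-episode effect in proportion to how many episodes of each kind occur, while the per-patient estimand depends on the mix of patients, which is exactly why the choice of estimand matters.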


PLoS ONE ◽ 2021 ◽ Vol 16 (5) ◽ pp. e0252231
Author(s): Nathan J. Crum, Lisa C. Neyman, Timothy A. Gowan

Accurate and precise abundance estimation is vital for informed wildlife conservation and management decision-making. Line transect surveys are a common sampling approach for abundance estimation. Distance sampling is often used to estimate abundance from line transect survey data; however, search encounter spatial capture-recapture can also be used when individuals in the population of interest are identifiable. The search encounter spatial capture-recapture model has rarely been applied, and its performance has not been compared to that of distance sampling.

We analyzed simulated datasets to compare the performance of distance sampling and spatial capture-recapture abundance estimators. Additionally, we estimated the abundance of North Atlantic right whales in the southeastern United States with two formulations of each model and compared the estimates.

Spatial capture-recapture abundance estimates had lower root mean squared error than distance sampling estimates. Spatial capture-recapture 95% credible intervals for abundance had nominal coverage, i.e., contained the simulating value for abundance in 95% of simulations, whereas distance sampling credible intervals had below nominal coverage. Moreover, North Atlantic right whale abundance estimates from distance sampling models were more sensitive to model specification compared to spatial capture-recapture estimates.

When estimating abundance from line transect data, researchers should consider using search encounter spatial capture-recapture when individuals in the population of interest are identifiable, when line transects are surveyed over multiple occasions, when there is imperfect detection of individuals located on the line transect, and when it is safe to assume the population of interest is closed demographically. When line transects are surveyed over multiple occasions, researchers should be aware that individual space use may induce spatial autocorrelation in counts across transects. This is not accounted for in common distance sampling estimators and leads to overly precise abundance estimates.
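The nominal-coverage criterion used here can be illustrated with a generic Monte Carlo check, independent of the distance sampling and capture-recapture models themselves: simulate many datasets, build a 95% interval from each, and count how often the interval contains the simulating value.

```python
import random
import statistics

random.seed(3)

# Generic nominal-coverage check: a 95% interval should contain the
# simulating ("true") value in roughly 95% of simulated datasets.
# Illustrated with a normal-approximation interval for a mean.
true_mean, n, sims = 10.0, 50, 2000
hits = 0
for _ in range(sims):
    sample = [random.gauss(true_mean, 2.0) for _ in range(n)]
    m = statistics.fmean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    hits += (m - 1.96 * se) <= true_mean <= (m + 1.96 * se)
print(hits / sims)  # close to the nominal 0.95
```

An interval with below-nominal coverage, as reported for distance sampling here, would show an empirical proportion noticeably under 0.95; an overly precise (too-narrow) interval fails in exactly this way.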


2020
Author(s): Ozan Cinar, Shinichi Nakagawa, Wolfgang Viechtbauer

Meta-analyses in ecology and evolution typically include multiple estimates from the same study and are based on multiple species. The resulting dependencies in the data can be addressed with a phylogenetic multilevel meta-analysis model. However, the complexity of the model poses challenges for accurately estimating the model parameters. We therefore carried out a simulation study to investigate the performance of models of varying complexity. While the overall mean was estimated with little to no bias irrespective of the model, only the model that accounted for the multilevel structure and incorporated both a non-phylogenetic and a phylogenetic variance component provided confidence intervals with approximately nominal coverage rates. We therefore suggest that meta-analysts in ecology and evolution use the phylogenetic multilevel meta-analysis model as the de facto standard when analyzing multi-species datasets.
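As a much-simplified sketch of the random-effects machinery underlying such models (ignoring both the multilevel structure and the phylogenetic component; the effect sizes and variances below are made up), the classical DerSimonian-Laird estimator pools study effects with a between-study variance component:

```python
import math

# Hypothetical effect sizes (yi) and sampling variances (vi) from 6 studies.
yi = [0.35, 0.10, 0.48, 0.22, 0.60, 0.05]
vi = [0.02, 0.05, 0.03, 0.04, 0.02, 0.06]
k = len(yi)

# Fixed-effect weights, pooled mean, and heterogeneity statistic Q.
w = [1.0 / v for v in vi]
mu_fe = sum(wi * y for wi, y in zip(w, yi)) / sum(w)
Q = sum(wi * (y - mu_fe) ** 2 for wi, y in zip(w, yi))

# DerSimonian-Laird estimate of the between-study variance tau^2.
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects pooled mean and its 95% confidence interval.
w_re = [1.0 / (v + tau2) for v in vi]
mu = sum(wi * y for wi, y in zip(w_re, yi)) / sum(w_re)
se = math.sqrt(1.0 / sum(w_re))
print(round(mu, 3), round(mu - 1.96 * se, 3), round(mu + 1.96 * se, 3))
```

The paper's point is that this single variance component is not enough for multi-species data: without separate non-phylogenetic and phylogenetic components (and the multilevel structure), intervals like the one above under-cover.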


2019 ◽ Vol 11 (1) ◽ pp. 193-224
Author(s): Joel L. Horowitz

The bootstrap is a method for estimating the distribution of an estimator or test statistic by resampling one's data or a model estimated from the data. Under conditions that hold in a wide variety of econometric applications, the bootstrap provides approximations to distributions of statistics, coverage probabilities of confidence intervals, and rejection probabilities of hypothesis tests that are more accurate than the approximations of first-order asymptotic distribution theory. The reductions in the differences between true and nominal coverage or rejection probabilities can be very large. In addition, the bootstrap provides a way to carry out inference in certain settings where obtaining analytic distributional approximations is difficult or impossible. This article explains the usefulness and limitations of the bootstrap in contexts of interest in econometrics. The presentation is informal and expository. It provides an intuitive understanding of how the bootstrap works. Mathematical details are available in the references that are cited.
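The basic resampling idea can be shown in a few lines; this is a generic percentile-interval sketch on made-up data, not tied to any particular econometric application:

```python
import random
import statistics

random.seed(42)

# The bootstrap in miniature: resample the observed data with
# replacement, recompute the statistic each time, and read a 95%
# percentile interval off the resampled distribution.
data = [random.expovariate(1 / 3.0) for _ in range(40)]  # a skewed sample
B = 2000
boot_means = sorted(
    statistics.fmean(random.choices(data, k=len(data))) for _ in range(B)
)
lo, hi = boot_means[int(0.025 * B)], boot_means[int(0.975 * B)]
print(round(lo, 2), round(hi, 2))
```

For a skewed sample like this, the bootstrap interval reflects the asymmetry of the sampling distribution, which is the kind of higher-order accuracy gain over the symmetric first-order normal approximation that the article discusses.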


Filomat ◽ 2017 ◽ Vol 31 (10) ◽ pp. 2967-2974
Author(s): Vesna Rajic

We examine one-sided confidence intervals for the population variance, based on the ordinary t-statistic. We derive the unconditional coverage probability of the bootstrap-t interval for the unknown variance. For that purpose, we find an Edgeworth expansion of the distribution of the t-statistic to order n^(-2). We show that the number of bootstrap simulations, B, influences the coverage probability of the confidence interval for the variance. If B equals the sample size, then the coverage probability and its limit (as B → ∞) disagree at the level O(n^(-2)). If we want the nominal coverage probability of the interval to equal α, then the coverage probability and its limit agree to order n^(-3/2) provided B is of larger order than the square root of the sample size. We present a modeling application in property insurance, where the purpose of the analysis is to measure the variability of a data set.
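A bootstrap-t interval of the kind analysed here can be sketched as follows. This is an illustrative construction only: the moment-based standard error for the sample variance is a standard textbook approximation, and the data and the number of resamples B are arbitrary choices, not the paper's settings.

```python
import random

random.seed(11)

def var_and_se(xs):
    """Sample variance s2 and a moment-based standard error, using the
    approximation Var(s2) ~= (m4 - s2^2 * (n - 3) / (n - 1)) / n."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / (n - 1)
    m4 = sum((x - m) ** 4 for x in xs) / n
    return s2, max(1e-12, (m4 - s2 ** 2 * (n - 3) / (n - 1)) / n) ** 0.5

data = [random.gauss(0.0, 2.0) for _ in range(60)]  # true variance = 4
s2, se = var_and_se(data)

B = 999  # number of bootstrap resamples
t_stars = []
for _ in range(B):
    resample = random.choices(data, k=len(data))
    v_b, se_b = var_and_se(resample)
    t_stars.append((v_b - s2) / se_b)  # studentized bootstrap statistic
t_stars.sort()

# One-sided 95% upper confidence bound for the variance (bootstrap-t).
upper = s2 - t_stars[int(0.05 * (B + 1)) - 1] * se
print(round(upper, 2))
```

The paper's result concerns exactly how large B must be (relative to the sample size) before the coverage of such an interval matches its B → ∞ limit.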


Paleobiology ◽ 2016 ◽ Vol 42 (2) ◽ pp. 240-256
Author(s): Steve C. Wang, Philip J. Everson, Heather Jianan Zhou, Dasol Park, David J. Chudzicki

Abstract Numerous methods exist for estimating the true stratigraphic range of a fossil taxon based on the stratigraphic positions of its fossil occurrences. Many of these methods require the assumption of uniform fossil recovery potential—that fossils are equally likely to be found at any point within the taxon's true range. This assumption is unrealistic, because factors such as stratigraphic architecture, sampling effort, and the taxon's abundance and geographic range affect recovery potential. Other methods do not make this assumption, but they instead require a priori quantitative knowledge of recovery potential that may be difficult to obtain. We present a new Bayesian method, the Adaptive Beta method, for estimating the true stratigraphic range of a taxon that works for both uniform and non-uniform recovery potential. In contrast to existing methods, we explicitly estimate recovery potential from the positions of the occurrences themselves, so that a priori knowledge of recovery potential is not required. Using simulated datasets, we compare the performance of our method with existing methods. We show that the Adaptive Beta method performs well in that it achieves or nearly achieves nominal coverage probabilities and provides reasonable point estimates of the true extinction in a variety of situations. We demonstrate the method using a dataset of the Cambrian mollusc Anabarella.
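For comparison, the classical range estimator under uniform recovery potential (the assumption the Adaptive Beta method relaxes) has a closed form: if n occurrences are uniformly distributed below the true endpoint, the highest find H gives the point estimate H(n + 1)/n, and H·α^(−1/n) is a one-sided (1 − α) upper confidence bound. A sketch with hypothetical occurrence data:

```python
# Hypothetical occurrence heights (metres above section base) of a taxon.
heights = [2.1, 5.4, 7.0, 9.8, 11.3, 12.6, 13.9]
n, H = len(heights), max(heights)

# Under uniform recovery, P(max occurrence <= h) = (h / theta)^n,
# which yields both estimators below.
theta_hat = H * (n + 1) / n        # point estimate of the true endpoint
upper_95 = H * 0.05 ** (-1 / n)    # one-sided 95% upper confidence bound
print(round(theta_hat, 2), round(upper_95, 2))
```

The abstract's point is that when recovery is not uniform (e.g. sampling thins toward the top of the section), estimators like this one can badly miss nominal coverage, which motivates estimating recovery potential from the occurrence positions themselves.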


2013 ◽ Vol 30 (1) ◽ pp. 176-200
Author(s): Matias D. Cattaneo, Richard K. Crump, Michael Jansson

This paper proposes (apparently) novel standard error formulas for the density-weighted average derivative estimator of Powell, Stock, and Stoker (Econometrica 57, 1989). Asymptotic validity of the standard errors developed in this paper does not require the use of higher-order kernels, and the standard errors are “robust” in the sense that they accommodate (but do not require) bandwidths that are smaller than those for which conventional standard errors are valid. Moreover, the results of a Monte Carlo experiment suggest that the finite sample coverage rates of confidence intervals constructed using the standard errors developed in this paper coincide (approximately) with the nominal coverage rates across a nontrivial range of bandwidths.

