The Case-time-control Method for Nonbinary Exposures

2017 ◽  
Vol 47 (1) ◽  
pp. 182-211 ◽  
Author(s):  
Arvid Sjölander

A popular way to reduce confounding in observational studies is to use each study participant as his or her own control. This is possible when both the exposure and the outcome are time varying and have been measured at several time points for each individual. The case-time-control method is a special case, which, under certain assumptions, allows the analyst to control for confounding by time-varying covariates, while controlling for all time-stationary characteristics of the study participants. There are two formulations of the case-time-control method. One formulation requires that the exposure be binary, and the other requires that there be no more than two time points per individual. In this article the author proposes a generalization of the case-time-control method for nonbinary exposures and an arbitrary number of time points. The author derives the asymptotic properties of the resulting estimator and assesses its finite sample properties in a simulation study.
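The core of the method can be sketched in a few lines. For the classical two-time-point, binary-exposure case (not the nonbinary generalization this article proposes), the conditional MLE of the exposure log-odds ratio depends only on discordant exposure pairs, and the within-control estimate absorbs the time trend. The counts below are made up for illustration.

```python
from math import log, exp

def discordant_log_or(pairs):
    """Conditional-likelihood estimate of the exposure log-odds ratio from
    matched pairs (exposure at index time, exposure at reference time).
    With one binary exposure, the conditional MLE depends only on the
    discordant pairs: log(b / c)."""
    b = sum(1 for e1, e0 in pairs if e1 == 1 and e0 == 0)
    c = sum(1 for e1, e0 in pairs if e1 == 0 and e0 == 1)
    return log(b / c)

def case_time_control_log_or(case_pairs, control_pairs):
    """Case-time-control estimate: the within-case estimate mixes the
    exposure effect with a common time trend; the within-control estimate
    captures the time trend alone, so subtracting removes it."""
    return discordant_log_or(case_pairs) - discordant_log_or(control_pairs)

# Hypothetical discordant counts: cases 30 vs 10, controls 20 vs 10.
cases = [(1, 0)] * 30 + [(0, 1)] * 10
controls = [(1, 0)] * 20 + [(0, 1)] * 10
print(round(exp(case_time_control_log_or(cases, controls)), 3))  # 1.5
```

The adjusted odds ratio is the ratio of the two within-subject odds ratios: (30/10) / (20/10) = 1.5.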

Author(s):  
Sudipta Das ◽  
Anup Dewanji ◽  
Subrata Kundu

The process of software testing usually involves correcting a detected bug immediately upon detection. In this article, in contrast, we discuss continuous-time testing of software with periodic debugging, in which bugs are corrected not at the instants of their detection but at some pre-specified time points. Under the assumption of a renewal distribution for the time between successive occurrences of a bug, maximum-likelihood estimation of the initial number of bugs in the software is considered, when the renewal distribution belongs to a general parametric family or is arbitrary. The asymptotic properties of the estimated model parameters are also discussed. Finally, we investigate the finite sample properties of the estimators, especially that of the initial number of bugs, through simulation.
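As a minimal illustration of the estimation problem (with made-up numbers, and assuming for simplicity a known exponential rate, a special case of the renewal setup with a single debugging instant), the MLE of the initial number of bugs reduces to a binomial-N problem:

```python
from math import comb, exp, log

def mle_initial_bugs(k, lam, tau, n_max=400):
    """MLE of the initial number of bugs N when each bug's time to first
    occurrence is exponential(lam), testing runs over [0, tau] with one
    debugging instant at tau, and k of N bugs are detected, each with
    probability p = 1 - exp(-lam * tau). The rate lam is assumed known
    here for simplicity; the paper treats general renewal distributions."""
    p = 1.0 - exp(-lam * tau)
    def loglik(n):
        # Binomial(N, p) log-likelihood of observing k detections.
        return log(comb(n, k)) + k * log(p) + (n - k) * log(1.0 - p)
    return max(range(k, n_max + 1), key=loglik)

# With lam = 1, tau = 1 (p ~ 0.632) and k = 63 detected bugs, the
# likelihood is maximized at floor(k / p) = 99.
print(mle_initial_bugs(63, 1.0, 1.0))  # 99
```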


2019 ◽  
Vol 49 (1) ◽  
pp. 349-365
Author(s):  
Arvid Sjölander ◽  
Yang Ning

The case-time-control design is a tool to control for measured, time-varying covariates that increase monotonically in time within each subject, while also controlling for all unmeasured covariates that are constant within each subject across time. Until recently, the design was restricted to data with only two time points and a single binary covariate, or data with a binary exposure. Sjölander (2017) made an important extension that allows for an arbitrary number of time points and covariates and a nonbinary exposure. However, his estimation method requires fairly strong model assumptions, and it may create bias if these assumptions are violated. We propose a novel estimation method for the case-time-control design that to a large extent relaxes the model assumptions in Sjölander (2017). We show in simulations that this estimation method performs well under a range of scenarios and gives consistent estimates when Sjölander's estimator does not.


2019 ◽  
Vol 7 (1) ◽  
pp. 394-417
Author(s):  
Aboubacrène Ag Ahmad ◽  
El Hadji Deme ◽  
Aliou Diop ◽  
Stéphane Girard

Abstract: We introduce a location-scale model for conditional heavy-tailed distributions when the covariate is deterministic. First, nonparametric estimators of the location and scale functions are introduced. Second, an estimator of the conditional extreme-value index is derived. The asymptotic properties of the estimators are established under mild assumptions and their finite sample properties are illustrated both on simulated and real data.
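The extreme-value index mentioned above is typically estimated, after standardization by the location and scale functions, with a Hill-type estimator built from the largest order statistics. A minimal sketch of the plain (unconditional) Hill estimator on synthetic Pareto-type data, with illustrative values not taken from the paper:

```python
from math import log

def hill_estimator(sample, k):
    """Hill estimator of the extreme-value (tail) index gamma, computed
    from the k largest order statistics of the sample."""
    x = sorted(sample, reverse=True)
    return sum(log(x[i] / x[k]) for i in range(k)) / k

# Deterministic Pareto-type data: X = U^(-gamma) with U on a uniform grid,
# so the true tail index is gamma.
gamma, n, k = 0.5, 1000, 100
u = [i / (n + 1) for i in range(1, n + 1)]
data = [ui ** (-gamma) for ui in u]
print(round(hill_estimator(data, k), 3))
```

The estimate lands close to the true index 0.5; the choice of k trades off bias against variance, as usual for tail-index estimation.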


1998 ◽  
Vol 14 (2) ◽  
pp. 161-186 ◽  
Author(s):  
Laurence Broze ◽  
Olivier Scaillet ◽  
Jean-Michel Zakoïan

We discuss an estimation procedure for continuous-time models based on discrete sampled data with a fixed unit of time between two consecutive observations. Because in general the conditional likelihood of the model cannot be derived, an indirect inference procedure following Gouriéroux, Monfort, and Renault (1993, Journal of Applied Econometrics 8, 85–118) is developed. It is based on simulations of a discretized model. We study the asymptotic properties of this “quasi”-indirect estimator and examine some particular cases. Because this method critically depends on simulations, we pay particular attention to the appropriate choice of the simulation step. Finally, finite-sample properties are studied through Monte Carlo experiments.
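The logic of the procedure, simulate from a discretized model and then match an auxiliary statistic computed on observed and simulated data, can be sketched for an Ornstein-Uhlenbeck process with an AR(1) auxiliary model. This is a toy setup with made-up parameters, not the paper's specification:

```python
import random
from math import sqrt

def euler_path(theta, sigma, n_obs, step=0.1, seed=42):
    """Euler discretization of the OU process dX = -theta*X dt + sigma dW,
    simulated on a fine grid and sampled at unit time intervals."""
    rng = random.Random(seed)
    per_obs = round(1 / step)
    x, path = 0.0, []
    for _ in range(n_obs):
        for _ in range(per_obs):
            x += -theta * x * step + sigma * sqrt(step) * rng.gauss(0, 1)
        path.append(x)
    return path

def ar1_coefficient(path):
    """Auxiliary statistic: lag-1 autoregression coefficient."""
    num = sum(a * b for a, b in zip(path[:-1], path[1:]))
    den = sum(a * a for a in path[:-1])
    return num / den

observed = euler_path(theta=0.5, sigma=1.0, n_obs=500)
target = ar1_coefficient(observed)

# Indirect inference by grid search: pick the theta whose simulated
# auxiliary statistic (same simulator, same seed) is closest to the
# observed one.
grid = [i / 10 for i in range(1, 11)]
best = min(grid, key=lambda th: abs(ar1_coefficient(euler_path(th, 1.0, 500)) - target))
print(best)  # 0.5: the simulated statistic matches the observed one exactly
```

Using common random numbers across candidate parameters, as here, is the standard device that makes the simulated criterion smooth in the parameter.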


2013 ◽  
Vol 5 (2) ◽  
pp. 133-162 ◽  
Author(s):  
Eric Hillebrand ◽  
Marcelo C. Medeiros ◽  
Junyue Xu

Abstract: We derive asymptotic properties of the quasi-maximum likelihood estimator of smooth transition regressions when time is the transition variable. The consistency of the estimator and its asymptotic distribution are examined. It is shown that the estimator converges at the usual √T-rate and has an asymptotically normal distribution. Finite sample properties of the estimator are explored in simulations. We illustrate with an application to US inflation and output data.
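A minimal sketch of a smooth transition regression mean with time as the transition variable, using the usual logistic transition function; the parameter values are illustrative, not from the paper:

```python
from math import exp

def logistic_transition(t, gamma, c):
    """Logistic transition function G(t; gamma, c) in [0, 1]: gamma controls
    the speed of the transition, c its midpoint (in rescaled time t/T)."""
    return 1.0 / (1.0 + exp(-gamma * (t - c)))

def smooth_transition_mean(t, beta0, beta1, gamma, c):
    """Time-varying regression mean: roughly beta0 before the transition,
    beta0 + beta1 after it, with a smooth change around t = c."""
    return beta0 + beta1 * logistic_transition(t, gamma, c)

# Illustrative parameters: a mean shifting from 1.0 to 3.0 around the
# middle of the sample.
T = 200
means = [smooth_transition_mean(t / T, 1.0, 2.0, gamma=20.0, c=0.5) for t in range(T)]
print(round(means[0], 2), round(means[-1], 2))  # 1.0 3.0
```

QML estimation then fits (beta0, beta1, gamma, c) jointly; the nonstandard part, which the paper handles, is that the transition variable is deterministic time rather than a stochastic regressor.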


2001 ◽  
Vol 9 (4) ◽  
pp. 379-384 ◽  
Author(s):  
Ethan Katz

Fixed-effects logit models can be useful in panel data analysis, when N units have been observed for T time periods. There are two main estimators for such models: unconditional maximum likelihood and conditional maximum likelihood. Judged on asymptotic properties, the conditional estimator is superior. However, the unconditional estimator holds several practical advantages, and therefore I sought to determine whether its use could be justified on the basis of finite-sample properties. In a series of Monte Carlo experiments for T < 20, I found a negligible amount of bias in both estimators when T ≥ 16, suggesting that a researcher can safely use either estimator under such conditions. When T < 16, the conditional estimator continued to have a very small amount of bias, but the unconditional estimator developed more bias as T decreased.
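The conditional estimator's key device for T = 2 is that conditioning on y_i1 + y_i2 = 1 eliminates the fixed effects, leaving a logistic model in within-subject covariate differences. A minimal sketch with hypothetical data:

```python
from math import log, exp

def conditional_loglik(beta, data):
    """Conditional log-likelihood of the fixed-effects logit with T = 2.
    Conditioning on y_i1 + y_i2 = 1 removes the fixed effects: only
    'switchers' contribute, through the covariate difference d_i."""
    ll = 0.0
    for d, y2 in data:
        p = exp(beta * d) / (1.0 + exp(beta * d))  # P(y_i2 = 1 | switcher)
        ll += log(p) if y2 == 1 else log(1.0 - p)
    return ll

# Hypothetical switchers, covariate difference d = 1 for all: 30 moved
# into the outcome (y2 = 1) and 10 out of it, so the conditional MLE
# is logit(30/40) = log(3).
data = [(1, 1)] * 30 + [(1, 0)] * 10
grid = [i / 1000 for i in range(-3000, 3001)]
beta_hat = max(grid, key=lambda b: conditional_loglik(b, data))
print(round(beta_hat, 3))
```

The unconditional estimator would instead estimate one intercept per unit alongside beta, which is exactly where the incidental-parameters bias for small T comes from.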


2016 ◽  
Vol 33 (4) ◽  
pp. 791-838 ◽  
Author(s):  
Ulrich Hounyo ◽  
Sílvia Gonçalves ◽  
Nour Meddahi

The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach, where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that the leading martingale part in the pre-averaged returns are k_n-dependent with k_n growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the “blocks of blocks” bootstrap method is not valid when volatility is time-varying. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure that combines the wild bootstrap with the blocks of blocks bootstrap. We provide a proof of the first order asymptotic validity of this method for percentile and percentile-t intervals. Our Monte Carlo simulations show that the wild blocks of blocks bootstrap improves the finite sample properties of the existing first order asymptotic theory. We use empirical work to illustrate its use in practice.
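Pre-averaging over all overlapping blocks can be sketched as follows, using the common weight function g(x) = min(x, 1 - x); this is a standard choice for illustration and does not reproduce the paper's exact bootstrap scheme:

```python
def preaveraged_returns(returns, kn):
    """Pre-averaged returns over overlapping blocks of kn consecutive
    observations, with weights g(j/kn), g(x) = min(x, 1 - x). Each
    pre-averaged return is a weighted local average of kn - 1 raw
    returns, so consecutive values overlap and are kn-dependent."""
    g = [min(j / kn, 1 - j / kn) for j in range(1, kn)]
    n = len(returns)
    return [sum(w * returns[i + j] for j, w in enumerate(g))
            for i in range(n - kn + 1)]

# Toy check: with constant returns and kn = 4 the weights
# (0.25, 0.5, 0.25) sum to 1, so every pre-averaged return equals 1.
rets = [1.0] * 10
pav = preaveraged_returns(rets, kn=4)
print(len(pav), round(pav[0], 3))  # 7 1.0
```

It is the squares of these overlapping, locally averaged quantities that the proposed wild blocks of blocks bootstrap must resample while respecting both their kn-dependence and their heterogeneity under stochastic volatility.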


2009 ◽  
Vol 25 (1) ◽  
pp. 117-161 ◽  
Author(s):  
Marcelo C. Medeiros ◽  
Alvaro Veiga

In this paper a flexible multiple regime GARCH(1,1)-type model is developed to describe the sign and size asymmetries and intermittent dynamics in financial volatility. The results of the paper are important to other nonlinear GARCH models. The proposed model nests some of the previous specifications found in the literature and has the following advantages. First, contrary to most of the previous models, more than two limiting regimes are possible, and the number of regimes is determined by a simple sequence of tests that circumvents identification problems that are usually found in nonlinear time series models. The second advantage is that the novel stationarity restriction on the parameters is relatively weak, thereby allowing for rich dynamics. It is shown that the model may have explosive regimes but can still be strictly stationary and ergodic. A simulation experiment shows that the proposed model can generate series with high kurtosis and low first-order autocorrelation of the squared observations and exhibit the so-called Taylor effect, even with Gaussian errors. Estimation of the parameters is addressed, and the asymptotic properties of the quasi-maximum likelihood estimator are derived under weak conditions. A Monte Carlo experiment is designed to evaluate the finite-sample properties of the sequence of tests. Empirical examples are also considered.
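A toy simulation of a two-regime smooth-transition GARCH(1,1) with sign asymmetry conveys the flavor of this model class; the parameter values are illustrative and this is not the paper's exact specification:

```python
import random
from math import exp, sqrt

def simulate_st_garch(n, omega, alphas, betas, gamma, seed=7):
    """Sketch of a two-regime smooth-transition GARCH(1,1): the ARCH and
    GARCH coefficients move between two limiting regimes as a logistic
    function of the lagged return, producing sign asymmetry. Returns the
    simulated conditional-variance path."""
    rng = random.Random(seed)
    h, y, hs = 1.0, 0.0, []
    for _ in range(n):
        g = 1.0 / (1.0 + exp(-gamma * y))        # regime weight from lagged return
        alpha = (1 - g) * alphas[0] + g * alphas[1]
        beta = (1 - g) * betas[0] + g * betas[1]
        h = omega + alpha * y * y + beta * h      # conditional variance recursion
        y = sqrt(h) * rng.gauss(0, 1)
        hs.append(h)
    return hs

h = simulate_st_garch(1000, omega=0.1, alphas=(0.05, 0.15),
                      betas=(0.9, 0.7), gamma=5.0)
print(len(h), min(h) > 0)  # 1000 True
```

With omega > 0 and nonnegative regime coefficients the variance path stays strictly positive; the paper's stationarity analysis concerns exactly how far such regime coefficients can be pushed (even into locally explosive territory) while keeping the process strictly stationary.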


Econometrics ◽  
2019 ◽  
Vol 7 (3) ◽  
pp. 29
Author(s):  
Emanuela Ciapanna ◽  
Marco Taboga

This paper deals with instability in regression coefficients. We propose a Bayesian regression model with time-varying coefficients (TVC) that allows one to jointly estimate the degree of instability and the time-path of the coefficients. Thanks to the computational tractability of the model and to the fact that it is fully automatic, we are able to run Monte Carlo experiments and analyze its finite-sample properties. We find that the estimation precision and the forecasting accuracy of the TVC model compare favorably to those of other methods commonly employed to deal with parameter instability. A distinguishing feature of the TVC model is its robustness to mis-specification: its performance is also satisfactory when regression coefficients are stable or when they experience discrete structural breaks. As a demonstrative application, we use our TVC model to estimate the exposures of S&P 500 stocks to market-wide risk factors: we find that a vast majority of stocks had time-varying exposures and that the TVC model helps to better forecast these exposures.
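A scalar random-walk-coefficient regression filtered with a Kalman recursion is the simplest member of the time-varying-coefficient family described above. The sketch below uses hypothetical variance parameters and a plain filter, not the paper's Bayesian treatment:

```python
def kalman_tvc(y, x, q=0.01, r=0.01):
    """Kalman filter for a scalar time-varying-coefficient regression:
    y_t = b_t * x_t + e_t,  b_t = b_{t-1} + w_t,  Var(w) = q, Var(e) = r.
    Returns the filtered path of the coefficient b_t."""
    b, p, path = 0.0, 1.0, []
    for yt, xt in zip(y, x):
        p = p + q                          # predict: coefficient random walk
        k = p * xt / (xt * xt * p + r)     # Kalman gain
        b = b + k * (yt - xt * b)          # update with the forecast error
        p = (1 - k * xt) * p
        path.append(b)
    return path

# Noise-free illustration: the true coefficient drifts linearly from 0 to 1,
# and the filtered path tracks it with a small lag.
T = 100
x = [1.0] * T
y = [t / T for t in range(T)]
path = kalman_tvc(y, x)
print(round(path[-1], 2))
```

The ratio q/r plays the role of the "degree of instability" that the paper estimates jointly with the coefficient path.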


2016 ◽  
Vol 5 (1) ◽  
Author(s):  
Stijn Vansteelandt ◽  
Arvid Sjolander

Abstract: Marginal Structural Models (MSMs), with the associated method of inverse probability weighting (IPW), have become increasingly popular in epidemiology to model and estimate the joint effects of a sequence of exposures. This popularity is largely related to the relative simplicity of the method, as compared to other techniques to adjust for time-varying confounding, such as g-estimation and g-computation. However, the price to pay for this simplicity can be substantial. The IPW estimators that are routinely used in applications make inefficient use of the information in the data, and are susceptible to large finite-sample bias when some confounders are strongly predictive of exposure. Moreover, the handling of continuous exposures easily becomes impractical, and the study of effect modification by time-varying covariates even impossible. In view of this, we revisit Structural Nested Mean Models (SNMMs) with the associated method of g-estimation as a useful remedy, and show how this can be implemented through standard software.
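The IPW step discussed above amounts to weighting each subject by the inverse of the probability of the exposure history actually received; stabilized weights put marginal exposure probabilities in the numerator to tame the variance. A minimal sketch with hypothetical fitted probabilities:

```python
def stabilized_ipw(marginals, conditionals):
    """Stabilized inverse-probability weight for one subject: the product
    over time of P(A_t = a_t) / P(A_t = a_t | past exposures, confounders).
    The probabilities are taken as given here; in practice each comes
    from a fitted (e.g. logistic) exposure model."""
    w = 1.0
    for num, den in zip(marginals, conditionals):
        w *= num / den
    return w

# Hypothetical subject observed at three time points: the small
# conditional probability at the last visit inflates the weight.
print(round(stabilized_ipw([0.5, 0.6, 0.6], [0.8, 0.5, 0.4]), 3))  # 1.125
```

The finite-sample bias the authors warn about arises precisely when a conditional probability in the denominator is close to 0 or 1, making a few weights dominate the weighted analysis.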

