Left-censored dementia incidences in estimating cohort effects

Author(s):  
Rafael Weißbach ◽  
Yongdai Kim ◽  
Achim Dörre ◽  
Anne Fink ◽  
Gabriele Doblhammer

Abstract: We estimate the dementia incidence hazard in Germany for the birth cohorts 1900 to 1954 from a simple sample of Germany's largest health insurance company. Followed from 2004 to 2012, 36,000 uncensored dementia incidences are observed, and a further 200,000 right-censored insurants are included. From a multiplicative hazard model we find a positive and linear trend in the dementia hazard over the cohorts. The main focus of the study is on 11,000 left-censored persons who had already suffered from the disease in 2004. After including the left-censored observations, the slope of the trend declines markedly, an instance of Simpson's paradox: left-censored persons are distributed unevenly across the cohorts. When left-censoring is included, the dementia hazard increases differently for different ages; we consider omitted covariates to be the reason. For the standard errors from large-sample theory, left-censoring requires an adjustment to the conditional information matrix equality.
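As a rough, hypothetical sketch of how left-censored observations enter such a likelihood, the Python snippet below fits an exponential hazard with a log-linear cohort trend standing in for the paper's multiplicative hazard model; all data, cutoffs, and names are simulated, not the study's.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# --- simulate hypothetical data: rescaled birth cohort and onset times ---
n = 5000
cohort = rng.uniform(-1, 1, n)              # stands in for cohorts 1900-1954, centred
lam_true = np.exp(-1.0 + 0.4 * cohort)      # multiplicative (log-linear) hazard
t = rng.exponential(1.0 / lam_true)

c_right = rng.uniform(0.5, 3.0, n)          # end of follow-up
status = np.where(t <= 0.2, 2,              # 2 = left-censored (onset before baseline)
         np.where(t <= c_right, 1, 0))      # 1 = event, 0 = right-censored
t_obs = np.where(status == 2, 0.2, np.minimum(t, c_right))

def negloglik(beta):
    lam = np.exp(beta[0] + beta[1] * cohort)
    ll = np.where(status == 1, np.log(lam) - lam * t_obs,        # density f(t)
         np.where(status == 0, -lam * t_obs,                     # survival S(t)
                  np.log(-np.expm1(-lam * t_obs))))              # cdf F(t) for left-censored
    return -ll.sum()

fit = minimize(negloglik, x0=np.zeros(2), method="BFGS")
se = np.sqrt(np.diag(fit.hess_inv))         # quasi-Newton approximation to inverse information
print("cohort trend:", fit.x[1], "+/-", se[1])
```

The key point mirrored from the abstract is that left-censored subjects contribute F(t) rather than f(t) or S(t), so dropping or adding them changes the estimated trend when they are unevenly spread over cohorts.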

2001 ◽  
Vol 17 (2) ◽  
pp. 451-470 ◽  
Author(s):  
Jeffrey M. Wooldridge

I provide a systematic treatment of the asymptotic properties of weighted M-estimators under standard stratified sampling. Simple, consistent asymptotic variance matrix estimators are proposed for a broad class of problems. When stratification is based on exogenous variables, I show that the usual, unweighted M-estimator is more efficient than the weighted estimator under a generalized conditional information matrix equality. Hausman tests for the exogeneity of the sampling scheme, including fully robust forms, are derived.
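A minimal simulated sketch of a weighted M-estimator and its sandwich variance under standard stratified sampling; the estimand here is a population mean, and the inverse-probability weights and variance formula follow the usual construction rather than Wooldridge's exact notation.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- hypothetical population with two strata defined by an exogenous x ---
N = 200_000
x = rng.integers(0, 2, N)                       # stratum indicator
y = 1.0 + 2.0 * x + rng.normal(0, 1, N)         # outcome; population mean is 2.0

# --- standard stratified sampling: oversample stratum 1 ---
rates = np.array([0.01, 0.05])                  # sampling rate per stratum
keep = rng.uniform(size=N) < rates[x]
ys, xs = y[keep], x[keep]

# inverse-probability weights: population share / sample share
pop_share = np.bincount(x) / N
smp_share = np.bincount(xs) / keep.sum()
w = (pop_share / smp_share)[xs]

# weighted M-estimator of the mean solves sum w*(y - mu) = 0
mu_w = np.average(ys, weights=w)

# sandwich variance A^{-1} B A^{-1} / n with A = E[w], B = E[w^2 (y-mu)^2]
n = keep.sum()
A = w.mean()
B = np.mean(w**2 * (ys - mu_w) ** 2)
se_w = np.sqrt(B / A**2 / n)
print(f"weighted mean {mu_w:.3f} (se {se_w:.4f}); unweighted {ys.mean():.3f}")
```

The unweighted sample mean is biased here because sampling depends on x; Wooldridge's efficiency result concerns the different situation where stratification is exogenous and the conditional model is correctly specified.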


Dose-Response ◽  
2005 ◽  
Vol 3 (3) ◽  
Author(s):  
Shyamal D. Peddada ◽  
Joseph K. Haseman

Regression models are routinely used in many applied sciences to describe the relationship between a response variable and an independent variable. Statistical inferences on the regression parameters are often performed using maximum likelihood estimators (MLEs). In the case of nonlinear models, the standard errors of the MLEs are often obtained by linearizing the nonlinear function around the true parameter and appealing to large-sample theory. In this article we demonstrate, through computer simulations, that the resulting asymptotic Wald confidence intervals cannot be trusted to achieve the desired confidence levels: their coverage can fall short of the nominal level, making them liberal. Hence one needs to be cautious in using the usual linearized standard errors of MLEs and the associated confidence intervals.
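This phenomenon is easy to reproduce. A small simulation along these lines, assuming a hypothetical exponential-decay mean function, checks the empirical coverage of the linearized Wald interval:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def model(x, a, b):
    return a * np.exp(-b * x)          # hypothetical nonlinear mean function

x = np.linspace(0, 4, 15)
a_true, b_true, sigma = 2.0, 1.2, 0.25
n_ok = covered = 0

for _ in range(2000):
    y = model(x, a_true, b_true) + rng.normal(0, sigma, x.size)
    try:
        est, cov = curve_fit(model, x, y, p0=(1.0, 1.0))
    except RuntimeError:               # occasional non-convergence
        continue
    n_ok += 1
    se_b = np.sqrt(cov[1, 1])          # linearized (Wald) standard error for b
    covered += abs(est[1] - b_true) < 1.96 * se_b

print(f"empirical coverage of the nominal 95% Wald CI for b: {covered / n_ok:.3f}")
```

With small samples or strong curvature, the printed coverage typically drifts below 0.95, which is the liberality the abstract warns about.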


2020 ◽  
Vol 8 (2) ◽  
pp. 462-470
Author(s):  
Majid Hashempour ◽  
Mahdi Doostparast ◽  
Zohreh Pakdaman

This paper deals with systems consisting of independent and heterogeneous exponential components. Since failures of components may change the lifetimes of surviving components through load sharing, a linear trend for conditionally proportional hazard rates is considered. Point and interval estimates of the parameters are derived on the basis of observed component failures for s (≥ 2) systems. The Fisher information matrix of the available data is also obtained, which can be used to study the asymptotic behaviour of the estimates. The generalized likelihood ratio test is implemented for testing homogeneity of the s systems. Illustrative examples are also given.
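A simplified sketch of such a homogeneity test in the i.i.d. special case; it deliberately ignores the load-sharing structure and the linear trend that the paper's full likelihood accounts for, and all failure times are simulated.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)

# --- hypothetical component failure times for s = 3 systems ---
samples = [rng.exponential(1 / lam, size=20) for lam in (0.8, 1.0, 1.3)]

def loglik(lam, t):
    # exponential log-likelihood for failure times t at rate lam
    return t.size * np.log(lam) - lam * t.sum()

# unrestricted fit: per-system MLE lam_hat = n / sum(t)
ll_full = sum(loglik(t.size / t.sum(), t) for t in samples)

# restricted fit under H0: one common rate for all systems
all_t = np.concatenate(samples)
ll_null = loglik(all_t.size / all_t.sum(), all_t)

lrt = 2 * (ll_full - ll_null)          # ~ chi2 with s - 1 df under H0
pval = chi2.sf(lrt, df=len(samples) - 1)
print(f"GLRT = {lrt:.2f}, p = {pval:.3f}")
```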


2021 ◽  
Author(s):  
Victoria Savalei ◽  
Yves Rosseel

This article provides an overview of different computational options for inference following normal theory maximum likelihood (ML) estimation in structural equation modeling (SEM) with incomplete normal and nonnormal data. Complete data are covered as a special case. These computational options include whether the information matrix is observed or expected, whether the observed information matrix is estimated numerically or using an analytic asymptotic approximation, and whether the information matrix and the outer product matrix of the score vector are evaluated at the saturated or at the structured estimates. A variety of different standard errors and robust test statistics become possible by varying these options. We review the asymptotic properties of these computational variations, and we show how to obtain them using lavaan in R. We hope that this article will encourage methodologists to study the impact of the available computational options on the performance of standard errors and test statistics in SEM.
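Outside of lavaan, the core contrast between these computational options can be sketched in a few lines for a deliberately simple case: normal-theory ML applied to nonnormal data, with standard errors from the expected information versus the sandwich built from the outer product of the score vector. The data and model below are illustrative stand-ins, not the SEM setting itself.

```python
import numpy as np

rng = np.random.default_rng(4)

# nonnormal data (chi-square) analysed with a normal-theory ML model
n = 2000
y = rng.chisquare(df=3, size=n)

mu, v = y.mean(), y.var()                       # normal-ML estimates (ML variance, ddof=0)

# per-observation score of the normal log-likelihood at the estimates
scores = np.column_stack([(y - mu) / v,
                          ((y - mu) ** 2 - v) / (2 * v**2)])

A = np.diag([1 / v, 1 / (2 * v**2)])            # expected information (per observation)
B = scores.T @ scores / n                       # outer product of scores (OPG)

se_naive = np.sqrt(np.diag(np.linalg.inv(A)) / n)                            # information-based
se_robust = np.sqrt(np.diag(np.linalg.inv(A) @ B @ np.linalg.inv(A)) / n)    # sandwich
print("naive SEs :", se_naive)
print("robust SEs:", se_robust)
```

Because the chi-square data have excess kurtosis, the sandwich SE for the variance parameter is visibly larger than the information-based one, which is the kind of discrepancy the article's computational options are designed to handle.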


1998 ◽  
Vol 28 (4) ◽  
pp. 871-879 ◽  
Author(s):  
M. MARCELIS ◽  
F. NAVARRO-MATEU ◽  
R. MURRAY ◽  
J.-P. SELTEN ◽  
J. VAN OS

Background. Urban birth is associated with later schizophrenia. This study examined whether this finding is diagnosis-specific and which individuals are most at risk. Methods. All live births recorded between 1942 and 1978 in any of the 646 Dutch municipalities were followed up through the National Psychiatric Case Register for first psychiatric admission for psychosis between 1970 and 1992 (N=42115). Results. Urban birth was linearly associated with later schizophrenia (incidence rate ratio for linear trend (IRR), 1·39; 95% confidence interval (95% CI), 1·36–1·42), affective psychosis (IRR, 1·18; 95% CI, 1·15–1·21) and other psychosis (IRR, 1·27; 95% CI, 1·24–1·30). Individuals born in the highest category of the three-level urban exposure were around twice as likely to develop schizophrenia. Associations were stronger for men and for individuals with early age of onset. The effect of urban birth was also stronger in the more recent birth cohorts. Conclusions. There are quantitative differences between diagnostic categories in the strength of the association between urban birth and later psychiatric disorder. High rates of psychosis in urban areas may be the result of environmental factors associated with urbanization, the effect of which appears to be increasing over successive generations.
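For illustration only, an IRR for linear trend of this kind comes out of a log-linear Poisson model with a person-years offset; the counts and rates below are invented, not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

# hypothetical cohort data: three urbanicity levels with person-years and case counts
urban = np.array([0.0, 1.0, 2.0])               # three-level urban-birth exposure
pyears = np.array([4e6, 3e6, 2e6])
irr_per_level = 1.39                            # trend of the size reported for schizophrenia
cases = rng.poisson(2e-5 * irr_per_level**urban * pyears)

X = sm.add_constant(urban)
fit = sm.GLM(cases, X, family=sm.families.Poisson(),
             offset=np.log(pyears)).fit()
print("estimated IRR per urbanicity level:", np.exp(fit.params[1]))
```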


Author(s):  
Marie Böhnstedt ◽  
Jutta Gampe ◽  
Hein Putter

Abstract: Mortality deceleration, or the slowing down of death rates at old ages, has been repeatedly investigated, but empirical studies of this phenomenon have produced mixed results. The scarcity of observations at the oldest ages complicates the statistical assessment of mortality deceleration, even in the parsimonious parametric framework of the gamma-Gompertz model considered here. The need for thorough verification of the ages at death can further limit the available data. As logistical constraints may permit validating only survivors beyond a certain (high) age, samples may be restricted to a certain age range. If we can quantify the effects of the sample size and the age range on the assessment of mortality deceleration, we can make recommendations for study design. For that purpose, we propose applying the concept of Fisher information and ideas from the theory of optimal design. We compute the Fisher information matrix in the gamma-Gompertz model and derive information measures for comparing the performance of different study designs. We then discuss interpretations of these measures. The special case in which the frailty variance takes the value of zero and lies on the boundary of the parameter space is given particular attention. The changes in information related to varying sample sizes or age ranges are investigated for specific scenarios. The Fisher information also allows us to study the power of a likelihood ratio test to detect mortality deceleration depending on the study design. We illustrate these methods with a study of mortality among late nineteenth-century French-Canadian birth cohorts.
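A numerical sketch of the per-observation Fisher information in the gamma-Gompertz model, using simulated ages at death and finite-difference scores; the parameter values are hypothetical, and the sketch assumes a frailty variance strictly above the boundary value of zero that the paper treats separately.

```python
import numpy as np

rng = np.random.default_rng(6)
a, b, g = 0.05, 0.1, 0.2                        # Gompertz level, slope, frailty variance

def logdens(t, a, b, g):
    # gamma-Gompertz: S(t) = (1 + g*(a/b)*(e^{bt}-1))^{-1/g}, f(t) = h(t)*S(t)
    base = 1 + g * (a / b) * np.expm1(b * t)
    log_h = np.log(a) + b * t - np.log(base)
    log_S = -np.log(base) / g
    return log_h + log_S

# simulate ages at death by inverting S(t) = u
u = rng.uniform(size=100_000)
t = np.log1p((b / (a * g)) * (u ** (-g) - 1)) / b

# Fisher information as the mean outer product of numerical scores
eps = 1e-5
theta = np.array([a, b, g])
scores = np.empty((t.size, 3))
for j in range(3):
    hi, lo = theta.copy(), theta.copy()
    hi[j] += eps; lo[j] -= eps
    scores[:, j] = (logdens(t, *hi) - logdens(t, *lo)) / (2 * eps)

I = scores.T @ scores / t.size                  # per-observation Fisher information matrix
print(np.round(I, 2))
```

Restricting t to an age window before forming the scores gives a crude feel for the paper's question of how the observable age range changes the information about the frailty variance.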


2015 ◽  
Vol 23 (2) ◽  
pp. 159-179 ◽  
Author(s):  
Gary King ◽  
Margaret E. Roberts

“Robust standard errors” are used in a vast array of scholarship to correct standard errors for model misspecification. However, when misspecification is bad enough to make classical and robust standard errors diverge, assuming that it is nevertheless not so bad as to bias everything else requires considerable optimism. And even if the optimism is warranted, settling for a misspecified model, with or without robust standard errors, will still bias estimators of all but a few quantities of interest. The resulting cavernous gap between theory and practice suggests that considerable gains in applied statistics may be possible. We seek to help researchers realize these gains via a more productive way to understand and use robust standard errors; a new general and easier-to-use “generalized information matrix test” statistic that can formally assess misspecification (based on differences between robust and classical variance estimates); and practical illustrations via simulations and real examples from published research. How robust standard errors are used needs to change, but instead of jettisoning this popular tool we show how to use it to provide effective clues about model misspecification, likely biases, and a guide to considerably more reliable, and defensible, inferences. Accompanying this article is software that implements the methods we describe.
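The basic diagnostic, divergence between classical and robust standard errors under misspecification, is easy to demonstrate with a deliberately misspecified linear fit and heteroskedastic noise; this is only the informal clue, not the authors' generalized information matrix test itself.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# a misspecified mean: fit a line to a quadratic signal with heteroskedastic noise
n = 1000
x = rng.uniform(0, 2, n)
y = 1 + x + 0.8 * x**2 + rng.normal(0, 0.5 * (1 + x), n)

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()

se_classical = fit.bse
se_robust = fit.get_robustcov_results(cov_type="HC1").bse
print("classical:", se_classical)
print("robust   :", se_robust)   # a large gap is the clue the article builds on
```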


2006 ◽  
Vol 136 (10) ◽  
pp. 3583-3613 ◽  
Author(s):  
Christophe Croux ◽  
Geert Dhaene ◽  
Dirk Hoorelbeke
