A test of missing completely at random for generalised estimating equations with missing data

Biometrika ◽  
1999 ◽  
Vol 86 (1) ◽  
pp. 1-13 ◽  
Author(s):  
H. Chen

Biometrika ◽  
2016 ◽  
Vol 103 (1) ◽  
pp. 175-187 ◽  
Author(s):  
Jun Shao ◽  
Lei Wang

Abstract To estimate unknown population parameters based on data having nonignorable missing values with a semiparametric exponential tilting propensity, Kim & Yu (2011) assumed that the tilting parameter is known or can be estimated from external data, in order to avoid the identifiability issue. To remove this serious limitation on the methodology, we use an instrument, i.e., a covariate related to the study variable but unrelated to the missing data propensity, to construct some estimating equations. Because these estimating equations are semiparametric, we profile the nonparametric component using a kernel-type estimator and then estimate the tilting parameter based on the profiled estimating equations and the generalized method of moments. Once the tilting parameter is estimated, so is the propensity, and then other population parameters can be estimated using the inverse propensity weighting approach. Consistency and asymptotic normality of the proposed estimators are established. The finite-sample performance of the estimators is studied through simulation, and a real-data example is also presented.
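As a rough illustration of the final step only, the sketch below computes a Hajek-type inverse propensity weighted mean in Python, taking the estimated propensities as given. The function name ipw_mean, the toy arrays, and the assumption that the propensities have already been obtained (e.g. from the profiled estimating equations and generalized method of moments described above) are illustrative choices, not part of the paper.

import numpy as np

def ipw_mean(y, delta, pi_hat):
    """Hajek-type inverse propensity weighted estimate of E[Y].

    y      : study-variable values (entries with delta == 0 are ignored,
             so they may hold any placeholder)
    delta  : 1 if y was observed, 0 if missing
    pi_hat : estimated response propensities P(delta = 1 | x, y), assumed
             here to come from an earlier estimation step
    """
    y, delta, pi_hat = map(np.asarray, (y, delta, pi_hat))
    w = delta / pi_hat                   # weight observed units by 1 / propensity
    return np.sum(w * y) / np.sum(w)     # normalised (Hajek) weighting

# toy usage with made-up numbers
y = np.array([2.1, 3.4, 0.0, 1.8, 0.0])   # 0.0 entries are unobserved placeholders
delta = np.array([1, 1, 0, 1, 0])
pi_hat = np.array([0.9, 0.8, 0.5, 0.7, 0.6])
print(ipw_mean(y, delta, pi_hat))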


Author(s):  
Roderick J. Little

I review assumptions about the missing-data mechanism that underlie methods for the statistical analysis of data with missing values. I describe Rubin's original definition of missing at random (MAR), its motivation and criticisms, and his sufficient conditions for ignoring the missingness mechanism for likelihood-based, Bayesian, and frequentist inference. Related definitions, including missing completely at random, always MAR, always missing completely at random, and partially MAR, are also covered. I present a formal argument for weakening Rubin's sufficient conditions for frequentist maximum likelihood inference with precision based on the observed information. Some simple examples of MAR are described, together with an example where the missingness mechanism can be ignored even though MAR does not hold. Alternative approaches to statistical inference based on the likelihood function are reviewed, along with non-likelihood frequentist approaches, including weighted generalized estimating equations. Connections with the causal inference literature are also discussed. Finally, alternatives to Rubin's MAR definition are discussed, including informative missingness, informative censoring, and coarsening at random. The intent is to provide a relatively nontechnical discussion, although some of the underlying issues are challenging and touch on fundamental questions of statistical inference.
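For reference, the two central definitions reviewed here can be written out explicitly; the notation below (complete data split into observed and missing parts, missingness indicators M, mechanism parameters \phi, data-model parameters \theta) is a standard choice, not taken from this particular article.

\[
\text{MCAR:}\quad f(M \mid Y_{\mathrm{obs}}, Y_{\mathrm{mis}}, \phi) = f(M \mid \phi)
\quad \text{for all } Y_{\mathrm{obs}}, Y_{\mathrm{mis}}, \phi,
\]
\[
\text{MAR:}\quad f(M \mid Y_{\mathrm{obs}}, Y_{\mathrm{mis}}, \phi) = f(M \mid Y_{\mathrm{obs}}, \phi)
\quad \text{for all } Y_{\mathrm{mis}}, \phi.
\]

Rubin's sufficient conditions for ignoring the mechanism in likelihood-based and Bayesian inference are MAR together with distinctness of \theta and \phi, in which case inference about \theta can be based on the observed-data likelihood
\[
L(\theta \mid Y_{\mathrm{obs}}) \propto \int f(Y_{\mathrm{obs}}, Y_{\mathrm{mis}} \mid \theta)\, \mathrm{d}Y_{\mathrm{mis}}.
\]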


2020 ◽  
Vol 49 (5) ◽  
pp. 1702-1711 ◽  
Author(s):  
Charlie Rioux ◽  
Antoine Lewin ◽  
Omolola A Odejimi ◽  
Todd D Little

Abstract Planned missing data designs take advantage of the ability of modern missing data treatments in epidemiological research (e.g. multiple imputation) to recover power while avoiding bias when data are missing completely at random, allowing researchers to deliberately incorporate missing data into a research design. A planned missing data design may be implemented by randomly assigning participants to have missing items in a questionnaire (multiform design) or missing occasions of measurement in a longitudinal study (wave-missing design), or by administering an expensive gold-standard measure to a random subset of participants while the whole sample is administered a cheaper measure (two-method design). Although not common in epidemiology, these designs have been recommended for decades by methodologists for their benefits: data collection costs are minimized and participant burden is reduced, which can increase validity. This paper describes the multiform, wave-missing and two-method designs, including their benefits, their impact on bias and power, and other factors that must be taken into consideration when implementing them in an epidemiological study design.
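A minimal simulation of the multiform idea is sketched below, under the common three-form layout in which a core block X is given to everyone and each form omits one of three rotating blocks A, B, C. The sample size, block sizes and variable names are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(2020)

n = 300                                   # hypothetical sample size
blocks = {                                # hypothetical item blocks
    "X": [f"x{i}" for i in range(5)],     # core block given to everyone
    "A": [f"a{i}" for i in range(5)],
    "B": [f"b{i}" for i in range(5)],
    "C": [f"c{i}" for i in range(5)],
}
# Each form drops exactly one rotating block, so missingness is driven by the
# random form assignment alone, i.e. it is missing completely at random.
forms = {1: ("X", "A", "B"), 2: ("X", "A", "C"), 3: ("X", "B", "C")}

items = sum(blocks.values(), [])
full = rng.normal(size=(n, len(items)))   # stand-in for complete responses
assignment = rng.integers(1, 4, size=n)   # random form per participant

observed = np.full_like(full, np.nan)
for form_id, kept_blocks in forms.items():
    rows = assignment == form_id
    keep = [items.index(it) for b in kept_blocks for it in blocks[b]]
    observed[np.ix_(rows, keep)] = full[np.ix_(rows, keep)]

# Each participant answers 15 of the 20 items; the resulting gaps can be
# handled with multiple imputation or full-information maximum likelihood.
print(np.isnan(observed).mean(axis=0).round(2))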


2010 ◽  
Vol 80 (7-8) ◽  
pp. 639-647 ◽  
Author(s):  
Huixiu Zhao ◽  
Wen-Qing Ma ◽  
Jianhua Guo

2020 ◽  
Vol 28 (108) ◽  
pp. 599-621
Author(s):  
Maria Eugénia Ferrão ◽  
Paula Prata ◽  
Maria Teresa Gonzaga Alves

Abstract Almost all quantitative studies in educational assessment, evaluation and educational research are based on incomplete data sets, a problem that has persisted for years without a single solution. The use of big identifiable data poses new challenges in dealing with missing values. In the first part of this paper, we present the state of the art of the topic in the Brazilian education scientific literature, and how researchers have dealt with missing data since the turn of the century. Next, we use open access software to analyze real-world data, the 2017 Prova Brasil, for several federation units to document how the naïve assumption of missing completely at random may substantially affect statistical conclusions, researcher interpretations, and subsequent implications for policy and practice. We conclude with straightforward suggestions for any education researcher on applying R routines to conduct the hypothesis test of missing completely at random and, if the null hypothesis is rejected, how to implement multiple imputation, which appears to be one of the most appropriate methods for handling missing data.
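The paper's suggestions are framed around R routines; as an assumed Python analogue of the imputation step, the sketch below runs chained-equations multiple imputation with statsmodels (statsmodels.imputation.mice) on made-up data. The variable names score, x1 and x2 and the missingness rates are hypothetical, and the sketch covers only the multiple-imputation step, not the MCAR hypothesis test itself.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

rng = np.random.default_rng(0)

# Toy data with values knocked out at random, standing in for an
# assessment data set with incomplete student records.
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.5 * x1 - 0.3 * x2 + rng.normal(scale=0.5, size=n)
df = pd.DataFrame({"score": y, "x1": x1, "x2": x2})
df.loc[rng.random(n) < 0.2, "x1"] = np.nan      # 20% of x1 missing at random
df.loc[rng.random(n) < 0.1, "score"] = np.nan   # 10% of scores missing

# Chained-equations multiple imputation: each incomplete column is imputed
# from the others, the analysis model is refitted on every imputed data set,
# and the estimates are pooled across imputations.
imp = mice.MICEData(df)
analysis = mice.MICE("score ~ x1 + x2", sm.OLS, imp)
pooled = analysis.fit(n_burnin=10, n_imputations=20)
print(pooled.summary())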

