Missing Data Treatments
Recently Published Documents


TOTAL DOCUMENTS: 11 (FIVE YEARS: 4)
H-INDEX: 5 (FIVE YEARS: 1)

2021 ◽  
Vol 11 (4) ◽  
pp. 1653-1687
Author(s):  
Alexander Robitzsch

Missing item responses are prevalent in educational large-scale assessment studies such as the Programme for International Student Assessment (PISA). Current operational practice scores missing item responses as wrong, but several psychometricians have advocated a model-based treatment based on the latent ignorability assumption. In this approach, item responses and response indicators are jointly modeled conditional on a latent ability and a latent response propensity variable. Alternatively, imputation-based approaches can be used. The latent ignorability assumption is weakened in the Mislevy-Wu model, which characterizes a nonignorable missingness mechanism and allows the missingness of an item to depend on the item response itself. The scoring of missing item responses as wrong and the latent ignorable model are both submodels of the Mislevy-Wu model. An illustrative simulation study shows that the Mislevy-Wu model provides unbiased model parameters. The simulation also replicates the finding from various simulation studies in the literature that scoring missing item responses as wrong yields biased estimates when the latent ignorability assumption holds in the data-generating model. Conversely, if missing item responses can arise only from incorrect item responses, applying an item response model that relies on latent ignorability produces biased estimates. The Mislevy-Wu model guarantees unbiased parameter estimates whenever the more general Mislevy-Wu mechanism holds in the data-generating model. In addition, this article uses the PISA 2018 mathematics dataset as a case study to investigate the consequences of different missing data treatments on country means and country standard deviations, which can differ substantially across the scaling models. In contrast to previous statements in the literature, scoring missing item responses as incorrect provided a better model fit than a latent ignorable model for most countries. Furthermore, the dependence of an item's missingness on the item itself, after conditioning on the latent response propensity, was much more pronounced for constructed-response items than for multiple-choice items. As a consequence, scaling models that presuppose latent ignorability should be rejected from two perspectives. First, the Mislevy-Wu model is preferred over the latent ignorable model for reasons of model fit. Second, in the discussion section, we argue that model fit should play only a minor role in choosing psychometric models in large-scale assessment studies because validity aspects are most relevant: missing data treatments that countries (and, hence, their students) can simply manipulate result in unfair country comparisons.
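The contrast between scoring missing responses as wrong and treating them under latent ignorability can be made concrete with a small simulation. The sketch below is illustrative only and is not the article's code: the sample sizes, coefficients and variable names are assumptions. It generates Rasch-type responses, lets responding depend on a latent propensity correlated with ability (a latent-ignorable mechanism), and compares two naive item-level summaries.

```python
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items = 5000, 20
theta = rng.normal(0, 1, n_persons)  # latent ability
# latent response propensity, correlated 0.7 with ability (assumed value)
xi = 0.7 * theta + (1 - 0.7**2) ** 0.5 * rng.normal(0, 1, n_persons)
b = np.linspace(-1.5, 1.5, n_items)  # item difficulties

# Rasch-type correct/incorrect responses
p_correct = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
y = (rng.random((n_persons, n_items)) < p_correct).astype(float)

# Latent-ignorable mechanism: responding depends on xi only, never on y itself
p_respond = 1.0 / (1.0 + np.exp(-(xi[:, None] + 0.5)))
observed = rng.random((n_persons, n_items)) < p_respond

true_pvals = p_correct.mean(axis=0)  # item p-values in the full population
# Treatment 1: score missing responses as wrong
pvals_wrong = np.where(observed, y, 0.0).mean(axis=0)
# Treatment 2: use observed responses only (per-item complete cases)
pvals_obs = np.array([y[observed[:, j], j].mean() for j in range(n_items)])

print("bias, scored as wrong:", (pvals_wrong - true_pvals).mean().round(3))
print("bias, observed only:  ", (pvals_obs - true_pvals).mean().round(3))
```

The two naive summaries are biased in opposite directions (scored-as-wrong too low, observed-only too high, because responders tend to be more able), which is why the model-based treatments discussed in the article condition jointly on ability and response propensity.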


2020 ◽  
Vol 49 (5) ◽  
pp. 1702-1711 ◽  
Author(s):  
Charlie Rioux ◽  
Antoine Lewin ◽  
Omolola A Odejimi ◽  
Todd D Little

Modern missing data treatments in epidemiological research (e.g. multiple imputation) can recover power while avoiding bias in the presence of data that are missing completely at random. Planned missing data designs take advantage of this ability by deliberately incorporating missing data into a research design. A planned missing data design may be implemented by randomly assigning participants to have missing items in a questionnaire (multiform design) or missing occasions of measurement in a longitudinal study (wave-missing design), or by administering an expensive gold-standard measure to a random subset of participants while the whole sample is administered a cheaper measure (two-method design). Although not common in epidemiology, these designs have been recommended for decades by methodologists for their benefits: data collection costs are minimized and participant burden is reduced, which can increase validity. This paper describes the multiform, wave-missing and two-method designs, including their benefits, their impact on bias and power, and other factors that must be taken into consideration when implementing them in an epidemiological study design.
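As a rough illustration of the multiform design described above, the sketch below (hypothetical item names, block sizes and sample size; not taken from the paper) randomly assigns each participant one of three forms, each omitting one rotating item block, so the resulting missingness is missing completely at random by construction.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 300
# Block X is administered to everyone; blocks A, B, C rotate across forms
blocks = {"X": ["x1", "x2"], "A": ["a1", "a2"], "B": ["b1", "b2"], "C": ["c1", "c2"]}
items = [item for block in blocks.values() for item in block]

# Hypothetical complete responses before the design is imposed
data = pd.DataFrame(rng.normal(size=(n, len(items))), columns=items)

# Random form assignment: each form omits exactly one rotating block
form = rng.integers(0, 3, n)
for f, name in enumerate(["A", "B", "C"]):
    data.loc[form == f, blocks[name]] = np.nan

# By construction the omissions are MCAR
print(data.isna().mean().round(2))  # ~1/3 missing per rotating item, 0 for X
```

Because the omissions are determined purely by random form assignment, standard treatments such as multiple imputation or full-information maximum likelihood can recover parameters without bias while each participant answers only three of the four blocks.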


2019 ◽  
Vol 45 (1) ◽  
pp. 51-58 ◽  
Author(s):  
Charlie Rioux ◽  
Todd D. Little

Missing data are ubiquitous in studies examining preventive interventions, and they need to be handled appropriately for data analyses to yield unbiased results. After a brief discussion of missing data mechanisms, inappropriate missing data treatments and appropriate missing data treatments, we review the current state of missing data treatments in intervention studies and how these treatments have evolved over the years. Although missing data treatments have improved, antiquated treatments associated with biased results are still prevalent. Furthermore, many studies do not adequately report their rates of missing data or their missing data treatments. Using appropriate missing data treatments is essential to accurately identifying effective preventive interventions and properly informing practice and policy.
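To see why the antiquated treatments mentioned above bias results, here is a small simulated contrast (all numbers and names are assumptions, not drawn from the review): listwise deletion versus a simple model-based imputation when missingness depends on an observed covariate (missing at random).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
x = rng.normal(0, 1, n)              # fully observed covariate
y = 0.8 * x + rng.normal(0, 0.6, n)  # outcome; true mean is 0

# MAR mechanism: y is more often missing when x is low
miss = rng.random(n) < 1.0 / (1.0 + np.exp(2 * x))

cc_mean = y[~miss].mean()            # listwise deletion: biased upward

# Model-based treatment: predict missing y from the observed covariate
slope, intercept = np.polyfit(x[~miss], y[~miss], 1)
y_filled = np.where(miss, slope * x + intercept, y)

print(f"listwise-deletion mean: {cc_mean:+.3f}")
print(f"imputation-based mean:  {y_filled.mean():+.3f}")  # close to 0
```

In practice, a single deterministic imputation like this understates uncertainty; multiple imputation instead draws several completed datasets with random noise and pools the estimates.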


2018 ◽  
Vol 18 (11) ◽  
pp. 2009-2017 ◽  
Author(s):  
Nathaniel T. Ondeck ◽  
Michael C. Fu ◽  
Laura A. Skrip ◽  
Ryan P. McLynn ◽  
Jonathan J. Cui ◽  
...  

Author(s):  
Fan Ye ◽  
Yong Wang

Data quality, including record inaccuracy and missingness (incompletely recorded crashes and crash underreporting), has always been a concern in crash data analysis. Limited efforts have been made to handle specific aspects of crash data quality problems, such as using weights in estimation to account for unreported crashes and applying multiple imputation (MI) to fill in missing information on drivers' attention status before crashes. Yet a general investigation of how different statistical methods perform in handling missing crash data is lacking. This paper explores and evaluates the performance of three missing data treatments, complete-case analysis (CC), inverse probability weighting (IPW) and MI, in crash severity modeling using the ordered probit model. CC discards crash records with missing information on any of the variables; IPW includes weights in estimation to adjust for bias, based on each complete record's probability of being a complete case; and MI imputes the missing values from the conditional distribution of the variable with missing information given the observed data. These treatments perform differently in model estimation. Based on analyses of both simulated and real crash data, this paper suggests that the choice of an appropriate missing data treatment should be based on sample size and the rate of missing data. It is recommended that MI be used for incompletely recorded crash data and IPW for unreported crashes before applying crash severity models.
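A minimal sketch of the IPW step follows. The simulated data and variable names are assumptions rather than the paper's setup, and the weighted quantity here is a simple mean; the paper applies such weights inside an ordered probit likelihood.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 50_000
speed = rng.normal(0, 1, n)  # always-recorded covariate (illustrative)
severity = (0.6 * speed + rng.normal(0, 1, n) > 0.5).astype(float)

# MAR mechanism: low-speed records are more often incomplete
complete = rng.random(n) < 1.0 / (1.0 + np.exp(-speed))

# Step 1: model each record's probability of being a complete case
X = sm.add_constant(speed)
p_complete = sm.Logit(complete.astype(float), X).fit(disp=0).predict(X)

# Step 2: weight complete cases by the inverse of that probability
w = 1.0 / p_complete[complete]

print("CC mean severity: ", severity[complete].mean().round(3))  # biased
print("IPW mean severity:", np.average(severity[complete], weights=w).round(3))
print("full-data mean:   ", severity.mean().round(3))
```

CC simply drops the incomplete records, while IPW reweights the retained ones so they stand in for the full sample; MI would instead generate several completed datasets and pool the resulting estimates.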


2016 ◽  
Vol 19 (3) ◽  
pp. 284-294 ◽  
Author(s):  
Kyle M. Lang ◽  
Todd D. Little
