Multiple Imputation of Multilevel Missing Data

SAGE Open ◽  
2016 ◽  
Vol 6 (4) ◽  
pp. 215824401666822 ◽  
Author(s):  
Simon Grund ◽  
Oliver Lüdtke ◽  
Alexander Robitzsch

The treatment of missing data can be difficult in multilevel research because state-of-the-art procedures such as multiple imputation (MI) may require advanced statistical knowledge or a high degree of familiarity with certain statistical software. In the missing data literature, pan has been recommended for MI of multilevel data. In this article, we provide an introduction to MI of multilevel missing data using the R package pan, and we discuss its possibilities and limitations in accommodating typical questions in multilevel research. To make pan more accessible to applied researchers, we make use of the mitml package, which provides a user-friendly interface to the pan package and several tools for managing and analyzing multiply imputed data sets. We illustrate the use of pan and mitml with two empirical examples that represent common applications of multilevel models, and we discuss how these procedures may be used in conjunction with other software.
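As a language-neutral illustration of why imputation should respect the two-level structure, the following Python sketch draws each missing value from its own cluster's observed distribution, m times. This is not pan's actual algorithm (pan fits a multivariate linear mixed model); the data, sample sizes, and missingness rate here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy two-level data: 10 clusters of 20 units with a random intercept.
n_clusters, n_per = 10, 20
cluster = np.repeat(np.arange(n_clusters), n_per)
y = 5 + rng.normal(0, 1, n_clusters)[cluster] + rng.normal(0, 1, n_clusters * n_per)

# Introduce 20% missing values completely at random.
miss = rng.random(y.size) < 0.2
y_obs = np.where(miss, np.nan, y)

# Generate m imputations that respect the cluster structure by drawing
# each missing value from its own cluster's observed mean and sd.
m = 5
imputed_sets = []
for _ in range(m):
    y_imp = y_obs.copy()
    for c in range(n_clusters):
        in_c = cluster == c
        obs = y_obs[in_c & ~miss]
        n_mis = int((in_c & miss).sum())
        y_imp[in_c & miss] = rng.normal(obs.mean(), obs.std(ddof=1), n_mis)
    imputed_sets.append(y_imp)
```

A flat, single-level imputation would instead pull every imputed value toward the grand mean, understating the between-cluster variance that multilevel analyses target.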

Author(s):  
Simon Grund ◽  
Oliver Lüdtke ◽  
Alexander Robitzsch

Abstract Multilevel models often include nonlinear effects, such as random slopes or interaction effects. The estimation of these models can be difficult when the underlying variables contain missing data. Although several methods for handling missing data such as multiple imputation (MI) can be used with multilevel data, conventional methods for multilevel MI often do not properly take the nonlinear associations between the variables into account. In the present paper, we propose a sequential modeling approach based on Bayesian estimation techniques that can be used to handle missing data in a variety of multilevel models that involve nonlinear effects. The main idea of this approach is to decompose the joint distribution of the data into several parts that correspond to the outcome and explanatory variables in the intended analysis, thus generating imputations in a manner that is compatible with the substantive analysis model. In three simulation studies, we evaluate the sequential modeling approach and compare it with conventional as well as other substantive-model-compatible approaches to multilevel MI. We implemented the sequential modeling approach in an R package and provide a worked example to illustrate its application.
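The core of substantive-model-compatible imputation can be conveyed with a toy single-level analogue in Python (made-up coefficients, a grid approximation rather than the paper's multilevel Bayesian sampler): because the draws for a missing x condition on the same quadratic analysis model, the imputations remain compatible with it.

```python
import numpy as np

rng = np.random.default_rng(0)

def npdf(x, mu, sd):
    """Normal density (avoids an external dependency)."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Hypothetical analysis model with a nonlinear term: y = b0 + b1*x + b2*x^2 + e
b0, b1, b2, sigma = 1.0, 0.5, 0.3, 1.0
mu_x, tau = 0.0, 1.0        # marginal model for x

# One unit with x missing and y = 2.0 observed: draw x from
# p(x | y) proportional to p(y | x) * p(x), evaluated on a grid.
y_obs = 2.0
grid = np.linspace(-5, 5, 1001)
post = npdf(y_obs, b0 + b1 * grid + b2 * grid ** 2, sigma) * npdf(grid, mu_x, tau)
post /= post.sum()
x_draws = rng.choice(grid, size=10, p=post)   # ten model-compatible imputations
```

A conventional imputation model that is linear in x would ignore the x-squared term and distort exactly the nonlinear association the analysis is after.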


Author(s):  
Matthew Carlucci ◽  
Algimantas Kriščiūnas ◽  
Haohan Li ◽  
Povilas Gibas ◽  
Karolis Koncevičius ◽  
...  

Abstract Motivation Biological rhythmicity is fundamental to almost all organisms on Earth and plays a key role in health and disease. Identification of oscillating signals could lead to novel biological insights, yet such investigation is impeded by the extensive computational and statistical knowledge required to perform the analysis. Results To address this issue, we present DiscoRhythm (Discovering Rhythmicity), a user-friendly application for characterizing rhythmicity in temporal biological data. DiscoRhythm is available as a web application or an R/Bioconductor package for estimating phase, amplitude, and statistical significance using four popular approaches to rhythm detection (Cosinor, JTK Cycle, ARSER, and Lomb-Scargle). We optimized these algorithms for speed, improving their execution times up to 30-fold to enable rapid analysis of -omic-scale datasets in real time. Informative visualizations, interactive modules for quality control, dimensionality reduction, periodicity profiling, and incorporation of experimental replicates make DiscoRhythm a thorough toolkit for analyzing rhythmicity. Availability and Implementation The DiscoRhythm R package is available on Bioconductor (https://bioconductor.org/packages/DiscoRhythm), with source code available on GitHub (https://github.com/matthewcarlucci/DiscoRhythm) under a GPL-3 license. The web application is securely deployed over HTTPS (https://disco.camh.ca) and is freely available for use worldwide. Local instances of the DiscoRhythm web application can be created using the R package or by deploying the publicly available Docker container (https://hub.docker.com/r/mcarlucci/discorhythm). Supplementary information Supplementary data are available at Bioinformatics online.
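Of the four rhythm-detection approaches, Cosinor is the simplest to sketch: the cosine fit reduces to ordinary least squares on cosine and sine regressors. A minimal Python version on noise-free synthetic data (DiscoRhythm's implementation additionally handles replicates and significance testing):

```python
import numpy as np

# Simulated series: mesor 10, amplitude 3, peak at t = 8 h, 24-h period.
t = np.arange(0, 48, 2.0)
y = 10 + 3 * np.cos(2 * np.pi * (t - 8) / 24)

# Cosinor identity: M + A*cos(w*t - phi)
#   = M + (A*cos(phi))*cos(w*t) + (A*sin(phi))*sin(w*t),
# so the rhythm parameters drop out of a linear fit.
w = 2 * np.pi / 24
X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

mesor = beta[0]
amplitude = np.hypot(beta[1], beta[2])
acrophase_hours = (np.arctan2(beta[2], beta[1]) / w) % 24   # peak time in hours
```

On this noiseless input the fit recovers mesor 10, amplitude 3, and an acrophase of 8 h exactly; with real data the same regression also yields residuals for a significance test.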


2019 ◽  
Vol 6 (339) ◽  
pp. 73-98
Author(s):  
Małgorzata Aleksandra Misztal

The problem of incomplete data and its implications for drawing valid conclusions from statistical analyses is not tied to any particular scientific domain; it arises in economics, sociology, education, the behavioural sciences, and medicine. Almost all standard statistical methods presume that every object has information on every variable to be included in the analysis, and the typical approach to missing data is simply to delete it. However, this leads to ineffective and biased results and is not recommended in the literature. The state-of-the-art technique for handling missing data is multiple imputation. In the paper, selected multiple imputation methods were considered, with special attention paid to using principal components analysis (PCA) as an imputation method. The goal of the study was to assess the quality of PCA-based imputations compared to two other multiple imputation techniques: multivariate imputation by chained equations (MICE) and missForest. The comparison was made by artificially simulating different proportions (10–50%) and mechanisms of missing data using 10 complete data sets from the UCI repository of machine learning databases. Missing values were then imputed using MICE, missForest, and the PCA-based method (MIPCA). The normalised root mean square error (NRMSE) was calculated as a measure of imputation accuracy. On the basis of the conducted analyses, missForest can be recommended as the multiple imputation method providing the lowest imputation error rates for all types of missingness, whereas PCA-based imputation does not perform well in terms of accuracy.
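The evaluation design (mask entries, impute, score with NRMSE) is straightforward to mimic. A Python sketch using column-mean imputation as a stand-in baseline (the study itself compares MICE, missForest, and MIPCA; the data here are simulated standard normals):

```python
import numpy as np

rng = np.random.default_rng(1)
X_true = rng.normal(size=(200, 5))

# Mask 20% of entries completely at random (MCAR).
mask = rng.random(X_true.shape) < 0.2
X_miss = np.where(mask, np.nan, X_true)

# Baseline imputation: replace each NaN by its column mean.
col_means = np.nanmean(X_miss, axis=0)
X_imp = np.where(np.isnan(X_miss), col_means, X_miss)

# NRMSE over the originally missing entries, normalised by their sd.
err = X_imp[mask] - X_true[mask]
nrmse = np.sqrt(np.mean(err ** 2)) / np.std(X_true[mask])
```

For standard-normal data, mean imputation yields an NRMSE near 1; that is the benchmark any more sophisticated method must beat.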


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Lélia Polit ◽  
Gwenneg Kerdivel ◽  
Sebastian Gregoricchio ◽  
Michela Esposito ◽  
Christel Guillouf ◽  
...  

Abstract Background Multiple studies rely on ChIP-seq experiments to assess the effect of gene modulation and drug treatments on protein binding and chromatin structure. However, most methods commonly used for the normalization of ChIP-seq binding intensity signals across conditions, e.g., normalization to the same number of reads, either assume a constant signal-to-noise ratio across conditions or base the estimates of correction factors on genomic regions with intrinsically different signals between conditions. Inaccurate normalization of ChIP-seq signal may, in turn, lead to erroneous biological conclusions. Results We developed a new R package, CHIPIN, that allows the normalization of ChIP-seq signals across different conditions/samples when spike-in information is not available but gene expression data are at hand. Our normalization technique is based on the assumption that, on average, no differences in ChIP-seq signals should be observed in the regulatory regions of genes whose expression levels are constant across samples/conditions. In addition to normalizing ChIP-seq signals, CHIPIN provides as output a number of graphs and calculates statistics allowing the user to assess the efficiency of the normalization and qualify the specificity of the antibody used. Beyond ChIP-seq, CHIPIN can be used without restriction on open-chromatin ATAC-seq or DNase hypersensitivity data. We validated the CHIPIN method on several ChIP-seq data sets and documented its superior performance in comparison to several commonly used normalization techniques. Conclusions The CHIPIN method provides a new way for ChIP-seq signal normalization across conditions when spike-in experiments are not available. The method is implemented in a user-friendly R package available on GitHub: https://github.com/BoevaLab/CHIPIN
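The core assumption can be reduced to a single correction factor. A toy Python sketch with simulated signals (CHIPIN's actual correction is regression-based and operates on density profiles; here the constant-expression gene set and the sequencing-depth distortion are both made up):

```python
import numpy as np

rng = np.random.default_rng(7)

# ChIP-seq signal over the regulatory regions of 1000 genes in two conditions;
# condition B is artificially "hotter" by a factor of 1.6.
signal_a = rng.gamma(2.0, 5.0, 1000)
signal_b = 1.6 * rng.gamma(2.0, 5.0, 1000)

# Indices of genes whose expression is constant across conditions (assumed known
# from the accompanying expression data).
const_idx = np.arange(300)

# Simplest form of the idea: rescale B so that its mean signal over the
# constant-expression regions matches A's.
factor = signal_a[const_idx].mean() / signal_b[const_idx].mean()
signal_b_norm = signal_b * factor
```

After rescaling, any remaining differences outside the constant-expression set can be attributed to biology rather than to global signal-to-noise differences.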


2002 ◽  
Vol 2 (1) ◽  
pp. 51-57 ◽  
Author(s):  
I Gusti Ngurah Darmawan

Evaluation studies often lack sophistication in their statistical analyses, particularly where there are small data sets or missing data. Until recently, the methods used for analysing incomplete data focused on removing the missing values, either by deleting records with incomplete information or by substituting the missing values with estimated mean scores. These methods, though simple to implement, are problematic. However, recent advances in theoretical and computational statistics have led to more flexible techniques with sound statistical bases. These procedures involve multiple imputation (MI), a technique in which the missing values are replaced by m > 1 estimated values, where m is typically small (e.g. 3-10). Each of the resultant m data sets is then analysed by standard methods, and the results are combined to produce estimates and confidence intervals that incorporate missing data uncertainty. This paper reviews the key ideas of multiple imputation, discusses the currently available software programs relevant to evaluation studies, and demonstrates their use with data from a study of the adoption and implementation of information technology in Bali, Indonesia.
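The combination step described above follows Rubin's rules: the pooled estimate is the mean of the m estimates, and the total variance adds the within- and between-imputation components. A compact Python sketch:

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Combine m point estimates and their sampling variances (Rubin's rules)."""
    m = len(estimates)
    q_bar = np.mean(estimates)                # pooled point estimate
    u_bar = np.mean(variances)                # within-imputation variance
    b = np.var(estimates, ddof=1)             # between-imputation variance
    t = u_bar + (1 + 1 / m) * b               # total variance
    df = (m - 1) * (1 + u_bar / ((1 + 1 / m) * b)) ** 2  # degrees of freedom
    return q_bar, t, df

# Hypothetical results from m = 3 imputed data sets.
est, var, df = pool_rubin([1.0, 2.0, 3.0], [0.5, 0.5, 0.5])
```

The between-imputation term is what encodes the missing-data uncertainty: if the m analyses disagree strongly, the pooled confidence interval widens accordingly.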


2021 ◽  
Author(s):  
Trenton J. Davis ◽  
Tarek R. Firzli ◽  
Emily A. Higgins Keppler ◽  
Matt Richardson ◽  
Heather D. Bean

Missing data is a significant issue in metabolomics that is often neglected during data pre-processing, particularly when it comes to imputation. This can have serious implications for downstream statistical analyses and lead to misleading or uninterpretable inferences. In this study, we aim to identify the primary types of missingness that affect untargeted metabolomics data and compare strategies for imputation using two real-world comprehensive two-dimensional gas chromatography (GC×GC) data sets. We also present these goals in the context of experimental replication, whereby imputation is conducted in a within-replicate-based fashion (the first description and evaluation of this strategy), and introduce an R package, MetabImpute, to carry out these analyses. Our results indicate that, in these two data sets, missingness was most likely of the missing-at-random (MAR) and missing-not-at-random (MNAR) types, as opposed to missing completely at random (MCAR). Gibbs sampler imputation and Random Forest gave the best results when imputing MAR and MNAR data, compared against single-value imputation (zero, minimum, mean, median, and half-minimum) and other more sophisticated approaches (Bayesian principal components analysis and quantile regression imputation for left-censored data). When samples are replicated, within-replicate imputation approaches led to an increase in the reproducibility of peak quantification compared to imputation that ignores replication, suggesting that imputing with respect to replication may preserve potentially important features in downstream analyses for biomarker discovery.
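To make the within-replicate idea concrete, here is a toy Python sketch of one single-value variant (half-minimum) applied separately inside each replicate group; MetabImpute's actual decision rules and method menu are richer, and the matrix below is invented:

```python
import numpy as np

def halfmin_within_replicates(X, groups):
    """Half-minimum imputation applied separately within each replicate group.

    X: samples x features matrix with NaNs; groups: replicate label per sample.
    """
    Xi = X.copy()
    for g in np.unique(groups):
        rows = np.where(groups == g)[0]
        for j in range(Xi.shape[1]):
            col = Xi[rows, j]
            if np.isnan(col).all():
                Xi[rows, j] = 0.0            # feature never detected in this group
            elif np.isnan(col).any():
                Xi[rows, j] = np.where(np.isnan(col), np.nanmin(col) / 2, col)
    return Xi

# Two replicate groups of two samples, two features (peaks).
X = np.array([[4.0, np.nan],
              [8.0, np.nan],
              [np.nan, 6.0],
              [10.0, 9.0]])
groups = np.array([0, 0, 1, 1])
X_imp = halfmin_within_replicates(X, groups)
```

Note the behavioural difference from global imputation: feature 2 is treated as absent in the first replicate group (imputed as 0) rather than borrowing values from the other group, which is what preserves replicate-level structure downstream.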


2020 ◽  
Vol 2 (7A) ◽  
Author(s):  
Vicki Springthorpe ◽  
Rosalyn Leaman ◽  
Despoina Sifouna ◽  
Joyce Bennett ◽  
Gavin Thomas

With continuing improvements and reducing costs of high-throughput technologies, microbiologists are increasingly collecting multi-omics datasets. However, the tools and techniques used to analyse these kinds of data are often highly specialised and require bioinformatics, statistics and often coding experience. Many studies also tend to report on a single aspect of the data whilst overlooking other potentially interesting phenomena. Consequently, many of these multi-omics data sets are not being used to their full potential. MORF was created as a solution to these problems by providing access to multi-omics datasets through an online interface which presents the data in a user-friendly and accessible way. No coding experience or specialist statistical knowledge is required, and users are free to explore the data using interactive graphics and simple analysis tools. Here we demonstrate MORF using multi-omics datasets from two experiments using bacteria in industrial fermentation processes: first, Escherichia coli engineered to produce styrene, a valuable chemical used in the manufacture of polymers, and second, a Clostridium species that produces the biofuel butanol. A key outcome was the identification of targets believed to be involved in responding to membrane stress, which we identified using MORF's differential gene and protein analysis tools. Work is underway to further characterise and engineer these targets to improve product yields. In conclusion, MORF provides a framework for omics analysis that can be applied to any organism or set of experimental conditions, and will help researchers and collaborators to make the most of their data.


Author(s):  
Henrik Baktoft ◽  
Karl Ø. Gjelland ◽  
Finn Økland ◽  
Jennifer S. Rehage ◽  
Jonathan R. Rodemann ◽  
...  

Abstract The R package yaps was introduced in 2017 as a transparent, open-source alternative to closed-source, manufacturer-provided solutions for estimating the positions of fish (and other aquatic animals) tagged with acoustic transmitters. Although yaps is open source and transparent, the process from raw detections to final tracks has proved challenging for many potential users, effectively preventing most of them from accessing the benefits of using yaps. In particular, the very important process of synchronizing the hydrophone arrays has proven to be an obstacle. To make yaps more approachable to the wider fish-tracking community, we have developed and added user-friendly functions that assist users in the complex process of synchronizing the data. Here, we introduce these functions and a six-step protocol intended to provide users with an example workflow that can be used as a template for applying yaps to their own data. Using example data collected by an array of Vemco VR2 hydrophones, the protocol walks the user through the entire process from raw data to final tracks. Example data sets and complete code for reproducing the results are provided.
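At its simplest, the synchronization problem is mapping each hydrophone's drifting clock onto a common time base using detections of sync tags with known ping intervals. A linearised toy version in Python (made-up drift and offset values; yaps models drift far more flexibly, with splines, and estimates it jointly with positions):

```python
import numpy as np

# A sync tag pings every 30 s; this hydrophone's clock runs slightly fast
# and starts with a 0.8 s offset relative to the reference clock.
true_time = np.arange(0, 600, 30.0)
obs_time = 0.8 + (1 + 2e-5) * true_time

# Fit the linear mapping from hydrophone time back to reference time.
slope, intercept = np.polyfit(obs_time, true_time, 1)
corrected = slope * obs_time + intercept
```

Even a microsecond-scale residual matters in practice, since position estimates hinge on sound-travel-time differences between hydrophones, which is why yaps devotes so much machinery to this step.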


2020 ◽  
Author(s):  
KI-Hun Kim ◽  
Kwang-Jae Kim

BACKGROUND A lifelog-based wellness index (LWI) is a function that calculates wellness scores from health-behavior lifelogs, such as daily walking steps and sleep time, collected through smartphones. A wellness score intuitively shows a user of a smart wellness service the overall condition of his or her health behaviors. LWI development includes LWI estimation (i.e., estimating the coefficients in the LWI from data). A panel data set of health-behavior lifelogs allows LWI estimation to control for variables unobserved in the LWI and hence to be less biased. Such panel data sets are likely to have missing data due to various random events of daily life (e.g., smart devices stop collecting data when their batteries run out). Missing data can introduce bias into the LWI coefficients. Thus, the choice of an appropriate missing-data handling method is important for reducing bias in LWI estimation with a panel data set of health-behavior lifelogs. However, relevant studies are scarce in the literature. OBJECTIVE This research aims to identify a suitable missing-data handling method for LWI estimation with panel data. Six representative methods (listwise deletion (LD), mean imputation, Expectation-Maximization (EM) based multiple imputation, Predictive-Mean Matching (PMM) based multiple imputation, k-Nearest Neighbors (k-NN) based imputation, and Low-rank Approximation (LA) based imputation) are comparatively evaluated by simulating an existing LWI development case. METHODS A panel data set of health-behavior lifelogs collected in the existing LWI development case was transformed into a reference data set. Two hundred simulated data sets were generated by randomly introducing missing data into the reference data set at each missingness proportion from 1% to 80%. The six methods were applied to transform the simulated data sets into complete data sets. Coefficients of a linear LWI were then estimated with each complete data set by following the case. Coefficient biases of the six methods were calculated by comparing the estimated coefficient values with reference values estimated from the reference data set. RESULTS Based on the coefficient biases, the best-performing methods changed with the missingness proportion: LA-based imputation, PMM-based multiple imputation, and EM-based multiple imputation for missingness proportions of 1% to 30%; LA-based imputation and PMM-based multiple imputation for 31% to 60%; and only LA-based imputation above 60%. CONCLUSIONS LA-based imputation was superior among the six methods regardless of the missingness proportion. This superiority should generalize to other panel data sets of health-behavior lifelogs, because existing work has verified their low-rank nature, under which LA-based imputation works well. This result can guide missing-data handling to reduce coefficient biases in new development cases of linear LWIs with panel data.
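Low-rank-approximation imputation can be sketched as iterative SVD ("hard impute"): fill the gaps, project onto a rank-r approximation, re-fill the gaps from the projection, and repeat. A minimal Python version on an exactly rank-1 toy matrix (the study's lifelog data and its algorithmic details differ):

```python
import numpy as np

def lowrank_impute(X, rank=1, n_iter=100):
    """Iterative SVD imputation: fill NaNs with column means, then repeatedly
    replace them with the corresponding entries of a rank-r approximation."""
    Xi = X.copy()
    miss = np.isnan(Xi)
    col_means = np.nanmean(X, axis=0)
    Xi[miss] = np.take(col_means, np.where(miss)[1])
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Xi, full_matrices=False)
        approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        Xi[miss] = approx[miss]
    return Xi

rng = np.random.default_rng(3)
# Exactly rank-1 toy "lifelog" matrix: 30 days x 4 behaviors.
X_true = np.outer(rng.uniform(1, 2, 30), rng.uniform(1, 2, 4))
X_miss = X_true.copy()
X_miss[[0, 5, 12], [1, 3, 0]] = np.nan          # knock out three entries
X_imp = lowrank_impute(X_miss, rank=1)
```

Because the complete matrix really is low-rank here, the iteration recovers the missing entries almost exactly, which mirrors the paper's argument for why LA-based imputation suits low-rank lifelog panels.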


2017 ◽  
Vol 21 (1) ◽  
pp. 111-149 ◽  
Author(s):  
Simon Grund ◽  
Oliver Lüdtke ◽  
Alexander Robitzsch
