Author(s):  
O.N. Pavlova ◽  
A.S. Abdurashitov ◽  
M.V. Ulanova ◽  
N.A. Shushunova ◽  
A.N. Pavlov

2020 ◽  
pp. 1471082X2092711
Author(s):  
Grigorios Papageorgiou ◽  
Dimitris Rizopoulos

Dropout is a common complication in longitudinal studies, especially since distinguishing between missing not at random (MNAR) and missing at random (MAR) dropout is intractable. Consequently, one typically starts with an analysis that is valid under MAR and then performs a sensitivity analysis by considering MNAR departures from it. To this end, specific classes of joint models, such as pattern-mixture models (PMMs) and selection models (SeMs), have been proposed. By contrast, shared-parameter models (SPMs) have received less attention, possibly because they do not embody a characterization of MAR. A few approaches to achieving MAR in SPMs exist, but they are difficult to implement in existing software. In this article, we focus on SPMs for incomplete longitudinal and time-to-dropout data and propose an alternative characterization of MAR by exploiting the conditional independence assumption, under which outcome and missingness are independent given a set of random effects. By doing so, the censoring distribution can be used to cover a wide range of assumptions about the missing-data mechanism at the subject-specific level. This approach offers substantial advantages over its counterparts and can be easily implemented in existing software. More specifically, it offers flexibility in the assumed missing-data mechanism governing dropout by allowing subject-specific perturbations of the censoring distribution, whereas in PMMs and SeMs dropout is strictly MNAR.
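The conditional-independence assumption described in this abstract can be sketched in generic shared-parameter-model notation (a standard formulation, not notation taken from the article itself): for subject i, with longitudinal outcome y_i, dropout time T_i, and random effects b_i,

```latex
% Shared-parameter model: outcome and dropout are independent given b_i
f(y_i, T_i \mid b_i) = f(y_i \mid b_i)\, f(T_i \mid b_i)

% Observed-data likelihood: the shared random effects are integrated out
f(y_i, T_i) = \int f(y_i \mid b_i)\, f(T_i \mid b_i)\, f(b_i)\, \mathrm{d}b_i
```

Dependence between outcome and dropout is induced solely through the shared b_i; perturbing the subject-specific dropout distribution f(T_i | b_i) is, per the abstract, how a range of missing-data mechanisms can be covered.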


2020 ◽  
Vol 25 (8) ◽  
pp. 950-956 ◽  
Author(s):  
James A. Lumley ◽  
Gary Sharman ◽  
Thomas Wilkin ◽  
Matthew Hirst ◽  
Carlos Cobas ◽  
...  

Adequate characterization of the chemical entities made for biological screening in drug discovery is critical. Incorrectly characterized structures lead to mistakes in the interpretation of structure–activity relationships and confuse an already multidimensional optimization problem. Mistakes in the later use of these compounds waste money and valuable resources in a discovery process already under cost pressure. Left unidentified, these errors cause problems in project data packages during quality review; at worst, they put intellectual property and patent integrity at risk. We describe a KNIME workflow for the early, automated identification of these errors during registration of a new chemical entity into the corporate screening catalog. This Automated Structure Verification workflow identifies missing or inconsistent analytical data early (within 24 hours) and therefore reduces the mistakes that would otherwise inevitably be made. Automated identification removes the burden of work from the chemist submitting the compound into the registration system: no additional work is required unless a problem is identified, in which case the submitter is alerted. Before implementation, 14% of samples within the existing sample catalog were missing data on initial pass; a year after implementation, only 0.2% were.
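The kind of check such a registration-time workflow performs can be sketched as follows. This is a hypothetical illustration, not the authors' KNIME workflow: all record field names (`nmr_spectrum`, `lcms_trace`, `expected_mass`, `measured_mass`) and function names are invented for the example.

```python
# Hypothetical sketch of an automated structure-verification check:
# flag newly registered compounds whose analytical data are missing or
# inconsistent, so only the flagged submitters need to be alerted.

def verify_registration(record):
    """Return a list of problems found in one registration record."""
    problems = []
    # Completeness check: required analytical data must be present.
    for field in ("nmr_spectrum", "lcms_trace"):
        if not record.get(field):
            problems.append("missing " + field)
    # Consistency check: the measured mass should match the mass expected
    # from the registered structure, within a small tolerance.
    expected = record.get("expected_mass")
    measured = record.get("measured_mass")
    if expected is not None and measured is not None:
        if abs(expected - measured) > 0.01:
            problems.append("mass mismatch between structure and LC-MS data")
    return problems

def flag_for_review(records):
    """Map compound id -> problems, keeping only records needing follow-up."""
    return {cid: p for cid, rec in records.items()
            if (p := verify_registration(rec))}
```

Compounds whose records pass both checks drop out of the result, so the chemist sees nothing unless a problem was found, mirroring the alert-only-on-failure behavior described in the abstract.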


2015 ◽  
Vol 49 (1) ◽  
pp. 146-154 ◽  
Author(s):  
Aaron B. Mendelsohn ◽  
Nancy A. Dreyer ◽  
Pattra W. Mattox ◽  
Zhaohui Su ◽  
Anna Swenson ◽  
...  

2010 ◽  
Author(s):  
Zia Nadir ◽  
Muhammad Idrees Ahmad ◽  
Sio-Iong Ao ◽  
Hideki Katagiri ◽  
Li Xu ◽  
...  

Metabolomics ◽  
2018 ◽  
Vol 14 (10) ◽  
Author(s):  
Kieu Trinh Do ◽  
Simone Wahl ◽  
Johannes Raffler ◽  
Sophie Molnos ◽  
Michael Laimighofer ◽  
...  

Biometrika ◽  
2021 ◽  
Author(s):  
D Farewell ◽  
R Daniel ◽  
S Seaman

We offer a natural and extensible measure-theoretic treatment of missingness at random. Within the standard missing-data framework, we give a novel characterization of the observed data as a stopping-set sigma algebra. We demonstrate that the usual missingness-at-random conditions are equivalent to requiring particular stochastic processes to be adapted to a set-indexed filtration. These measurability conditions ensure the usual factorization of likelihood ratios. We illustrate how the theory extends easily to incorporate explanatory variables, to describe longitudinal data in continuous time, and to admit more general coarsening of observations.
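For orientation, the classical (non-measure-theoretic) MAR condition that this abstract generalizes, and the likelihood factorization it ensures, can be written in standard notation (not the paper's sigma-algebra formulation): with full data y = (y_obs, y_mis) and missingness indicator r,

```latex
% MAR: the missingness law depends only on the observed components
f(r \mid y_{\mathrm{obs}}, y_{\mathrm{mis}}) = f(r \mid y_{\mathrm{obs}})

% Resulting factorization of the observed-data likelihood
f(y_{\mathrm{obs}}, r) = f(r \mid y_{\mathrm{obs}})\, f(y_{\mathrm{obs}})
```

The paper recasts conditions of this kind as adaptedness (measurability) requirements on stochastic processes with respect to a set-indexed filtration, from which the likelihood-ratio factorization follows in greater generality.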

