Haplotype and Missing Data Inference in Nuclear Families

2004 ◽  
Vol 14 (8) ◽  
pp. 1624-1632 ◽  
Author(s):  
S. Lin
1999 ◽  
Vol 15 (2) ◽  
pp. 197-204 ◽  
Author(s):  
Deborah P. Lubeck ◽  
David J. Pasta ◽  
Scott C. Flanders ◽  
James M. Henning

1988 ◽  
Vol 21 (4) ◽  
pp. 349-366 ◽  
Author(s):  
Kim M. Albridge ◽  
Jim Standish ◽  
James F. Fries

Author(s):  
Cai Xu ◽  
Ziyu Guan ◽  
Wei Zhao ◽  
Hongchang Wu ◽  
Yunfei Niu ◽  
et al.

Multi-view clustering aims to leverage information from multiple views to improve clustering. Most previous works assumed that each view has complete data. However, in real-world datasets, a view often contains some missing data, giving rise to the incomplete multi-view clustering problem. Previous methods for this problem have at least one of the following drawbacks: (1) employing shallow models, which cannot adequately capture the dependence and discrepancy among different views; (2) ignoring the hidden information of the missing data; (3) handling only the two-view case. To eliminate all these drawbacks, in this work we present an Adversarial Incomplete Multi-view Clustering (AIMC) method. Unlike most existing methods, which only learn a new representation from the existing views, AIMC seeks the common latent space of multi-view data and performs missing data inference simultaneously. In particular, element-wise reconstruction and a generative adversarial network (GAN) are integrated to infer the missing data; they aim to capture the overall structure and a deeper semantic understanding, respectively. Moreover, an aligned clustering loss is designed to obtain a better clustering structure. Experiments conducted on three datasets show that AIMC performs well and outperforms baseline methods.
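The two inference terms the abstract describes can be sketched numerically. Below is a minimal, hypothetical illustration (not the authors' implementation): a masked element-wise reconstruction loss computed only over observed entries, and the standard non-saturating GAN losses that an adversarial missing-view generator would optimize. The function names and the toy data are assumptions for illustration.

```python
import numpy as np

def masked_reconstruction_loss(X, X_hat, mask):
    """Element-wise squared error over observed entries only.
    X, X_hat: (n_samples, n_features); mask: 1 where observed, 0 where missing."""
    diff = (X - X_hat) ** 2 * mask
    return diff.sum() / max(mask.sum(), 1.0)

def adversarial_losses(d_real, d_fake):
    """Non-saturating GAN losses from discriminator scores in (0, 1).
    d_real: scores on observed views; d_fake: scores on inferred (generated) views."""
    eps = 1e-8
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss

# Toy example: one view where the third sample is missing and a stand-in
# generator output fills it in.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
mask = np.ones_like(X)
mask[2] = 0.0                                     # third sample's view is missing
X_hat = X + rng.normal(scale=0.1, size=X.shape)   # stand-in for generator output

rec = masked_reconstruction_loss(X, X_hat, mask)
d_loss, g_loss = adversarial_losses(np.array([0.9, 0.8]), np.array([0.3]))
print(rec, d_loss, g_loss)
```

In a full model these terms would be weighted and summed with the aligned clustering loss; the weights and network architectures are design choices the abstract does not specify.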


2019 ◽  
Author(s):  
Robert Kubinec

This paper presents an item-response theory parameterization of ideal points that unifies existing approaches to ideal point models while also extending them. For time-varying inference, the model permits ideal points to vary in a random walk, in a stationary autoregressive process, or in a semi-parametric Gaussian process. For missing data, the model implements a two-stage selection adjustment to account for non-ignorable missingness. In addition, the ideal point model is extended to handle new distributions, including continuous, positive-continuous and ordinal data. To enable modeling of datasets with mixed data (discrete and continuous), I incorporate joint modeling of different distributions. Finally, I also address ways of implementing Bayesian inference with big data sets, including variational inference and within-chain MCMC parallelization.
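Of the three time-varying specifications mentioned, the random walk is the simplest: each ideal point at time t equals its value at t-1 plus Gaussian noise, and votes follow a two-parameter IRT logit. The sketch below simulates that process; the function names, parameter values, and priors are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def simulate_rw_ideal_points(n_legislators, n_periods, sd=0.3, seed=0):
    """Simulate ideal points evolving as a Gaussian random walk:
    theta[t] = theta[t-1] + Normal(0, sd)."""
    rng = np.random.default_rng(seed)
    theta = np.zeros((n_periods, n_legislators))
    theta[0] = rng.normal(size=n_legislators)     # initial ideal points
    for t in range(1, n_periods):
        theta[t] = theta[t - 1] + rng.normal(scale=sd, size=n_legislators)
    return theta

def vote_probability(theta_t, discrimination, difficulty):
    """Two-parameter IRT response: P(yea) = logistic(gamma * theta - beta)."""
    z = discrimination * theta_t - difficulty
    return 1.0 / (1.0 + np.exp(-z))

theta = simulate_rw_ideal_points(n_legislators=5, n_periods=10)
p = vote_probability(theta[0], discrimination=1.2, difficulty=0.4)
print(theta.shape, p.shape)
```

The stationary AR(1) variant would replace the walk with `theta[t] = rho * theta[t-1] + noise` for some |rho| < 1; the Gaussian-process variant instead draws each legislator's trajectory from a GP over time.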

