IPED2X: A robust pedigree reconstruction algorithm for complicated pedigrees

2014 ◽  
Vol 12 (06) ◽  
pp. 1442007 ◽  
Author(s):  
Dan He ◽  
Eleazar Eskin

Reconstruction of family trees, or pedigree reconstruction, for a group of individuals is a fundamental problem in genetics. Some recent methods have been developed to reconstruct pedigrees using genotype data only. These methods are accurate and efficient for simple pedigrees containing only full siblings, where two individuals share the same pair of parents. A more recent method, IPED2, is able to handle complicated pedigrees with half-sibling relationships, where two individuals share only one parent. However, that method has been shown to miss many true half-sibling relationships, because it removes all suspicious half-sibling relationships during the parent-construction process. In this work, we propose a novel method, IPED2X, which deploys a more robust algorithm for parent construction by considering more possible operations than simple deletion. We convert the parent-construction problem into a graph-labeling problem and propose a more effective labeling algorithm. Our experiments show that IPED2X is more powerful at capturing true half-sibling relationships, which in turn leads to better reconstruction accuracy.

1991 ◽  
Vol 36 (5) ◽  
pp. 413-413
Author(s):  
Elizabeth A. Wehner ◽  
Wyndol Furman

2020 ◽  
pp. 147592172094500
Author(s):  
Haode Huo ◽  
Jingjing He ◽  
Xuefei Guan

This study presents a novel method for composite damage identification using Lamb waves. A probabilistic integration of the elliptical-loci method and RAPID (reconstruction algorithm for probabilistic inspection of defects) in a Bayesian framework is proposed. The proposed method allows multiple damage-sensitive features to be incorporated in a rational manner to improve reliability and robustness for a given array of sensors. Numerical studies are performed to verify the effectiveness of the proposed method and to compare its accuracy with existing methods. An experimental investigation using a realistic composite plate further validates the proposed method. The influence of damage location and of the number of participating sensors on the performance of the proposed method is discussed. Results indicate that the proposed method yields more accurate and reliable results compared with existing methods.
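The Bayesian-fusion idea described above can be illustrated with a toy sketch, not the paper's implementation: two damage-index maps (say, one from the elliptical-loci method and one from RAPID) are treated as independent evidence and combined by a normalized elementwise product. The 5×5 maps and the damage location are invented purely for illustration.

```python
# Toy sketch of naive Bayesian fusion of two damage-index maps.
# The maps and damage location below are hypothetical.
import numpy as np

def fuse_maps(map_a, map_b):
    """Combine two non-negative damage-index maps into a
    posterior-like map: elementwise product, normalized to sum to 1."""
    post = map_a * map_b
    return post / post.sum()

# Two toy 5x5 maps that each weakly favour the true damage pixel (2, 3)
a = np.ones((5, 5)); a[2, 3] = 3.0
b = np.ones((5, 5)); b[2, 3] = 2.0
fused = fuse_maps(a / a.sum(), b / b.sum())

peak = tuple(int(i) for i in np.unravel_index(fused.argmax(), fused.shape))
print(peak)  # → (2, 3): the fused map sharpens the shared peak
```

Multiplying the maps rewards locations that both methods flag, which is why fusing features tends to improve robustness over either map alone.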


2014 ◽  
Vol 553 ◽  
pp. 564-569
Author(s):  
Yaseen Unnisa ◽  
Danh Tran ◽  
Fu Chun Huang

Independent Component Analysis (ICA) is a recent method of blind source separation; it has been employed in medical image processing and structural damage detection. It can extract the source signals and the unmixing matrix of a system using the mixture signals only. This method relies on the assumption that the source signals are statistically independent. This paper looks at various measures of statistical independence (SI) employed in ICA, the measures proposed by Bakirov and his associates, and the effects of the level of SI of the source signals on the output of ICA. First, two statistically independent signals in the form of uniform random signals and a mixing matrix were used to simulate mixture signals to be analysed by the fastICA package; second, noise was added to the signals to investigate the effects of the level of SI on the output of ICA in the form of the source signals and the mixing and unmixing matrices. It was found that the p-value given by Bakirov's SI statistical test of the null hypothesis H0 is a good indication of the SI between two variables, and that for p-values larger than 0.05, fastICA performs satisfactorily.
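The mixing-and-unmixing setup described above can be sketched as follows; this uses scikit-learn's FastICA rather than the R fastICA package used in the paper, and the mixing matrix is an arbitrary example.

```python
# Sketch: separating two mixed uniform sources with FastICA.
# scikit-learn's FastICA stands in for the R fastICA package here.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 2000
# Two statistically independent uniform source signals
S = rng.uniform(-1, 1, size=(n, 2))
# A fixed mixing matrix produces the observed mixtures X = S A^T
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])
X = S @ A.T

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)   # recovered sources (up to scale/order/sign)
A_est = ica.mixing_            # estimated mixing matrix

# The recovered sources should be nearly uncorrelated
print(abs(np.corrcoef(S_est.T)[0, 1]) < 0.1)  # → True
```

Because ICA only assumes statistical independence and non-Gaussianity, the recovered sources match the originals up to permutation, sign, and scale, which is why the check above is on decorrelation rather than exact equality.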


Author(s):  
Mirosław Pawlak ◽  
Gurmukh Singh Panesar ◽  
Marcin Korytkowski

In this paper we propose a novel method for invariant image reconstruction with a properly selected degree of symmetry. We make use of Zernike radial moments to represent an image, owing to their invariance under isometry transformations and their ability to uniquely represent the salient features of the image. A regularized ridge-regression estimation strategy under symmetry constraints for estimating the Zernike moments is proposed. This extended regularization problem allows us to enforce bilateral symmetry in the reconstructed object. This is achieved by a proper choice of two regularization parameters controlling the level of reconstruction accuracy and the acceptable degree of symmetry. As a byproduct of our studies, we propose an algorithm for estimating the angle of the symmetry axis, which in turn is used to determine the possible asymmetry present in the image. The proposed model for image recovery under symmetry constraints is tested in a number of experiments involving image reconstruction and symmetry estimation.
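The estimation backbone of the approach, ridge regression, can be sketched in a few lines. The paper's variant adds a second, symmetry-enforcing penalty that is not shown here, and the design matrix and coefficients below are synthetic stand-ins, not Zernike moments.

```python
# Sketch of plain ridge regression: beta minimizing
# ||y - X beta||^2 + lam ||beta||^2 has the closed form
# (X^T X + lam I)^{-1} X^T y. Synthetic data for illustration only.
import numpy as np

def ridge(X, y, lam):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.01 * rng.standard_normal(100)

beta_hat = ridge(X, y, lam=0.1)
print(np.allclose(beta_hat, beta_true, atol=0.1))  # → True
```

The symmetry-constrained version in the paper would add a second regularization term (with its own parameter) penalizing departures from bilateral symmetry, trading reconstruction accuracy against the enforced degree of symmetry.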


2021 ◽  
Vol 22 (2) ◽  
Author(s):  
Vinod Prasad

A fundamental problem in computational biology is to deal with circular patterns. The problem consists of finding, in a database, all substrings of at least a certain length of a pattern and its rotations. In this paper, a novel method is presented to deal with circular patterns. The problem is solved in two incremental steps. First, an algorithm is provided that reports all substrings of a given linear pattern in an online text. Next, without losing efficiency, the algorithm is extended to process all circular rotations of the pattern. For a given pattern P of size M and a text T of size N, the algorithm reports all locations in the text where a substring of Pc is found, where Pc is one of the rotations of P. For an alphabet of size σ, using O(M) space, the desired goals are achieved in O(MN/σ) average time, which is O(N) for all patterns of length M ≤ σ. Traditional string-processing algorithms make use of advanced data structures such as suffix trees and automata. We show that basic data structures such as arrays can be used in text-processing algorithms without compromising efficiency.
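The rotation idea underlying such methods can be sketched naively: every rotation of P is a length-M substring of P concatenated with itself, so a window of T matches some rotation of P exactly when it occurs in P + P. This O(MN) toy matches full-length rotations only and is not the paper's algorithm.

```python
# Naive sketch of the doubling trick for circular pattern matching.
# Every rotation of `pattern` is a substring of pattern + pattern.
def circular_matches(pattern: str, text: str):
    """Return start positions in `text` where some rotation of
    `pattern` occurs. O(M*N) illustration, not an efficient method."""
    m = len(pattern)
    doubled = pattern + pattern
    return [i for i in range(len(text) - m + 1)
            if text[i:i + m] in doubled]

# Rotations of "abc" are abc, bca, cab; two of them occur below.
print(circular_matches("abc", "xcabybca"))  # → [1, 5]
```

An efficient algorithm would avoid re-scanning the doubled pattern for each window, but the doubling trick itself is what reduces "all rotations of P" to a single linear pattern.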


F1000Research ◽  
2018 ◽  
Vol 7 ◽  
pp. 1521 ◽  
Author(s):  
Brian C. Ross ◽  
James C. Costello

We previously published a method that infers chromosome conformation from images of fluorescently tagged genomic loci, for the case when many loci are labeled with each distinguishable color. Here we build on that work and improve the reconstruction algorithm to address its previous limitations. We show that these improvements 1) increase the reconstruction accuracy and 2) allow the method to be used on large-scale problems involving several hundred labeled loci. Simulations indicate that full-chromosome reconstructions at 1/2 Mb resolution are possible using existing labeling and imaging technologies. The updated reconstruction code and the script files used for this paper are available at: https://github.com/heltilda/align3d.


2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Zhiwei Qiao ◽  
Gage Redler ◽  
Boris Epel ◽  
Howard Halpern

Purpose. The total variation (TV) minimization algorithm is an effective image reconstruction algorithm capable of accurately reconstructing images from sparse and/or noisy data. The TV model consists of two terms: a data-fidelity term and a TV regularization term. Two constrained TV models, data-divergence-constrained TV minimization (DDcTV) and TV-constrained data-divergence minimization (TVcDM), have been successfully applied to computed tomography (CT) and electron paramagnetic resonance imaging (EPRI). In this work, we propose a new constrained TV model, a doubly constrained TV (dcTV) model, in which both terms take the form of constraints; this has the potential to further improve reconstruction accuracy. Methods. We perform an inverse-crime study to validate the model and its Chambolle-Pock (CP) solver, and we characterize the performance of the dcTV-CP algorithm in the context of CT. To demonstrate the superiority of the dcTV model, we compare its convergence rate and reconstruction accuracy with the DDcTV and TVcDM models via simulated data. Results and Conclusions. The performance-characterizing study shows that the dcTV-CP algorithm is accurate and convergent, with the model parameters impacting the reconstruction accuracy and the algorithm parameters impacting the convergence path and rate. The comparison studies show that the dcTV-CP algorithm has a relatively fast convergence rate and can achieve higher reconstruction accuracy from sparse or noisy projections relative to the two single-constrained TV models. The knowledge and insights gained in this work may be utilized in applying the new model to other imaging modalities, including divergent-beam CT, magnetic resonance imaging (MRI), positron emission tomography (PET), and EPRI.
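The two terms of a TV model can be illustrated with a toy 1-D example. Note the paper solves constrained formulations with a Chambolle-Pock solver; this sketch instead minimizes a smoothed, unconstrained objective (fidelity plus TV penalty) by plain gradient descent on synthetic data.

```python
# Toy 1-D illustration of the two TV-model terms: a data-fidelity
# term 0.5*||x - y||^2 and a (smoothed) TV term lam * sum sqrt((x[i+1]-x[i])^2 + eps).
# Plain gradient descent stands in for the Chambolle-Pock solver.
import numpy as np

def tv_denoise_1d(y, lam=0.5, eps=1e-2, steps=3000, lr=0.05):
    x = y.copy()
    for _ in range(steps):
        grad_fid = x - y                      # gradient of fidelity term
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)          # smoothed sign of differences
        grad_tv = np.zeros_like(x)
        grad_tv[:-1] -= g                     # each difference couples
        grad_tv[1:] += g                      # two neighbouring samples
        x -= lr * (grad_fid + lam * grad_tv)
    return x

rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(50), np.ones(50)])   # piecewise constant
noisy = clean + 0.2 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy)

mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((denoised - clean) ** 2))
print(mse_denoised < mse_noisy)
```

TV regularization suppresses small oscillations while largely preserving the jump, which is why it suits piecewise-constant-like images reconstructed from sparse or noisy projections.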

