SWSPM: A Novel Alignment-Free DNA Comparison Method Based on Signal Processing Approaches

2019
Vol 15
pp. 117693431984907
Author(s):
Tomáš Farkaš
Jozef Sitarčík
Broňa Brejová
Mária Lucká

Computing the similarity between two nucleotide sequences is one of the fundamental problems in bioinformatics. Current methods are based mainly on two major approaches: (1) sequence alignment, which is computationally expensive, and (2) faster, but less accurate, alignment-free methods based on various statistical summaries, for example, short word counts. We propose a new distance measure based on mathematical transforms from the domain of signal processing. To tolerate large-scale rearrangements in the sequences, the transform is computed across sliding windows. We compare our method on several data sets with current state-of-the-art alignment-free methods. Our method compares favorably in terms of accuracy and outperforms other methods in running time and memory requirements. In addition, it is massively scalable up to dozens of processing units without loss of performance due to communication overhead. Source files and sample data are available at https://bitbucket.org/fiitstubioinfo/swspm/src
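
To make the idea concrete, here is a minimal sketch of a sliding-window, transform-based distance between two DNA sequences. The numeric encoding, the window and step sizes, the FFT magnitude spectrum, and the best-match averaging are all illustrative assumptions, not the exact SWSPM transform:

import numpy as np

# Assumed numeric encoding of nucleotides (illustrative, not SWSPM's).
ENCODING = {"A": 1.0, "C": -1.0, "G": 1.0j, "T": -1.0j}

def to_signal(seq):
    """Map a nucleotide string to a complex-valued signal."""
    return np.array([ENCODING.get(c, 0.0) for c in seq.upper()])

def window_spectra(signal, window=128, step=64):
    """FFT magnitude spectrum of each sliding window."""
    return np.array([np.abs(np.fft.fft(signal[s:s + window]))
                     for s in range(0, len(signal) - window + 1, step)])

def transform_distance(seq_a, seq_b, window=128, step=64):
    """Match each window of one sequence to its closest window in the
    other (tolerating rearrangements), then average the spectral distances."""
    sa = window_spectra(to_signal(seq_a), window, step)
    sb = window_spectra(to_signal(seq_b), window, step)
    d = np.linalg.norm(sa[:, None, :] - sb[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

Because each window's spectrum is computed independently, the windows can be distributed across processing units with essentially no communication, which is consistent with the scalability the authors report.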

2021
Author(s):
Yang Young Lu
Yiwen Wang
Fang Zhang
Jiaxing Bai
Ying Wang

Motivation: Understanding the phylogenetic relationships among organisms is key in contemporary evolutionary study, and sequence analysis is the workhorse towards this goal. Conventional approaches to sequence analysis are based on sequence alignment, which is neither scalable to large-scale data sets, due to computational inefficiency, nor adaptive to next-generation sequencing (NGS) data. Alignment-free approaches are typically used as computationally efficient alternatives, yet they still suffer from high memory consumption. A desirable method for sequence comparison at large scale requires succinctly organized sequence data management, as well as prompt sequence retrieval given a never-before-seen sequence as a query.

Results: In this paper, we propose a novel approach, referred to as SAINT, for efficient and accurate alignment-free sequence comparison. Compared to existing alignment-free sequence comparison methods, SAINT offers advantages in two respects: (1) SAINT is a weakly supervised learning method in which the embedding function is learned automatically from easily acquired data; (2) SAINT utilizes a non-linear, deep learning-based model which potentially better captures the complicated relationships among genome sequences. We have applied SAINT to real-world data sets to demonstrate its empirical utility, both qualitatively and quantitatively. Considering the broad applicability of alignment-free sequence comparison methods, we expect SAINT to motivate a more extensive set of applications in sequence comparison at large scale.

Availability: The open-source, Apache-licensed, Python-implemented code will be available upon acceptance.

Supplementary information: Supplementary data are available at Bioinformatics online.
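
Since the code is not yet released, the following sketch only illustrates the two ingredients named above: a weakly supervised objective over easily acquired related/unrelated sequence pairs, and a non-linear embedding model. The k-mer featurization, network shape, and triplet loss are assumptions, not SAINT's published architecture:

import itertools
import torch
import torch.nn as nn

K = 4  # k-mer length (assumed)
KMERS = {"".join(p): i for i, p in enumerate(itertools.product("ACGT", repeat=K))}

def kmer_vector(seq):
    """Normalized k-mer frequency vector of a sequence."""
    v = torch.zeros(len(KMERS))
    seq = seq.upper()
    for i in range(len(seq) - K + 1):
        idx = KMERS.get(seq[i:i + K])
        if idx is not None:
            v[idx] += 1.0
    total = v.sum()
    return v / total if total > 0 else v

# Non-linear embedding function, learned rather than hand-designed.
encoder = nn.Sequential(nn.Linear(len(KMERS), 256), nn.ReLU(), nn.Linear(256, 64))
loss_fn = nn.TripletMarginLoss(margin=1.0)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def train_step(anchor, related, unrelated):
    """One weakly supervised update: pull a related pair together in the
    embedding space and push an unrelated sequence away."""
    a = encoder(kmer_vector(anchor).unsqueeze(0))
    p = encoder(kmer_vector(related).unsqueeze(0))
    n = encoder(kmer_vector(unrelated).unsqueeze(0))
    loss = loss_fn(a, p, n)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

After training, comparing two sequences reduces to a distance between their fixed-length embeddings, which is what makes retrieval of a never-before-seen query cheap at large scale.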


2009
Vol 39 (3)
pp. 131-140
Author(s):
Philip R. O. Payne
Peter J. Embi
Chandan K. Sen

A common thread throughout the clinical and translational research domains is the need to collect, manage, integrate, analyze, and disseminate large-scale, heterogeneous biomedical data sets. However, well-established and broadly adopted theoretical and practical frameworks and models intended to address such needs are conspicuously absent from the published literature and other reputable knowledge sources. Instead, the development and execution of multidisciplinary clinical or translational studies are significantly limited by the propagation of "silos" of both data and expertise. Motivated by this fundamental challenge, we report on the current state and evolution of biomedical informatics as it pertains to the conduct of high-throughput clinical and translational research, and we present both a conceptual and a practical framework for the design and execution of informatics-enabled studies. The objective of presenting such findings and constructs is to provide the clinical and translational research community with a common frame of reference for discussing and expanding upon such models and methodologies.


2018
Author(s):
Aritra Mahapatra
Jayanta Mukherjee

In our study, we attempt to extract novel features from mitochondrial genomic sequences that reflect their evolutionary traits, using our proposed method GRAFree (GRaphical footprint based Alignment-Free method). These features are used to build a phylogenetic tree for a given set of insect, fish, bird, and mammal species. A novel distance measure in the feature space is proposed to reflect the proximity of these species in the evolutionary process, and the distance function is shown to be a metric. We propose a three-step technique to select a feature vector from the feature space. We generate variations of the selected feature vectors to produce multiple hypothesis trees and finally apply a consensus-based tree-merging algorithm to obtain the phylogeny. Experiments were carried out with 157 species covering four classes: Insecta, Actinopterygii, Aves, and Mammalia. We also introduce a measure of the quality of the inferred tree for use when a reference tree is not available: the quality of the output tree can be measured at each clade by considering the presence of each species at the corresponding clade. GRAFree can be applied to any graphical representation of a genome to reconstruct the phylogenetic tree. We apply the proposed distance function to the feature vectors selected from three naive methods of graphical representation of the genome. The inferred tree reflects some accepted evolutionary traits with high bootstrap support. We conclude that the proposed distance function can capture the evolutionary relationships of a large number of both closely and distantly related species using graphical methods.
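
As a toy illustration of the pipeline's ingredients, the sketch below uses a simple 2-D "DNA walk" as the graphical representation, samples it into a fixed-length feature vector, and compares species with a Euclidean distance. GRAFree's actual footprint, three-step feature selection, and metric differ; the step directions and the sampling scheme here are assumptions:

import numpy as np

STEPS = {"A": (1, 0), "T": (-1, 0), "G": (0, 1), "C": (0, -1)}  # assumed walk

def dna_walk(seq):
    """Cumulative 2-D walk over the sequence (a naive graphical footprint)."""
    return np.cumsum([STEPS.get(c, (0, 0)) for c in seq.upper()], axis=0)

def feature_vector(seq, n_points=64):
    """Sample the walk at n_points evenly spaced positions and flatten."""
    walk = dna_walk(seq)
    idx = np.linspace(0, len(walk) - 1, n_points).astype(int)
    return walk[idx].ravel().astype(float)

def distance(seq_a, seq_b, n_points=64):
    """Euclidean distance between footprint features; as a norm of a
    difference, it satisfies the metric axioms on the feature space."""
    return float(np.linalg.norm(feature_vector(seq_a, n_points)
                                - feature_vector(seq_b, n_points)))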


Author(s):  
Denny M. Oliveira
Eftyhia Zesta
Piyush M. Mehta
Richard J. Licata
Marcin D. Pilinski
...  

Satellites, crewed spacecraft, and stations in low-Earth orbit (LEO) are very sensitive to atmospheric drag. Predictions of a satellite's lifetime and orbital track become increasingly inaccurate or uncertain during magnetic storms. Given the planned increase of government and private satellite presence in LEO, the need for accurate density predictions for collision avoidance and lifetime optimization, particularly during extreme events, has become an urgent matter and requires comprehensive international collaboration. Additionally, long-term solar activity models and historical data suggest that solar activity will increase significantly in the coming years and decades. In this article, we briefly summarize the main achievements in research on the thermosphere's response to extreme magnetic storms, particularly since the launch of many satellites carrying state-of-the-art accelerometers from which high-accuracy densities can be determined. We find that an empirical model with data assimilation performs better during all extreme storm phases than the same model without data assimilation. We discuss how forecasting models can be improved by looking in two directions: first, to the past, by adapting historical extreme-storm data sets for density predictions, and second, to the future, by facilitating the assimilation of large-scale thermosphere data sets that will be collected during future events. This topic is therefore relevant to the scientific community, government agencies that operate satellites, and the private sector with assets operating in LEO.


2017
Vol 4 (1)
pp. 41-52
Author(s):  
Dedy Loebis

This paper presents the results of work undertaken to develop and test contrasting data analysis approaches for the detection of bursts/leaks and other anomalies within water supply systems at district meter area (DMA) level. This was conducted on Yorkshire Water (YW) sample data sets from the Harrogate and Dales (H&D), Yorkshire, United Kingdom water supply network as part of Project NEPTUNE (EP/E003192/1). A data analysis system based on Kalman filtering and statistical approaches has been developed and applied to the analysis of flow and pressure data. The system was tested on one data set and showed the ability to detect anomalies in flow and pressure patterns by correlating with other information. It will be shown that the Kalman/statistical approach is promising for detecting subtle changes and higher-frequency features; it has the potential to identify precursor features and smaller leaks, and hence could be useful for monitoring the development of leaks prior to a large-volume burst event.
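
A minimal sketch of the detection idea, assuming a scalar random-walk model of flow: a Kalman filter tracks the expected flow, and a statistical test on the innovation (measurement minus prediction) flags bursts. The noise parameters and the 3-sigma rule below are illustrative choices, not Project NEPTUNE's calibrated values:

import numpy as np

def detect_anomalies(flow, q=1e-4, r=1e-2, threshold=3.0):
    """Return indices where the normalized innovation exceeds the threshold."""
    x, p = flow[0], 1.0              # state estimate and its variance
    anomalies = []
    for t, z in enumerate(flow[1:], start=1):
        p = p + q                    # predict: random-walk process noise
        innovation = z - x
        s = p + r                    # innovation variance
        if abs(innovation) / np.sqrt(s) > threshold:
            anomalies.append(t)      # statistically surprising measurement
        k = p / s                    # Kalman gain
        x = x + k * innovation       # update state with the measurement
        p = (1.0 - k) * p
    return anomalies

Running the same filter on pressure data and cross-checking flagged times between the two channels is in the spirit of the paper's correlation of anomalies with other information.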


Author(s):  
A. V. Ponomarev

Introduction: Large-scale human-computer systems that involve people of various skills and motivation in information processing are currently used in a wide spectrum of applications. An acute problem in such systems is assessing the expected quality of each contributor, for example, in order to penalize incompetent or inaccurate contributors and to promote diligent ones.

Purpose: To develop a method of assessing a contributor's expected quality in community tagging systems, using only the generally unreliable and incomplete information provided by the contributors themselves (with ground-truth tags unknown).

Results: A mathematical model is proposed for community image tagging (including a model of a contributor), along with a method of assessing a contributor's expected quality. The method is based on comparing the tag sets provided by different contributors for the same images; it is a modification of the pairwise comparison method with the preference relation replaced by a special domination characteristic. Expected contributor quality is evaluated as a positive eigenvector of the pairwise domination characteristic matrix. Community tagging simulations have confirmed that the proposed method adequately estimates the expected quality of contributors, provided that contributor behavior fits the proposed model.

Practical relevance: The obtained results can be used in the development of systems based on the coordinated efforts of a community, primarily community tagging systems.
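
The aggregation step lends itself to a short sketch: given a pairwise domination-characteristic matrix D, where D[i, j] quantifies how contributor i's tags dominate contributor j's on shared images, the quality scores are taken as the positive principal eigenvector, computed here by power iteration. How D is built from the tag sets follows the paper's model and is not reproduced here; by the Perron-Frobenius theorem, a positive eigenvector exists when D is non-negative and irreducible:

import numpy as np

def quality_scores(D, iters=1000, tol=1e-10):
    """Principal eigenvector of a non-negative matrix D by power iteration."""
    v = np.ones(D.shape[0]) / D.shape[0]
    for _ in range(iters):
        w = D @ v
        w = w / w.sum()              # normalize so scores stay comparable
        if np.abs(w - v).max() < tol:
            break
        v = w
    return v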


Author(s):  
Lior Shamir

Several recent observations using large data sets of galaxies showed a non-random distribution of the spin directions of spiral galaxies, even when the galaxies are too far from each other to interact gravitationally. Here, a data set of $\sim8.7\cdot10^3$ spiral galaxies imaged by the Hubble Space Telescope (HST) is used to test and profile a possible asymmetry between galaxy spin directions. The asymmetry between galaxies with opposite spin directions is compared to the asymmetry of galaxies from the Sloan Digital Sky Survey (SDSS). The two data sets contain different galaxies at different redshift ranges, and each was annotated using a different method. Both data sets exhibit a similar asymmetry in the COSMOS field, which is covered by both telescopes. Fitting the asymmetry of the galaxies to a cosine dependence yields a dipole axis with statistical significance of $\sim2.8\sigma$ and $\sim7.38\sigma$ in HST and SDSS, respectively. The most likely dipole axis identified in the HST galaxies is at $(\alpha=78^{\rm o},\delta=47^{\rm o})$, well within the $1\sigma$ error range of the most likely dipole axis in the SDSS galaxies with $z>0.15$, identified at $(\alpha=71^{\rm o},\delta=61^{\rm o})$.
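
Schematically, the dipole fit scans candidate axes on a sky grid and, for each, fits the per-galaxy spin sign against the cosine of its angular distance from the axis; the axis with the largest fitted amplitude is reported. The grid resolution and the least-squares amplitude below are illustrative simplifications of the paper's procedure:

import numpy as np

def unit_vector(ra_deg, dec_deg):
    """Cartesian unit vector for equatorial coordinates in degrees."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.array([np.cos(dec) * np.cos(ra), np.cos(dec) * np.sin(ra), np.sin(dec)])

def best_dipole_axis(ra, dec, spins, grid_step=5):
    """ra, dec, spins: equal-length arrays, spins in {+1, -1}.
    Returns (axis_ra, axis_dec, amplitude)."""
    gal = np.stack([unit_vector(r, d) for r, d in zip(ra, dec)])
    best = (None, None, -np.inf)
    for ax_ra in range(0, 360, grid_step):
        for ax_dec in range(-90, 91, grid_step):
            cosang = gal @ unit_vector(ax_ra, ax_dec)
            a = (spins * cosang).sum() / (cosang ** 2).sum()  # LSQ amplitude
            if abs(a) > best[2]:
                best = (ax_ra, ax_dec, abs(a))
    return best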


Algorithms
2021
Vol 14 (5)
pp. 154
Author(s):  
Marcus Walldén
Masao Okita
Fumihiko Ino
Dimitris Drikakis
Ioannis Kokkinakis

Growing processing capabilities and tightening input/output constraints of supercomputers have increased the use of co-processing approaches, i.e., visualizing and analyzing simulation data sets on the fly. We present a method that evaluates the importance of different regions of simulation data, together with a data-driven approach that uses the proposed method to accelerate in-transit co-processing of large-scale simulations. We use the importance metrics to employ multiple compression methods simultaneously on different data regions, adaptively compressing data on the fly and using load balancing to counteract memory imbalances. We demonstrate the method's efficiency through a fluid mechanics application, a Richtmyer–Meshkov instability simulation, showing how to accelerate the in-transit co-processing of simulations. The results show that the proposed method can expeditiously identify regions of interest, even when using multiple metrics. Our approach achieved a speedup of 1.29× in a lossless scenario, and data decompression was sped up by 2× compared to using a single compression method uniformly.
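
A toy version of the selection logic: score each data block with an importance metric (here, the mean absolute finite difference) and route important blocks to lossless compression while the rest take a cheaper lossy path. The metric, the threshold, and the zlib/8-bit-quantization "compressors" are stand-ins for the paper's metrics and compression methods:

import zlib
import numpy as np

def importance(block):
    """Mean absolute finite difference as a simple region-importance metric."""
    return float(np.mean(np.abs(np.diff(block.astype(np.float64), axis=0))))

def compress_block(block, threshold=0.1):
    """Pick a compression path per block based on its importance score."""
    if importance(block) >= threshold:
        return ("lossless", zlib.compress(block.tobytes()))
    lo, hi = float(block.min()), float(block.max())
    q = ((block - lo) / max(hi - lo, 1e-12) * 255).astype(np.uint8)  # lossy 8-bit
    return ("lossy", zlib.compress(q.tobytes()))

In an in-transit setting, these per-block decisions can be made where the data is produced, so only compressed buffers cross the network to the visualization nodes.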

