Efficient Change-Points Detection For Genomic Sequences Via Cumulative Segmented Regression

Author(s):  
Shengji Jia ◽  
Lei Shi

Abstract Motivation: Knowing the number and exact locations of multiple change points in genomic sequences serves several biological needs. The cumulative segmented algorithm (cumSeg) has recently been proposed as a computationally efficient approach to multiple change-point detection; it is based on a simple transformation of the data and provides results quite robust to model mis-specification. However, the errors also accumulate in the transformed model, so heteroscedasticity and serial correlation appear, and the estimated change points end up with quite different variances, even though all change-point locations are equally important in the original genomic sequences. Results: In this study, we develop two new change-point detection procedures in the framework of cumulative segmented regression. Simulations reveal that the proposed methods not only substantially improve the efficiency of each change-point estimator but also give the estimators similar variances across all change points. By applying the proposed algorithms to Coriell and SNP genotyping data, we illustrate their performance in detecting copy number variations. Supplementary information: The proposed algorithms are implemented in R and are available at Bioinformatics online.
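The transformation at the heart of the cumulative segmented approach can be illustrated briefly: cumulating a sequence whose mean is piecewise constant yields a series whose mean is piecewise linear, so mean shifts become slope breaks. The Python sketch below is a simplified single-change-point illustration of that idea (a discontinuous two-line grid-search fit), not the authors' cumSeg estimator:

```python
import numpy as np

def cumseg_one_break(y):
    """Simplified sketch of the cumulative-segmented idea for ONE change
    point: cumulate the data (mean shifts become slope breaks), then
    grid-search the break of a two-piece linear fit to the cumulated
    series. An illustration, not the authors' cumSeg estimator."""
    s = np.cumsum(y)
    t_axis = np.arange(1, len(y) + 1, dtype=float)
    best_t, best_sse = None, np.inf
    for t in range(2, len(y) - 1):
        sse = 0.0
        for lo, hi in ((0, t), (t, len(y))):
            # least-squares line fit to this piece of the cumulated series
            A = np.column_stack([t_axis[lo:hi], np.ones(hi - lo)])
            resid = s[lo:hi] - A @ np.linalg.lstsq(A, s[lo:hi], rcond=None)[0]
            sse += float(resid @ resid)
        if sse < best_sse:
            best_t, best_sse = t, sse
    return best_t
```

On noiseless piecewise-constant data the cumulated series is exactly piecewise linear, and the grid search recovers the break.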

2018 ◽  
Vol 2018 ◽  
pp. 1-10
Author(s):  
Dan Zhuang ◽  
Youbo Liu

A Fast Screen and Shape Recognition (FSSR) algorithm is proposed, with complexity down to O(n), for multiple change-point detection problems. The FSSR algorithm has two steps. First, by dividing the data into several subsegments, it quickly locks onto the small subsegments that are likely to contain change points. Second, through a point-by-point search within each selected subsegment, it determines the precise location of each change point. A simulation study shows that FSSR has clear advantages in speed and stability; in particular, the sparser the change points are, the better the results achieved by FSSR. Finally, we apply FSSR to two real applications to demonstrate its feasibility and robustness: one is identifying DNA copy number variations; the other is reducing operation scenarios for a renewable-integrated electrical distribution network.
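The two-step structure described above can be sketched in a few lines of Python. This is a hedged illustration of the screen-then-refine idea, not the published FSSR implementation; the segment count, noise estimate, and threshold are arbitrary choices of ours:

```python
import numpy as np

def screen_and_refine(x, num_segments=10, threshold=2.0):
    """Screen-then-refine sketch of the FSSR idea (not the published code).

    Step 1 (fast screen): split the series into subsegments and flag the
    boundaries where neighbouring subsegment means jump sharply.
    Step 2 (refine): scan point by point inside each flagged region for
    the split that maximises the between-part mean difference.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    bounds = np.linspace(0, n, num_segments + 1, dtype=int)
    means = np.array([x[a:b].mean() for a, b in zip(bounds[:-1], bounds[1:])])

    noise = np.std(np.diff(x)) / np.sqrt(2)      # crude noise-level estimate
    flagged = [i for i in range(1, num_segments)
               if abs(means[i] - means[i - 1]) > threshold * noise]

    # Merge adjacent flagged boundaries into one candidate region each.
    groups = []
    for i in flagged:
        if groups and i == groups[-1][-1] + 1:
            groups[-1].append(i)
        else:
            groups.append([i])

    change_points = []
    for g in groups:
        a, b = bounds[g[0] - 1], bounds[g[-1] + 1]
        gaps = [(abs(x[a:t].mean() - x[t:b].mean()), t)
                for t in range(a + 2, b - 1)]
        change_points.append(max(gaps)[1])       # split with the largest gap
    return sorted(change_points)
```

The screen costs one pass over the data; the point-by-point refinement only runs inside the few flagged regions, which is where the O(n) behaviour and the advantage for sparse change points come from.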


Author(s):  
Matteo Chiara ◽  
Federico Zambelli ◽  
Marco Antonio Tangaro ◽  
Pietro Mandreoli ◽  
David S Horner ◽  
...  

Abstract Summary: While over 200 000 genomic sequences are currently available through dedicated repositories, ad hoc methods for the functional annotation of SARS-CoV-2 genomes do not harness all currently available resources for the annotation of functionally relevant genomic sites. Here, we present CorGAT, a novel tool for the functional annotation of SARS-CoV-2 genomic variants. By comparisons with other state-of-the-art methods we demonstrate that, by providing a more comprehensive and richer annotation, our method can facilitate the identification of evolutionary patterns in the genome of SARS-CoV-2. Availability and implementation: Galaxy: http://corgat.cloud.ba.infn.it/galaxy; software: https://github.com/matteo14c/CorGAT/tree/Revision_V1; docker: https://hub.docker.com/r/laniakeacloud/galaxy_corgat. Supplementary information: Supplementary data are available at Bioinformatics online.


Water ◽  
2021 ◽  
Vol 13 (12) ◽  
pp. 1633
Author(s):  
Elena-Simona Apostol ◽  
Ciprian-Octavian Truică ◽  
Florin Pop ◽  
Christian Esposito

Due to the exponential growth of Internet of Things networks and the massive amount of time series data collected from them, efficient methods for Big Data analysis are essential in order to extract meaningful information and statistics. Anomaly detection is an important part of time series analysis that improves the quality of further analyses, such as prediction and forecasting. Thus, detecting sudden change points in otherwise normal behavior, and using them to discriminate genuinely abnormal behavior, i.e., outliers, is a crucial step for minimizing the false positive rate and building accurate machine learning models for prediction and forecasting. In this paper, we propose a rule-based decision system that enhances anomaly detection in multivariate time series using change point detection. Our architecture uses a pipeline that automatically detects real anomalies and removes the false positives introduced by change points. We employ both traditional and deep learning unsupervised algorithms: in total, five anomaly detection and five change point detection algorithms. Additionally, we propose a new confidence metric based on the support for a time series point being an anomaly and the support for the same point being a change point. In our experiments, we use a large real-world dataset of multivariate time series on water consumption collected from smart meters. As an evaluation metric, we use the Mean Absolute Error (MAE). The low MAE values show that the algorithms accurately determine anomalies and change points. The experimental results strengthen our assumption that anomaly detection can be improved by determining and removing change points, and they validate the correctness of our proposed rules in real-world scenarios.
Furthermore, the proposed rule-based decision support system enables users to make informed decisions regarding the status of the water distribution network and to perform predictive and proactive maintenance effectively.
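The confidence idea (anomaly support versus change-point support) can be sketched as a simple voting rule. The function below is a minimal illustration under assumed inputs (per-index vote counts from five anomaly detectors and five change-point detectors); the function name, the vote-count representation, and the linear confidence formula are our assumptions, not the paper's actual rule set:

```python
def filter_anomalies(anomaly_votes, cp_votes, n_ad=5, n_cpd=5, margin=0.0):
    """Keep a flagged point as a true anomaly only when its anomaly support
    (fraction of anomaly detectors voting for it) exceeds its change-point
    support (fraction of change-point detectors voting for it).

    anomaly_votes, cp_votes: dicts mapping time index -> vote count.
    Returns sorted (index, confidence) pairs, where
    confidence = anomaly support - change-point support.
    """
    kept = []
    for t, votes in anomaly_votes.items():
        conf = votes / n_ad - cp_votes.get(t, 0) / n_cpd
        if conf > margin:
            kept.append((t, conf))
    return sorted(kept)
```

A point flagged by most change-point detectors is treated as a regime change rather than an outlier and is filtered out, which is how such a rule suppresses the false positives that change points introduce.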


2001 ◽  
Vol 38 (04) ◽  
pp. 1033-1054 ◽  
Author(s):  
Liudas Giraitis ◽  
Piotr Kokoszka ◽  
Remigijus Leipus

The paper studies the impact of a broadly understood trend, which includes a change point in mean and monotonic trends studied by Bhattacharya et al. (1983), on the asymptotic behaviour of a class of tests designed to detect long memory in a stationary sequence. Our results pertain to a family of tests which are similar to Lo's (1991) modified R/S test. We show that both long memory and nonstationarity (presence of trend or change points) can lead to rejection of the null hypothesis of short memory, so that further testing is needed to discriminate between long memory and some forms of nonstationarity. We provide a quantitative description of trends which do or do not fool the R/S-type long memory tests. We show, in particular, that a shift in mean of a magnitude larger than N^(-1/2), where N is the sample size, affects the asymptotic size of the tests, whereas smaller shifts do not do so.
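Lo's modified R/S statistic referenced above rescales the range of the partial sums of deviations by a long-run standard deviation that corrects for short-range autocorrelation up to lag q via Bartlett weights. A minimal Python sketch of the statistic (normalised by sqrt(N)):

```python
import numpy as np

def modified_rs(x, q=5):
    """Lo's (1991) modified R/S statistic, normalised by sqrt(N).

    Range of the partial sums of deviations from the mean, rescaled by a
    Newey-West style long-run standard deviation (Bartlett weights up to
    lag q) instead of the plain sample standard deviation.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - x.mean()
    s = np.cumsum(d)
    rs_range = s.max() - s.min()                 # range of partial sums

    lrv = np.sum(d * d) / n                      # lag-0 autocovariance
    for j in range(1, q + 1):
        w = 1.0 - j / (q + 1.0)                  # Bartlett weight
        lrv += 2.0 * w * np.sum(d[:-j] * d[j:]) / n
    return rs_range / (np.sqrt(lrv) * np.sqrt(n))
```

A mean shift of magnitude larger than N^(-1/2) inflates the range of the partial sums faster than the long-run variance estimate, which is exactly the size distortion the paper quantifies.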


Author(s):  
Barbora Peštová ◽  
Michal Pešta

The panel data of interest consist of a moderate number of panels, while each panel contains a small number of observations. An estimator of common breaks in panel means that avoids the boundary issue in this kind of scenario is proposed. In particular, the novel estimator is able to detect a common break point even when the change happens immediately after the first time point or just before the last observation period. Another advantage of the elaborated change point estimator is that it returns the last observation in situations with no structural breaks. The consistency of the change point estimator in panel data is established, and the results are illustrated through a simulation study. As a by-product of the developed estimation technique, a theoretical utilization for correlation structure estimation, hypothesis testing, and bootstrapping in panel data is demonstrated. A practical application to non-life insurance is presented as well.


2013 ◽  
Author(s):  
Greg Jensen

Identifying discontinuities (or change-points) in otherwise stationary time series is a powerful analytic tool. This paper outlines a general strategy for identifying an unknown number of change-points using elementary principles of Bayesian statistics. Using a strategy of binary partitioning by marginal likelihood, a time series is recursively subdivided on the basis of whether adding divisions (and thus increasing model complexity) yields a justified improvement in the marginal model likelihood. When this approach is combined with the use of conjugate priors, it yields the Conjugate Partitioned Recursion (CPR) algorithm, which identifies change-points without computationally intensive numerical integration. Using the CPR algorithm, methods are described for specifying change-point models drawn from a host of familiar distributions, both discrete (binomial, geometric, Poisson) and continuous (exponential, Gaussian, uniform, and multiple linear regression), as well as multivariate distributions (multinomial, multivariate normal, and multivariate linear regression). Methods by which the CPR algorithm could be extended or modified are discussed, and several detailed applications to data published in psychology and biomedical engineering are described.
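The key computational point is that conjugate priors make the marginal likelihood of a segment available in closed form, so no numerical integration is needed. This can be sketched for the Poisson case with a Gamma(a, b) prior; the distribution choice, hyperparameters, and penalty below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def log_marginal_poisson(x, a=1.0, b=1.0):
    """Closed-form log marginal likelihood of counts x under a Poisson
    model whose rate carries a conjugate Gamma(a, b) prior."""
    n, s = len(x), sum(x)
    log_const = -sum(math.lgamma(xi + 1) for xi in x)   # -sum log(x_i!)
    return (log_const + a * math.log(b) - math.lgamma(a)
            + math.lgamma(a + s) - (a + s) * math.log(b + n))

def binary_partition(x, penalty=math.log(100.0), min_len=2):
    """Recursive binary partitioning: accept a split only when the summed
    marginal likelihood of the two parts beats the unsplit segment's
    marginal likelihood by more than `penalty`."""
    whole = log_marginal_poisson(x)
    best_t, best_gain = None, 0.0
    for t in range(min_len, len(x) - min_len + 1):
        gain = (log_marginal_poisson(x[:t]) + log_marginal_poisson(x[t:])
                - whole - penalty)
        if gain > best_gain:
            best_t, best_gain = t, gain
    if best_t is None:
        return []                                 # no justified split
    left = binary_partition(x[:best_t], penalty, min_len)
    right = binary_partition(x[best_t:], penalty, min_len)
    return left + [best_t] + [best_t + c for c in right]
```

Because each segment's evidence is a ratio of Gamma functions, the whole recursion runs on log-Gamma evaluations alone, which is what makes the approach fast relative to sampling-based change-point methods.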


2016 ◽  
Vol 41 (4) ◽  
pp. 550-558 ◽  
Author(s):  
Wei Wu ◽  
Fan Jia ◽  
Richard Kinai ◽  
Todd D. Little

Spline growth modelling is a popular tool to model change processes with distinct phases and change points in longitudinal studies. Focusing on linear spline growth models with two phases and a fixed change point (the transition point from one phase to the other), we detail how to find optimal data collection designs that maximize the efficiency of detecting key parameters in the spline models, holding the total number of data points or sample size constant. We identify efficient designs for the cases where (a) the exact location of the change point is known (complete certainty), (b) only the interval that contains the change point is known (partial certainty), and (c) no prior knowledge on the location of the change point is available (zero certainty). We conclude with recommendations for optimal number and allocation of data collection points.


Circulation ◽  
2018 ◽  
Vol 137 (suppl_1) ◽  
Author(s):  
Norrina B Allen ◽  
Amy Krefman ◽  
Darwin Labarthe ◽  
Philip Greenland ◽  
Markus Juonala ◽  
...  

Background: The prevalence of Ideal Cardiovascular Health (CVH) decreases with age, beginning in childhood. However, more precise estimates of trajectories of CVH across the lifespan are needed to guide intervention. The aims of this analysis are to describe trajectories in CVH from childhood through middle age and examine whether there are critical inflection points in the decline in CVH. Methods: We pooled data from five prospective childhood/early adulthood cohorts including Bogalusa, Young Finns, HB!, CARDIA, and STRIP. Clinical CVH factors—blood pressure, BMI, cholesterol, glucose—were categorized as poor, intermediate, and ideal, then summed to create a clinical CVH score ranging from 0 to 8 (higher score = more ideal CVH). The association between clinical CVH score and age in years was modeled using a segmented linear mixed model, with a random participant intercept, fixed slopes, and fixed change points. Change points were estimated using an extension of the R package ‘segmented’, which utilizes a likelihood-based approach to iteratively determine one or more change points. All models were adjusted for race, gender, and cohort. Results: This study included 18,290 participants (51% female, 67% White, 46% between the ages of 8-11 at baseline). CVH scores decline with age from 8 through 55 years. We found two ages at which the slope of the CVH trajectories changes significantly. CVH scores are generally stable from age 8 until the first change point at age 17 (95% CI 16.3-17.4), when they begin to decline more rapidly, with a 0.08 CVH unit loss per year from age 17 to 30. The second change point occurs at age 30 (95% CI 26.7-33.6), when the rate of decline increases by an additional 0.01 units per year. Conclusion: The clinical CVH score declines from favorable levels from childhood through adulthood, with a rapid decline starting at age 17 that becomes slightly steeper from age 30 to 55 years.
These inflection points signal that there are critical periods in an individual’s clinical CVH trajectory during which prevention efforts may be targeted.
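The iterative estimation behind the 'segmented' approach can be illustrated in Python. The sketch below implements Muggeo's linearisation for a single breakpoint in a plain linear model; the actual analysis above used a mixed model with two change points, and the function and variable names here are ours:

```python
import numpy as np

def segmented_one_break(x, y, psi0, n_iter=30):
    """One-breakpoint segmented regression via Muggeo's iterative
    linearisation, the idea behind the R package 'segmented'.

    Each iteration fits  y ~ 1 + x + (x - psi)_+ + V,  where
    V = -1{x > psi}, then updates the breakpoint as psi += gamma / beta,
    with beta the coefficient of (x - psi)_+ and gamma that of V.
    At convergence gamma -> 0, so psi stops moving.
    """
    psi = float(psi0)
    for _ in range(n_iter):
        U = np.clip(x - psi, 0.0, None)            # hinge term (x - psi)_+
        V = -(x > psi).astype(float)               # re-parameterisation term
        A = np.column_stack([np.ones_like(x), x, U, V])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        beta, gamma = coef[2], coef[3]
        # keep the updated breakpoint inside the observed range of x
        psi = float(np.clip(psi + gamma / beta,
                            x.min() + 1e-9, x.max() - 1e-9))
    return psi
```

Casting the breakpoint search as repeated linear fits is what lets the approach scale to mixed models: each iteration is an ordinary (mixed) model fit with two extra constructed covariates per change point.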


2020 ◽  
Vol 36 (10) ◽  
pp. 3263-3265 ◽  
Author(s):  
Lucas Czech ◽  
Pierre Barbera ◽  
Alexandros Stamatakis

Abstract Summary: We present genesis, a library for working with phylogenetic data, and gappa, an accompanying command-line tool for conducting typical analyses on such data. The tools target phylogenetic trees and phylogenetic placements, sequences, taxonomies and other relevant data types, offer high-level simplicity as well as low-level customizability, and are computationally efficient, well-tested and field-proven. Availability and implementation: Both genesis and gappa are written in modern C++11, and are freely available under GPLv3 at http://github.com/lczech/genesis and http://github.com/lczech/gappa. Supplementary information: Supplementary data are available at Bioinformatics online.

