APPLYING THE PRODUCT PARTITION MODEL TO THE IDENTIFICATION OF MULTIPLE CHANGE POINTS

2002 ◽  
Vol 05 (04) ◽  
pp. 371-387 ◽  
Author(s):  
R. H. LOSCHI ◽  
F. R. B. CRUZ

The multiple change point identification problem may be encountered in many subject areas, including disease mapping, medical diagnosis, industrial control, and finance. One appealing way of tackling the problem is through the product partition model (PPM), a Bayesian approach. Nowadays, practical applications of Bayesian methods have attracted attention, perhaps because of the widespread availability of powerful and inexpensive personal computers. A Gibbs sampling scheme, simple and easy to implement, is used to obtain the estimates. We apply the algorithm to the analysis of two important Brazilian stock market series. The results show that the method is efficient and effective in analyzing change point problems.
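For a concrete picture of this kind of sampler, the sketch below shows a simplified PPM-style Gibbs scheme for a normal sequence: each candidate boundary carries a change-point indicator that is resampled from its conditional posterior, which compares the marginal likelihood of the merged block against the product of the two split blocks. The known variance sigma2, the Normal(m0, tau2) prior on block means, the Bernoulli prior probability p, and all numeric settings are illustrative assumptions, not the authors' specification.

```python
# A minimal sketch of a PPM-style Gibbs sampler for a normal sequence (not the
# authors' exact specification): each candidate boundary carries a change-point
# indicator that is resampled from its conditional posterior, comparing the
# marginal likelihood of the merged block against the product of the two split
# blocks.  sigma2, m0, tau2, p, and all numeric settings are assumptions.
import numpy as np

def log_block_marginal(block, sigma2=1.0, m0=0.0, tau2=10.0):
    """Log marginal likelihood of one block: y_i ~ N(mu, sigma2), mu ~ N(m0, tau2)."""
    k = len(block)
    ybar = block.mean()
    ss = ((block - ybar) ** 2).sum()
    v = sigma2 / k
    return (-0.5 * k * np.log(2 * np.pi * sigma2) - ss / (2 * sigma2)
            + 0.5 * np.log(v) - 0.5 * np.log(v + tau2)
            - (ybar - m0) ** 2 / (2 * (v + tau2)))

def gibbs_change_points(y, p=0.1, n_iter=500, seed=0):
    """Posterior frequency of a change point between positions i and i+1."""
    rng = np.random.default_rng(seed)
    n = len(y)
    c = np.zeros(n - 1, dtype=bool)          # c[i]: change point between i and i+1
    counts = np.zeros(n - 1)
    for _ in range(n_iter):
        for i in range(n - 1):
            cps = np.flatnonzero(c)
            left = cps[cps < i].max() + 1 if (cps < i).any() else 0   # block start
            right = cps[cps > i].min() if (cps > i).any() else n - 1  # block end
            merged = log_block_marginal(y[left:right + 1])
            split = (log_block_marginal(y[left:i + 1])
                     + log_block_marginal(y[i + 1:right + 1]))
            log_odds = np.log(p / (1 - p)) + split - merged
            prob = 1.0 / (1.0 + np.exp(-np.clip(log_odds, -50, 50)))
            c[i] = rng.random() < prob
        counts += c
    return counts / n_iter

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(3.0, 1.0, 50)])
print(gibbs_change_points(y).argmax())       # expected near 49 (the regime boundary)
```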

2005 ◽  
Vol 08 (04) ◽  
pp. 465-482 ◽  
Author(s):  
R. H. LOSCHI ◽  
F. R. B. CRUZ

The identification of multiple change points is a problem shared by many subject areas, including disease and criminality mapping, medical diagnosis, industrial control, and finance. An algorithm based on the Product Partition Model (PPM) is developed to solve the multiple change point identification problem in Poisson data sequences. To fit the PPM, a simple and easy-to-implement Gibbs sampling scheme is derived. A sensitivity analysis is performed for different prior specifications. The algorithm is then applied to the analysis of a real data sequence. The results show that the method is quite effective and provides useful inferences.
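For the Poisson setting of this paper, a PPM-style Gibbs scheme only needs a different block marginal likelihood. The sketch below gives the Poisson-Gamma version in closed form; the Gamma hyperparameters a and b are illustrative assumptions, not the authors' choices.

```python
# A hedged sketch of the Poisson analogue: the closed-form marginal likelihood of
# a block of counts under a Poisson likelihood with a conjugate Gamma(a, b) prior
# on the block rate.  This function could replace the normal block marginal in a
# PPM-style Gibbs scheme; a and b are illustrative hyperparameters.
import numpy as np
from scipy.special import gammaln

def log_poisson_block_marginal(block, a=1.0, b=1.0):
    """log p(block) with y_i ~ Poisson(lam) and lam ~ Gamma(shape=a, rate=b)."""
    block = np.asarray(block)
    k, s = len(block), block.sum()
    return (a * np.log(b) - gammaln(a)                  # prior normalising constant
            + gammaln(a + s) - (a + s) * np.log(b + k)  # posterior normalising constant
            - gammaln(block + 1).sum())                 # product of y_i! terms

print(log_poisson_block_marginal([3, 2, 4, 3]))         # a homogeneous-looking block
```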


Water ◽  
2021 ◽  
Vol 13 (12) ◽  
pp. 1633
Author(s):  
Elena-Simona Apostol ◽  
Ciprian-Octavian Truică ◽  
Florin Pop ◽  
Christian Esposito

Due to the exponential growth of Internet of Things networks and the massive amount of time series data collected from them, it is essential to apply efficient Big Data analysis methods in order to extract meaningful information and statistics. Anomaly detection is an important part of time series analysis, improving the quality of further analysis, such as prediction and forecasting. Thus, detecting sudden change points within otherwise normal behavior, and using them to separate genuine anomalies, i.e., outliers, from mere regime shifts, is a crucial step for minimizing the false positive rate and for building accurate machine learning models for prediction and forecasting. In this paper, we propose a rule-based decision system that enhances anomaly detection in multivariate time series using change point detection. Our architecture uses a pipeline that automatically detects real anomalies and removes the false positives introduced by change points. We employ both traditional and deep learning unsupervised algorithms: in total, five anomaly detection and five change point detection algorithms. Additionally, we propose a new confidence metric based on the support for a time series point to be an anomaly and the support for the same point to be a change point. In our experiments, we use a large real-world dataset containing multivariate time series of water consumption collected from smart meters. As an evaluation metric, we use the Mean Absolute Error (MAE). The low MAE values show that the algorithms accurately determine anomalies and change points. The experimental results strengthen our assumption that anomaly detection can be improved by determining and removing change points, and validate the correctness of our proposed rules in real-world scenarios. Furthermore, the proposed rule-based decision support system enables users to make informed decisions regarding the status of the water distribution network and to perform predictive and proactive maintenance effectively.
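The core filtering idea can be illustrated with a small sketch: each point receives an anomaly support (the fraction of anomaly detectors flagging it) and a change-point support (the fraction of change-point detectors flagging it), and only points with high anomaly support and low change-point support are kept as real anomalies. The thresholds and the particular confidence formula below are assumptions for illustration, not the paper's exact definitions.

```python
# A minimal sketch of the kind of rule described in the abstract: combine votes
# from several anomaly detectors and several change-point detectors, keep a point
# as a real anomaly only when anomaly support is high and change-point support is
# low.  Thresholds and the confidence formula are illustrative assumptions.
import numpy as np

def filter_anomalies(anomaly_votes, cp_votes, a_thresh=0.6, cp_thresh=0.4):
    """anomaly_votes, cp_votes: boolean arrays of shape (n_detectors, n_points)."""
    anomaly_support = anomaly_votes.mean(axis=0)
    cp_support = cp_votes.mean(axis=0)
    confidence = anomaly_support * (1.0 - cp_support)   # assumed confidence metric
    is_anomaly = (anomaly_support >= a_thresh) & (cp_support < cp_thresh)
    return is_anomaly, confidence

# Toy usage: 5 anomaly detectors and 5 change-point detectors over 8 points.
rng = np.random.default_rng(0)
anomaly_votes = rng.random((5, 8)) < 0.3
cp_votes = rng.random((5, 8)) < 0.3
flags, conf = filter_anomalies(anomaly_votes, cp_votes)
print(flags, conf.round(2))
```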


2001 ◽  
Vol 38 (04) ◽  
pp. 1033-1054 ◽  
Author(s):  
Liudas Giraitis ◽  
Piotr Kokoszka ◽  
Remigijus Leipus

The paper studies the impact of a broadly understood trend, which includes a change point in mean and the monotonic trends studied by Bhattacharya et al. (1983), on the asymptotic behaviour of a class of tests designed to detect long memory in a stationary sequence. Our results pertain to a family of tests which are similar to Lo's (1991) modified R/S test. We show that both long memory and nonstationarity (presence of trend or change points) can lead to rejection of the null hypothesis of short memory, so that further testing is needed to discriminate between long memory and some forms of nonstationarity. We provide a quantitative description of trends which do or do not fool the R/S-type long memory tests. We show, in particular, that a shift in mean of a magnitude larger than N^{-1/2}, where N is the sample size, affects the asymptotic size of the tests, whereas smaller shifts do not.
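For reference, the sketch below computes Lo's (1991) modified R/S statistic, the family of tests studied here. The lag truncation q and the toy mean-shift example are illustrative choices, not the paper's setup.

```python
# A hedged sketch of Lo's (1991) modified R/S statistic.  q is the number of lags
# in the Newey-West style variance correction with Bartlett weights; q = 0
# recovers the classical R/S statistic.  The toy example adds a mean shift larger
# than N^{-1/2}, the regime in which the paper shows the asymptotic size changes.
import numpy as np

def modified_rs(y, q=5):
    y = np.asarray(y, dtype=float)
    n = len(y)
    d = y - y.mean()
    partial_sums = np.cumsum(d)
    r = partial_sums.max() - partial_sums.min()      # the range of partial sums
    s2 = (d @ d) / n                                 # sample variance
    for j in range(1, q + 1):
        w = 1.0 - j / (q + 1.0)                      # Bartlett weight
        s2 += 2.0 * w * (d[j:] @ d[:-j]) / n         # autocovariance correction
    return r / (np.sqrt(s2) * np.sqrt(n))            # N^{-1/2} * Q_N

rng = np.random.default_rng(0)
x = rng.normal(size=500)
x[250:] += 0.5                                       # shift well above N^{-1/2} ~ 0.045
print(modified_rs(x, q=5))                           # inflated relative to the no-shift case
```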


2013 ◽  
Author(s):  
Greg Jensen

Identifying discontinuities (or change-points) in otherwise stationary time series is a powerful analytic tool. This paper outlines a general strategy for identifying an unknown number of change-points using elementary principles of Bayesian statistics. Using a strategy of binary partitioning by marginal likelihood, a time series is recursively subdivided on the basis of whether adding divisions (and thus increasing model complexity) yields a justified improvement in the marginal model likelihood. When this approach is combined with the use of conjugate priors, it yields the Conjugate Partitioned Recursion (CPR) algorithm, which identifies change-points without computationally intensive numerical integration. Using the CPR algorithm, methods are described for specifying change-point models drawn from a host of familiar distributions, both discrete (binomial, geometric, Poisson) and continuous (exponential, Gaussian, uniform, and multiple linear regression), as well as multivariate distributions (multinomial, multivariate normal, and multivariate linear regression). Methods by which the CPR algorithm could be extended or modified are discussed, and several detailed applications to data published in psychology and biomedical engineering are described.
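A simplified rendering of the recursive idea is sketched below for Poisson data with a conjugate Gamma prior: a segment is split at the candidate point that maximizes the sum of the two sub-segment log marginal likelihoods, but only if the improvement over the unsplit marginal likelihood exceeds a penalty, and the procedure then recurses on each side. The minimum segment length, the Gamma hyperparameters, and the log prior-odds penalty are assumptions of this sketch, not the published CPR implementation.

```python
# A simplified rendering of recursive binary partitioning by marginal likelihood,
# using a Poisson-Gamma conjugate model so every marginal likelihood has a closed
# form.  The penalty guarding against unjustified splits is an assumption.
import numpy as np
from scipy.special import gammaln

def log_ml(seg, a=1.0, b=1.0):
    """Marginal likelihood of a segment: y_i ~ Poisson(lam), lam ~ Gamma(a, b)."""
    k, s = len(seg), seg.sum()
    return (a * np.log(b) - gammaln(a) + gammaln(a + s)
            - (a + s) * np.log(b + k) - gammaln(seg + 1).sum())

def cpr_split(y, start, end, penalty=np.log(100.0), found=None):
    """Recursively split y[start:end] wherever a split clearly improves the marginal likelihood."""
    if found is None:
        found = []
    if end - start < 6:                               # too short to split further
        return found
    no_split = log_ml(y[start:end])
    cands = list(range(start + 3, end - 2))           # keep at least 3 points per side
    scores = [log_ml(y[start:c]) + log_ml(y[c:end]) for c in cands]
    best = int(np.argmax(scores))
    if scores[best] - no_split > penalty:             # justified improvement -> split
        c = cands[best]
        found.append(c)
        cpr_split(y, start, c, penalty, found)
        cpr_split(y, c, end, penalty, found)
    return found

rng = np.random.default_rng(0)
y = np.concatenate([rng.poisson(2, 60), rng.poisson(8, 60), rng.poisson(3, 60)])
print(sorted(cpr_split(y, 0, len(y))))                # expected to land near 60 and 120
```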


2016 ◽  
Vol 41 (4) ◽  
pp. 550-558 ◽  
Author(s):  
Wei Wu ◽  
Fan Jia ◽  
Richard Kinai ◽  
Todd D. Little

Spline growth modelling is a popular tool for modelling change processes with distinct phases and change points in longitudinal studies. Focusing on linear spline growth models with two phases and a fixed change point (the transition point from one phase to the other), we detail how to find optimal data collection designs that maximize the efficiency of detecting key parameters in the spline models, holding the total number of data points or sample size constant. We identify efficient designs for the cases where (a) the exact location of the change point is known (complete certainty), (b) only the interval that contains the change point is known (partial certainty), and (c) no prior knowledge of the location of the change point is available (zero certainty). We conclude with recommendations for the optimal number and allocation of data collection points.
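The model in question can be written as y = b0 + b1*t + b2*(t - tau)_+ + error, where tau is the change point, b1 is the phase-1 slope, and b1 + b2 is the phase-2 slope. The minimal fixed-effects sketch below illustrates this parameterization with a known tau; random effects and the design-optimisation step are omitted, and all numbers are illustrative.

```python
# A minimal fixed-effects sketch of a two-phase linear spline growth curve with a
# known change point tau.  Random effects and design optimisation are omitted.
import numpy as np

def spline_design(t, tau):
    """Design matrix [1, t, (t - tau)_+] for a linear spline with fixed change point tau."""
    t = np.asarray(t, dtype=float)
    return np.column_stack([np.ones_like(t), t, np.maximum(t - tau, 0.0)])

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 11)
y = 1.0 + 0.5 * t + 1.5 * np.maximum(t - 4.0, 0.0) + rng.normal(0, 0.3, t.size)
beta, *_ = np.linalg.lstsq(spline_design(t, tau=4.0), y, rcond=None)
print(beta)   # roughly [1.0, 0.5, 1.5]: intercept, phase-1 slope, slope change
```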


Author(s):  
Shengji Jia ◽  
Lei Shi

Motivation: Knowing the number and the exact locations of multiple change points in genomic sequences serves several biological needs. The cumulative segmented algorithm (cumSeg) has recently been proposed as a computationally efficient approach to multiple change-point detection; it is based on a simple transformation of the data and provides results that are quite robust to model mis-specification. However, the errors are also accumulated in the transformed model, so heteroscedasticity and serial correlation appear, and the estimated change points will have quite different variability, even though the locations of the change points should be of equal importance in the original genomic sequences.
Results: In this study, we develop two new change-point detection procedures in the framework of cumulative segmented regression. Simulations reveal that the proposed methods not only improve the efficiency of each change point estimator substantially but also provide estimators with similar variability for all the change points. By applying the proposed algorithms to Coriel and SNP genotyping data, we illustrate their performance in detecting copy number variations.
Supplementary information: The proposed algorithms are implemented in R and are available at Bioinformatics online.
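The transformation behind cumulative segmented regression is easy to visualize: a shift in the mean of y becomes a change in slope of its cumulative sum, so a change point can be located by fitting a broken-line regression to the cumsum. The toy single-break grid search below only illustrates that idea; it does not reproduce cumSeg or the corrections for heteroscedasticity and serial correlation proposed in this paper.

```python
# A toy illustration of the cumulative-sum transformation: a mean shift in y
# becomes a slope change in cumsum(y), located here by a single-break grid search
# over broken-line fits.  This is a teaching sketch, not the cumSeg algorithm.
import numpy as np

def cumseg_single_break(y):
    y = np.asarray(y, dtype=float)
    n = len(y)
    s = np.cumsum(y)                       # mean shift in y -> slope change in s
    x = np.arange(1, n + 1, dtype=float)
    best_rss, best_k = np.inf, None
    for k in range(2, n - 2):              # candidate break positions
        X = np.column_stack([np.ones(n), x, np.maximum(x - k, 0.0)])
        beta, *_ = np.linalg.lstsq(X, s, rcond=None)
        rss = ((s - X @ beta) ** 2).sum()
        if rss < best_rss:
            best_rss, best_k = rss, k
    return best_k

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
print(cumseg_single_break(y))              # expected near 100
```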


Circulation ◽  
2018 ◽  
Vol 137 (suppl_1) ◽  
Author(s):  
Norrina B Allen ◽  
Amy Krefman ◽  
Darwin Labarthe ◽  
Philip Greenland ◽  
Markus Juonala ◽  
...  

Background: The prevalence of Ideal Cardiovascular Health (CVH) decreases with age, beginning in childhood. However, more precise estimates of trajectories of CVH across the lifespan are needed to guide intervention. The aims of this analysis are to describe trajectories in CVH from childhood through middle age and to examine whether there are critical inflection points in the decline in CVH.
Methods: We pooled data from five prospective childhood/early adulthood cohorts: Bogalusa, Young Finns, HB!, CARDIA, and STRIP. Clinical CVH factors (blood pressure, BMI, cholesterol, glucose) were categorized as poor, intermediate, or ideal and then summed to create a clinical CVH score ranging from 0 to 8 (higher score = more ideal CVH). The association between clinical CVH score and age in years was modeled using a segmented linear mixed model, with a random participant intercept, fixed slopes, and fixed change points. Change points were estimated using an extension of the R package 'segmented', which uses a likelihood-based approach to iteratively determine one or more change points. All models were adjusted for race, gender, and cohort.
Results: This study included 18,290 participants (51% female, 67% White, 46% between the ages of 8 and 11 at baseline). CVH scores decline with age from 8 through 55 years. We found two ages at which the slope of the CVH trajectories changes significantly. CVH scores are generally stable from age 8 until the first change point at age 17 (95% CI 16.3-17.4), when they begin to decline more rapidly, with a 0.08 CVH unit loss per year from age 17 to 30. The second change point occurs at age 30 (26.7-33.6), when the rate of decline increases by an additional 0.01 units per year.
Conclusion: The clinical CVH score declines from favorable levels from childhood through adulthood, with a rapid decline starting at age 17 that becomes slightly steeper from age 30 to 55 years. These inflection points signal that there are critical periods in an individual's clinical CVH trajectory during which prevention efforts may be targeted.
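As background on the estimation machinery, the sketch below shows a fixed-effects, single-change-point simplification of the iterative linearisation used by the R package 'segmented' (Muggeo's algorithm): the broken-line term is linearised around the current estimate psi, an ordinary regression is fitted, and psi is updated by the ratio of two fitted coefficients. The mixed-model, multi-change-point extension used in this analysis is more involved; the starting value, the CVH-like toy data, and all settings below are illustrative assumptions.

```python
# A fixed-effects, single-change-point simplification of the iterative
# linearisation behind the R package 'segmented'.  At each step the broken-line
# term (x - psi)_+ is linearised around the current psi, an ordinary regression
# is fitted, and psi is updated by gamma/beta.  All settings are illustrative.
import numpy as np

def fit_segmented(x, y, psi=25.0, n_iter=30, tol=1e-8):
    x, y = np.asarray(x, float), np.asarray(y, float)
    for _ in range(n_iter):
        U = np.maximum(x - psi, 0.0)             # broken-line term at current psi
        V = -(x > psi).astype(float)             # its derivative with respect to psi
        X = np.column_stack([np.ones_like(x), x, U, V])
        b0, b1, beta, gamma = np.linalg.lstsq(X, y, rcond=None)[0]
        step = gamma / beta                      # update for the change point
        psi += step
        if abs(step) < tol:
            break
    return psi, (b0, b1, beta)

rng = np.random.default_rng(0)
age = np.linspace(8, 55, 200)
cvh = 7.0 - 0.08 * np.maximum(age - 17.0, 0.0) + rng.normal(0, 0.2, age.size)
print(fit_segmented(age, cvh))                   # change point estimate near age 17
```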


Mathematics ◽  
2020 ◽  
Vol 8 (1) ◽  
pp. 142 ◽  
Author(s):  
Qianli Zhou ◽  
Hongming Mo ◽  
Yong Deng

As an extension of fuzzy set (FS) theory, intuitionistic fuzzy sets (IFSs) play an important role in handling uncertainty in uncertain environments. The Pythagorean fuzzy sets (PFSs) proposed by Yager in 2013 can deal with more uncertain situations than intuitionistic fuzzy sets because of their larger range for describing membership grades. How to measure the distance between Pythagorean fuzzy sets is still an open issue. The Jensen–Shannon divergence is a useful distance measure in the space of probability distributions. In order to deal efficiently with uncertainty in practical applications, this paper proposes a new divergence measure for Pythagorean fuzzy sets, based on the belief function of Dempster–Shafer evidence theory, called the PFSDM distance. It describes Pythagorean fuzzy sets in the form of basic probability assignments (BPAs) and calculates the divergence of the BPAs to obtain the divergence of the PFSs, which is the key step in establishing a link between PFSs and BPAs. Since the proposed method combines the characteristics of the belief function and divergence, it has more powerful resolution than existing methods. Additionally, an improved algorithm using the PFSDM distance is proposed for medical diagnosis, which can avoid producing counter-intuitive results, especially when data conflict exists. The proposed method and the improved algorithm are both demonstrated to be rational and practical in applications.
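One plausible construction consistent with the abstract (not necessarily the paper's exact PFSDM formula) maps a Pythagorean fuzzy element (u, v) with u^2 + v^2 <= 1 to the basic probability assignment m({A}) = u^2, m({not A}) = v^2, m({A, not A}) = 1 - u^2 - v^2, and then compares two such BPAs with the Jensen–Shannon divergence of their mass vectors:

```python
# A hedged sketch of a divergence-based PFS distance: Pythagorean fuzzy elements
# are converted to BPAs and compared via the Jensen-Shannon divergence of the
# mass vectors.  This is one plausible construction, not the paper's formula.
import numpy as np

def pfs_to_bpa(u, v):
    """Map a Pythagorean fuzzy element to masses over {A}, {not A}, {A, not A}."""
    assert u ** 2 + v ** 2 <= 1.0 + 1e-12
    return np.array([u ** 2, v ** 2, 1.0 - u ** 2 - v ** 2])

def js_divergence(p, q, eps=1e-12):
    p, q = p + eps, q + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def pfs_distance(pfs1, pfs2):
    """Divergence-based distance between two Pythagorean fuzzy elements."""
    return np.sqrt(js_divergence(pfs_to_bpa(*pfs1), pfs_to_bpa(*pfs2)))

print(pfs_distance((0.9, 0.3), (0.4, 0.7)))   # larger value = more dissimilar
```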


Mathematics ◽  
2019 ◽  
Vol 7 (5) ◽  
pp. 474 ◽  
Author(s):  
Muhammad Rizwan Khan ◽  
Biswajit Sarkar

Airborne particulate matter (PM) is a key air pollutant that affects human health adversely. Exposure to high concentrations of such particles may cause premature death, heart disease, respiratory problems, or reduced lung function. Previous work on particulate matter (PM2.5 and PM10) was limited to specific areas. Therefore, more studies are required to investigate airborne particulate matter patterns, given their complex and varying properties and their associated (PM10 and PM2.5) concentrations and compositions, in order to assess the numerical productivity of pollution control programs for air quality. Consequently, to control particulate matter pollution and to make effective countermeasure plans, it is important to measure the efficiency and efficacy of policies applied by the Ministry of Environment. The primary purpose of this research is to construct a simulation model for the identification of a change point in particulate matter (PM2.5 and PM10) concentrations, if one occurs, in different areas of the world. The methodology is based on a Bayesian approach for the analysis of different data structures, and a likelihood ratio test is used to detect a change point at an unknown time (k). Real-time data on particulate matter concentrations at different locations have been used for numerical verification. The model parameters before the change point (θ) and after the change point (λ) have been critically analyzed so that the proficiency and success of environmental policies for particulate matter (PM2.5 and PM10) concentrations can be evaluated. The main reason for using different areas is their considerably different features, i.e., environment, population densities, and transportation vehicle densities. Consequently, this study also provides insights into how well the suggested model could perform in different areas.
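The likelihood-ratio scan over candidate change times can be sketched as follows for a normal sequence with a common variance: the statistic compares a single mean (no change) against separate means before and after each candidate time k, and the maximising k is the estimated change point. The Bayesian analysis of the before/after parameters (θ, λ), the critical values, and the PM-like toy data are not taken from the paper; they are illustrative assumptions.

```python
# A hedged sketch of a likelihood-ratio scan for a single change point at an
# unknown time k in a normal sequence with common variance.  Critical values and
# the Bayesian analysis of the parameters are not reproduced here.
import numpy as np

def lr_scan(y):
    y = np.asarray(y, dtype=float)
    n = len(y)
    rss0 = ((y - y.mean()) ** 2).sum()                   # fit with no change point
    stats = np.full(n, -np.inf)
    for k in range(2, n - 2):                            # candidate change times
        rss1 = (((y[:k] - y[:k].mean()) ** 2).sum()      # mean theta before k
                + ((y[k:] - y[k:].mean()) ** 2).sum())   # mean lambda from k on
        stats[k] = n * np.log(rss0 / rss1)               # -2 log likelihood ratio
    k_hat = int(np.argmax(stats))
    return k_hat, stats[k_hat]

rng = np.random.default_rng(0)
pm = np.concatenate([rng.normal(35, 5, 120), rng.normal(25, 5, 120)])
print(lr_scan(pm))    # estimated change time (near 120) and its test statistic
```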

