Comparing different approaches to compute Permutation Entropy with coarse time series

2019 ◽  
Vol 513 ◽  
pp. 635-643 ◽  
Author(s):  
Francisco Traversaro ◽  
Nicolás Ciarrocchi ◽  
Florencia Pollo Cattaneo ◽  
Francisco Redelico

2018 ◽  
Vol 48 (10) ◽  
pp. 2877-2897
Author(s):  
Emad Ashtari Nezhad ◽  
Yadollah Waghei ◽  
G. R. Mohtashami Borzadaran ◽  
H. R. Nilli Sani ◽  
Hadi Alizadeh Noughabi

2021 ◽  
pp. 2150055
Author(s):  
Qin Zhou ◽  
Pengjian Shang

Cumulative residual entropy (CRE) has been suggested as a new measure to quantify the uncertainty of nonlinear time series signals. Combining it with permutation entropy and Rényi entropy, we introduce a generalized measure of CRE at multiple scales, the generalized cumulative residual entropy (GCRE), and further propose a weighted modification of the GCRE procedure, the weighted generalized cumulative residual entropy (WGCRE). The GCRE and WGCRE methods are first applied to synthetic series to study the properties of their parameters and to verify their validity for measuring the complexity of a series. The methods are then applied to the US, European and Chinese stock markets. Data analysis and statistical comparison show that the proposed methods can effectively distinguish stock markets with different characteristics.
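The base quantity behind GCRE can be illustrated with a minimal sketch. Plain CRE replaces the probability density in Shannon entropy with the survival function, CRE(X) = -∫ S(x) log S(x) dx with S(x) = P(X > x); for a finite sample the integral can be evaluated exactly between order statistics. This sketch shows only that empirical estimator, not the multiscale GCRE/WGCRE construction of the paper:

```python
from math import log

def cumulative_residual_entropy(sample):
    """Empirical CRE: -integral of S(x)*log(S(x)) dx, where S is the
    empirical survival function, which is piecewise constant between
    the sorted sample values (order statistics)."""
    xs = sorted(sample)
    n = len(xs)
    cre = 0.0
    for i in range(1, n):
        s = (n - i) / n               # survival probability on [xs[i-1], xs[i])
        cre -= (xs[i] - xs[i - 1]) * s * log(s)
    return cre
```

For a two-point sample {0, 1} the survival function equals 1/2 on [0, 1), giving CRE = 0.5·log 2. The GCRE of the paper further parameterizes this with a Rényi-style exponent and applies it to ordinal-pattern distributions at multiple scales.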


Entropy ◽  
2019 ◽  
Vol 21 (4) ◽  
pp. 385 ◽  
Author(s):  
David Cuesta-Frau ◽  
Juan Pablo Murillo-Escobar ◽  
Diana Alexandra Orrego ◽  
Edilson Delgado-Trejos

Permutation Entropy (PE) is a time series complexity measure commonly used in a variety of contexts, with medicine being the prime example. In its general form, it requires three input parameters for its calculation: time series length N, embedded dimension m, and embedded delay τ. Inappropriate choices of these parameters may lead to incorrect interpretations. However, there are no specific guidelines for an optimal selection of N, m, or τ, only general recommendations such as N ≫ m!, τ = 1, or m = 3, …, 7. This paper deals specifically with the practical implications of N ≫ m!, since long time series are often not available, or are non-stationary, and preliminary results suggest that low N values do not necessarily invalidate the usefulness of PE. Our study analyses the variation of PE as a function of the series length N and embedded dimension m over a diverse experimental set, both synthetic (random, spike, or logistic-model time series) and real-world (climatological, seismic, financial, or biomedical time series), along with the classification performance achieved with varying N and m. The results indicate that lengths shorter than those suggested by N ≫ m! are sufficient for a stable PE calculation, and that even very short time series can be robustly classified based on PE measurements before the stability point is reached. This may be because chaotic time series contain forbidden patterns, not all patterns are equally informative, and differences among classes are already apparent at very short lengths.
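The role of the three parameters N, m, and τ is easy to see in a direct implementation of standard (Bandt-Pompe) permutation entropy. The sketch below is a generic textbook version, not the specific code used in the study; note that only N - (m - 1)τ windows are available, which is why the N ≫ m! recommendation exists:

```python
from math import log, factorial

def permutation_entropy(x, m=3, tau=1):
    """Normalised permutation entropy of a 1-D sequence x.

    m   : embedded dimension (order of the ordinal patterns)
    tau : embedded delay
    Returns a value in [0, 1]; 0 for a fully predictable ordering,
    1 when all m! ordinal patterns are equally likely.
    """
    n = len(x) - (m - 1) * tau        # number of embedding windows
    counts = {}
    for i in range(n):
        window = x[i:i + m * tau:tau]
        # Ordinal pattern: the ranking of the values inside the window.
        pattern = tuple(sorted(range(m), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = [c / n for c in counts.values()]
    h = -sum(p * log(p) for p in probs)
    return h / log(factorial(m))      # normalise by the maximum log(m!)
```

A monotonically increasing series produces a single ordinal pattern and PE = 0, while a highly irregular series approaches 1; the paper's question is how small n can be before these estimates become unstable.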


Entropy ◽  
2019 ◽  
Vol 21 (10) ◽  
pp. 1013 ◽  
Author(s):  
David Cuesta-Frau ◽  
Antonio Molina-Picó ◽  
Borja Vargas ◽  
Paula González

Many measures that quantify the nonlinear dynamics of a time series are based on estimating the probability of certain features from their relative frequencies. Once a normalised histogram of events is computed, a single result is usually derived. This process can be broadly viewed as a nonlinear mapping from ℝⁿ into ℝ, where n is the number of bins in the histogram. However, this mapping might entail a loss of information that could be critical for time series classification purposes. In this respect, the present study assessed such impact using permutation entropy (PE) and a diverse set of time series. We first devised a method of generating synthetic sequences of ordinal patterns using hidden Markov models, which made it possible to control the histogram distribution and quantify its influence on classification results. Real body temperature records were then used to illustrate the same phenomenon. The experimental results confirmed the improved classification accuracy achieved using raw histogram data instead of the final PE values. Thus, this study can provide valuable guidance for improving the discriminating capability not only of PE, but of many similar histogram-based measures.
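The ℝⁿ → ℝ information loss the abstract describes can be made concrete with a small sketch (generic ordinal-pattern code, not the authors' implementation): two ordinal-pattern histograms that differ as vectors can still collapse to exactly the same PE value, so a classifier fed only the scalar cannot separate them:

```python
from math import log, factorial

def ordinal_histogram(x, m=3):
    """Relative frequencies of ordinal patterns: the R^n feature vector."""
    n = len(x) - m + 1
    counts = {}
    for i in range(n):
        w = x[i:i + m]
        pat = tuple(sorted(range(m), key=lambda k: w[k]))
        counts[pat] = counts.get(pat, 0) + 1
    return {p: c / n for p, c in counts.items()}

def pe_from_histogram(hist, m=3):
    """Collapse the histogram to a single normalised PE value:
    the nonlinear R^n -> R mapping discussed in the abstract."""
    h = -sum(p * log(p) for p in hist.values())
    return h / log(factorial(m))
```

For example, the histograms {(0,1,2): 0.5, (2,1,0): 0.5} and {(0,1,2): 0.5, (1,0,2): 0.5} are different feature vectors, yet both yield the same PE, which is the kind of distinction the raw histogram retains and the scalar discards.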


2017 ◽  
Vol 381 (22) ◽  
pp. 1883-1892 ◽  
Author(s):  
Luciano Zunino ◽  
Felipe Olivares ◽  
Felix Scholkmann ◽  
Osvaldo A. Rosso

2020 ◽  
Vol 2020 ◽  
pp. 1-10 ◽  
Author(s):  
Hao Du ◽  
Hao Gong ◽  
Suyue Han ◽  
Peng Zheng ◽  
Bin Liu ◽  
...  

Reconstructing realistic economic data is often necessary when social economists analyze the underlying driving factors of time-series data or study its volatility, and the intrinsic complexity of such data makes this a problem of broad interest. This paper proposes the bilateral permutation entropy (BPE) index method to address this problem, building on partly ensemble empirical mode decomposition (PEEMD), a data analysis method for nonlinear and nonstationary time series, and compares it with the T-test method. First, PEEMD is applied to the gold price series, decomposing it into several independent intrinsic mode functions (IMFs), ordered from high to low frequency. Second, the IMFs are grouped into three parts, a high-frequency part, a low-frequency part, and the overall trend, via a fine-to-coarse reconstruction using the BPE index method and the T-test method. Then, a correlation analysis is conducted between the reconstructed data and related macroeconomic factors, including global gold production, world crude oil prices, and world inflation. The results indicate that the BPE index method is a valuable technique for time-series data analysis, reconstructing IMFs into realistic data.


Author(s):  
M. McCullough ◽  
M. Small ◽  
H. H. C. Iu ◽  
T. Stemler

In this study, we propose a new information theoretic measure to quantify the complexity of biological systems based on time-series data. We demonstrate the potential of our method using two distinct applications to human cardiac dynamics. Firstly, we show that the method clearly discriminates between segments of electrocardiogram records characterized by normal sinus rhythm, ventricular tachycardia and ventricular fibrillation. Secondly, we investigate the multiscale complexity of cardiac dynamics with respect to age in healthy individuals using interbeat interval time series and compare our findings with a previous study which established a link between age and fractal-like long-range correlations. The method we use is an extension of the symbolic mapping procedure originally proposed for permutation entropy. We build a Markov chain of the dynamics based on order patterns in the time series, which we call an ordinal network, and from this model compute an intuitive entropic measure of transitional complexity. A discussion of the model parameter space in terms of traditional time delay embedding provides a theoretical basis for our multiscale approach. As an ancillary discussion, we address the practical issue of node aliasing and how it affects ordinal network models of continuous systems built from discretely sampled data, such as interbeat interval time series. This article is part of the themed issue ‘Mathematical methods in medicine: neuroscience, cardiology and pathology’.
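The ordinal-network construction described above can be sketched in a few lines: map the series to its sequence of ordinal patterns, treat consecutive patterns as edges of a directed graph, and compute an entropy over the transition structure. The specific measure below (mean entropy of each node's outgoing edge distribution) is a simple illustrative choice, not necessarily the authors' exact definition of transitional complexity:

```python
from collections import Counter, defaultdict
from math import log

def ordinal_transitions(x, m=3):
    """Ordinal-pattern sequence of x plus counts of consecutive-pattern
    transitions, i.e. the weighted edges of the ordinal network."""
    pats = []
    for i in range(len(x) - m + 1):
        w = x[i:i + m]
        pats.append(tuple(sorted(range(m), key=lambda k: w[k])))
    edges = Counter(zip(pats, pats[1:]))
    return pats, edges

def mean_outlink_entropy(pats, edges):
    """Average Shannon entropy of each node's outgoing edge distribution:
    0 for fully deterministic pattern transitions, larger when many
    successor patterns are possible from each state."""
    out = defaultdict(Counter)
    for (a, b), c in edges.items():
        out[a][b] += c
    entropies = []
    for nbrs in out.values():
        total = sum(nbrs.values())
        entropies.append(-sum((c / total) * log(c / total)
                              for c in nbrs.values()))
    return sum(entropies) / len(entropies)
```

A monotone series yields a single self-looping node and zero transitional entropy, while series whose patterns have several possible successors score higher; multiscale variants would recompute this over coarse-grained or delayed versions of the series.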

