Teaching Ordinal Patterns to a Computer: Efficient Encoding Algorithms Based on the Lehmer Code

Entropy ◽  
2019 ◽  
Vol 21 (10) ◽  
pp. 1023 ◽  
Author(s):  
Sebastian Berger ◽  
Andrii Kravtsiv ◽  
Gerhard Schneider ◽  
Denis Jordan

Ordinal patterns are the common basis of various techniques used in the study of dynamical systems and nonlinear time series analysis. The present article focusses on the computational problem of turning time series into sequences of ordinal patterns. In a first step, a numerical encoding scheme for ordinal patterns is proposed. Utilising the classical Lehmer code, it enumerates ordinal patterns by consecutive non-negative integers, starting from zero. This compact representation considerably simplifies working with ordinal patterns in the digital domain. Subsequently, three algorithms for the efficient extraction of ordinal patterns from time series are discussed, including previously published approaches that can be adapted to the Lehmer code. The respective strengths and weaknesses of those algorithms are discussed, and further substantiated by benchmark results. One of the algorithms stands out in terms of scalability: its run-time increases linearly with both the pattern order and the sequence length, while its memory footprint is practically negligible. These properties enable the study of high-dimensional pattern spaces at low computational cost. In summary, the tools described herein may improve the efficiency of virtually any ordinal pattern-based analysis method, among them quantitative measures like permutation entropy and symbolic transfer entropy, but also techniques like forbidden pattern identification. Moreover, the concepts presented may allow for putting ideas into practice that up to now had been hindered by computational burden. To enable smooth evaluation, a function library written in the C programming language, as well as language bindings and native implementations for various numerical computation environments are provided in the supplements.
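The abstract's encoding idea can be sketched as follows: each length-d window is mapped to its Lehmer code, i.e. for each position one counts how many later values are smaller, and these digits weighted by factorials yield a unique integer in 0 … d!−1. The function name and the direct O(d²) counting below are illustrative only; the paper's C library uses optimised incremental algorithms.

```python
from math import factorial

def ordinal_pattern_index(window):
    """Encode the ordinal pattern of `window` as an integer in
    0 .. d!-1 via the Lehmer code (illustrative sketch)."""
    d = len(window)
    index = 0
    for i in range(d - 1):
        # Lehmer digit: number of later values smaller than window[i]
        digit = sum(1 for j in range(i + 1, d) if window[j] < window[i])
        index += digit * factorial(d - 1 - i)
    return index

# The fully increasing pattern maps to 0, the fully decreasing one to d!-1:
# ordinal_pattern_index([1, 2, 3]) -> 0
# ordinal_pattern_index([3, 2, 1]) -> 5
```

Sliding this function over consecutive windows of a time series produces the sequence of pattern indices that downstream measures such as permutation entropy consume.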


Entropy ◽  
2019 ◽  
Vol 21 (10) ◽  
pp. 1013 ◽  
Author(s):  
David Cuesta-Frau ◽  
Antonio Molina-Picó ◽  
Borja Vargas ◽  
Paula González

Many measures to quantify the nonlinear dynamics of a time series are based on estimating the probability of certain features from their relative frequencies. Once a normalised histogram of events is computed, a single result is usually derived. This process can be broadly viewed as a nonlinear mapping ℝⁿ → ℝ, where n is the number of bins in the histogram. However, this mapping might entail a loss of information that could be critical for time series classification purposes. In this respect, the present study assessed such impact using permutation entropy (PE) and a diverse set of time series. We first devised a method of generating synthetic sequences of ordinal patterns using hidden Markov models. This way, it was possible to control the histogram distribution and quantify its influence on classification results. Next, real body temperature records were also used to illustrate the same phenomenon. The experimental results confirmed the improved classification accuracy achieved using raw histogram data instead of the PE final values. Thus, this study can provide very valuable guidance for the improvement of the discriminating capability not only of PE, but of many similar histogram-based measures.
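The ℝⁿ → ℝ collapse described above can be made concrete: the histogram of ordinal-pattern frequencies is the n-dimensional feature vector, and permutation entropy is the single scalar derived from it. A minimal sketch (function names are illustrative; this is the standard Bandt–Pompe construction, not the authors' HMM-based generator):

```python
from collections import Counter
from math import log

def pattern_histogram(series, d=3):
    """Relative frequencies of order-d ordinal patterns, keyed by
    the permutation of indices that sorts each window."""
    counts = Counter(
        tuple(sorted(range(d), key=lambda k: series[i + k]))
        for i in range(len(series) - d + 1)
    )
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

def permutation_entropy(hist):
    """Collapse the histogram to one number: the R^n -> R mapping
    whose information loss the study investigates."""
    return -sum(p * log(p) for p in hist.values() if p > 0)
```

For classification, the study's point is that feeding the full `pattern_histogram` output to a classifier can discriminate classes that the single `permutation_entropy` value cannot.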



Entropy ◽  
2020 ◽  
Vol 22 (5) ◽  
pp. 494
Author(s):  
David Cuesta-Frau

Despite its widely tested and proven usefulness, there is still room for improvement in the basic permutation entropy (PE) algorithm, as several subsequent studies have demonstrated in recent years. Some of these new methods try to address the well-known PE weaknesses, such as its focus only on ordinal and not on amplitude information, and the possible detrimental impact of equal values found in subsequences. Other new methods address less specific weaknesses, such as the PE results’ dependence on input parameter values, a common problem found in many entropy calculation methods. The lack of discriminating power among classes in some cases is also a generic problem when entropy measures are used for data series classification. This last problem is the one specifically addressed in the present study. Toward that purpose, the classification performance of the standard PE method was first assessed by conducting several time series classification tests over a varied and diverse set of data. Then, this performance was reassessed using a new Shannon Entropy normalisation scheme proposed in this paper: divide the relative frequencies in PE by the number of different ordinal patterns actually found in the time series, instead of by the theoretically expected number. According to the classification accuracy obtained, this last approach exhibited a higher class discriminating power. It was capable of finding significant differences in six out of seven experimental datasets—whereas the standard PE method only did in four—and it also had better classification accuracy. It can be concluded that using the additional information provided by the number of forbidden/found patterns, it is possible to achieve a higher discriminating power than using the classical PE normalisation method. The resulting algorithm is also very similar to that of PE and very easy to implement.
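The abstract describes its normalisation scheme only at a high level. One plausible reading, sketched below, is to normalise the Shannon entropy by the logarithm of the number of ordinal patterns actually found in the series, rather than by the logarithm of the theoretical maximum d!; the function name, parameters, and this interpretation are assumptions, not the authors' reference implementation.

```python
from collections import Counter
from math import log, factorial

def normalised_pe(series, d=3, by_found=True):
    """Permutation entropy normalised either by the number of
    patterns actually found (by_found=True, the scheme assessed
    here under one reading of the abstract) or by the theoretical
    count d! (classical normalisation)."""
    counts = Counter(
        tuple(sorted(range(d), key=lambda k: series[i + k]))
        for i in range(len(series) - d + 1)
    )
    total = sum(counts.values())
    h = -sum((c / total) * log(c / total) for c in counts.values())
    m = len(counts) if by_found else factorial(d)
    return h / log(m) if m > 1 else 0.0
```

When forbidden patterns are present, `len(counts) < d!`, so the found-patterns normalisation yields a larger value than the classical one and thereby injects the forbidden-pattern information the abstract credits for the improved discriminating power.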



2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Massimiliano Zanin ◽  
Felipe Olivares

One of the most important aspects of time series is their degree of stochasticity vs. chaoticity. Since the discovery of chaotic maps, many algorithms have been proposed to discriminate between these two alternatives and assess their prevalence in real-world time series. Approaches based on the combination of “permutation patterns” with different metrics provide a more complete picture of a time series’ nature, and are especially useful to tackle pathological chaotic maps. Here, we provide a review of such approaches, their theoretical foundations, and their application to discrete time series and real-world problems. We compare their performance using a set of representative noisy chaotic maps, evaluate their applicability through their respective computational cost, and discuss their limitations.



2018 ◽  
Vol 28 (12) ◽  
pp. 123111 ◽  
Author(s):  
J. H. Martínez ◽  
J. L. Herrera-Diestra ◽  
M. Chavez


2021 ◽  
Vol 5 (1) ◽  
pp. 51
Author(s):  
Enriqueta Vercher ◽  
Abel Rubio ◽  
José D. Bermúdez

We present a new forecasting scheme based on the credibility distribution of fuzzy events. This approach allows us to build prediction intervals using the first differences of the time series data. Additionally, the credibility expected value enables us to estimate the k-step-ahead pointwise forecasts. We analyze the coverage of the prediction intervals and the accuracy of pointwise forecasts using different credibility approaches based on the upper differences. The comparative results were obtained working with yearly time series from the M4 Competition. The performance and computational cost of our proposal, compared with automatic forecasting procedures, are presented.



2019 ◽  
Vol 513 ◽  
pp. 635-643 ◽  
Author(s):  
Francisco Traversaro ◽  
Nicolás Ciarrocchi ◽  
Florencia Pollo Cattaneo ◽  
Francisco Redelico


2017 ◽  
Author(s):  
Federica Pardini ◽  
Mike Burton ◽  
Fabio Arzilli ◽  
Giuseppe La Spina ◽  
Margherita Polacci

Abstract. Quantifying time-series of sulphur dioxide (SO2) emissions during explosive eruptions provides insight into volcanic processes, assists in volcanic hazard mitigation, and permits quantification of the climatic impact of major eruptions. While volcanic SO2 is routinely detected from space during eruptions, the retrieval of plume injection height and SO2 flux time-series remains challenging. Here we present a new numerical method based on forward- and backward-trajectory analyses which enables such time-series to be robustly determined. The method is applied to satellite images of volcanic eruption clouds through the integration of the HYSPLIT software with custom-designed Python routines in a fully automated manner. Plume injection height and SO2 flux time-series are computed with a period of ~10 minutes at low computational cost. Using this technique, we investigated the SO2 emissions from two sub-Plinian eruptions of Calbuco, Chile, produced in April 2015. We found a mean injection height above the vent of ~15 km for the two eruptions, with overshooting tops reaching ~20 km. We calculated a total of 300 ± 46 kt of SO2 released almost equally during both events, with 160 ± 30 kt produced by the first event and 140 ± 35 kt by the second. The retrieved SO2 flux time-series show an intense gas release during the first eruption (average flux of 2560 kt day−1), while a lower SO2 flux profile was seen for the second (average flux 560 kt day−1), suggesting that the first eruption was richer in SO2. This result is exemplified by plotting SO2 flux against retrieved plume height above the vent, revealing distinct trends for the two events. We propose that a pre-eruptive exsolved volatile phase was present prior to the first event, which could have led to the necessary overpressure to trigger the eruption. The second eruption, instead, was mainly driven by syneruptive degassing.
This hypothesis is supported by melt inclusion measurements of sulfur concentrations in plagioclase phenocrysts and groundmass glass of tephra samples through electron microprobe analysis. This work demonstrates that detailed interpretations of sub-surface magmatic processes during eruptions are possible using satellite SO2 data. Quantitative comparisons of high temporal resolution plume height and SO2 flux time-series offer a powerful tool to examine processes triggering and controlling eruptions. These novel tools open a new frontier in space-based volcanological research, and will be of great value when applied to remote, poorly monitored volcanoes, and to major eruptions that can have regional and global climate implications through, for example, influencing ozone depletion in the stratosphere and light scattering from stratospheric aerosols.




