The intrinsic predictability of ecological time series and its potential to guide forecasting

2018 ◽  
Author(s):  
Frank Pennekamp ◽  
Alison C. Iles ◽  
Joshua Garland ◽  
Georgina Brennan ◽  
Ulrich Brose ◽  
...  

Abstract
Successfully predicting the future states of systems that are complex, stochastic and potentially chaotic is a major challenge. Model forecasting error (FE) is the usual measure of success; however, model predictions provide no insights into the potential for improvement. In short, the realized predictability of a specific model is uninformative about whether the system is inherently predictable or whether the chosen model is a poor match for the system and our observations thereof. Ideally, model proficiency would be judged with respect to the system's intrinsic predictability – the highest achievable predictability given the degree to which system dynamics are the result of deterministic v. stochastic processes. Intrinsic predictability may be quantified with permutation entropy (PE), a model-free, information-theoretic measure of the complexity of a time series. By means of simulations we show that a correlation exists between estimated PE and FE and show how stochasticity, process error, and chaotic dynamics affect the relationship. This relationship is verified for a dataset of 461 empirical ecological time series. We show how deviations from the expected PE-FE relationship are related to covariates of data quality and the nonlinearity of ecological dynamics. These results demonstrate a theoretically grounded basis for a model-free evaluation of a system's intrinsic predictability. Identifying the gap between the intrinsic and realized predictability of time series will enable researchers to understand whether forecasting proficiency is limited by the quality and quantity of their data or the ability of the chosen forecasting model to explain the data. Intrinsic predictability also provides a model-free baseline of forecasting proficiency against which modeling efforts can be evaluated.

Glossary
Active information: The amount of information that is available to forecasting models (redundant information minus lost information; Fig. 1).
Forecasting error (FE): A measure of the discrepancy between a model's forecasts and the observed dynamics of a system. Common measures of forecast error are root mean squared error and mean absolute error.
Entropy: Measures the average amount of information in the outcome of a stochastic process.
Information: Any entity that provides answers and resolves uncertainty about a process. When information is calculated using logarithms to the base two (i.e. information in bits), it is the minimum number of yes/no questions required, on average, to determine the identity of the symbol (Jost 2006). The information in an observation consists of information inherited from the past (redundant information), and of new information.
Intrinsic predictability: The maximum achievable predictability of a system (Beckage et al. 2011).
Lost information: The part of the redundant information lost due to measurement or sampling error, or transformations of the data (Fig. 1).
New information, Shannon entropy rate: The Shannon entropy rate quantifies the average amount of information per observation in a time series that is unrelated to the past, i.e., the new information (Fig. 1).
Nonlinearity: When the deterministic processes governing system dynamics depend on the state of the system.
Permutation entropy (PE): A measure of the complexity of a time series (Bandt & Pompe, 2002) that is negatively correlated with a system's predictability (Garland et al. 2015). Permutation entropy quantifies the combined new and lost information. PE is scaled to range between a minimum of 0 and a maximum of 1.
Realized predictability: The achieved predictability of a system from a given forecasting model.
Redundant information: The information inherited from the past, and thus the maximum amount of information available for use in forecasting (Fig. 1).
Symbols, words, permutations: Symbols are simply the smallest unit in a formal language, such as the letters in the English alphabet, i.e., {"A", "B", …, "Z"}. In information theory the alphabet is more abstract, such as elements in the set {"up", "down"} or {"1", "2", "3"}. Words of length m refer to concatenations of the symbols (e.g., up-down-down) in a set. Permutations are the possible orderings of symbols in a set. In this manuscript, the words are the permutations that arise from the numerical ordering of m data points in a time series.
Weighted permutation entropy (WPE): A modification of permutation entropy (Fadlallah et al., 2013) that distinguishes between small-scale, noise-driven variation and large-scale, system-driven variation by considering the magnitudes of changes in addition to the rank-order patterns of PE.
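To make the PE and WPE definitions above concrete, the following is a minimal sketch of both measures, following Bandt & Pompe (2002) and Fadlallah et al. (2013). The word length m and time delay tau are illustrative analysis choices, not values taken from the paper.

```python
# Minimal (weighted) permutation entropy for a 1-D time series.
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, tau=1, weighted=False):
    """Normalized (weighted) permutation entropy, scaled to [0, 1]."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    if n <= 0:
        raise ValueError("time series too short for chosen m and tau")
    # Build the m-point words and map each to its rank-order permutation.
    words = np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(n)])
    perms = np.argsort(words, axis=1)
    # Weight each word by its variance (WPE) or uniformly (plain PE).
    w = words.var(axis=1) if weighted else np.ones(n)
    # Accumulate the (weighted) frequency of each observed permutation.
    freq = {}
    for perm, wi in zip(map(tuple, perms), w):
        freq[perm] = freq.get(perm, 0.0) + wi
    p = np.array(list(freq.values())) / w.sum()
    # Shannon entropy of the permutation distribution, normalized by log2(m!).
    return -np.sum(p * np.log2(p)) / np.log2(factorial(m))

# White noise (no redundant information) gives PE near 1;
# a noiseless deterministic signal gives a much lower value.
rng = np.random.default_rng(0)
print(permutation_entropy(rng.normal(size=1000)))          # ~1.0
print(permutation_entropy(np.sin(0.1 * np.arange(1000))))  # << 1
```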


2012 ◽  
Vol 10 (02) ◽  
pp. 1250022 ◽  
Author(s):  
GUO-QIANG HUANG ◽  
CUI-LAN LUO

Two schemes for controlled dense coding with a one-dimensional four-particle cluster state are investigated. In this protocol, the supervisor (Cliff) can control the channel and the average amount of information transmitted from the sender (Alice) to the receiver (Bob) by adjusting the local measurement angle θ. It is shown that the two schemes yield distinct results for the average amount of information.



2021 ◽  
Author(s):  
Aniket Chakravorty ◽  
Shyam Sundar Kundu ◽  
Penumetcha Lakshmi Narasa Raju

There has been a noticeable increase in the application of artificial intelligence (AI) algorithms in various areas in the recent past. One such area is the prediction of rainfall over a region, an application that has seen crucial advancement with the use of deep sequential learning algorithms. This new approach to rainfall prediction has also helped increase the utilization of satellite data for prediction. Because AI-based prediction algorithms are data-driven, the characteristics of the data dominate the accuracy of the prediction. One such characteristic is the information content of the data, which can be divided into redundant information (information about past states contained in the current state) and new information. The performance of AI-based rainfall prediction depends on the amount of redundant information present in the training data: the more redundant information (and thus the less new information), the more accurate the prediction. Various entropy-based measures have been used to quantify the new information content in data, such as permutation entropy, sample entropy, and wavelet entropy. This study uses a new measure called the Wavelet Entropy Energy Measure (WEEM). One advantage of the WEEM is that it considers the dynamics of the process spread across different time scales, which other information measures do not consider explicitly. Since the dynamics of rainfall are multi-scalar in nature, the WEEM is a suitable measure for it. The main goal of this study is to find out how much new information is generated by the INSAT-3D and IMERG rainfall products at each time step over the North Eastern Region of India, which will dictate the suitability of the two rainfall products for AI-based rainfall prediction.
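The abstract does not give the WEEM formula, so the following is a generic wavelet-energy entropy in the same spirit (Shannon entropy of the relative wavelet energy across decomposition scales), not the authors' exact measure. The wavelet family ('db4') and decomposition level are illustrative assumptions.

```python
# Generic wavelet-energy entropy: how evenly is signal energy spread
# across time scales? (Illustrative; not the paper's exact WEEM.)
import numpy as np
import pywt

def wavelet_energy_entropy(x, wavelet="db4", level=5):
    # Multilevel discrete wavelet decomposition: one approximation band
    # plus `level` detail bands, each capturing a different time scale.
    coeffs = pywt.wavedec(np.asarray(x, dtype=float), wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()          # relative energy per scale
    p = p[p > 0]
    # High entropy: energy spread over many scales (new information
    # dominates); low entropy: energy concentrated at a few scales.
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(1)
print(wavelet_energy_entropy(rng.normal(size=2048)))           # high
print(wavelet_energy_entropy(np.sin(0.05 * np.arange(2048))))  # low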





2009 ◽  
Vol 07 (01) ◽  
pp. 365-372 ◽  
Author(s):  
CUI-LAN LUO ◽  
XIAO-FANG OUYANG

A scheme for realizing controlled dense coding via generalized measurement is presented. In this protocol, the supervisor can control the entanglement between the sender and the receiver, and thereby the average amount of information transmitted from the sender to the receiver, simply by adjusting the measurement angle θ. It is shown that when the quantum channel is a GHZ state, the entanglement and the average amount of information are determined by the supervisor's measurement angle θ alone, whereas when the quantum channel is a GHZ-class state, they are determined not only by the supervisor's measurement angle θ but also by the minimal coefficient of the GHZ-class state.
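A numerical sketch of how a supervisor's measurement angle can control the average information in such GHZ-based schemes. The assumption here is that after the supervisor's measurement, the sender and receiver share the pure state cos(θ)|00⟩ + sin(θ)|11⟩, and that the average information per channel use follows the standard dense-coding result 1 + E for pure states, with E the entanglement entropy; the paper's exact expressions, particularly for GHZ-class channels, may differ.

```python
# How the supervisor's angle theta controls the dense-coding information
# (under the standard 1 + E capacity assumption for pure states).
import numpy as np

def h2(p):
    """Binary Shannon entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def average_information(theta):
    entanglement = h2(np.cos(theta) ** 2)  # E of cos(t)|00> + sin(t)|11>
    return 1.0 + entanglement              # bits per channel use

for theta in (0.0, np.pi / 8, np.pi / 4):
    print(f"theta = {theta:.3f} rad -> {average_information(theta):.3f} bits")
# theta = 0 leaves no entanglement for the sender and receiver (1 bit);
# theta = pi/4 recovers the full 2 bits of standard dense coding.
```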



1994 ◽  
Vol 344 (1310) ◽  
pp. 327-327

The technological revolution in molecular biology over the past 10-15 years has opened vast new horizons for exploration. It has also dramatically increased the amount of information available on organisms at the molecular level. The interpretation of this new information, its management, and the design of the experiments that lead to it have in turn raised challenging problems. Often, mathematical and statistical ideas have been indispensable to progress. As the papers in this volume show, the interaction is not confined to one particular area of the mathematical sciences. In some settings, existing results have been ideally suited to the biological problem. In others, progress has itself stimulated important mathematical advances.



2009 ◽  
Vol 07 (06) ◽  
pp. 1241-1248 ◽  
Author(s):  
GUO-QIANG HUANG ◽  
CUI-LAN LUO

Two schemes for controlled dense coding with an extended GHZ state are investigated. In these protocols, the supervisor (Cliff) can control the average amount of information transmitted from the sender (Alice) to the receiver (Bob) simply by adjusting his local measurement angle θ. It is shown that the two schemes yield distinct results for the average amount of information.



Author(s):  
Philip Surman

This chapter covers the work carried out on head-tracked 3-D displays over the past ten years that has been funded by the European Union. These displays are glasses-free (auto-stereoscopic) and serve several viewers who are able to move freely over a large viewing region. The amount of information that is displayed is kept to a minimum with the use of head-position tracking, which allows images to be placed in the viewing field only where the viewers are situated, so that redundant information is not directed to unused viewing regions. In order to put the work into perspective, a historical background and a brief description of other display types are given first.



Entropy ◽  
2020 ◽  
Vol 22 (1) ◽  
pp. 89 ◽  
Author(s):  
Mirna Ponce-Flores ◽  
Juan Frausto-Solís ◽  
Guillermo Santamaría-Bonfil ◽  
Joaquín Pérez-Ortega ◽  
Juan J. González-Barbosa

Entropy is a key concept in the characterization of the uncertainty of any given signal, and its extensions, such as Spectral Entropy and Permutation Entropy, have been used to measure the complexity of time series. However, these measures are subject to the discretization employed to study the states of the system. Identifying the relationship between complexity measures and the expected performance of the four selected forecasting methods that participated in the M4 Competition allows one to decide, in advance, which algorithm is adequate. Therefore, in this paper, we identify the relationships between an entropy-based complexity framework and the forecasting error of four selected methods (Smyl, Theta, ARIMA, and ETS). Moreover, we present an extension of the framework based on the Emergence, Self-Organization, and Complexity paradigm. Experiments with both synthetic and M4 Competition time series show that the feature space induced by the complexity measures visually constrains the performance of the forecasting methods to specific regions: where the logarithm of the error metric is poorest, the Complexity based on emergence and self-organization is maximal.
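A minimal sketch of the Emergence / Self-Organization / Complexity triple in the sense commonly attributed to Gershenson & Fernández: with E the normalized Shannon entropy of a discretized series, S = 1 − E and C = 4·E·S, so C peaks when order and disorder balance. The bin count is an assumption, and the paper's framework extension may compute these features differently.

```python
# Emergence (E), Self-Organization (S), and Complexity (C) features
# from the normalized Shannon entropy of a discretized time series.
import numpy as np

def esoc_features(x, bins=10):
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    emergence = -np.sum(p * np.log2(p)) / np.log2(bins)  # E in [0, 1]
    self_org = 1.0 - emergence                           # S = 1 - E
    complexity = 4.0 * emergence * self_org              # C, max at E = 0.5
    return emergence, self_org, complexity

rng = np.random.default_rng(2)
print(esoc_features(rng.uniform(size=5000)))  # disorder: E ~ 1, C ~ 0
print(esoc_features(np.full(5000, 0.5)))      # order:    E = 0, C = 0
print(esoc_features(rng.normal(size=5000)))   # in between: larger C
```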



2021 ◽  
Author(s):  
Elizabeth Bradley ◽  
Michael Neuder ◽  
Joshua Garland ◽  
James White ◽  
Edward Dlugokencky

While it is tempting in experimental practice to seek as high a data rate as possible, oversampling can become an issue if one takes measurements too densely. These effects can take many forms, some of which are easy to detect: e.g., when the data sequence contains multiple copies of the same measured value. In other situations, as when there is mixing—in the measurement apparatus and/or the system itself—oversampling effects can be harder to detect. We propose a novel, model-free technique to detect local mixing in time series using an information-theoretic technique called permutation entropy. By varying the temporal resolution of the calculation and analyzing the patterns in the results, we can determine whether the data are mixed locally, and on what scale. This can be used by practitioners to choose appropriate lower bounds on scales at which to measure or report data. After validating this technique on several synthetic examples, we demonstrate its effectiveness on data from a chemistry experiment, methane records from Mauna Loa, and an Antarctic ice core.
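One way to realize the idea of varying temporal resolution is to recompute permutation entropy while increasing the delay tau between the points of each ordinal pattern: for oversampled or locally mixed data, PE is depressed at small tau and climbs toward a plateau once tau exceeds the mixing scale. The word length m = 4, the tau range, and the synthetic "mixing" (a moving-average filter) are assumptions; the authors' exact procedure may differ.

```python
# Sweep the ordinal-pattern delay tau to probe the local-mixing scale.
import numpy as np
from math import factorial

def pe(x, m=4, tau=1):
    """Normalized permutation entropy at delay tau."""
    n = len(x) - (m - 1) * tau
    perms = [tuple(np.argsort(x[i:i + (m - 1) * tau + 1:tau]))
             for i in range(n)]
    _, counts = np.unique(perms, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p)) / np.log2(factorial(m))

# Oversample a slow signal, then smooth it to mimic local mixing.
rng = np.random.default_rng(3)
t = np.arange(20000)
x = np.sin(2 * np.pi * t / 500) + 0.2 * rng.normal(size=t.size)
x = np.convolve(x, np.ones(25) / 25, mode="valid")  # mixing over ~25 steps

for tau in (1, 5, 25, 100):
    print(f"tau = {tau:4d}  PE = {pe(x, tau=tau):.3f}")
# PE stabilizing only for tau beyond the smoothing window suggests the
# data should be measured or reported no finer than that scale.
```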



2021 ◽  
Vol 15 (3) ◽  
pp. 4-14
Author(s):  
N.M. Akhpasheva

Statement of the problem. The article is devoted to the translation tradition of the Khakass heroic epic, which has existed since the second half of the 19th century and can be traced to the end of the first decade of the 21st century. Over the past 10 years, new information about the facts and texts of translations has appeared. This information has been published in various places, and its connection with the above-mentioned translation tradition has not been clearly articulated. Establishing the genesis and overall result of the translation tradition of the Khakass heroic epic is relevant to the history and development of intercultural relations in Siberia and in Russia as a whole. The purpose of the article is to present new information about the translation tradition of the Khakass heroic epic in connection with the overall result of translations, and to determine its significance against the background of the already known body of information. Conclusion. The translation tradition of the Khakass heroic epic continues to be relevant as a multifaceted means of intercultural communication.


