Variable-lag Granger Causality and Transfer Entropy for Time Series Analysis

2021 ◽  
Vol 15 (4) ◽  
pp. 1-30
Author(s):  
Chainarong Amornbunchornvej ◽  
Elena Zheleva ◽  
Tanya Berger-Wolf

Granger causality is a fundamental technique for causal inference in time series data, commonly used in the social and biological sciences. Typical operationalizations of Granger causality make a strong assumption that every time point of the effect time series is influenced by a combination of other time series with a fixed time delay. The assumption of a fixed time delay also exists in Transfer Entropy, which is considered to be a non-linear version of Granger causality. However, the assumption of a fixed time delay does not hold in many applications, such as collective behavior, financial markets, and many natural phenomena. To address this issue, we develop Variable-lag Granger causality and Variable-lag Transfer Entropy, generalizations of both Granger causality and Transfer Entropy that relax the assumption of a fixed time delay and allow causes to influence effects with arbitrary time delays. In addition, we propose methods for inferring both Variable-lag Granger causality and Transfer Entropy relations. In our approaches, we utilize an optimal warping path of Dynamic Time Warping to infer variable-lag causal relations. We demonstrate our approaches on an application for studying coordinated collective behavior and other real-world causal-inference datasets and show that our proposed approaches perform better than several existing methods on both simulated and real-world datasets. Our approaches can be applied in any domain of time series analysis. The software for this work is available as the R CRAN package VLTimeCausality.
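The central mechanism, reading a time-varying lag off the optimal warping path of Dynamic Time Warping, can be sketched as follows. This is a minimal illustration on synthetic data, not the VLTimeCausality implementation; the random-walk cause series and the piecewise-constant lag are invented for the example.

```python
import numpy as np

def dtw_path(x, y):
    """Dynamic Time Warping: return the optimal warping path between x and y."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (x[i - 1] - y[j - 1]) ** 2
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from the corner to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

rng = np.random.default_rng(0)
cause = np.cumsum(rng.normal(size=200))      # a random-walk "cause" series
lag = np.where(np.arange(200) < 100, 5, 12)  # lag switches from 5 to 12 mid-series
effect = cause[np.maximum(np.arange(200) - lag, 0)]  # effect follows with variable lag

path = dtw_path(effect, cause)
# Per-point inferred lag = effect index minus matched cause index.
lags = np.array([i - j for i, j in path])
early = np.median(lags[[k for k, (i, _) in enumerate(path) if 20 <= i < 90]])
late = np.median(lags[[k for k, (i, _) in enumerate(path) if 120 <= i < 190]])
print(early, late)  # the inferred lag grows after the switch point
```

Once the variable lag is inferred this way, the aligned cause series (rather than a fixed-shift copy) enters the usual Granger regression or Transfer Entropy estimate.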

Author(s):  
Ray Huffaker ◽  
Marco Bittelli ◽  
Rodolfo Rosa

Nonlinear Time Series Analysis (NLTS) provides a mathematically rigorous collection of techniques designed to reconstruct real-world system dynamics from time series data on a single variable or multiple causally-related variables. NLTS facilitates scientific inquiry that emphasizes strong supportive evidence, well-conducted and thorough inquiry, and realism. Data provide an essential evidentiary portal to a reality to which we have only limited access. Random-appearing data do not prove that the underlying dynamic processes are subject to exogenous inherently-random forces. The possibility exists that observed volatility is generated by inherently-unstable, deterministic, and nonlinear real-world dynamic systems. NLTS allows the data to speak regarding which type of system dynamics generated them. It is capable of detecting linear as well as nonlinear deterministic system dynamics, and of diagnosing the presence of linear stochastic dynamics. Our objective is to use NLTS to uncover the structure best corresponding to reality, whether it be linear, nonlinear, deterministic, or stochastic. Accurate diagnosis of real-world dynamics from observed data is crucial to developing valid theory and to formulating effective public policy based on that theory.
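The workhorse step that lets random-appearing data reveal deterministic structure is time-delay embedding. A minimal sketch, using the logistic map as a stand-in example (the book's own case studies are not reproduced here): the raw series looks like noise, but its two-dimensional embedding collapses exactly onto the map's parabola.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding: rows are [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Deterministic chaos: the logistic map at r = 4 looks random in the raw series.
x = np.empty(1000)
x[0] = 0.2
for t in range(999):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

# Its delay embedding, however, lies on the 1-D curve x' = 4x(1-x),
# revealing low-dimensional deterministic structure that white noise lacks.
emb = delay_embed(x, dim=2, tau=1)
residual = np.max(np.abs(emb[:, 1] - 4.0 * emb[:, 0] * (1.0 - emb[:, 0])))
print(residual)  # 0: every embedded point lies on the deterministic map
```

For stochastic data the embedded points would fill a cloud instead of tracing a curve, which is the diagnostic contrast NLTS exploits.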


2011 ◽  
Vol 3 (2) ◽  
pp. 11-15
Author(s):  
Seng Hansun

Recently, many soft computing methods have been used in time series analysis. One of these methods is the fuzzy logic system. In this paper, we implement a fuzzy logic system to predict non-stationary time series data. The data we use here are the Mackey-Glass chaotic time series. We use MATLAB to predict the time series data, which have been divided into four groups of input-output pairs; these groups are then used as the input variables of the fuzzy logic system. Two scenarios are used in this paper: the first uses seven fuzzy sets, and the second uses fifteen fuzzy sets. The results show that the fuzzy system with fifteen fuzzy sets gives better forecasting results than the fuzzy system with seven fuzzy sets. Index Terms—forecasting, fuzzy logic, Mackey-Glass chaotic, MATLAB, time series analysis
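The benchmark series and the input-output pairs can be generated as below. This is a sketch in Python rather than the paper's MATLAB code, using the standard Mackey-Glass parameters (beta = 0.2, gamma = 0.1, n = 10, tau = 17) and a common choice of four lagged inputs; the exact lags and horizon the paper used are not stated in the abstract.

```python
import numpy as np

# Discrete Mackey-Glass series via a simple Euler step (dt = 1).
tau, beta, gamma, n_exp = 17, 0.2, 0.1, 10
N = 1200
x = np.full(N, 1.2)  # constant history before t = tau
for t in range(tau, N - 1):
    x[t + 1] = x[t] + beta * x[t - tau] / (1.0 + x[t - tau] ** n_exp) - gamma * x[t]

# Input-output pairs: four lagged inputs [x(t-18), x(t-12), x(t-6), x(t)]
# predicting x(t+6), a conventional setup for this benchmark.
lags, horizon = [18, 12, 6, 0], 6
rows = range(18, N - horizon)
X = np.array([[x[t - l] for l in lags] for t in rows])
y = np.array([x[t + horizon] for t in rows])
print(X.shape, y.shape)
```

Each row of X, together with its target y, would then be fuzzified into the seven or fifteen fuzzy sets the paper compares.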


2021 ◽  
Vol 13 (3) ◽  
pp. 1187
Author(s):  
Bokyong Shin ◽  
Mikko Rask

Online deliberation research has recently developed automated indicators to assess the deliberative quality of large volumes of user-generated online data. While most previous studies have developed indicators based on content analysis and network analysis, time-series data and associated methods have been studied less thoroughly. This article contributes to the literature by proposing indicators based on a combination of network analysis and time-series analysis, arguing that this combination will help monitor how online deliberation evolves. Based on Habermasian deliberative criteria, we develop six throughput indicators and demonstrate their application in the OmaStadi participatory budgeting project in Helsinki, Finland. The results show that these indicators yield intuitive figures and visualizations that facilitate collective intelligence about ongoing processes and ways to solve problems promptly.


2016 ◽  
Vol 50 (1) ◽  
pp. 41-57 ◽  
Author(s):  
Linghe Huang ◽  
Qinghua Zhu ◽  
Jia Tina Du ◽  
Baozhen Lee

Purpose – Wiki is a new form of information production and organization, which has become one of the most important knowledge resources. In recent years, with the increase of users in wikis, the “free rider” problem has become serious. In order to motivate editors to contribute more to a wiki system, it is important to fully understand their contribution behavior. The purpose of this paper is to explore the law of dynamic contribution behavior of editors in wikis. Design/methodology/approach – After developing a dynamic model of contribution behavior, the authors employed both metrological and clustering methods to process the time series data. The experimental data were collected from Baidu Baike, a renowned Chinese wiki system similar to Wikipedia. Findings – There are four categories of editors: “testers,” “dropouts,” “delayers” and “stickers.” Testers contribute the least content and stop contributing rapidly after editing a few articles. Dropouts stop contributing completely after editing a large amount of content. Delayers are editors who did not stop contributing during the observation time, but who may stop contributing in the near future. Stickers, who keep contributing and edit the most content, are the core editors. In addition, there are significant time-of-day and holiday effects on the number of editors’ contributions. Originality/value – By using the method of time series analysis, some new characteristics of editors and editor types were found. Compared with former studies, this research also had a larger sample. Therefore, the results are more scientific and representative and can help managers to better optimize wiki systems and formulate incentive strategies for editors.
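The clustering step can be illustrated as follows. This is a hypothetical sketch on synthetic weekly-edit trajectories shaped like the four editor types the paper reports; the Baidu Baike data, the paper's actual features, and its clustering method are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 30  # weeks of observation

def trajectory(kind):
    """Synthetic weekly-edit counts shaped like the four editor types."""
    t = np.arange(T)
    if kind == "tester":
        base = np.where(t < 3, 2.0, 0.0)       # few edits, stops quickly
    elif kind == "dropout":
        base = np.where(t < 15, 8.0, 0.0)      # heavy contribution, then stops
    elif kind == "delayer":
        base = 8.0 * np.exp(-t / 10.0)         # slowly tapering off
    else:
        base = np.full(T, 8.0)                 # sticker: sustained contribution
    return np.maximum(base + rng.normal(0, 0.5, T), 0.0)

data = np.array([trajectory(k)
                 for k in ["tester", "dropout", "delayer", "sticker"]
                 for _ in range(25)])

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means on whole trajectories (Euclidean distance)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == c].mean(0) if np.any(labels == c)
                            else centers[c] for c in range(k)])
    return labels

labels = kmeans(data, k=4)
print(np.bincount(labels, minlength=4))
```

With trajectories this well separated, the cluster centroids recover the four behavioral shapes directly from the time series.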


2013 ◽  
Vol 10 (83) ◽  
pp. 20130048 ◽  
Author(s):  
Ben D. Fulcher ◽  
Max A. Little ◽  
Nick S. Jones

The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.
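The reduced representations at the heart of this approach are feature vectors: each series is summarized by the outputs of many analysis operations, and series (or methods) are then compared in that feature space. A toy sketch with just three hand-picked features standing in for the thousands of operations the paper analyses:

```python
import numpy as np

def features(x):
    """A tiny feature vector standing in for a large library of operations."""
    x = (x - x.mean()) / x.std()
    lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]        # autocorrelation at lag 1
    skew = np.mean(x ** 3)                          # distribution asymmetry
    zero_cross = np.mean(np.diff(np.sign(x)) != 0)  # oscillation rate
    return np.array([lag1, skew, zero_cross])

rng = np.random.default_rng(2)
noise = rng.normal(size=500)            # white noise: no memory, fast oscillation
walk = np.cumsum(rng.normal(size=500))  # random walk: strong memory, slow drift

f_noise, f_walk = features(noise), features(walk)
print(f_noise, f_walk)
```

Even this three-number summary separates the two dynamical classes (high versus near-zero lag-1 autocorrelation, low versus high zero-crossing rate); scaling the same idea to thousands of features and series is what enables the automated organization and method retrieval described above.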


Author(s):  
Seng Hansun ◽  
Subanar Subanar

Recently, many soft computing methods have been used and implemented in time series analysis. One of these methods is the fuzzy hybrid model, which has been designed and developed to improve the accuracy of time series prediction. Popoola developed a fuzzy hybrid model that uses the wavelet transform as a pre-processing tool, commonly known as the fuzzy-wavelet method. In this thesis, a new approach to the fuzzy-wavelet method is introduced. Whereas Popoola’s fuzzy-wavelet builds a fuzzy inference system for each decomposition component, the new approach needs only two fuzzy inference systems; in this way, the computation needed in time series analysis can be reduced. The research continues with new software that can analyze any given time series data using the chosen forecasting method. For comparison, three forecasting methods are implemented in the software: the conventional fuzzy method, Popoola’s fuzzy-wavelet, and the new fuzzy-wavelet approach. The software supports short-term forecasting (single-step forecasts) and long-term forecasting. It has some limitations: at most 300 data points can be predicted, at most 7 intervals can be built, and at most 10 transformation levels can be used. Furthermore, the accuracy and robustness of the proposed method are compared with those of the other forecasting methods, giving a brief description of the accuracy and robustness of the proposed method. Keywords— fuzzy, wavelet, time series, soft computing
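The wavelet pre-processing step splits a series into components that are modeled separately and then recombined. A minimal sketch using a one-level Haar transform (the thesis does not state which wavelet Popoola's method uses, so Haar is assumed here for simplicity):

```python
import numpy as np

def haar_decompose(x):
    """One level of the Haar wavelet transform: pairwise averages
    (approximation) and pairwise differences (detail)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert one Haar level exactly."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

t = np.linspace(0, 4 * np.pi, 256)
series = np.sin(t) + 0.1 * np.random.default_rng(3).normal(size=256)

approx, detail = haar_decompose(series)
err = np.max(np.abs(haar_reconstruct(approx, detail) - series))
print(len(approx), len(detail), err)  # reconstruction is exact up to rounding
```

Forecasting each component with its own fuzzy inference system and summing the component forecasts is the fuzzy-wavelet idea; collapsing the per-component systems down to two is the computational saving the new approach claims.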


2008 ◽  
Vol 88 (9) ◽  
pp. 1022-1033 ◽  
Author(s):  
Shohei Ohgi ◽  
Satoru Morita ◽  
Kek Khee Loo ◽  
Chihiro Mizuike

Background and Purpose Comparisons of spontaneous movements of premature infants with brain injuries and those without brain injuries can provide insights into normal and abnormal processes in the ontogeny of motor development. In this study, the characteristics of spontaneous upper-extremity movements of premature infants with brain injuries and those without brain injuries were examined with time series analysis. Subjects Participants were 7 premature infants with brain injuries and 7 matched, low-risk, premature infants at the age of 1 month after term. Methods A triaxial accelerometer was used to measure upper-extremity limb acceleration in 3-dimensional space. Acceleration signals were recorded from the right wrist when the infant was in an active, alert state and lying in the supine position. The recording time was 200 seconds. The acceleration signal was sampled at a rate of 200 Hz. The acceleration time series data were analyzed by nonlinear analysis as well as linear analysis. Results The nonlinear time series analysis indicated that spontaneous movements of premature infants have nonlinear, chaotic, dynamic characteristics. The movements of the infants with brain injuries were characterized by larger dimensionality, and they were more unstable and unpredictable than those of infants without brain injuries. Discussion and Conclusion As determined by nonlinear analysis, the spontaneous movements of the premature infants with brain injuries had the characteristics of increased disorganization compared with those of the infants without brain injuries. Infants with brain injuries may manifest problems with self-organization as a function of the coordination of subsystems. Physical therapists should be able to support interactions among the subsystems and promote self-organization of motor learning through the individualized provision of various sensorimotor experiences for infants.
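One family of nonlinear measures used on acceleration records of this kind quantifies regularity: more organized movement yields more repeating patterns. As a sketch, sample entropy is computed below on a predictable versus an unpredictable signal; the paper's specific nonlinear measures (e.g. its dimensionality estimate) are not named in the abstract, so sample entropy here is an illustrative stand-in, not the authors' exact method.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy: lower values indicate more regular, predictable dynamics."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def count_matches(length):
        # All templates of the given length, compared pairwise (Chebyshev distance).
        templ = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.max(np.abs(templ[:, None] - templ[None]), axis=2)
        return (np.sum(d <= r) - len(templ)) / 2  # exclude self-matches

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(4)
regular = np.sin(np.linspace(0, 20 * np.pi, 400))  # predictable oscillation
irregular = rng.normal(size=400)                    # unpredictable noise

print(sample_entropy(regular), sample_entropy(irregular))
```

The noise signal scores markedly higher, which mirrors the paper's finding that movements of infants with brain injuries were more unstable and unpredictable.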


1986 ◽  
Vol 2 (3) ◽  
pp. 331-349 ◽  
Author(s):  
John J. Beggs

This article proposes the use of spectral methods to pool cross-sectional replications (N) of time series data (T) for time series analysis. Spectral representations readily suggest a weighting scheme to pool the data. The asymptotically desirable properties of the resulting estimators seem to translate satisfactorily into samples as small as T = 25 with N = 5. Simulation results, Monte Carlo results, and an empirical example help confirm this finding. The article concludes that there are many empirical situations where spectral methods can be used where they were previously eschewed.
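Pooling in the spectral domain can be sketched as follows: estimate the periodogram of each replication and combine them across the cross-section. This toy version uses uniform weights on AR(1) replications with the article's small-sample sizes (T = 25, N = 5); the article itself derives the proper weighting scheme, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)
T, N = 25, 5  # short series, few cross-sectional replications

def ar1(T, phi=0.6):
    """One replication of the AR(1) process x_t = phi * x_{t-1} + e_t."""
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

reps = np.array([ar1(T) for _ in range(N)])

# Demean each replication, take its periodogram, and average across replications.
centered = reps - reps.mean(axis=1, keepdims=True)
periodograms = np.abs(np.fft.rfft(centered, axis=1)) ** 2 / T
pooled = periodograms.mean(axis=0)

freqs = np.fft.rfftfreq(T)
print(freqs[:3], pooled[:3])
```

A single periodogram from T = 25 observations is extremely noisy; averaging across the N replications is what makes the pooled spectral estimate usable at these sample sizes. The positively autocorrelated AR(1) process concentrates its pooled power at low frequencies, as expected.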


Author(s):  
Pēteris Grabusts ◽  
Arkady Borisov

Clustering Methodology for Time Series Mining
A time series is a sequence of real data representing the measurements of a real variable at time intervals. Time series analysis is a well-known task; however, in recent years research has been carried out to explore the use of clustering for time series analysis. The main motivation for representing a time series in the form of clusters is to better represent the main characteristics of the data. The central goal of the present research paper was to investigate clustering methodology for time series data mining, to explore the capabilities of time series similarity measures, and to use them in the analysis of time series clustering results. More complicated similarity measures include the Longest Common Subsequence method (LCSS). In this paper, two tasks have been completed. The first task was to define time series similarity measures. It has been established that the LCSS method gives better results in detecting time series similarity than the Euclidean distance. The second task was to explore the capabilities of the classical k-means clustering algorithm in time series clustering. As a result of the experiment, the conclusion has been drawn that the results of time series clustering with the k-means algorithm correspond to the results obtained with the LCSS method; thus the clustering results for the specific time series are adequate.
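The contrast between LCSS and Euclidean distance can be sketched as follows. This is a minimal constrained-LCSS implementation on synthetic series, with the matching threshold eps and time window delta chosen for the example, not taken from the paper.

```python
import numpy as np

def lcss(a, b, eps=0.5, delta=5):
    """Longest Common Subsequence similarity for real-valued series:
    points match when within eps in value and delta in time index."""
    n, m = len(a), len(b)
    L = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(a[i - 1] - b[j - 1]) <= eps and abs(i - j) <= delta:
                L[i, j] = L[i - 1, j - 1] + 1
            else:
                L[i, j] = max(L[i - 1, j], L[i, j - 1])
    return L[n, m] / min(n, m)  # normalized similarity in [0, 1]

t = np.arange(100)
base = np.sin(2 * np.pi * t / 25)
shifted = np.sin(2 * np.pi * (t - 3) / 25)      # same shape, shifted by 3 steps
noisy = base + np.where(t % 10 == 0, 3.0, 0.0)  # same shape plus outlier spikes

# Euclidean distance penalizes the shift and the outliers point by point;
# LCSS skips non-matching points, so it still rates both series as similar.
print(lcss(base, shifted), lcss(base, noisy))
print(np.linalg.norm(base - shifted), np.linalg.norm(base - noisy))
```

This robustness to shifts and outliers is why LCSS detects time series similarity better than Euclidean distance here, and cluster assignments built on it tend to agree with shape-based groupings such as those from k-means on well-separated series.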

