Multivariate Time Series Decomposition into Oscillation Components

2017 ◽  
Vol 29 (8) ◽  
pp. 2055-2075 ◽  
Author(s):  
Takeru Matsuda ◽  
Fumiyasu Komaki

Many time series are considered to be a superposition of several oscillation components. We have proposed a method for decomposing univariate time series into oscillation components and estimating their phases (Matsuda & Komaki, 2017). In this study, we extend that method to multivariate time series. We assume that several oscillators underlie the given multivariate time series and that each variable corresponds to a superposition of the projections of the oscillators. Thus, the oscillators superpose on each variable with amplitude and phase modulation. Based on this idea, we develop gaussian linear state-space models and use them to decompose the given multivariate time series. The model parameters are estimated from data using the empirical Bayes method, and the number of oscillators is determined using the Akaike information criterion. Therefore, the proposed method extracts underlying oscillators in a data-driven manner and enables investigation of phase dynamics in a given multivariate time series. Numerical results show the effectiveness of the proposed method. From monthly mean north-south sunspot number data, the proposed method reveals an interesting phase relationship.
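
As a rough illustration of the generative model described above, the following sketch simulates a few damped stochastic oscillators and observes each channel as a noisy linear projection of all of them. Every name, frequency, and noise level here is an illustrative assumption, not the authors' code or data.

```python
# Minimal sketch of the multivariate oscillator model: each oscillator is a 2-D
# state rotating at its own frequency with process noise, and each observed
# channel is a weighted projection (amplitude/phase modulation) of all
# oscillators plus observation noise.  All values below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_channels, freqs = 500, 3, [0.01, 0.05]   # assumed frequencies (cycles/step)
a, sigma_v, sigma_w = 0.99, 0.1, 0.2                # damping, process and obs. noise

def rotation(f):
    th = 2 * np.pi * f
    return np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])

F = [a * rotation(f) for f in freqs]                # state transition per oscillator
C = rng.normal(size=(n_channels, 2 * len(freqs)))   # per-channel amplitude/phase weights

x = [np.zeros(2) for _ in freqs]
Y = np.zeros((n_steps, n_channels))
for t in range(n_steps):
    x = [F[k] @ x[k] + sigma_v * rng.normal(size=2) for k in range(len(freqs))]
    Y[t] = C @ np.concatenate(x) + sigma_w * rng.normal(size=n_channels)
# In the paper, the oscillators are recovered from Y by Kalman smoothing, with
# (frequencies, damping, C, noise variances) estimated by empirical Bayes and the
# number of oscillators chosen by AIC.
```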

2017 ◽  
Vol 29 (2) ◽  
pp. 332-367 ◽  
Author(s):  
Takeru Matsuda ◽  
Fumiyasu Komaki

Many time series are naturally considered as a superposition of several oscillation components. For example, electroencephalogram (EEG) time series include oscillation components such as alpha, beta, and gamma. We propose a method for decomposing time series into such oscillation components using state-space models. Based on the concept of random frequency modulation, gaussian linear state-space models for oscillation components are developed. In this model, the frequency of an oscillator fluctuates by noise. Time series decomposition is accomplished with this model in a manner similar to the Bayesian seasonal adjustment method. Since the model parameters are estimated from data by the empirical Bayes method, the amplitudes and the frequencies of oscillation components are determined in a data-driven manner. Also, the appropriate number of oscillation components is determined with the Akaike information criterion (AIC). In this way, the proposed method provides a natural decomposition of the given time series into oscillation components. In neuroscience, the phase of neural time series plays an important role in neural information processing. The proposed method can be used to estimate the phase of each oscillation component and has several advantages over a conventional method based on the Hilbert transform. Thus, the proposed method enables an investigation of the phase dynamics of time series. Numerical results show that the proposed method succeeds in extracting intermittent oscillations such as ripples and in detecting phase reset phenomena. We apply the proposed method to real data from various fields such as astronomy, ecology, tidology, and neuroscience.
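
The decomposition itself relies on standard linear-Gaussian filtering. Below is a minimal, assumption-laden sketch of a Kalman filter that splits a univariate series into damped-oscillator components for fixed frequencies and variances; in the paper these quantities are instead estimated by the empirical Bayes method, the number of components is chosen by AIC, and the random frequency modulation is built into the state-space model.

```python
# Sketch only: decompose a univariate series y into K oscillation components,
# each modelled as a 2-D damped rotation; y_t is the sum of the first coordinate
# of every component plus observation noise.  Hyperparameters are fixed here.
import numpy as np

def kalman_decompose(y, freqs, a=0.99, q=0.1, r=0.2):
    """Return filtered oscillation components, one column per oscillator."""
    K = len(freqs)
    F = np.zeros((2 * K, 2 * K))
    for k, f in enumerate(freqs):
        th = 2 * np.pi * f
        F[2*k:2*k+2, 2*k:2*k+2] = a * np.array([[np.cos(th), -np.sin(th)],
                                                [np.sin(th),  np.cos(th)]])
    H = np.zeros((1, 2 * K)); H[0, ::2] = 1.0     # y_t = sum of first coordinates
    Q, R = q * np.eye(2 * K), np.array([[r]])
    x, P = np.zeros(2 * K), np.eye(2 * K)
    comps = np.zeros((len(y), K))
    for t, yt in enumerate(y):
        x, P = F @ x, F @ P @ F.T + Q             # predict
        S = H @ P @ H.T + R
        G = P @ H.T / S                           # Kalman gain
        x = x + (G * (yt - H @ x)).ravel()        # update state
        P = P - G @ H @ P                         # update covariance
        comps[t] = x[::2]
    return comps
```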


Information ◽  
2020 ◽  
Vol 11 (6) ◽  
pp. 288
Author(s):  
Kuiyong Song ◽  
Nianbin Wang ◽  
Hongbin Wang

High-dimensional time series classification is a challenging problem, and distance-based similarity measures are one family of methods for addressing it. This paper proposes a metric learning-based univariate time series classification method (ML-UTSC), which uses a Mahalanobis matrix obtained by metric learning to compute local distances between feature sequences and combines Dynamic Time Warping (DTW) with nearest neighbor classification to produce the final label. In this method, each univariate time series is represented as a multivariate feature sequence consisting of the mean, variance, and slope of its segments. The time series is divided into segments of equal length so that the Mahalanobis matrix can describe the features of the time series data more accurately, and a three-dimensional Mahalanobis matrix is then learned from these features. Experimental results show that, compared with the most effective existing measures, the proposed algorithm achieves a lower classification error rate on most of the test datasets.
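
A hedged sketch of the pipeline described above (the function names and the segment count are mine): each series is summarised segment by segment with (mean, variance, slope), DTW is run with a Mahalanobis local distance, and classification is by the nearest neighbour. The matrix M would come from the metric learning step, which is not reproduced here.

```python
# Illustrative sketch of the ML-UTSC pipeline; M is a learned 3x3 positive
# semi-definite Mahalanobis matrix supplied by the metric learning step.
import numpy as np

def segment_features(x, n_seg):
    """Cut a univariate series into equal segments and return (mean, var, slope) rows."""
    feats = []
    for s in np.array_split(np.asarray(x, float), n_seg):
        t = np.arange(len(s))
        slope = np.polyfit(t, s, 1)[0] if len(s) > 1 else 0.0
        feats.append([s.mean(), s.var(), slope])
    return np.array(feats)                               # shape (n_seg, 3)

def dtw_mahalanobis(A, B, M):
    """DTW between two feature sequences with a Mahalanobis local distance."""
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = A[i - 1] - B[j - 1]
            D[i, j] = np.sqrt(d @ M @ d) + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def predict_1nn(query, train_X, train_y, M, n_seg=10):
    q = segment_features(query, n_seg)
    dists = [dtw_mahalanobis(q, segment_features(x, n_seg), M) for x in train_X]
    return train_y[int(np.argmin(dists))]
```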


Author(s):  
Marisa Mohr ◽  
Florian Wilhelm ◽  
Ralf Möller

The estimation of the qualitative behaviour of fractional Brownian motion is an important topic for modelling real-world applications. Permutation entropy is a well-known approach to quantifying the complexity of a univariate time series in a scalar-valued representation. As an extension often used for outlier detection, weighted permutation entropy takes the amplitudes within a time series into account. Since many real-world problems involve multivariate time series, however, these measures need to be extended as well. First, we introduce multivariate weighted permutation entropy, which is consistent with standard multivariate extensions of permutation entropy. Second, we investigate the behaviour of weighted permutation entropy on both univariate and multivariate fractional Brownian motion and show revealing results.
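
For reference, here is a minimal sketch of univariate weighted permutation entropy under common conventions, using the window variance as the amplitude weight; the multivariate extension introduced in the paper is not reproduced here, and the normalisation choice is an assumption.

```python
# Sketch: weighted permutation entropy of a univariate series.  Each embedding
# window contributes its ordinal pattern, weighted by the window variance, and
# the Shannon entropy of the weighted pattern distribution is returned,
# normalised by log(m!) so the value lies in [0, 1].
import numpy as np
from collections import defaultdict
from math import factorial, log

def weighted_permutation_entropy(x, m=3, delay=1):
    x = np.asarray(x, float)
    weights = defaultdict(float)
    for i in range(len(x) - (m - 1) * delay):
        window = x[i:i + m * delay:delay]
        pattern = tuple(np.argsort(window))     # ordinal pattern of the window
        weights[pattern] += window.var()        # variance acts as amplitude weight
    total = sum(weights.values())
    if total == 0.0:                            # constant series: zero entropy
        return 0.0
    p = np.array(list(weights.values())) / total
    return float(-(p * np.log(p)).sum() / log(factorial(m)))
```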


Energies ◽  
2020 ◽  
Vol 13 (9) ◽  
pp. 2370 ◽  
Author(s):  
Tuukka Salmi ◽  
Jussi Kiljander ◽  
Daniel Pakkala

This paper presents a novel deep learning architecture for short-term forecasting of building energy loads. The architecture is based on a simple base learner and multiple boosting systems that are modelled as a single deep neural network. The architecture transforms the original multivariate time series into multiple cascading univariate time series. Together with sparse interactions, parameter sharing, and equivariant representations, this approach makes it possible to combat overfitting while still achieving good representation power with a deep network architecture. The architecture is evaluated on several short-term load forecasting tasks with energy data from an office building in Finland. The proposed architecture outperforms state-of-the-art load forecasting models in all the tasks.
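
The boosting idea behind the architecture can be illustrated with a toy residual cascade. This is my own simplification with a linear weak learner, not the paper's neural network; all class and parameter names are invented for illustration.

```python
# Conceptual sketch: a base learner makes a first forecast and each subsequent
# stage is trained on the residual left by the stages before it, so the stages
# behave like cascading univariate correction series.
import numpy as np

class RidgeStage:
    """Tiny linear 'weak learner' on lagged load features (illustrative only)."""
    def __init__(self, lam=1.0):
        self.lam, self.w = lam, None
    def fit(self, X, y):
        n = X.shape[1]
        self.w = np.linalg.solve(X.T @ X + self.lam * np.eye(n), X.T @ y)
        return self
    def predict(self, X):
        return X @ self.w

def fit_boosted_forecaster(X, y, n_stages=3):
    stages, residual = [], np.asarray(y, float).copy()
    for _ in range(n_stages):
        stage = RidgeStage().fit(X, residual)
        stages.append(stage)
        residual = residual - stage.predict(X)   # next stage sees only the residual
    return stages

def predict_boosted(stages, X):
    return sum(stage.predict(X) for stage in stages)
```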


2021 ◽  
pp. 1-38
Author(s):  
Hongxuan Yan ◽  
Gareth W. Peters ◽  
Jennifer Chan

Mortality projection and forecasting of life expectancy are two important aspects of the study of demography and life insurance modelling. We demonstrate in this work the existence of long memory in mortality data. Furthermore, models incorporating a long memory structure provide a new approach to enhancing mortality forecasts in terms of accuracy and reliability, which can improve the understanding of mortality. Novel mortality models are developed by extending the Lee–Carter (LC) model for death counts to incorporate a long memory time series structure. To link our extensions to existing actuarial work, we detail the relationship between the classical models of death counts developed under a Generalised Linear Model (GLM) formulation and the extensions we propose, which are developed under an extension of the GLM framework known in the time series literature as Generalised Linear Autoregressive Moving Average (GLARMA) regression models. Bayesian inference is applied to estimate the model parameters. The Deviance Information Criterion (DIC) is evaluated to select between different LC model extensions of our proposed models in terms of both in-sample fit and out-of-sample forecast performance. Furthermore, we compare our new models against existing model structures proposed in the literature when applied to death count data sets from 16 countries divided according to gender and age group. Estimates of mortality rates are applied to calculate life expectancies when constructing life tables. By comparing different life expectancy estimates, the results show that the LC model without the long memory component may underestimate life expectancy, while the long memory extensions reduce this effect. In summary, it is valuable to investigate how the long memory feature in mortality influences life expectancies in the construction of life tables.
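
As a schematic only (not the paper's exact specification), the classical Lee–Carter structure for death counts under a GLM, and one plausible way to introduce long memory via a fractionally integrated period effect, can be written as follows; the paper's GLARMA formulation may place the long memory structure differently.

```latex
% Poisson Lee-Carter for death counts D_{x,t} with exposures E_{x,t},
% and a fractionally integrated (long memory) period effect \kappa_t.
D_{x,t} \sim \mathrm{Poisson}\!\left(E_{x,t}\,\exp(\alpha_x + \beta_x \kappa_t)\right),
\qquad
(1-B)^{d}\,(\kappa_t - \mu) = \varepsilon_t, \quad 0 < d < \tfrac{1}{2},
```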


2018 ◽  
Vol 7 (3.23) ◽  
pp. 32
Author(s):  
Ahmad Fauzi Raffee ◽  
Siti Nazahiyah Rahmat ◽  
Hazrul Abdul Hamid ◽  
Muhammad Ismail Jaffar

Growing industrial production to meet human needs, together with motor vehicles and power plants, has led to a decline in air quality. The tremendous rise in air pollution levels can adversely affect human health, especially that of children, the elderly, and patients suffering from asthma and respiratory problems. As such, air pollution modelling is an important tool that helps local authorities issue early warnings and guides the development of policies in the near future. Hence, to predict the concentrations of air pollutants, which involve multiple parameters, both artificial neural networks (ANN) and principal component regression (PCR) have been widely used in comparison to classical multivariate time series models. This paper also presents a comprehensive literature review on univariate time series modelling. Overall, classical multivariate time series modelling should be further investigated to overcome the limitations of ANN, PCR, and univariate time series methods in the short-term prediction of air pollutant concentrations.


2021 ◽  
Vol 5 (1) ◽  
pp. 20
Author(s):  
Alexander Dorndorf ◽  
Boris Kargoll ◽  
Jens-André Paffenholz ◽  
Hamza Alkhatib

Many geodetic measurement data can be modelled as a multivariate time series consisting of a deterministic ("functional") model describing the trend and a stochastic model of the correlated noise. These data are often also affected by outliers, and their stochastic properties can vary significantly. The functional model of the time series is usually nonlinear in the trend parameters. To deal with these characteristics, a time series model, which can generally be described as the additive combination of a multivariate, nonlinear regression model with multiple univariate, covariance-stationary autoregressive (AR) processes whose white noise components obey independent, scaled t-distributions, was proposed by the authors in previous research papers. In this paper, we extend that model to include prior knowledge about various model parameters, information that is often available in practical situations. We develop an algorithm based on Bayesian inference that provides a robust and reliable estimation of the functional parameters, the coefficients of the AR processes, and the parameters of the underlying t-distribution. We approximate the resulting posterior density using Markov chain Monte Carlo (MCMC) techniques, specifically a Metropolis-within-Gibbs algorithm.
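
A small sketch of the kind of observation model described above, reduced to a single component, with an invented trend function and illustrative parameter values; in the actual model the series is multivariate and the parameters are estimated rather than fixed.

```python
# Sketch: nonlinear trend plus an AR(1) error process whose white-noise
# components follow a scaled Student's t-distribution (heavy tails / outliers).
import numpy as np

rng = np.random.default_rng(1)
n = 1000
t = np.arange(n, dtype=float)

def trend(t, a=5.0, f=0.002, c=0.01):            # assumed nonlinear functional model
    return a * np.sin(2 * np.pi * f * t) + c * t

phi, nu, scale = 0.8, 4.0, 0.05                  # AR coefficient, t d.o.f., t scale
eps = scale * rng.standard_t(nu, size=n)         # heavy-tailed white noise
e = np.zeros(n)
for k in range(1, n):
    e[k] = phi * e[k - 1] + eps[k]               # covariance-stationary AR(1) noise
y = trend(t) + e                                 # observed series
# In the paper, the trend parameters, phi, and the t-distribution parameters are
# estimated jointly by a Metropolis-within-Gibbs sampler, optionally with
# informative priors on some of them.
```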


2016 ◽  
Vol 63 (3) ◽  
pp. 255-272
Author(s):  
Łukasz Lenart ◽  
Błażej Mazur

The goal of the paper is to discuss Bayesian estimation of a class of univariate time-series models able to represent complicated patterns of "cyclical" fluctuations in the mean function. We highlight problems that arise in Bayesian estimation of a parametric time-series model using the Flexible Fourier Form of Gallant (1981). We demonstrate that the resulting posterior is likely to be highly multimodal, so standard Markov chain Monte Carlo (MCMC) methods might fail to explore the whole posterior, especially when the modes are separated. We show that the multimodality is indeed an issue by using the exact solution (i.e., an analytical marginal posterior) in an approximate model. We address this problem in two essential steps. First, we integrate the posterior with respect to the amplitude parameters, which can be carried out analytically. Second, we propose a non-parametrically motivated proposal for the frequency parameters. This allows the construction of an improved MCMC sampler that effectively explores the space of all the model parameters, with the amplitudes sampled by the direct approach outside the MCMC chain. We illustrate the problem using simulations and demonstrate our solution using two real-data examples.
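
The first step exploits the fact that, for fixed frequencies, the Flexible Fourier Form mean is linear in the amplitude parameters. The sketch below (my own illustration, with invented names and frequencies) builds such a design matrix; conditional on the frequencies, the amplitudes then have a conjugate normal posterior and can be integrated out or sampled directly rather than moved by MCMC.

```python
# Sketch: Flexible Fourier Form design matrix.  For fixed frequencies lambda,
# the mean is mu = X(lambda) @ beta, so the amplitude vector beta enters linearly
# and is conditionally Gaussian under a normal prior.
import numpy as np

def fff_design(t, freqs):
    """Columns: [1, cos(2*pi*f1*t), sin(2*pi*f1*t), cos(2*pi*f2*t), ...]."""
    cols = [np.ones_like(t, dtype=float)]
    for f in freqs:
        cols.append(np.cos(2 * np.pi * f * t))
        cols.append(np.sin(2 * np.pi * f * t))
    return np.column_stack(cols)

t = np.arange(200, dtype=float)
X = fff_design(t, freqs=[0.05, 0.11])   # only the frequencies stay in the MCMC chain
```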

