An Algorithm for Time Series Decomposition Using State-Space Models with Singular Transition Matrix

Compstat, 1988, pp. 369-374
Author(s): E. Alvoni
2017, Vol 29 (2), pp. 332-367
Author(s): Takeru Matsuda, Fumiyasu Komaki

Many time series are naturally considered a superposition of several oscillation components. For example, electroencephalogram (EEG) time series contain oscillation components such as alpha, beta, and gamma waves. We propose a method for decomposing time series into such oscillation components using state-space models. Based on the concept of random frequency modulation, Gaussian linear state-space models for oscillation components are developed; in this model, the frequency of an oscillator fluctuates under noise. Time series decomposition is accomplished with this model in a manner analogous to the Bayesian seasonal adjustment method. Since the model parameters are estimated from data by the empirical Bayes method, the amplitudes and frequencies of the oscillation components are determined in a data-driven manner. The appropriate number of oscillation components is likewise selected with the Akaike information criterion (AIC). In this way, the proposed method provides a natural decomposition of the given time series into oscillation components. In neuroscience, the phase of neural time series plays an important role in neural information processing. The proposed method can be used to estimate the phase of each oscillation component and has several advantages over the conventional method based on the Hilbert transform. Thus, the proposed method enables investigation of the phase dynamics of time series. Numerical results show that the proposed method succeeds in extracting intermittent oscillations such as ripples and in detecting phase-reset phenomena. We apply the proposed method to real data from various fields, including astronomy, ecology, tidology, and neuroscience.
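A minimal sketch of the kind of model the abstract describes: one oscillation component represented as a damped 2-D rotation in a Gaussian linear state-space model, filtered with a standard Kalman filter, with the phase read off from the two state coordinates. All numeric values (damping, frequency, noise variances) are illustrative assumptions, not the paper's settings, and the filter here is the textbook Kalman recursion rather than the authors' full empirical-Bayes procedure.

```python
import numpy as np

def oscillator_transition(a, omega):
    """Transition matrix of one oscillation component: a 2-D rotation
    at angular frequency omega, scaled by a damping factor a < 1."""
    c, s = np.cos(omega), np.sin(omega)
    return a * np.array([[c, -s], [s, c]])

def kalman_filter(y, F, Q, H, R):
    """Standard Kalman filter; returns the filtered state means."""
    d = F.shape[0]
    x, P = np.zeros(d), np.eye(d)
    out = np.zeros((len(y), d))
    for t, yt in enumerate(y):
        x, P = F @ x, F @ P @ F.T + Q          # predict
        S = H @ P @ H.T + R                    # innovation variance (1x1)
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain (d x 1)
        x = x + (K @ (yt - H @ x)).ravel()     # update mean
        P = P - K @ H @ P                      # update covariance
        out[t] = x
    return out

# Simulate one noisy oscillator; the state noise perturbs the rotation,
# so the effective frequency drifts (the "random frequency modulation" idea).
rng = np.random.default_rng(0)
a, omega = 0.99, 2 * np.pi * 0.05              # assumed damping and mean frequency
F = oscillator_transition(a, omega)
Q = 0.05 * np.eye(2)                           # assumed state-noise covariance
H = np.array([[1.0, 0.0]])                     # observe the first coordinate
R = np.array([[0.2]])                          # assumed observation-noise variance

n = 500
x_true = np.zeros((n, 2))
x = np.array([1.0, 0.0])
for t in range(n):
    x = F @ x + rng.multivariate_normal([0.0, 0.0], Q)
    x_true[t] = x
y = x_true[:, 0] + rng.normal(0.0, np.sqrt(R[0, 0]), n)

x_filt = kalman_filter(y, F, Q, H, R)
# The phase of the component follows directly from the two state coordinates:
phase = np.arctan2(x_filt[:, 1], x_filt[:, 0])
```

Because each component carries a two-dimensional state, the phase estimate comes from the model itself rather than from a post-hoc Hilbert transform of a band-passed signal, which is the advantage the abstract alludes to.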


2005 ◽  
Vol 62 (9) ◽  
pp. 1937-1952 ◽  
Author(s):  
Perry de Valpine ◽  
Ray Hilborn

State-space models are commonly used to incorporate process and observation errors in the analysis of fisheries time series. A gap in analysis methods has been the lack of classical likelihood methods for nonlinear state-space models. We evaluate a method that uses weighted kernel density estimates of Bayesian posterior samples to estimate likelihoods (Monte Carlo Kernel Likelihoods, MCKL). Classical likelihoods require integration over the state space, and we compare MCKL to the widely used errors-in-variables (EV) method, which estimates states jointly with parameters by maximizing a non-integrated likelihood. For a simulated linear autoregressive model and a Schaefer model fit to Cape hake (Merluccius capensis × M. paradoxus) data, classical likelihoods outperform EV likelihoods, which give asymptotically biased parameter estimates and inaccurate confidence regions. Our results on the importance of integrated state-space likelihoods also support the value of Bayesian analysis with Monte Carlo posterior integration; both approaches provide valuable insights and can be used complementarily. Previously, Bayesian analysis was the only option for incorporating process and observation errors with complex nonlinear models. The MCKL method provides a classical approach for such models, so the choice of analysis approach need not depend on model complexity.
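The core MCKL idea can be sketched on a toy conjugate model where the answer is checkable: since L(θ) = p(θ | y) p(y) / p(θ), a kernel density estimate of posterior samples weighted by 1/prior is proportional to the likelihood. Everything below (the normal model, prior scale, sample size, bandwidth) is an illustrative assumption; the paper applies the idea to nonlinear state-space models where the posterior samples come from MCMC.

```python
import numpy as np

# Toy model: y_i ~ N(theta, 1) with a broad N(0, 10^2) prior on theta,
# so the exact posterior is available and the estimate can be checked.
rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=50)
n, ybar = len(data), data.mean()

# Conjugate posterior draw (stands in for an MCMC posterior sample)
post_var = 1.0 / (n + 1.0 / 100.0)
post_mean = post_var * n * ybar
samples = rng.normal(post_mean, np.sqrt(post_var), size=20000)

def mckl(grid, samples, prior_pdf, h=0.03):
    """Monte Carlo Kernel Likelihood sketch: a Gaussian-kernel density
    estimate of the posterior samples, importance-weighted by 1/prior,
    is proportional to the likelihood L(theta) = p(theta|y) p(y) / p(theta)."""
    w = 1.0 / prior_pdf(samples)                   # weights cancel the prior
    diffs = (grid[:, None] - samples[None, :]) / h
    kern = np.exp(-0.5 * diffs**2)                 # unnormalised Gaussian kernel
    return (kern * w).sum(axis=1) / w.sum()

prior_pdf = lambda t: np.exp(-0.5 * (t / 10.0) ** 2) / (10.0 * np.sqrt(2 * np.pi))
grid = np.linspace(ybar - 1.0, ybar + 1.0, 201)
lik = mckl(grid, samples, prior_pdf)

theta_hat = grid[np.argmax(lik)]  # should sit near the MLE, ybar
```

In this conjugate setting the maximizer of the MCKL curve lands close to the sample mean, which is the exact MLE; the point of the method is that the same recipe works when only posterior samples, not an analytic likelihood, are available.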

