Granger Causality Inference in EEG Source Connectivity Analysis: A State-Space Approach

2020 · Author(s): Parinthorn Manomaisaowapak, Anawat Nartkulpat, Jitkomut Songsiri

Abstract: This paper considers the problem of estimating brain effective connectivity from EEG signals using a Granger causality (GC) concept characterized on state-space models. We propose a state-space model for explaining the coupled dynamics of the source and EEG signals, where the EEG is a linear combination of sources according to the characteristics of volume conduction. Our formulation places a sparsity prior on the source output matrix that can further classify active and inactive sources. The scheme comprises two main steps: model estimation and model inference to estimate brain connectivity. The model estimation consists of a subspace identification and an active-source selection based on group-norm regularized least squares. The model inference relies on the concept of state-space GC, which requires solving a discrete-time Riccati equation for the covariance of the estimation error. We verify the performance on simulated data sets that represent realistic human brain activities under several conditions, including the percentage of active sources, the number of EEG electrodes, and the locations of active sources. The performance of estimating brain networks is compared with a two-stage approach using source reconstruction algorithms and VAR-based Granger analysis. Our method achieved better performance than the two-stage approach under the assumptions that the true source dynamics are sparse and generated from state-space models. The method is applied to a real EEG SSVEP data set, and we found that the temporal lobe played the role of a mediator of connections between temporal and occipital areas, which agrees with findings in previous studies.
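The model-inference step in the abstract computes state-space GC by solving a discrete-time Riccati equation for the prediction-error covariance. A minimal sketch of that computation (an illustration of the state-space GC measure in the Barnett–Seth style, not the authors' own code; `ss_granger`, `innovations_cov`, and the toy model are hypothetical names chosen here) using SciPy's DARE solver:

```python
import numpy as np
from scipy.linalg import solve_discrete_are


def innovations_cov(A, C, Q, R):
    """Innovations covariance of the steady-state Kalman filter.

    Solves the filtering DARE for the predicted state-error covariance P,
    then returns V = C P C' + R (assuming no state/observation noise
    cross-covariance).
    """
    P = solve_discrete_are(A.T, C.T, Q, R)
    return C @ P @ C.T + R


def ss_granger(A, C, Q, R, j, i):
    """GC from observed channel j to channel i for x+ = A x + w, y = C x + v.

    Compares the innovation variance of y_i when all channels are observed
    with that of a reduced model in which channel j is not observed.
    """
    V_full = innovations_cov(A, C, Q, R)
    keep = [k for k in range(C.shape[0]) if k != j]
    V_red = innovations_cov(A, C[keep], Q, R[np.ix_(keep, keep)])
    return np.log(V_red[keep.index(i), keep.index(i)] / V_full[i, i])


# Toy model: channel 0 drives channel 1 (A[1, 0] = 0.5), no feedback.
A = np.array([[0.9, 0.0],
              [0.5, 0.8]])
C, Q, R = np.eye(2), np.eye(2), 0.5 * np.eye(2)
print(ss_granger(A, C, Q, R, j=0, i=1))  # clearly positive: 0 -> 1 exists
print(ss_granger(A, C, Q, R, j=1, i=0))  # much smaller: no 1 -> 0 coupling in A
```

In the paper's pipeline the matrices A, C, Q, R would come from the subspace identification and sparse source-selection steps rather than being specified by hand.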

Granger Causality for State-Space Models
2015 · Vol. 91 (4) · Author(s): Lionel Barnett, Anil K. Seth

Variational Learning for Switching State-Space Models
2000 · Vol. 12 (4) · pp. 831–864 · Author(s): Zoubin Ghahramani, Geoffrey E. Hinton

We introduce a new statistical model for time series that iteratively segments data into regimes with approximately linear dynamics and learns the parameters of each of these linear regimes. This model combines and generalizes two of the most widely used stochastic time-series models—hidden Markov models and linear dynamical systems—and is closely related to models that are widely used in the control and econometrics literatures. It can also be derived by extending the mixture of experts neural network (Jacobs, Jordan, Nowlan, & Hinton, 1991) to its fully dynamical version, in which both expert and gating networks are recurrent. Inferring the posterior probabilities of the hidden states of this model is computationally intractable, and therefore the exact expectation maximization (EM) algorithm cannot be applied. However, we present a variational approximation that maximizes a lower bound on the log-likelihood and makes use of both the forward and backward recursions for hidden Markov models and the Kalman filter recursions for linear dynamical systems. We tested the algorithm on artificial data sets and a natural data set of respiration force from a patient with sleep apnea. The results suggest that variational approximations are a viable method for inference and learning in switching state-space models.
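The switching model described above combines hidden Markov switching with linear-Gaussian dynamics: M linear dynamical systems evolve in parallel, and a discrete HMM switch selects which system's output generates each observation. A minimal generative sketch under that reading (sampling only, not the variational EM inference the paper develops; `sample_switching_ssm` and the two-regime example are hypothetical names chosen here):

```python
import numpy as np


def sample_switching_ssm(A_list, C_list, Q_list, R, Pi, pi0, T, rng):
    """Draw T observations from a switching state-space model.

    M linear dynamical systems x_m evolve independently; a discrete HMM
    switch s_t (initial distribution pi0, transition matrix Pi) selects
    which system's output generates y_t = C[s_t] x[s_t] + noise.
    """
    M, n = len(A_list), A_list[0].shape[0]
    d = C_list[0].shape[0]
    x = np.zeros((M, n))
    s = rng.choice(M, p=pi0)
    ys, ss = [], []
    for _ in range(T):
        for m in range(M):  # every chain advances, observed or not
            x[m] = A_list[m] @ x[m] + rng.multivariate_normal(np.zeros(n), Q_list[m])
        ys.append(C_list[s] @ x[s] + rng.multivariate_normal(np.zeros(d), R))
        ss.append(s)
        s = rng.choice(M, p=Pi[s])  # HMM transition
    return np.array(ys), np.array(ss)


# Two scalar regimes: slow decay vs. sign-flipping dynamics, sticky switch.
rng = np.random.default_rng(0)
y, s = sample_switching_ssm(
    A_list=[np.array([[0.99]]), np.array([[-0.5]])],
    C_list=[np.eye(1), np.eye(1)],
    Q_list=[0.1 * np.eye(1), 0.1 * np.eye(1)],
    R=0.05 * np.eye(1),
    Pi=np.array([[0.98, 0.02], [0.02, 0.98]]),
    pi0=np.array([0.5, 0.5]),
    T=300, rng=rng)
```

Data generated this way is the kind of segmented, approximately linear time series (e.g. the sleep-apnea respiration record) on which the variational inference in the paper is evaluated.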


2009 · Vol. 129 (12) · pp. 1187–1194 · Author(s): Jorge Ivan Medina Martinez, Kazushi Nakano, Kohji Higuchi

2008 · Vol. 42 (6–8) · pp. 939–951 · Author(s): Tounsia Jamah, Rachid Mansouri, Saïd Djennoune, Maâmar Bettayeb
