The reliability of conditional Granger causality analysis in the time domain

2018 ◽  
Author(s):  
Raffaella Franciotti ◽  
Nicola Walter Falasca

Background. Brain function requires a coordinated flow of information among functionally specialized areas. Quantitative methods provide a multitude of metrics to quantify the oscillatory interactions measured by invasive or non-invasive recording techniques. Granger causality (G-causality) has emerged as a useful tool to investigate the direction of information flow, but challenges remain regarding its performance when applied to biological data. In addition, it is not clear whether G-causality can distinguish between direct and indirect influences, or whether its reliability is related to the strength of the neural interactions. Methods. In this study, time domain G-causality connectivity analysis was performed on simulated electrophysiological signals. A network of 19 nodes was constructed with a designed structure of direct and indirect information flows among nodes, which we refer to as the ground truth structure. G-causality reliability was evaluated on two sets of simulated data while varying one of the following variables: the number of time points in the temporal window, the lags between causally interacting nodes, the connection strength of the links, and the noise. Results. The number of time points in the temporal window affects G-causality reliability substantially: a large number of time points can decrease the reliability of the G-causality results, increasing the number of false positives (type I errors). In the presence of stationary signals, G-causality results are reliable, showing all true positive links (absence of type II errors), when the underlying structure has delays between interacting nodes shorter than 100 ms, connection strengths higher than 0.1 times the amplitude of the driver signal, and a good signal-to-noise ratio. Finally, indirect links were revealed by G-causality analysis when their connection strength was higher, and their lag shorter, than those of the direct link. Discussion. 
A conditional multivariate vector autoregressive model was applied to 19 virtual time series to estimate the reliability of G-causality analysis with respect to the identification of true positive links, the presence of spurious links, and the effects of indirect links. Simulated data revealed that weak direct, but not weak indirect, causal effects can be identified by G-causality analysis. These results demonstrate the good sensitivity and specificity of conditional G-causality analysis in the time domain when applied to covariance stationary, non-correlated electrophysiological signals.
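The time domain G-causality used here compares nested autoregressive models: x Granger-causes y when adding x's past to y's own past reduces the prediction error. A minimal pairwise sketch in NumPy follows; the paper's conditional analysis additionally conditions both models on the remaining nodes' past, and the node count, lag, and coupling strength below are illustrative, not the paper's values.

```python
import numpy as np

def granger_causality(x, y, p=2):
    """Time-domain G-causality x -> y: log ratio of the residual variance of
    the restricted model (y's own past) to that of the full model (y's and
    x's past). A conditional variant adds the other nodes' past to both."""
    n = len(y)
    target = y[p:]
    own = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    full = np.column_stack([own] + [x[p - k:n - k] for k in range(1, p + 1)])

    def rss(design):
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        resid = target - design @ beta
        return resid @ resid

    return np.log(rss(own) / rss(full))

# toy ground truth: x drives y with lag 1 and strength 0.4
rng = np.random.default_rng(0)
n = 4000
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()

gc_xy = granger_causality(x, y)   # true link: clearly positive
gc_yx = granger_causality(y, x)   # absent link: close to zero
```

Because the restricted model is nested inside the full one, the statistic is non-negative; a formal analysis would add an F-test on the residual-variance ratio, which is where the type I and type II error rates discussed above come from.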



2020 ◽  
Vol 23 (2) ◽  
pp. 121-124
Author(s):  
N. W. Falasca ◽  
R. Franciotti

Granger causality (G-causality) has emerged as a useful tool to investigate the influence that one system can exert over another, but challenges remain when applying it to biological data. Specifically, it is not clear whether G-causality can distinguish between direct and indirect influences. In this study, time domain G-causality connectivity analysis was performed on simulated electroencephalographic signals. A conditional multivariate autoregressive model was applied to 19 virtual time series (nodes) to identify the effects of direct and indirect links while varying one of the following variables: the length of the time series, the lags between interacting nodes, the connection strength of the links, and the noise. Simulated data revealed that weak indirect influences are not identified by G-causality analysis when applied to covariance stationary, non-correlated electrophysiological time series.


2020 ◽  
Author(s):  
Shreedhar Savant Todkar ◽  
Vincent Baltazart ◽  
Amine Ihamouten ◽  
Xavier Dérobert ◽  
Jean-Michel Simonin

<p>In the field of pavement monitoring, Ground Penetrating Radar (GPR) methods are gaining prominence due to their ability to perform non-destructive testing of the subsurface. In this context, detecting and characterizing subsurface debondings at an early stage is recommended to avoid further degradation and to maintain the lifespan of these structures. To mitigate the limited time resolution of conventional GPR devices, this paper develops the detection of thin (millimetre-order) debondings by monitoring small changes in time domain GPR data with specific, partly automated data processing techniques.</p><p>We propose to use a supervised machine learning method, namely Two-class Support Vector Machines (SVM), to achieve these objectives. In addition, by working with time domain GPR signal features, we aim to reduce the computational burden and increase the efficiency of the SVM. The method is implemented to process independent 1D GPR A-scan data.</p><p>Furthermore, the performance of Two-class SVM is assessed on both simulated and field data by means of sensitivity analysis, to identify the parameters that affect it. While the simulated data are generated using the analytic Fresnel data model, the field data are UWB Stepped-Frequency GPR (SF-GPR) data collected over artificially embedded debondings. They were acquired during Accelerated Pavement Tests (APTs) conducted on IFSTTAR's fatigue carousel to survey debonding growth in the defect-affected zones at various stages of fatigue.</p><p>Two-class SVM proved able to detect thin, millimetric debondings, and sensitivity analysis provided a quick and efficient way to assess pavement condition.</p>
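As a hedged illustration of the classification step (the echo model, noise level, and feature set below are invented for the sketch and are not the authors' descriptors), a two-class SVM can be trained on a few time-domain features of simulated A-scans:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)

def synth_ascan(debonded):
    """Toy A-scan: one interface echo, plus a weaker, slightly later echo
    when a thin debonding is present (hypothetical signal model)."""
    t = np.arange(256)
    s = np.exp(-((t - 80) / 6.0) ** 2)                 # layer-interface reflection
    if debonded:
        s = s + 0.35 * np.exp(-((t - 95) / 6.0) ** 2)  # debonding echo
    return s + 0.03 * rng.standard_normal(t.size)      # measurement noise

def features(sig):
    """Illustrative time-domain descriptors: peak amplitude, signal energy,
    and the sample index of the strongest reflection."""
    return [np.max(np.abs(sig)), np.sum(sig ** 2), float(np.argmax(np.abs(sig)))]

labels = np.arange(400) % 2                            # 0 = sound, 1 = debonded
X = np.array([features(synth_ascan(lbl)) for lbl in labels])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)                            # high accuracy on this toy set
```

Working on a handful of scalar features per A-scan, rather than the raw 256-sample trace, is what keeps the training and prediction cost low, as the abstract argues.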


2007 ◽  
Vol 2007 ◽  
pp. 1-13 ◽  
Author(s):  
Ronald Phlypo ◽  
Paul Boon ◽  
Yves D'Asseler ◽  
Ignace Lemahieu

To cope with the severe masking of background cerebral activity in the electroencephalogram (EEG) by ocular movement artefacts, we present a method which combines lower-order, short-term and higher-order, long-term statistics. The joint smoothened subspace estimator (JSSE) calculates the joint information in both statistical models, subject to the constraint that the resulting estimated source should be sufficiently smooth in the time domain (i.e., have a large autocorrelation or self-predictive power). It is shown that the JSSE estimates a component from simulated data whose artefact suppression is superior to that of the FastICA, SOBI, pSVD, or JADE/COM1 algorithms used for blind source separation (BSS). Interference and distortion suppression are comparable to those of the above-mentioned methods. Results on patient data demonstrate that the method is able to suppress blinking and saccade artefacts in a fully automated way.
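The smoothness constraint at the heart of the JSSE (a large autocorrelation, i.e. high self-predictive power) can be illustrated with a simple lag-1 autocorrelation score: a slow, ocular-artefact-like waveform scores near 1, while broadband background activity scores near 0. The signals below are synthetic stand-ins, not the paper's data or its actual estimator.

```python
import numpy as np

def smoothness(s):
    """Lag-1 autocorrelation: the 'self-predictive power' a JSSE-style
    constraint uses to favour temporally smooth source estimates."""
    s = s - s.mean()
    return float(np.dot(s[:-1], s[1:]) / np.dot(s, s))

rng = np.random.default_rng(1)
t = np.linspace(0, 2, 1000)
slow = np.sin(2 * np.pi * 3 * t)        # smooth, artefact-like component
rough = rng.standard_normal(t.size)     # broadband cerebral background
```

A score like this can rank candidate source estimates, which is how a smoothness constraint steers a subspace estimator toward ocular components rather than noise.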


2013 ◽  
Vol 109 (2) ◽  
pp. 591-602 ◽  
Author(s):  
J. Lucas McKay ◽  
Torrence D. J. Welch ◽  
Brani Vidakovic ◽  
Lena H. Ting

We developed wavelet-based functional ANOVA (wfANOVA) as a novel approach for comparing neurophysiological signals that are functions of time. Temporal resolution is often sacrificed by analyzing such data in large time bins, increasing statistical power by reducing the number of comparisons. We performed ANOVA in the wavelet domain because differences between curves tend to be represented by a few temporally localized wavelets, which we transformed back to the time domain for visualization. We compared wfANOVA and ANOVA performed in the time domain (tANOVA) on both experimental electromyographic (EMG) signals from responses to perturbation during standing balance across changes in peak perturbation acceleration (3 levels) and velocity (4 levels) and on simulated data with known contrasts. In experimental EMG data, wfANOVA revealed the continuous shape and magnitude of significant differences over time without a priori selection of time bins. However, tANOVA revealed only the largest differences at discontinuous time points, resulting in features with later onsets and shorter durations than those identified using wfANOVA (P < 0.02). Furthermore, wfANOVA required significantly fewer (∼¼×; P < 0.015) significant F tests than tANOVA, resulting in post hoc tests with increased power. In simulated EMG data, wfANOVA identified known contrast curves with a high level of precision (r² = 0.94 ± 0.08) and performed better than tANOVA across noise levels (P < 0.01). Therefore, wfANOVA may be useful for revealing differences in the shape and magnitude of neurophysiological signals (e.g., EMG, firing rates) across multiple conditions with both high temporal resolution and high statistical power.
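The wfANOVA pipeline (transform each trial to the wavelet domain, F-test every coefficient across conditions, keep only the significant coefficients, and transform the resulting contrast back to the time domain) can be sketched with a hand-rolled orthonormal Haar transform. The signals, effect shape, and Bonferroni threshold below are illustrative; the paper's wavelet family and correction scheme may differ.

```python
import numpy as np
from scipy.stats import f_oneway

def haar_fwd(x):
    """Orthonormal Haar DWT (in the usual [approx | details] layout)."""
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    while n > 1:
        a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)
        d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)
        x[:n // 2], x[n // 2:n] = a, d
        n //= 2
    return x

def haar_inv(c):
    """Inverse of haar_fwd."""
    c = np.asarray(c, dtype=float).copy()
    n, N = 1, len(c)
    while n < N:
        a, d = c[:n].copy(), c[n:2 * n].copy()
        c[0:2 * n:2] = (a + d) / np.sqrt(2)
        c[1:2 * n:2] = (a - d) / np.sqrt(2)
        n *= 2
    return c

rng = np.random.default_rng(0)
N, trials = 64, 20
tt = np.arange(N)
base = np.sin(2 * np.pi * tt / N)                 # waveform shared by both groups
bump = 0.8 * np.exp(-((tt - 40) / 3.0) ** 2)      # localized condition effect
g1 = np.array([base + 0.3 * rng.standard_normal(N) for _ in range(trials)])
g2 = np.array([base + bump + 0.3 * rng.standard_normal(N) for _ in range(trials)])

# transform every trial, then F-test each wavelet coefficient across groups
W1 = np.array([haar_fwd(s) for s in g1])
W2 = np.array([haar_fwd(s) for s in g2])
p = np.array([f_oneway(W1[:, j], W2[:, j]).pvalue for j in range(N)])
contrast = np.where(p < 0.05 / N,                 # Bonferroni-corrected threshold
                    W2.mean(axis=0) - W1.mean(axis=0), 0.0)
diff_curve = haar_inv(contrast)                   # contrast back in the time domain
```

Because the localized effect loads on only a few wavelet coefficients, far fewer F tests reach significance than in a sample-by-sample tANOVA, yet the inverse transform still recovers the continuous shape of the difference.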


1992 ◽  
Vol 2 (4) ◽  
pp. 615-620
Author(s):  
G. W. Series

2018 ◽  
Vol 12 (7-8) ◽  
pp. 76-83
Author(s):  
E. V. KARSHAKOV ◽  
J. MOILANEN

The advantage of combined processing of the frequency domain and time domain data provided by the EQUATOR system is discussed. The heliborne complex has a towed transmitter and, raised above it on the same cable, a towed receiver. The excitation signal contains both pulsed and harmonic components. In effect, two independent transmitters operate in the system: one is a conventional time domain transmitter, with a half-sinusoidal pulse and a small "cut" on the falling edge, while the other is a classical frequency domain transmitter at several specially selected frequencies. The received signal first undergoes a direct Fourier transform with high-Q detection at all significant frequencies. Then, in the spectral domain, the spectra of the two sounding signals are converted into the single spectrum of an ideal transmitter, after which an inverse Fourier transform returns the data to the time domain. Because the detection of spectral components is performed in a frequency band of a few Hz, the receiver can strongly suppress all manner of out-of-band noise. The detection bandwidth is several dozen times narrower than the frequency interval between the harmonics; it turns out that, to achieve the same measurement quality of the ground response without out-of-band suppression, a several dozen times larger moment of the airborne transmitting system would be needed. Data obtained from a homogeneous half-space model, a two-layered model, and a horizontally layered medium model are considered. Time domain data make it easier to detect a conductor within a relative insulator at greater depths, while frequency domain data give more detailed information about the subsurface. These conclusions are illustrated by processing data from the 2017 survey of the Republic of Rwanda. Simultaneous inversion of frequency domain and time domain data can significantly improve the quality of interpretation.
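The spectral-domain conversion described above (divide the received spectrum by the actual excitation spectrum to emulate an ideal impulse transmitter, then inverse-transform back to the time domain) can be sketched as follows. The sampling rate, ground response, and regularisation constant are invented for the illustration; a real system would also merge the harmonic-transmitter measurements into the same spectrum.

```python
import numpy as np

fs = 10_000.0                      # sampling rate, Hz (illustrative)
t = np.arange(2048) / fs
h = np.exp(-t / 2e-3)              # hypothetical ground impulse response
tx = np.where(t < 1e-3, np.sin(np.pi * t / 1e-3), 0.0)  # half-sine excitation

# 'measured' signal: excitation convolved with the ground response
rx = np.fft.irfft(np.fft.rfft(tx) * np.fft.rfft(h), len(t))

# conversion to an ideal (impulse) transmitter in the spectral domain,
# with Tikhonov-style regularisation to guard against spectral nulls
Tx = np.fft.rfft(tx)
eps = 1e-9 * np.max(np.abs(Tx)) ** 2
H = np.fft.rfft(rx) * np.conj(Tx) / (np.abs(Tx) ** 2 + eps)
h_rec = np.fft.irfft(H, len(t))    # response referred to an ideal transmitter
```

The regularisation term is why combining two excitations helps in practice: where the pulsed spectrum has nulls, the harmonic transmitter's discrete frequencies can fill in the missing information before the inverse transform.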


2019 ◽  
Vol 629 ◽  
pp. A112 ◽  
Author(s):  
B. M. Giuliano ◽  
A. A. Gavdush ◽  
B. Müller ◽  
K. I. Zaytsev ◽  
T. Grassi ◽  
...  

Context. Reliable, directly measured optical properties of astrophysical ice analogues in the infrared and terahertz (THz) range are missing from the literature. These parameters are of great importance for modelling the dust continuum radiative transfer in dense and cold regions, where thick ice mantles are present, and are necessary for the interpretation of future observations planned in the far-infrared region. Aims. Coherent THz radiation allows for direct measurement of the complex dielectric function (refractive index) of astrophysically relevant ice species in the THz range. Methods. We recorded the time-domain waveforms and the frequency-domain spectra of reference samples of CO ice, deposited at a temperature of 28.5 K and annealed to 33 K, at different thicknesses. We developed a new algorithm to reconstruct the real and imaginary parts of the refractive index from the time-domain THz data. Results. The complex refractive index in the wavelength range 1 mm–150 μm (0.3–2.0 THz) was determined for the studied ice samples and compared with available data from the literature. Conclusions. The developed algorithm for reconstructing the real and imaginary parts of the refractive index from the time-domain THz data enables us, for the first time, to determine the optical properties of astrophysical ice analogues without using the Kramers–Kronig relations. The obtained data provide a benchmark for interpreting observations from current ground-based facilities as well as future space telescope missions, and we used them to estimate the opacities of dust grains in the presence of CO ice mantles.
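A minimal version of such a direct (non-Kramers–Kronig) extraction from time-domain waveforms can be sketched as below: the complex transmission ratio of sample and reference pulses yields the real index from the phase delay and the extinction coefficient from the amplitude. This toy version ignores the Fresnel and etalon corrections a real analysis of ice layers must include, and the pulse, thickness, and index are invented test values.

```python
import numpy as np

c = 299_792_458.0  # speed of light, m/s

def refractive_index(t, e_ref, e_sam, d):
    """Recover n(omega) and kappa(omega) of a slab of thickness d from the
    complex transmission ratio of sample and reference time-domain waveforms.
    Fresnel and etalon corrections are deliberately omitted in this sketch."""
    freqs = np.fft.rfftfreq(len(t), t[1] - t[0])
    T = np.fft.rfft(e_sam) / np.fft.rfft(e_ref)      # complex transmission
    phase = np.unwrap(np.angle(T))
    omega = 2.0 * np.pi * freqs
    n = np.ones_like(freqs)
    kappa = np.zeros_like(freqs)
    n[1:] = 1.0 - c * phase[1:] / (omega[1:] * d)    # phase delay -> real index
    kappa[1:] = -c * np.log(np.abs(T[1:])) / (omega[1:] * d)  # extinction
    return freqs, n, kappa

# synthetic check: a Gaussian THz pulse and the same pulse delayed by the slab
t = np.linspace(0.0, 100e-12, 4096)
d, n_true = 1e-3, 1.5                                # 1 mm slab with n = 1.5
tau = d * (n_true - 1.0) / c                         # extra optical delay
pulse = lambda tt: np.exp(-((tt - 20e-12) / 1e-12) ** 2)
freqs, n, kappa = refractive_index(t, pulse(t), pulse(t - tau), d)
i = int(np.argmin(np.abs(freqs - 0.2e12)))           # read off n near 0.2 THz
```

Because both n and kappa come straight from the measured complex ratio, no Kramers–Kronig integration over an assumed spectral range is needed, which is the key advantage the abstract highlights.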

