Human EEG
Recently Published Documents

TOTAL DOCUMENTS: 394 (five years: 33)
H-INDEX: 54 (five years: 2)

2022
Author(s): Kumari Liza, Supratim Ray

Steady-state visually evoked potentials (SSVEPs) are widely used to index top-down cognitive processing in human electroencephalogram (EEG) studies. Typically, two stimuli flickering at different temporal frequencies (TFs) are presented, each producing a distinct response in the EEG at its flicker frequency. However, how SSVEP responses in the EEG are modulated by a competing flickering stimulus purely through sensory interactions is not well understood. We have previously shown in local field potentials (LFPs) recorded from awake monkeys that when two overlapping full-screen gratings are counter-phased at different TFs, there is an asymmetric SSVEP response suppression, with greater suppression from lower TFs, which further depends on the relative orientations of the gratings (stronger suppression and asymmetry for parallel than for orthogonal gratings). Here, we first confirmed these effects in both male and female human EEG recordings. Then, we mapped the suppression of one stimulus (target) by a competing stimulus (mask) over a much wider TF range than in the previous study. Surprisingly, we found that the suppression was not stronger at low frequencies in general but varied systematically with the target TF, indicating local interactions between the two competing stimuli. These results were confirmed in human EEG as well as monkey LFP and electrocorticogram (ECoG) data. Our results show that sensory interactions between multiple SSVEPs are more complex than previously shown and are influenced by both local and global factors, underscoring the need to interpret the results of SSVEP paradigms cautiously.
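The basic SSVEP measurement described above can be sketched in a few lines: each flickering stimulus "tags" the EEG at its own temporal frequency, so its response amplitude can be read off the FFT bin at that frequency. This is an illustrative, numpy-only sketch on synthetic data, not the authors' analysis pipeline; the 12 Hz target and 7 Hz mask frequencies are assumptions for the example.

```python
import numpy as np

def ssvep_amplitude(eeg, fs, flicker_hz):
    """Single-sided spectral amplitude of one EEG epoch at a flicker (tag)
    frequency. Assumes the epoch spans an integer number of flicker cycles,
    so the tag frequency falls on an exact FFT bin (no leakage)."""
    n = len(eeg)
    spectrum = 2.0 * np.abs(np.fft.rfft(eeg)) / n  # single-sided amplitude
    bin_idx = int(round(flicker_hz * n / fs))      # bin of the tag frequency
    return spectrum[bin_idx]

# Synthetic two-stimulus epoch: 12 Hz "target" (amplitude 1.0) superimposed
# on a 7 Hz "mask" (amplitude 0.5); frequencies are illustrative choices.
fs, dur = 250, 2.0
t = np.arange(int(fs * dur)) / fs
epoch = 1.0 * np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

amp_target = ssvep_amplitude(epoch, fs, 12.0)  # ~1.0
amp_mask = ssvep_amplitude(epoch, fs, 7.0)     # ~0.5
```

Suppression in a real experiment would be quantified by comparing `amp_target` for epochs recorded with and without the mask present.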


Author(s): Carlos M. Gómez, Brenda Y. Angulo-Ruíz, Vanesa Muñoz, Elena I. Rodriguez-Martínez

Abstract: Brain oscillations ubiquitously occur in bursts of oscillatory activity. The present report characterizes the statistics of electroencephalographic (EEG) bursts of oscillatory activity during resting state in humans, addressing (i) the statistical properties of burst amplitude and duration, (ii) their possible correlation, (iii) their frequency content, and (iv) whether or not a fixed threshold triggers an oscillatory burst. Artifact-free open-eyes EEG recordings of five subjects were selected from a sample of 40 subjects. The recordings were filtered in 2 Hz-wide frequency bands from 1 to 99 Hz, and the analytic Hilbert transform was computed to obtain the amplitude envelopes of oscillatory bursts. An oscillatory burst was defined by a threshold criterion and a minimum duration of three cycles. The extracted bursts lasted between hundreds of milliseconds and a few seconds, and peak amplitudes showed a unimodal distribution. Amplitude and duration were positively correlated, and burst durations were explained by a linear model with the peak amplitude and the peak of the amplitude-envelope time derivative as terms. The frequency content of the amplitude envelope was confined to the 0–2 Hz range. The results suggest the presence of amplitude-modulated continuous oscillations in the human EEG during resting conditions over a broad frequency range, with durations of up to a few seconds, modulated positively by amplitude and negatively by the time derivative of the amplitude envelope, suggesting activation-inhibition dynamics. This macroscopic oscillatory network behavior is less pronounced in the low-frequency range (1–3 Hz).
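The burst-detection procedure described here (Hilbert amplitude envelope, threshold criterion, minimum of three cycles) can be sketched as follows. This is a minimal, numpy-only illustration on a synthetic 10 Hz signal; the sampling rate, threshold, and Gaussian burst shape are assumptions for the example, not the paper's parameters.

```python
import numpy as np

def analytic_envelope(x):
    """Amplitude envelope of a real signal via the analytic (Hilbert) signal,
    built in the frequency domain with numpy only."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)                 # doubles positive frequencies,
    h[0] = 1.0                      # zeroes negative ones
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spec * h))

def detect_bursts(env, fs, cycle_hz, thresh):
    """Supra-threshold envelope segments lasting at least three cycles
    of the band's center frequency."""
    min_len = int(round(3.0 / cycle_hz * fs))
    above = np.append(env > thresh, False)  # sentinel closes a final burst
    bursts, start = [], None
    for i, a in enumerate(above):
        if a and start is None:
            start = i
        elif not a and start is not None:
            if i - start >= min_len:
                bursts.append((start, i))   # (onset, offset) in samples
            start = None
    return bursts

# Synthetic 10 Hz oscillation with one Gaussian-modulated burst (illustrative)
fs = 250
t = np.arange(4 * fs) / fs
true_envelope = np.exp(-((t - 2.0) ** 2) / (2 * 0.3 ** 2))
signal = true_envelope * np.sin(2 * np.pi * 10 * t)

env = analytic_envelope(signal)
bursts = detect_bursts(env, fs, cycle_hz=10, thresh=0.5)  # one burst expected
```

Burst amplitude and duration statistics, as in the study, would then be pooled over the `(onset, offset)` pairs across bands and subjects.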


2021, Vol 69, pp. 102955
Author(s): Maedeh Najafi Ashtiani, Mohammed N. Ashtiani, Mohammadreza Asghari Oskoei

2021, Vol 4 (1)
Author(s): Colin W. Hoy, Sheila C. Steiner, Robert T. Knight

Abstract: Learning signals during reinforcement learning and cognitive control rely on valenced reward prediction errors (RPEs) and non-valenced salience prediction errors (PEs) driven by surprise magnitude. A core debate in reward learning focuses on whether valenced and non-valenced PEs can be isolated in the human electroencephalogram (EEG). We combined behavioral modeling and single-trial EEG regression to disentangle sequential PEs in an interval-timing task dissociating outcome valence, magnitude, and probability. Multiple regression across temporal, spatial, and frequency dimensions characterized a spatio-tempo-spectral cascade from early valenced RPE value to non-valenced RPE magnitude, followed by outcome probability indexed by a late frontal positivity. Separating negative and positive outcomes revealed that the valenced RPE value effect is an artifact of overlap between two non-valenced RPE magnitude responses: a frontal theta feedback-related negativity on losses and a posterior delta reward positivity on wins. These results reconcile longstanding debates on the sequence of components representing reward and salience PEs in the human EEG.
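The core technique named here, single-trial (mass-univariate) EEG regression, fits the same linear model at every time point, with model-derived prediction errors as trial-wise regressors. The sketch below illustrates this on simulated data; the regressor names, effect size, and time window are assumptions for the example and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_times = 200, 50

# Hypothetical model-derived regressors, one value per trial (illustrative):
rpe_value = rng.normal(size=n_trials)      # signed (valenced) RPE
rpe_magnitude = np.abs(rpe_value)          # unsigned salience PE
probability = rng.uniform(0, 1, n_trials)  # outcome probability

# Design matrix: intercept plus the three regressors.
X = np.column_stack([np.ones(n_trials), rpe_value, rpe_magnitude, probability])

# Simulated single-trial EEG with an RPE-magnitude effect around samples 20-30.
eeg = rng.normal(size=(n_trials, n_times))
eeg[:, 20:30] += 0.8 * rpe_magnitude[:, None]

# Ordinary least squares at every time point (mass-univariate regression);
# betas[k, t] is regressor k's effect at time t.
betas, *_ = np.linalg.lstsq(X, eeg, rcond=None)   # shape (4, n_times)
peak_time = int(np.argmax(np.abs(betas[2])))      # strongest magnitude effect
```

In a real analysis the beta time courses would be tested against zero across subjects (with multiple-comparison correction) rather than read off a single fit.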


2021, Vol 15
Author(s): Saeedeh Hashemnia, Lukas Grasse, Shweta Soni, Matthew S. Tata

Recent deep-learning artificial neural networks have shown remarkable success in recognizing natural human speech; however, the reasons for their success are not entirely understood. Their success may stem from the fact that state-of-the-art networks use recurrent layers or dilated convolutional layers, which give the network a time-dependent feature space. The importance of time-dependent features in human cortical mechanisms of speech perception, measured by electroencephalography (EEG) and magnetoencephalography (MEG), has also been of particular recent interest. It is possible that recurrent neural networks (RNNs) achieve their success by emulating aspects of cortical dynamics, albeit through very different computational mechanisms. In that case, we should observe commonalities between the temporal dynamics of deep-learning models, particularly in recurrent layers, and brain electrical activity (EEG) during speech perception. We explored this prediction by presenting the same sentences to both human listeners and the Deep Speech RNN and compared the temporal dynamics of the EEG and the RNN units for identical sentences. We tested whether the recently discovered phenomenon of envelope phase tracking in the human EEG is also evident in RNN hidden layers. We furthermore predicted that the clustering of dissimilarity between model representations of pairs of stimuli would be similar in RNN and EEG dynamics. We found that the dynamics of both the recurrent layer of the network and human EEG signals exhibit envelope phase tracking with similar time lags. We also computed representational distance matrices (RDMs) of brain and network responses to speech stimuli. The model RDMs became more similar to the brain RDM from early network layers to later ones, peaking at the recurrent layer. These results suggest that the Deep Speech RNN captures a representation of the temporal features of speech in a manner similar to the human brain.
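The RDM comparison used above can be sketched compactly: build one dissimilarity matrix per system (1 minus the correlation between response patterns for every stimulus pair), then rank-correlate their upper triangles. The following numpy-only sketch uses synthetic "brain" and "layer" responses sharing a common category structure; the stimulus counts, noise levels, and variable names are assumptions for the example.

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns (rows) of every stimulus pair."""
    return 1.0 - np.corrcoef(responses)

def rdm_similarity(rdm_a, rdm_b):
    """Spearman-style comparison of two RDMs: Pearson correlation of the
    ranks of their upper-triangle entries (numpy-only)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    a, b = rdm_a[iu], rdm_b[iu]
    ranks_a = a.argsort().argsort()
    ranks_b = b.argsort().argsort()
    return np.corrcoef(ranks_a, ranks_b)[0, 1]

rng = np.random.default_rng(1)
n_stim, n_feat = 12, 40

# Shared latent structure: 12 stimuli in 3 categories of 4 (illustrative).
proto = rng.normal(size=(3, n_feat))
latent = proto[np.repeat(np.arange(3), 4)] + 0.3 * rng.normal(size=(n_stim, n_feat))

brain = latent + 0.3 * rng.normal(size=(n_stim, n_feat))     # "EEG" patterns
layer = latent + 0.3 * rng.normal(size=(n_stim, n_feat))     # "RNN" patterns
shuffled = rng.normal(size=(n_stim, n_feat))                 # unrelated control

sim_matched = rdm_similarity(rdm(brain), rdm(layer))    # high: shared geometry
sim_control = rdm_similarity(rdm(brain), rdm(shuffled)) # near zero
```

Repeating `rdm_similarity` for each network layer against the brain RDM yields the layer-by-layer similarity profile the study reports.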


2021, Vol 103 (2)
Author(s): Oleg E. Karpov, Vadim V. Grubov, Vladimir A. Maksimenko, Nikita Utaschev, Viachaslav E. Semerikov, ...
