Quantification and visualization of event-related changes in oscillatory brain activity in the time–frequency domain

Author(s):  
Bernhard Graimann ◽  
Gert Pfurtscheller
2012 ◽  
Vol 25 (0) ◽  
pp. 192
Author(s):  
Davide Bottari ◽  
Sophie Rohlf ◽  
Marlene Hense ◽  
Boukje Habets ◽  
Brigitte Roeder

Event-related potentials (ERPs) to the second stimulus of a pair are known to be reduced in amplitude. The magnitude of this ‘refractoriness’ is modulated by both the interstimulus interval (ISI) and the similarity between the two stimuli. Intramodal refractoriness is interpreted as an index of a temporary decrement in neural responsiveness; cross-modal refractoriness might therefore indicate shared neural generators between modalities. We analysed oscillatory neuronal activity while participants were engaged in an oddball paradigm with auditory (4000 Hz, 50 ms long, 90 dB, bilateral) and tactile stimuli (50 ms long, 125 Hz vibrations, index fingers) presented in random order with an ISI of either 1000 or 2000 ms. Participants were required to detect rare tactile (middle fingers) and auditory (600 Hz) deviants. A time–frequency analysis of the brain response to the second stimulus of each pair (T-T, A-A, T-A and A-T), contrasting Short and Long ISIs, revealed a reduced refractory effect after the Long ISI relative to the Short ISI in all pairs, both intramodal and cross-modal. This emerged as a broadly distributed increase of evoked theta activity (3–7 Hz, 100–500 ms). Only in intramodal tactile pairs and cross-modal tactile–auditory pairs did we also observe that the Long ISI, relative to the Short ISI, produced a decrease of induced alpha (8–12 Hz, 200–700 ms), a typical sign of enhanced neural excitability and thus decreased refractoriness. These data suggest that the somatosensory and auditory cortices display different neural markers of refractoriness and that the auditory cortex might exert a stronger low-level influence on the tactile cortex than vice versa.
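The evoked/induced distinction used above can be made concrete: evoked power is the time–frequency power of the trial average, which retains only phase-locked activity, while induced power is the mean single-trial power with the evoked part subtracted. A minimal numpy-only sketch (not the authors' pipeline; the Morlet implementation, cycle count, and toy signal are illustrative assumptions):

```python
import numpy as np

def morlet(freq, fs, n_cycles=5):
    """Complex Morlet wavelet centred on `freq` (Hz)."""
    dur = n_cycles / freq
    t = np.arange(-dur, dur, 1 / fs)
    sigma = n_cycles / (2 * np.pi * freq)
    gauss = np.exp(-t**2 / (2 * sigma**2))
    return gauss * np.exp(2j * np.pi * freq * t)

def tf_power(trials, freqs, fs):
    """Per-trial time-frequency power: shape (n_trials, n_freqs, n_times)."""
    out = np.empty((trials.shape[0], len(freqs), trials.shape[1]))
    for fi, f in enumerate(freqs):
        w = morlet(f, fs)
        for ti, trial in enumerate(trials):
            out[ti, fi] = np.abs(np.convolve(trial, w, mode="same"))**2
    return out

def evoked_and_induced(trials, freqs, fs):
    """Evoked = power of the trial average (phase-locked part);
    induced = mean single-trial power minus the evoked part."""
    evoked = tf_power(trials.mean(axis=0, keepdims=True), freqs, fs)[0]
    total = tf_power(trials, freqs, fs).mean(axis=0)
    return evoked, total - evoked

# Toy demo: 20 trials containing a phase-locked 5 Hz component plus a
# 10 Hz component with random phase across trials (plus a little noise)
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
trials = np.array([np.sin(2 * np.pi * 5 * t)
                   + np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
                   + 0.1 * rng.standard_normal(t.size)
                   for _ in range(20)])
ev, ind = evoked_and_induced(trials, freqs=np.array([5.0, 10.0]), fs=fs)
# The phase-locked 5 Hz component dominates the evoked response,
# while the random-phase 10 Hz component dominates the induced response.
```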


2012 ◽  
Vol 24 (2) ◽  
pp. 337-350 ◽  
Author(s):  
Álvaro Darriba ◽  
Paula Pazo-Álvarez ◽  
Almudena Capilla ◽  
Elena Amenedo

Despite the importance of change detection (CD) for visual perception and for performance in our environment, observers often miss changes that should be easily noticed. In the present study, we employed time–frequency analysis to investigate the neural activity associated with CD and change blindness (CB). Observers were presented with two successive visual displays and had to look for a change in orientation in any of four sinusoidal gratings between the two displays. Theta power increased widely over the scalp after the second display when a change was consciously detected. Relative to no-change and CD, CB was associated with a pronounced theta power enhancement at parieto-occipital and occipital sites and a broadly distributed alpha power suppression during processing of the prechange display. Finally, power suppressions in the beta band following the second display show that, even when a change is not consciously detected, it might be represented to a certain degree. These results show the potential of time–frequency analysis to deepen our knowledge of the temporal course of the neural events underlying CD. They further reveal that the process resulting in CB begins even before the occurrence of the change itself.
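The power enhancements and suppressions reported here are conventionally quantified as percentage band-power change relative to a pre-stimulus baseline (positive values indicating event-related synchronization, negative values event-related desynchronization). A minimal sketch, with an assumed baseline window and toy data:

```python
import numpy as np

def erd_ers_percent(power, times, baseline=(-0.5, 0.0)):
    """Band power as percent change from a pre-stimulus baseline:
    positive = ERS (power enhancement), negative = ERD (suppression)."""
    mask = (times >= baseline[0]) & (times < baseline[1])
    ref = power[..., mask].mean(axis=-1, keepdims=True)
    return 100.0 * (power - ref) / ref

# Toy demo: alpha-band power that drops 40% after stimulus onset at t = 0
times = np.linspace(-0.5, 1.0, 151)
power = np.where(times < 0, 10.0, 6.0)   # arbitrary units
change = erd_ers_percent(power, times)
# pre-stimulus samples sit at 0%; post-stimulus samples at -40% (alpha ERD)
```

The same function applies unchanged to a (channels × frequencies × times) power array, since the baseline is averaged over the last axis only.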


2012 ◽  
Vol 107 (9) ◽  
pp. 2475-2484 ◽  
Author(s):  
Paolo Manganotti ◽  
Emanuela Formaggio ◽  
Silvia Francesca Storti ◽  
Daniele De Massari ◽  
Alessandro Zamboni ◽  
...  

Dynamic changes in spontaneous electroencephalographic (EEG) rhythms occur with high variability. An innovative way to study brain function is to trigger oscillatory brain activity with transcranial magnetic stimulation (TMS). EEG–TMS coregistration was performed on five healthy subjects during a 1-day experimental session that involved four steps: baseline acquisition, unconditioned single-pulse TMS, intracortical inhibition (ICI, 3 ms) paired-pulse TMS, and transcallosal stimulation over the left and right primary motor cortex (M1). A time-frequency analysis based on the wavelet method was used to characterize rapid modifications of oscillatory EEG rhythms induced by TMS. Single, paired, and transcallosal TMS applied over the sensorimotor areas induced rapid desynchronization over the frontal and central-parietal electrodes, mainly in the alpha and beta bands, followed by a rebound of synchronization, together with rapid synchronization of delta and theta activity. Wavelet analysis combined with a perturbation approach is a novel way to investigate the modulation of oscillatory brain activity. The main findings are consistent with the concept that the human motor system may be based on network-like oscillatory cortical activity and might be modulated by single, paired, and transcallosal magnetic pulses applied to M1, suggesting a phenomenon of fast resetting of brain activity and triggering of slow activity.
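The sequence of desynchronization followed by a synchronization rebound can be tracked within a single band as the instantaneous amplitude envelope of the signal. A numpy-only sketch (an ideal FFT band-pass plus the analytic signal, standing in for the study's wavelet pipeline; the toy signal and windows are assumptions):

```python
import numpy as np

def band_envelope(x, fs, lo, hi):
    """Instantaneous amplitude in [lo, hi] Hz: ideal FFT band-pass,
    then the analytic signal's magnitude (numpy-only Hilbert transform)."""
    n = len(x)
    freqs = np.fft.fftfreq(n, 1 / fs)
    X = np.fft.fft(x)
    X[(np.abs(freqs) < lo) | (np.abs(freqs) > hi)] = 0   # ideal band-pass
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0                              # analytic-signal filter
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(X * h))

# Toy demo: 10 Hz alpha that desynchronizes (amplitude drop) for 0.5 s after
# a pulse at t = 1 s, then rebounds above baseline (synchronization rebound)
fs = 500
t = np.arange(0, 3, 1 / fs)
amp = np.where((t >= 1.0) & (t < 1.5), 0.3,
               np.where((t >= 1.5) & (t < 2.5), 1.5, 1.0))
x = amp * np.sin(2 * np.pi * 10 * t)
env = band_envelope(x, fs, 8, 12)
# env drops below baseline during 1.0-1.5 s and exceeds it during 1.5-2.5 s
```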


Author(s):  
Stefania Rasulo ◽  
Kenneth Vilhelmsen ◽  
F. R. van der Weel ◽  
Audrey L. H. van der Meer

This study investigated evoked and oscillatory brain activity in response to forward visual motion at three different ecologically valid speeds, simulated through an optic flow pattern consisting of a virtual road with moving poles at either side of it. Participants were prelocomotor infants at 4–5 months, crawling infants at 9–11 months, primary school children at 6 years, adolescents at 12 years, and young adults. N2 latencies for motion decreased significantly with age from around 400 ms in prelocomotor infants to 325 ms in crawling infants, and from 300 and 275 ms in 6- and 12-year-olds, respectively, to 250 ms in adults. Infants at 4–5 months displayed the longest latencies and appeared unable to differentiate between motion speeds. In contrast, crawling infants at 9–11 months and 6-year-old children differentiated between low, medium and high speeds, with shortest latency for low speed. Adolescents and adults displayed similar short latencies for the three motion speeds, indicating that they perceived them as equally easy to detect. Time–frequency analyses indicated that with increasing age, participants showed a progression from low- to high-frequency desynchronized oscillatory brain activity in response to visual motion. The developmental differences in motion speed perception are interpreted in terms of a combination of neurobiological development and increased experience with self-produced locomotion. Our findings suggest that motion speed perception is not fully developed until adolescence, which has implications for children’s road traffic safety.
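N2 latencies such as those above are typically read off as the time of the most negative deflection of the trial-averaged ERP within a search window. A minimal sketch, with an assumed window and a toy waveform (not the study's data):

```python
import numpy as np

def n2_latency(erp, times, window=(0.2, 0.5)):
    """Latency (s) of the most negative deflection of a trial-averaged
    ERP within a search window - a simple N2 peak-picking rule."""
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.flatnonzero(mask)[np.argmin(erp[mask])]
    return times[idx]

# Toy demo: an ERP with a Gaussian negative deflection peaking at 325 ms
times = np.arange(-0.1, 0.8, 0.002)
erp = -np.exp(-((times - 0.325) ** 2) / (2 * 0.03 ** 2))
latency = n2_latency(erp, times)   # close to 0.325 s
```

In practice the search window is chosen per age group, since infant latencies (around 400 ms here) fall outside an adult window.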


Author(s):  
Wentao Xie ◽  
Qian Zhang ◽  
Jin Zhang

Smart eyewear (e.g., AR glasses) is considered to be the next big breakthrough for wearable devices. The interaction of state-of-the-art smart eyewear mostly relies on a touchpad, which is obtrusive and not user-friendly. In this work, we propose a novel acoustic-based upper facial action (UFA) recognition system that serves as a hands-free interaction mechanism for smart eyewear. The proposed system is a glass-mounted acoustic sensing system with several pairs of commercial speakers and microphones to sense UFAs. There are two main challenges in designing the system. The first challenge is that the system operates in a severe multipath environment and the received signal can suffer large attenuation due to frequency-selective fading, which degrades the system's performance. To overcome this challenge, we design an Orthogonal Frequency Division Multiplexing (OFDM)-based channel state information (CSI) estimation scheme that is able to measure the phase changes caused by a facial action while mitigating the frequency-selective fading. The second challenge is that because the skin deformation caused by a facial action is tiny, the received signal has very small variations. Thus, it is hard to derive useful information directly from the received signal. To resolve this challenge, we apply a time-frequency analysis to derive the time-frequency domain signal from the CSI. We show that the derived time-frequency domain signal contains distinct patterns for different UFAs. Furthermore, we design a Convolutional Neural Network (CNN) to extract high-level features from the time-frequency patterns and classify the features into six UFAs, namely, cheek-raiser, brow-raiser, brow-lower, wink, blink and neutral. We evaluate the performance of our system through experiments on data collected from 26 subjects. The experimental results show that our system can recognize the six UFAs with an average F1-score of 0.92.
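In its simplest form, OFDM-based CSI estimation divides the FFT of the received frame by the known transmitted pilot symbols; per-subcarrier phase changes between frames then expose tiny channel changes. A toy numpy sketch under idealized assumptions (no noise, cyclic prefix omitted, and a hypothetical two-path channel standing in for skin deformation; this is not the paper's actual scheme):

```python
import numpy as np

def estimate_csi(rx_frame, tx_symbols, n_fft):
    """Per-subcarrier least-squares CSI estimate: FFT of the received
    frame divided by the known transmitted symbols."""
    return np.fft.fft(rx_frame, n_fft) / tx_symbols

# Toy demo: a two-path channel whose second tap's phase shifts slightly
# between two frames; the shift shows up as a per-subcarrier CSI phase change
n_fft = 64
rng = np.random.default_rng(1)
tx = np.exp(1j * rng.choice([0, np.pi / 2, np.pi, 3 * np.pi / 2], n_fft))  # QPSK pilots
frame = np.fft.ifft(tx)                        # time-domain OFDM symbol
h1 = np.zeros(n_fft, complex)
h1[0], h1[3] = 1.0, 0.4                        # direct path + one echo
h2 = h1.copy()
h2[3] = 0.4 * np.exp(1j * 0.2)                 # tiny phase change on the echo
rx1 = np.fft.ifft(np.fft.fft(frame) * np.fft.fft(h1))   # circular convolution
rx2 = np.fft.ifft(np.fft.fft(frame) * np.fft.fft(h2))
dphase = np.angle(estimate_csi(rx2, tx, n_fft) / estimate_csi(rx1, tx, n_fft))
# dphase is small but clearly nonzero on the affected subcarriers
```

Because the division cancels the transmitted symbols, the estimate recovers the channel frequency response exactly in this noiseless setting; real deployments would average over frames and subcarriers to suppress noise, as the paper's pipeline does before its time-frequency analysis.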

