Visuomotor temporal adaptation is tuned to gamma brain oscillatory coherence

2020 ◽  
Author(s):  
Clara Cámara ◽  
Cristina de la Malla ◽  
Josep Marco-Pallarés ◽  
Joan López-Moliner

ABSTRACT
Every time we use our smartphone, tablet, or other electronic devices we are exposed to temporal delays between our actions and the sensory feedback. We can compensate for such delays by adjusting our motor commands, and doing so likely requires establishing new temporal mappings between motor areas and sensory predictions. However, little is known about the neural underpinnings that would support building new temporal correspondences between different brain areas. Here we address the possibility that communication through coherence, which is thought to support interareal neural communication, underlies the neural processes that account for how humans cope with additional delays between motor and sensory areas. We recorded EEG activity while participants intercepted moving targets using a cursor that followed their hand with a delay, rather than seeing their own hand. Participants adjusted their movements to the delayed visual feedback and intercepted the target with the cursor. The EEG data show a significant increase in beta- and gamma-band coherence between visual and motor areas during the ongoing hand movement towards interception. However, when comparing participants by level of adaptation, only the increase in the gamma band correlated with the level of temporal adaptation. We are able to describe the time course of the coherence using coupled oscillators, showing that the times at which high coherence is achieved fall within ranges useful for solving the task. Altogether, these results demonstrate the functional relevance of brain coherence in a complex task where adapting to new delays is crucial.

AUTHOR SUMMARY
Humans are often exposed to delays between their actions and the incoming sensory feedback caused by those actions. While there have been advances in understanding the conditions under which temporal adaptation can occur, little is known about the neural mechanisms enabling it.
In the present study we measure brain activity (EEG) to investigate whether communication through coherence between motor and sensory areas might be responsible for one's ability to cope with externally imposed delays in an interception task. We show evidence that neural coherence in the gamma band between visual and motor areas is related to the degree of adaptation to temporal delays.
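The coherence measure at the heart of this study can be illustrated with a small sketch. This is not the authors' pipeline — the synthetic "visual" and "motor" channels, the sampling rate, and the band limits below are all invented for illustration — but it shows the standard magnitude-squared coherence computation (here via SciPy) that quantifies interareal coupling at each frequency.

```python
# Illustrative sketch (not the authors' analysis): magnitude-squared
# coherence between two synthetic channels that share a common 40 Hz
# (gamma-band) component. All signals and parameters are hypothetical.
import numpy as np
from scipy.signal import coherence

fs = 500.0                      # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)    # 10 s of data
rng = np.random.default_rng(0)

shared = np.sin(2 * np.pi * 40 * t)            # common gamma drive
visual = shared + rng.standard_normal(t.size)  # "visual" channel
motor = shared + rng.standard_normal(t.size)   # "motor" channel

# Welch-style magnitude-squared coherence across frequencies
f, cxy = coherence(visual, motor, fs=fs, nperseg=512)

# The coherence spectrum peaks at the shared gamma frequency
print(f[np.argmax(cxy)])   # peak near 40 Hz
```

Because both channels carry the same 40 Hz drive on top of independent noise, coherence is high only at that frequency; in real EEG, a task-related rise of such a peak between sensor groups is what band-limited coherence analyses look for.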

2004 ◽  
Vol 16 (3) ◽  
pp. 503-522 ◽  
Author(s):  
Matthias M. Müller ◽  
Andreas Keil

In the present study, subjects selectively attended to the color of checkerboards in a feature-based attention paradigm. Induced gamma band responses (GBRs), the induced alpha band, and the event-related potential (ERP) were analyzed to uncover neuronal dynamics during selective feature processing. Replicating previous ERP findings, the selection negativity (SN) with a latency of about 160 msec was extracted. Furthermore, and similarly to previous EEG studies, a gamma band peak in a time window between 290 and 380 msec was found. This peak had its major energy in the 55- to 70-Hz range and was significantly larger for the attended color. Contrary to previous human induced gamma band studies, a much earlier 40- to 50-Hz peak in a time window between 160 and 220 msec after stimulus onset, and thus concurrent with the SN, was prominent, with significantly more energy for the attended as opposed to the unattended color. The induced alpha band (9.8–11.7 Hz), on the other hand, exhibited a marked suppression for the attended color in a time window between 450 and 600 msec after stimulus onset. A comparison of the time courses of the 40- to 50-Hz and 55- to 70-Hz induced GBRs, the induced alpha band, and the ERP revealed temporal coincidences for changes in the morphology of these brain responses. Despite these similarities in the time domain, the cortical source configuration was found to discriminate between the induced GBRs and the SN. Our results suggest that large-scale synchronous high-frequency brain activity, as measured in the human GBR, plays a specific role in the attentive processing of stimulus features.


2012 ◽  
Vol 24 (5) ◽  
pp. 1149-1164 ◽  
Author(s):  
Marcela Peña ◽  
Lucia Melloni

Spoken sentence comprehension relies on rapid and effortless temporal integration of speech units displayed at different rates. Temporal integration refers to how chunks of information perceived at different time scales are linked together by the listener in mapping speech sounds onto meaning. The neural implementation of this integration remains unclear. This study explores the role of short and long windows of integration in accessing meaning from long samples of speech. In a cross-linguistic study, we explore the time course of oscillatory brain activity between 1 and 100 Hz, recorded using EEG, during the processing of native and foreign languages. We compare oscillatory responses in a group of Italian and Spanish native speakers while they attentively listen to Italian, Japanese, and Spanish utterances, played either forward or backward. The results show that both groups of participants display a significant increase in gamma band power (55–75 Hz) only when they listen to their native language played forward. The increase in gamma power starts around 1000 msec after the onset of the utterance and decreases by its end, resembling the time course of access to meaning during speech perception. In contrast, changes in low-frequency power show similar patterns for both native and foreign languages. We propose that gamma band power reflects a temporal binding phenomenon concerning the coordination of neural assemblies involved in accessing meaning of long samples of speech.
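The core measurement in the speech study above is a time course of band-limited power. A minimal sketch of that idea, under invented parameters (the signal, its 65 Hz "gamma burst" starting 1 s after onset, and all window sizes are assumptions, not the study's data), uses a spectrogram and averages power inside the 55–75 Hz band:

```python
# Hypothetical sketch: tracking gamma-band (55-75 Hz) power over time.
# The simulated "utterance" has a gamma burst starting 1 s after onset,
# loosely mimicking the delayed gamma increase reported in the abstract.
import numpy as np
from scipy.signal import spectrogram

fs = 500.0                        # sampling rate, Hz (assumed)
t = np.arange(0, 4, 1 / fs)       # 4 s "utterance"
rng = np.random.default_rng(1)

# 65 Hz burst appearing ~1 s after onset, on top of broadband noise
burst = np.where(t > 1.0, np.sin(2 * np.pi * 65 * t), 0.0)
eeg = burst + 0.5 * rng.standard_normal(t.size)

f, times, Sxx = spectrogram(eeg, fs=fs, nperseg=256, noverlap=192)
band = (f >= 55) & (f <= 75)
gamma_power = Sxx[band].mean(axis=0)   # band power per time bin

early = gamma_power[times < 0.7].mean()   # before the burst
late = gamma_power[times > 1.5].mean()    # during the burst
print(late > early)
```

Real analyses typically use wavelet or multitaper decompositions and baseline-normalize per trial, but the band-selection-and-average step is the same.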


2012 ◽  
Vol 24 (2) ◽  
pp. 521-529 ◽  
Author(s):  
Frank Oppermann ◽  
Uwe Hassler ◽  
Jörg D. Jescheniak ◽  
Thomas Gruber

The human cognitive system is highly efficient in extracting information from our visual environment. This efficiency is based on acquired knowledge that guides our attention toward relevant events and promotes the recognition of individual objects as they appear in visual scenes. The experience-based representation of such knowledge contains not only information about the individual objects but also about relations between them, such as the typical context in which individual objects co-occur. The present EEG study aimed at exploring the availability of such relational knowledge in the time course of visual scene processing, using oscillatory evoked gamma-band responses as a neural correlate for a currently activated cortical stimulus representation. Participants decided whether two simultaneously presented objects were conceptually coherent (e.g., mouse–cheese) or not (e.g., crown–mushroom). We obtained increased evoked gamma-band responses for coherent scenes compared with incoherent scenes beginning as early as 70 msec after stimulus onset within a distributed cortical network, including the right temporal, the right frontal, and the bilateral occipital cortex. This finding provides empirical evidence for the functional importance of evoked oscillatory activity in high-level vision beyond the visual cortex and, thus, gives new insights into the functional relevance of neuronal interactions. It also indicates the very early availability of experience-based knowledge that might be regarded as a fundamental mechanism for the rapid extraction of the gist of a scene.


2014 ◽  
Vol 111 (1) ◽  
pp. 112-127 ◽  
Author(s):  
L. Thaler ◽  
J. L. Milne ◽  
S. R. Arnott ◽  
D. Kish ◽  
M. A. Goodale

We have shown in previous research (Thaler L, Arnott SR, Goodale MA. PLoS One 6: e20162, 2011) that motion processing through echolocation activates temporal-occipital cortex in blind echolocation experts. Here we investigated how neural substrates of echo-motion are related to neural substrates of auditory source-motion and visual-motion. Three blind echolocation experts and twelve sighted echolocation novices underwent functional MRI scanning while they listened to binaural recordings of moving or stationary echolocation or auditory source sounds located either in left or right space. Sighted participants' brain activity was also measured while they viewed moving or stationary visual stimuli. For each of the three modalities separately (echo, source, vision), we then identified motion-sensitive areas in temporal-occipital cortex and in the planum temporale. We then used a region of interest (ROI) analysis to investigate cross-modal responses, as well as laterality effects. In both sighted novices and blind experts, we found that temporal-occipital source-motion ROIs did not respond to echo-motion, and echo-motion ROIs did not respond to source-motion. This double dissociation was absent in planum temporale ROIs. Furthermore, temporal-occipital echo-motion ROIs in blind, but not sighted, participants showed evidence for a contralateral motion preference. Temporal-occipital source-motion ROIs did not show evidence for a contralateral preference in either blind or sighted participants. Our data suggest a functional segregation of the processing of auditory source-motion and echo-motion in human temporal-occipital cortex. Furthermore, the data suggest that the echo-motion response in blind experts may represent a reorganization, rather than an exaggeration, of the response observed in sighted novices. This reorganization may involve the recruitment of "visual" cortical areas.


2020 ◽  
Author(s):  
Alina Pauline Liebisch ◽  
Thomas Eggert ◽  
Alina Shindy ◽  
Elia Valentini ◽  
Stephanie Irving ◽  
...  

ABSTRACT
Background: The past two decades have seen a particular focus on high-frequency neural activity in the gamma band (>30 Hz). However, gamma band activity shares its frequency range with unwanted artefacts from muscular activity.
New Method: We developed a novel approach to remove muscle artefacts from neurophysiological data. We re-analysed existing EEG data that were decomposed by a blind source separation method (independent component analysis, ICA), which helped to better separate single muscle spikes spatially and temporally. We then applied an adaptive algorithm that detects these singled-out muscle spikes.
Results: We obtained data almost free of muscle artefacts; we needed to remove significantly fewer artefact components from the ICA, and we included more trials in the statistical analysis compared with standard ICA artefact removal. All pain-related cortical effects in the gamma band were preserved, which underlines the high efficacy and precision of this algorithm.
Conclusions: Our results show a significant improvement in data quality through the preservation of task-relevant gamma oscillations of cortical origin. We were able to precisely detect, gauge, and carve out single muscle spikes from the time course of neurophysiological measures. We advocate the application of the tool for studies investigating gamma activity that contain a rather low number of trials, as well as for data that are highly contaminated with muscle artefacts. This validation of our tool allows for its application to event-free continuous EEG, for which artefact removal is more challenging.
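The decompose–clean–reconstruct workflow that this method builds on can be sketched in a few lines. This is emphatically not the authors' adaptive spike-detection algorithm: the sources, mixing matrix, and the naive peak-to-std "spike detector" below are all invented, and only the generic ICA round trip (here scikit-learn's FastICA) matches the abstract.

```python
# Minimal sketch of ICA-based artefact removal: unmix two channels,
# zero out the component carrying sparse high-amplitude "muscle spikes",
# and project back. All data and the detector heuristic are hypothetical.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
fs = 250.0
t = np.arange(0, 8, 1 / fs)

neural = np.sin(2 * np.pi * 10 * t)           # "cortical" rhythm
spikes = np.zeros_like(t)
spikes[rng.integers(0, t.size, 20)] = 8.0     # sparse muscle spikes

S = np.c_[neural, spikes]                     # sources (samples x 2)
A = np.array([[1.0, 0.5], [0.7, 1.2]])        # mixing onto 2 "electrodes"
X = S @ A.T

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X)                # estimated components

# Naive detector: the component with the largest peak-to-std ratio is
# treated as the spike artefact and zeroed before reconstruction.
ratios = np.abs(sources).max(axis=0) / sources.std(axis=0)
sources[:, np.argmax(ratios)] = 0.0
cleaned = ica.inverse_transform(sources)

# Extreme values shrink once the spike component is removed
print(np.abs(cleaned).max() < np.abs(X).max())
```

The paper's contribution is a far more careful, adaptive detection of individual spikes within components; the sketch only shows where such a detector would sit in the pipeline.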


2021 ◽  
Vol 2021 (2) ◽  
Author(s):  
Shira Baror ◽  
Biyu J He

Abstract Flipping through social media feeds, viewing exhibitions in a museum, or walking through the botanical gardens, people consistently choose to engage with and disengage from visual content. Yet, in most laboratory settings, the visual stimuli, their presentation duration, and the task at hand are all controlled by the researcher. Such settings largely overlook the spontaneous nature of human visual experience, in which perception takes place independently from specific task constraints and its time course is determined by the observer as a self-governing agent. Currently, much remains unknown about how spontaneous perceptual experiences unfold in the brain. Are all perceptual categories extracted during spontaneous perception? Does spontaneous perception inherently involve volition? Is spontaneous perception segmented into discrete episodes? How do different neural networks interact over time during spontaneous perception? These questions are imperative to understand our conscious visual experience in daily life. In this article we propose a framework for spontaneous perception. We first define spontaneous perception as a task-free and self-paced experience. We propose that spontaneous perception is guided by four organizing principles that grant it temporal and spatial structures. These principles include coarse-to-fine processing, continuity and segmentation, agency and volition, and associative processing. We provide key suggestions illustrating how these principles may interact with one another in guiding the multifaceted experience of spontaneous perception. We point to testable predictions derived from this framework, including (but not limited to) the roles of the default-mode network and slow cortical potentials in underlying spontaneous perception. We conclude by suggesting several outstanding questions for future research, extending the relevance of this framework to consciousness and spontaneous brain activity. 
In conclusion, the spontaneous perception framework proposed herein integrates components in human perception and cognition, which have been traditionally studied in isolation, and opens the door to understand how visual perception unfolds in its most natural context.


2019 ◽  
Author(s):  
Mattson Ogg ◽  
Thomas A. Carlson ◽  
L. Robert Slevc

Human listeners are bombarded by acoustic information that the brain rapidly organizes into coherent percepts of objects and events in the environment, which aids speech and music perception. The efficiency of auditory object recognition belies the critical constraint that acoustic stimuli necessarily require time to unfold. Using magnetoencephalography (MEG), we studied the time course of the neural processes that transform dynamic acoustic information into auditory object representations. Participants listened to a diverse set of 36 tokens comprising everyday sounds from a typical human environment. Multivariate pattern analysis was used to decode the sound tokens from the MEG recordings. We show that sound tokens can be decoded from brain activity beginning 90 milliseconds after stimulus onset, with peak decoding performance occurring at 155 milliseconds post stimulus onset. Decoding performance was primarily driven by differences between category representations (e.g., environmental vs. instrument sounds), although within-category decoding was better than chance. Representational similarity analysis revealed that these emerging neural representations were related to harmonic and spectrotemporal differences among the stimuli, which correspond to canonical acoustic features processed by the auditory pathway. Our findings begin to link the processing of physical sound properties with the perception of auditory objects and events in cortex.


2022 ◽  
pp. 1-13
Author(s):  
Audrey Siqi-Liu ◽  
Tobias Egner ◽  
Marty G. Woldorff

Abstract To adaptively interact with the uncertainties of daily life, we must match our level of cognitive flexibility to contextual demands—being more flexible when frequent shifting between different tasks is required and more stable when the current task requires a strong focus of attention. Such cognitive flexibility adjustments in response to changing contextual demands have been observed in cued task-switching paradigms, where the performance cost incurred by switching versus repeating tasks (switch cost) scales inversely with the proportion of switches (PS) within a block of trials. However, the neural underpinnings of these adjustments in cognitive flexibility are not well understood. Here, we recorded 64-channel EEG measures of electrical brain activity as participants switched between letter and digit categorization tasks in varying PS contexts, from which we extracted ERPs elicited by the task cue and alpha power differences during the cue-to-target interval and the resting precue period. The temporal resolution of the EEG allowed us to test whether contextual adjustments in cognitive flexibility are mediated by tonic changes in processing mode or by changes in phasic, task cue-triggered processes. We observed reliable modulation of behavioral switch cost by PS context that was mirrored in both cue-evoked ERP and time–frequency effects but not by blockwide precue EEG changes. These results indicate that different levels of cognitive flexibility are instantiated after the presentation of task cues, rather than by being maintained as a tonic state throughout low- or high-switch contexts.


2021 ◽  
pp. 8-12
Author(s):  
Marcos Nadal ◽  
Camilo J. Cela-Conde

The main goal of the article “The Neural Foundations of Aesthetic Appreciation” was to bring together the available evidence on the neural underpinnings of aesthetics from neuroimaging and neurology and to offer an integral interpretative model. The authors relate how they wanted to explain how aesthetic appreciation is related to brain activity, and why some studies had located that activity in certain regions while other studies had found it elsewhere. The authors proposed that there might be at least two stages of appreciation. The first stage is the formation of an initial impression. It involves perceptual processes interacting with attentional control signals and is mediated by a fronto-parieto-occipital network. The second stage is a deeper evaluation of the image and involves affective processes, searching for meaning, recalling personal experiences, and activating knowledge stored in memory.


2000 ◽  
Vol 23 (3) ◽  
pp. 400-401 ◽  
Author(s):  
A. Daffertshofer ◽  
T. D. Frank ◽  
C. E. Peper ◽  
P. J. Beek

A critical discussion is provided of three central assumptions underlying Nunez's approach to modeling cortical activity. A plea is made for neurophysiologically realistic models involving nonlinearities, multiple time scales, and stochasticity.
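The modeling ingredients the commentary calls for — nonlinearity, stochasticity — also appear in the coupled-oscillator description used in the first abstract above. A toy sketch, with entirely arbitrary frequencies, coupling strength, and noise level, is a pair of noisy Kuramoto-style phase oscillators that phase-lock when coupling exceeds their frequency detuning:

```python
# Toy model: two nonlinearly coupled phase oscillators with additive
# noise (Euler-Maruyama integration). All parameter values are arbitrary
# choices for illustration, not fitted to any data.
import numpy as np

rng = np.random.default_rng(4)
dt, steps = 0.001, 20000
w1, w2 = 2 * np.pi * 40, 2 * np.pi * 42   # natural frequencies (rad/s)
K = 30.0                                  # coupling strength
noise = 0.5                               # stochastic term scale

th1, th2 = 0.0, 1.0
diffs = np.empty(steps)
for i in range(steps):
    d = th2 - th1
    th1 += dt * (w1 + K * np.sin(d)) + noise * np.sqrt(dt) * rng.standard_normal()
    th2 += dt * (w2 - K * np.sin(d)) + noise * np.sqrt(dt) * rng.standard_normal()
    diffs[i] = d

# Phase-locking value over the second half of the run
plv = np.abs(np.mean(np.exp(1j * diffs[steps // 2:])))
print(plv)   # close to 1 when K exceeds the frequency detuning
```

Locking requires the coupling to overcome the 2 Hz detuning (here it easily does); weakening K or strengthening the noise term drives the phase-locking value toward zero, which is the kind of parameter-dependent behavior such models are used to probe.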

