Auditory modulation of oscillatory activity in extra-striate visual cortex and its contribution to audio–visual multisensory integration: A human intracranial EEG study

2012 · Vol 25 (0) · pp. 198
Author(s): Manuel R. Mercier, John J. Foxe, Ian C. Fiebelkorn, John S. Butler, Theodore H. Schwartz, ...

Investigations have traditionally focused on activity in the sensory cortices as a function of their respective sensory inputs. However, converging evidence from multisensory research has shown that neural activity in a given sensory region can be modulated by stimulation of other so-called ancillary sensory systems. Both electrophysiology and functional imaging support the occurrence of multisensory processing in human sensory cortex, based on the latency of multisensory effects and their precise anatomical localization. Still, due to inherent methodological limitations, direct evidence of the precise mechanisms by which multisensory integration occurs within human sensory cortices is lacking. Using intracranial recordings in epileptic patients undergoing presurgical evaluation, we investigated the neurophysiological basis of multisensory integration in visual cortex. Subdural electrical brain activity was recorded while patients performed a simple detection task on randomly ordered Auditory-alone (A), Visual-alone (V), and Audio–Visual (AV) stimuli. We then performed time-frequency analysis: first, we investigated each condition separately to evaluate responses relative to baseline; then, we indexed multisensory integration using both the maximum criterion model (AV vs. V) and the additive model (AV vs. A+V). Our results show that auditory input significantly modulates neuronal activity in visual cortex by resetting the phase of ongoing oscillatory activity. This in turn leads to multisensory integration when auditory and visual stimuli are simultaneously presented.
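The two integration criteria named above reduce to simple contrasts on condition-averaged power. The following is a minimal NumPy sketch on synthetic trial-by-time power values — the condition means, effect sizes, and variable names are illustrative assumptions, not the authors' analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_times = 40, 200

# Synthetic trial-by-time power estimates for one electrode (arbitrary units).
# The AV mean is deliberately set above A + V to build in a superadditive effect.
power_a = rng.normal(0.2, 1.0, (n_trials, n_times))
power_v = rng.normal(1.0, 1.0, (n_trials, n_times))
power_av = rng.normal(1.6, 1.0, (n_trials, n_times))

def integration_indices(av, a, v):
    """Trial-averaged multisensory indices at each time point.

    max_criterion: AV vs. the stronger unisensory response (here effectively
    AV vs. V, since V dominates A); additive: AV vs. the summed unisensory
    responses (A + V).
    """
    av_m, a_m, v_m = av.mean(axis=0), a.mean(axis=0), v.mean(axis=0)
    max_criterion = av_m - np.maximum(a_m, v_m)
    additive = av_m - (a_m + v_m)
    return max_criterion, additive

max_idx, add_idx = integration_indices(power_av, power_a, power_v)
# Positive values at a time point are taken as evidence of integration.
```

In a real analysis these contrasts would be computed per time-frequency bin with appropriate statistics, but the structure of the two models is exactly this pair of subtractions.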

2019 · Vol 29 (11) · pp. 4785-4802
Author(s): L Chauvière, W Singer

Abstract In humans, neurofeedback (NFB) training has been used extensively and successfully to manipulate brain activity. Feedback signals were derived from EEG, fMRI, MEG, and intracranial recordings, and modifications were obtained of the BOLD signal, of the power of oscillatory activity in distinct frequency bands, and of single unit activity. The purpose of the present study was to examine whether neuronal activity could also be controlled by NFB in early sensory cortices, whose activity is thought to be influenced mainly by sensory input rather than volitional control. We trained 2 macaque monkeys to enhance narrow-band gamma oscillations in the primary visual cortex by providing them with an acoustic signal that reflected the power of gamma oscillations in a preselected band and rewarding increases of the feedback signal. Oscillations were assessed from local field potentials recorded with chronically implanted microelectrodes. Both monkeys succeeded in raising gamma activity in the absence of visual stimulation in the selected frequency band and at the site from which the NFB signal was derived. This suggests that top–down signals are not confined to modulating stimulus-induced responses but can actually drive or facilitate the gamma-generating microcircuits even in a primary sensory area.
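The feedback signal in such a paradigm is, at its core, band-limited power computed from the LFP. A minimal sketch of that computation on a synthetic trace — the sampling rate, band edges, and signal parameters are assumptions for illustration, not the study's actual online pipeline:

```python
import numpy as np

fs = 1000  # assumed sampling rate in Hz

def band_power(x, lo, hi, fs):
    """Mean periodogram power of x within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

rng = np.random.default_rng(1)
t = np.arange(fs) / fs                      # 1 s of data
noise = rng.normal(0, 1, fs)                # background activity
lfp = noise + 2.0 * np.sin(2 * np.pi * 35 * t)  # synthetic LFP with 35 Hz gamma

# The value that would drive the acoustic feedback (higher power -> reward):
p_gamma = band_power(lfp, 30, 40, fs)
p_baseline = band_power(noise, 30, 40, fs)
```

An online implementation would do this on short sliding windows and map the resulting power onto the pitch or volume of the acoustic feedback.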


2022 · pp. 1-13
Author(s): Audrey Siqi-Liu, Tobias Egner, Marty G. Woldorff

Abstract To adaptively interact with the uncertainties of daily life, we must match our level of cognitive flexibility to contextual demands—being more flexible when frequent shifting between different tasks is required and more stable when the current task requires a strong focus of attention. Such cognitive flexibility adjustments in response to changing contextual demands have been observed in cued task-switching paradigms, where the performance cost incurred by switching versus repeating tasks (switch cost) scales inversely with the proportion of switches (PS) within a block of trials. However, the neural underpinnings of these adjustments in cognitive flexibility are not well understood. Here, we recorded 64-channel EEG measures of electrical brain activity as participants switched between letter and digit categorization tasks in varying PS contexts, from which we extracted ERPs elicited by the task cue and alpha power differences during the cue-to-target interval and the resting precue period. The temporal resolution of the EEG allowed us to test whether contextual adjustments in cognitive flexibility are mediated by tonic changes in processing mode or by changes in phasic, task cue-triggered processes. We observed reliable modulation of behavioral switch cost by PS context that was mirrored in both cue-evoked ERP and time–frequency effects but not by blockwide precue EEG changes. These results indicate that different levels of cognitive flexibility are instantiated after the presentation of task cues, rather than by being maintained as a tonic state throughout low- or high-switch contexts.


1999 · Vol 22 (2) · pp. 301-302
Author(s): Wolfgang Skrandies

When words are read, the visual cortex is activated, independent of whether visual or motor associations are elicited. This word-evoked brain activity is significantly influenced by semantic meaning. Such effects occur very early after stimulus presentation (at latencies between 80 and 130 msec), indicating that semantic meaning activates different neuronal assemblies in the human visual cortex when words are processed.


2011 · Vol 23 (12) · pp. 4094-4105
Author(s): Chien-Te Wu, Melissa E. Libertus, Karen L. Meyerhoff, Marty G. Woldorff

Several major cognitive neuroscience models have posited that focal spatial attention is required to integrate different features of an object to form a coherent perception of it within a complex visual scene. Although many behavioral studies have supported this view, some have suggested that complex perceptual discrimination can be performed even with substantially reduced focal spatial attention, calling into question the complexity of object representation that can be achieved without focused spatial attention. In the present study, we took a cognitive neuroscience approach to this problem by recording cognition-related brain activity both to help resolve the questions about the role of focal spatial attention in object categorization processes and to investigate the underlying neural mechanisms, focusing particularly on the temporal cascade of these attentional and perceptual processes in visual cortex. More specifically, we recorded electrical brain activity in humans engaged in a specially designed cued visual search paradigm to probe object-related visual processing before and during the transition from distributed to focal spatial attention. The onset time of the color popout cueing information, indicating where within an object array the subject was to shift attention, was parametrically varied relative to the presentation of the array (i.e., either occurring simultaneously or being delayed by 50 or 100 msec). The electrophysiological results demonstrate that some levels of object-specific representation can be formed in parallel for multiple items across the visual field under spatially distributed attention, before focal spatial attention is allocated to any of them. The object discrimination process appears to be subsequently amplified as soon as focal spatial attention is directed to a specific location and object.
This set of novel neurophysiological findings thus provides important new insights into long-debated fundamental issues in cognitive neuroscience concerning both object-related processing and the role of attention.


2019
Author(s): Stefania Ferraro, Markus J. Van Ackeren, Roberto Mai, Laura Tassi, Francesco Cardinale, ...

Abstract Unequivocally demonstrating the presence of multisensory signals at the earliest stages of cortical processing remains challenging in humans. In our study, we relied on the unique spatio-temporal resolution provided by intracranial stereotactic electroencephalographic (SEEG) recordings in patients with drug-resistant epilepsy to characterize the signal extracted from early visual (calcarine and pericalcarine) and auditory (Heschl's gyrus and planum temporale) regions during a simple audio-visual oddball task. We provide evidence that both cross-modal responses (visual responses in auditory cortex or the reverse) and multisensory processing (alteration of the unimodal responses during bimodal stimulation) can be observed in intracranial event-related potentials (iERPs) and in power modulations of oscillatory activity at different temporal scales within the first 150 ms after stimulus onset. The temporal profiles of the iERPs are compatible with the hypothesis that multisensory integration (MSI) occurs by means of direct pathways linking early visual and auditory regions. Our data indicate, moreover, that MSI mainly relies on modulations of the low-frequency bands (foremost the theta band in the auditory cortex and the alpha band in the visual cortex), suggesting the involvement of feedback pathways between the two sensory regions. Remarkably, we also observed high-gamma power modulations by sounds in the early visual cortex, thus suggesting the presence of neuronal populations involved in auditory processing in the calcarine and pericalcarine region in humans.


2019 · Vol 286 (1912) · pp. 20191910
Author(s): Liam J. Norman, Lore Thaler

The functional specializations of cortical sensory areas were traditionally viewed as being tied to specific modalities. A radically different emerging view is that the brain is organized by task rather than sensory modality, but it has not yet been shown that this applies to primary sensory cortices. Here, we report such evidence by showing that primary 'visual' cortex can be adapted to map spatial locations of sound in blind humans who regularly perceive space through sound echoes. Specifically, we objectively quantify the similarity between measured stimulus maps for sound eccentricity and predicted stimulus maps for visual eccentricity in primary 'visual' cortex (using a probabilistic atlas based on cortical anatomy) to find that stimulus maps for sound in expert echolocators are directly comparable to those for vision in sighted people. Furthermore, the degree of this similarity is positively related to echolocation ability. We also rule out explanations based on top-down modulation of brain activity, e.g. through imagery. This result is clear evidence that task-specific organization can extend even to primary sensory cortices, and is thus pivotal for reinterpreting the functional organization of the human brain.


2020
Author(s): Máté Gyurkovics, Grace M. Clements, Kathy A. Low, Monica Fabiani, Gabriele Gratton

Abstract Typically, time-frequency analysis (TFA) of electrophysiological data is aimed at isolating narrowband signals (oscillatory activity) from broadband non-oscillatory (1/f) activity, so that changes in oscillatory activity resulting from experimental manipulations can be assessed. A widely used method to do this is to convert the data to the decibel (dB) scale through baseline division and log transformation. This procedure assumes that, for each frequency, sources of power (i.e., oscillations and 1/f activity) scale by the same factor relative to the baseline (multiplicative model). This assumption may be incorrect when signal and noise are independent contributors to the power spectrum (additive model). Using resting-state EEG data from 80 participants, we found that the level of 1/f activity and alpha power are not positively correlated within participants, in line with the additive but not the multiplicative model. Then, to assess the effects of dB conversion on data that violate the multiplicativity assumption, we simulated a mixed design study with one between-subject (noise level, i.e., level of 1/f activity) and one within-subject (signal amplitude, i.e., amplitude of oscillatory activity added onto the background 1/f activity) factor. The effect size of the noise level × signal amplitude interaction was examined as a function of noise difference between groups, following dB conversion. Findings revealed that dB conversion led to the over- or under-estimation of the true interaction effect when groups differing in 1/f levels were compared, and it also led to the emergence of illusory interactions when none were present. This is because signal amplitude was systematically underestimated in the noisier compared to the less noisy group. Hence, we recommend testing whether the level of 1/f activity differs across groups or conditions and using multiple baseline correction strategies to validate results if it does.
Such a situation may be particularly common in aging, developmental, or clinical studies.
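The core distortion is easy to reproduce with two numbers: under the additive model, an identical oscillatory increment yields different dB changes depending on the 1/f floor it rides on. A sketch with made-up power values (the specific magnitudes are illustrative, not taken from the study):

```python
import math

def db_change(signal_power, baseline_power):
    """Decibel change from baseline: 10 * log10(power / baseline)."""
    return 10 * math.log10(signal_power / baseline_power)

# Additive model: the same oscillatory increment is added onto different
# 1/f noise floors (arbitrary power units).
increment = 2.0                         # identical "true" signal in both groups
low_floor, high_floor = 1.0, 4.0        # less noisy vs. noisier group

db_low = db_change(low_floor + increment, low_floor)     # 10*log10(3)  ~ 4.77 dB
db_high = db_change(high_floor + increment, high_floor)  # 10*log10(1.5) ~ 1.76 dB
```

The same physical signal appears much smaller after dB conversion in the noisier group, which is exactly the mechanism behind the spurious group × amplitude interactions described above.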


2020 · Vol 82 (7) · pp. 3490-3506
Author(s): Jonathan Tong, Lux Li, Patrick Bruns, Brigitte Röder

Abstract According to the Bayesian framework of multisensory integration, audiovisual stimuli associated with a stronger prior belief that they share a common cause (i.e., causal prior) are predicted to result in a greater degree of perceptual binding and therefore greater audiovisual integration. In the present psychophysical study, we systematically manipulated the causal prior while keeping sensory evidence constant. We paired auditory and visual stimuli during an association phase to be spatiotemporally either congruent or incongruent, with the goal of driving the causal prior in opposite directions for different audiovisual pairs. Following this association phase, every pairwise combination of the auditory and visual stimuli was tested in a typical ventriloquism-effect (VE) paradigm. The size of the VE (i.e., the shift of auditory localization towards the spatially discrepant visual stimulus) indicated the degree of multisensory integration. Results showed that exposure to an audiovisual pairing as spatiotemporally congruent compared to incongruent resulted in a larger subsequent VE (Experiment 1). This effect was further confirmed in a second VE paradigm, where the congruent and the incongruent visual stimuli flanked the auditory stimulus, and a VE in the direction of the congruent visual stimulus was shown (Experiment 2). Since the unisensory reliabilities for the auditory or visual components did not change after the association phase, the observed effects are likely due to changes in multisensory binding by association learning. As suggested by Bayesian theories of multisensory processing, our findings support the existence of crossmodal causal priors that are flexibly shaped by experience in a changing world.
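The qualitative prediction tested here falls out of a simplified causal-inference estimator: with sensory evidence fixed, a stronger causal prior pulls the auditory estimate further toward the reliability-weighted fused location. A toy sketch in that spirit — `p_common`, the variances, and the stimulus positions are all illustrative assumptions, and `p_common` stands in for the full posterior over a common cause in complete models:

```python
def auditory_estimate(x_a, x_v, var_a, var_v, p_common):
    """Model-averaged auditory location estimate.

    fused: reliability-weighted average of the auditory and visual cues;
    p_common: assumed probability that the two cues share a common cause.
    """
    fused = (x_a / var_a + x_v / var_v) / (1 / var_a + 1 / var_v)
    return p_common * fused + (1 - p_common) * x_a

# Identical sensory evidence (auditory at 0 deg, visual at 10 deg; vision
# four times more reliable), different causal priors:
shift_strong_prior = auditory_estimate(0.0, 10.0, var_a=4.0, var_v=1.0, p_common=0.9)
shift_weak_prior = auditory_estimate(0.0, 10.0, var_a=4.0, var_v=1.0, p_common=0.3)
```

The ventriloquism shift (displacement of the auditory estimate toward the visual stimulus) grows with `p_common`, mirroring the larger VE found after congruent association.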


2018
Author(s): Dmitriy Lisitsyn, Udo A. Ernst

Abstract Electrical stimulation is a promising tool for interacting with neuronal dynamics to identify neural mechanisms that underlie cognitive function. Since the effects of a single short stimulation pulse typically vary greatly and depend on the current network state, many experimental paradigms have instead resorted to continuous or periodic stimulation in order to establish and maintain a desired effect. However, such an approach explicitly leads to forced and 'unnatural' brain activity. Further, continuous stimulation can make it hard to parse the recorded activity and separate neural signal from stimulation artifacts. In this study we propose an alternative strategy: by monitoring the system in real time, we exploit the network's existing preferred states, or attractors, and apply short, precisely timed pulses to switch between them. Once the network is pushed into one of its attractors, its natural tendency to remain there prolongs the effect of a stimulation pulse, opening a larger window of opportunity to observe the consequences for cognitive processing. To elaborate on this idea, we consider flexible information routing in the visual cortex as a prototypical example. When processing a stimulus, neural populations in the visual cortex have been found to engage in synchronized gamma activity. In this context, selective signal routing is achieved by changing the relative phase between oscillatory activity in sending and receiving populations (communication through coherence, CTC). In order to explore how perturbations interact with CTC, we investigate a biophysically realistic network exhibiting similar synchronization and signal routing phenomena. We develop a closed-loop stimulation paradigm based on the phase-response characteristics of the network and demonstrate its ability to establish desired synchronization states.
By measuring information content throughout the model, we evaluate the effect of signal contamination caused by the stimulation in relation to the magnitude of the injected pulses and intrinsic noise in the system. Finally, we demonstrate that, up to a critical noise level, precisely timed perturbations can be used to artificially induce the effect of attention by selectively routing visual signals to higher cortical areas.
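The closed-loop logic can be sketched with a toy phase variable: measure the sender-receiver phase difference, then deliver a pulse whose phase shift follows an assumed sinusoidal phase-response curve pointing toward the target difference. This is a caricature of the control scheme, not the authors' biophysical network model:

```python
import math

def wrap(phi):
    """Wrap a phase into [-pi, pi)."""
    return (phi + math.pi) % (2 * math.pi) - math.pi

def closed_loop_shift(delta0, target, gain=0.8, n_pulses=20):
    """Drive the sender-receiver phase difference `delta` toward `target`.

    Each pulse shifts the receiver's phase by gain * sin(target - delta),
    an assumed sinusoidal phase-response curve (PRC).
    """
    delta = delta0
    for _ in range(n_pulses):
        delta = wrap(delta + gain * math.sin(target - delta))
    return delta

# Start far from the target phase relation and let the loop pull it in:
final_in_phase = closed_loop_shift(delta0=2.5, target=0.0)
final_quadrature = closed_loop_shift(delta0=2.5, target=math.pi / 2)
```

Because each pulse contracts the error toward the target, a handful of well-timed pulses suffices, after which the system's own dynamics (the attractor) maintain the state without further stimulation.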

