Differential temporal dynamics during visual imagery and perception

2017 ◽  
Author(s):  
Nadine Dijkstra ◽  
Pim Mostert ◽  
Floris P. de Lange ◽  
Sander Bosch ◽  
Marcel A. J. van Gerven

Visual perception and imagery rely on similar representations in the visual cortex. During perception, visual activity is characterized by distinct processing stages, but the temporal dynamics underlying imagery remain unclear. Here, we investigated the dynamics of visual imagery in human participants using magnetoencephalography. We show that, in contrast to perception, the onset of imagery is characterized by broad temporal generalization. Furthermore, there is consistent overlap between imagery and perceptual processing around 150 ms and from 300 ms after stimulus onset, presumably reflecting completion of the feedforward sweep and perceptual stabilization, respectively. These results indicate that during imagery, either the complete representation is activated at once and does not include low-level visual areas, or the order in which visual features are activated is less fixed and more flexible than during perception. These findings have important implications for our understanding of the neural mechanisms of visual imagery.

eLife ◽  
2018 ◽  
Vol 7 ◽  
Author(s):  
Nadine Dijkstra ◽  
Pim Mostert ◽  
Floris P de Lange ◽  
Sander Bosch ◽  
Marcel AJ van Gerven

Visual perception and imagery rely on similar representations in the visual cortex. During perception, visual activity is characterized by distinct processing stages, but the temporal dynamics underlying imagery remain unclear. Here, we investigated the dynamics of visual imagery in human participants using magnetoencephalography. Firstly, we show that, compared to perception, imagery decoding becomes significant later and representations at the start of imagery already overlap with later time points. This suggests that during imagery, the entire visual representation is activated at once or that there are large differences in the timing of imagery between trials. Secondly, we found consistent overlap between imagery and perceptual processing around 160 ms and from 300 ms after stimulus onset. This indicates that the N170 gets reactivated during imagery and that imagery does not rely on early perceptual representations. Together, these results provide important insights for our understanding of the neural mechanisms of visual imagery.
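
To make the temporal generalization logic concrete, here is a minimal Python sketch of "train at one time, test at another" decoding on simulated MEG-like data. The array shapes, the LDA classifier, the injected sustained signal, and the simple holdout split are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of temporal generalization ("train at time t, test at time t'")
# decoding on simulated MEG-like data; all data and parameters are placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 50, 60
X = rng.standard_normal((n_trials, n_sensors, n_times))     # trials x sensors x timepoints
y = rng.integers(0, 2, n_trials)                             # two stimulus categories
X[y == 1, :5, 20:] += 0.5                                    # weak, sustained class signal

# Hold out trials for testing so generalization scores are not inflated.
train_idx, test_idx = np.arange(150), np.arange(150, n_trials)

gen = np.zeros((n_times, n_times))      # rows: training time, columns: testing time
for t_train in range(n_times):
    clf = LinearDiscriminantAnalysis().fit(X[train_idx, :, t_train], y[train_idx])
    for t_test in range(n_times):
        gen[t_train, t_test] = clf.score(X[test_idx, :, t_test], y[test_idx])

# Broad off-diagonal generalization (accuracy staying high away from the diagonal)
# is the signature of a representation that is stable over time.
print(f"mean on-diagonal accuracy: {gen.diagonal().mean():.2f}, overall mean: {gen.mean():.2f}")
```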


2021 ◽  
Author(s):  
Maria Melcon ◽  
Sander van Bree ◽  
Yolanda Sanchez-Carro ◽  
Laura Barreiro-Fernandez ◽  
Luca D. Kolibius ◽  
...  

While traditional studies claim that visuospatial attention stays fixed at one location at a time, recent research has instead shown that attention rhythmically fluctuates between different locations at the rates of prominent brain rhythms. However, little is known about the temporal dynamics of this fluctuation and, particularly, whether it changes over time. Thus, we addressed this question by investigating how visuospatial attention behaves over space and time. We recorded the electroencephalographic activity of twenty-seven human participants while they performed a visuospatial cueing task in which attention was covertly oriented to the left or right visual field. In order to decode the spatial locus of attention from neural activity, we trained and tested a classifier on every timepoint of the orienting period, from the attentional cue to stimulus onset. This resulted in one temporal generalization matrix per participant, which was time-frequency decomposed to identify the sampling rhythm. Finally, a searchlight analysis was conducted to reveal the brain regions responsible for attention allocation. Our results show a dynamic evolution of the attentional spotlight, distinguishing between two states. In an early time window, attention rhythmically explored both the cued and uncued hemifields at ~10 Hz. In a later time window, attention focused on the cued hemifield. Classification was driven by occipital sources, while frontal regions became involved only just before the spotlight settled onto the cued location. Together, our results define attentional sampling as a quasi-rhythmic dynamic process characterized by an initial rhythmic exploration-exploitation state, which is followed by a stable exploitation state.
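
The rhythmic-sampling analysis can be illustrated with a small sketch: given a decoding-accuracy time course during the orienting period, a spectral decomposition reveals whether accuracy fluctuates at a particular rate. The 250-Hz sampling rate, 1-s orienting period, and simulated ~10 Hz accuracy trace below are illustrative assumptions, not the study's data or exact method.

```python
# Minimal sketch of testing a decoding time course for rhythmic (~10 Hz)
# fluctuation: demean the accuracy trace, window it, and inspect its power spectrum.
import numpy as np

fs = 250.0                                # sampling rate of the accuracy time course (Hz)
t = np.arange(0, 1.0, 1 / fs)             # 1-s cue-to-stimulus orienting period
rng = np.random.default_rng(1)
accuracy = 0.55 + 0.05 * np.sin(2 * np.pi * 10 * t) + 0.02 * rng.standard_normal(t.size)

demeaned = accuracy - accuracy.mean()     # keep fluctuations, discard overall decoding level
spectrum = np.abs(np.fft.rfft(demeaned * np.hanning(t.size))) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

in_band = (freqs > 2) & (freqs < 30)      # restrict to plausible sampling rhythms
peak = freqs[in_band][np.argmax(spectrum[in_band])]
print(f"dominant sampling frequency: {peak:.1f} Hz")   # ~10 Hz for this simulation
```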


2013 ◽  
Vol 25 (3) ◽  
pp. 329-337 ◽  
Author(s):  
Tatiana Aloi Emmanouil ◽  
Philip Burton ◽  
Tony Ro

Unconscious processing has been convincingly demonstrated for task-relevant feature dimensions. However, it is possible that the visual system is capable of more complex unconscious operations, extracting visual features even when they are unattended and task irrelevant. In the current study, we addressed this question by measuring unconscious priming using a task in which human participants attended to a target object's shape while ignoring its color. We measured both behavioral priming effects and priming-related fMRI activations from primes that were unconsciously presented using metacontrast masking. The results showed faster RTs and decreases in fMRI activation only when the primes were identical to the targets, indicating that primes were processed in both the attended shape dimension and the unattended color dimension. Reductions in activation were observed in early visual areas, including primary visual cortex, as well as in feature-responsive areas for shape and color. These results indicate that multiple features can be unconsciously encoded and possibly bound using the same visual networks activated by consciously perceived images.
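
As a minimal illustration of the behavioral priming contrast (not the authors' actual analysis), the sketch below compares simulated mean reaction times for prime-identical versus prime-different trials with a paired t-test across participants; all values are made up for the example.

```python
# Minimal sketch of a behavioral priming contrast: paired t-test on reaction times
# for prime-identical versus prime-different trials, using simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subjects = 20
rt_identical = 480 + 15 * rng.standard_normal(n_subjects)    # ms, prime matches target
rt_different = 500 + 15 * rng.standard_normal(n_subjects)    # ms, prime differs from target

t_val, p_val = stats.ttest_rel(rt_identical, rt_different)
effect = np.mean(rt_different - rt_identical)
print(f"priming effect: {effect:.1f} ms, t({n_subjects - 1}) = {t_val:.2f}, p = {p_val:.3f}")
```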


2019 ◽  
Vol 225 (1) ◽  
pp. 173-186
Author(s):  
Olena V. Bogdanova ◽  
Volodymyr B. Bogdanov ◽  
Jean-Baptiste Durand ◽  
Yves Trotter ◽  
Benoit R. Cottereau

The objects located straight-ahead of the body are preferentially processed by the visual system. They are more rapidly detected and evoke stronger BOLD responses in early visual areas than elements that are retinotopically identical but located at eccentric spatial positions. To characterize the dynamics of the underlying neural mechanisms, we recorded in 29 subjects the EEG responses to peripheral targets differing solely by their locations with respect to the body. Straight-ahead stimuli led to stronger responses than eccentric stimuli for several components whose latencies ranged between 70 and 350 ms after stimulus onset. The earliest effects were found at 70 ms for a component that originates from occipital areas, the contralateral P1. To determine whether the straight-ahead direction affects primary visual cortex responses, we performed an additional experiment (n = 29) specifically designed to generate two robust components, the C1 and C2, whose cortical origins are constrained within areas V1, V2 and V3. Our analyses confirmed all the results of the first experiment and also revealed that the C2 amplitude between 130 and 160 ms after stimulus onset was significantly stronger for straight-ahead stimuli. A frequency analysis of the pre-stimulus baseline revealed that gaze-driven alterations in the visual hemi-field containing the straight-ahead direction were associated with a decrease in alpha power in the contralateral hemisphere, suggesting the involvement of specific neural modulations before stimulus onset. Altogether, our EEG data demonstrate that preferential responses to the straight-ahead direction can be detected in the visual cortex as early as about 70 ms after stimulus onset.
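
A rough sketch of the kind of pre-stimulus frequency analysis described above: estimating alpha-band (8-12 Hz) power in a baseline window for one occipital channel per hemisphere. The channel assignment (e.g., O1/O2), the 500-ms baseline window, and the simulated data are assumptions, not the study's actual preprocessing.

```python
# Rough sketch of pre-stimulus alpha-band (8-12 Hz) power estimation per hemisphere
# using Welch's method on simulated baseline epochs.
import numpy as np
from scipy.signal import welch

fs = 500.0                                     # EEG sampling rate (Hz)
n_trials, n_samples = 100, int(0.5 * fs)       # 500-ms pre-stimulus baseline per trial
rng = np.random.default_rng(3)
baseline_left = rng.standard_normal((n_trials, n_samples))    # e.g., O1 (left hemisphere)
baseline_right = rng.standard_normal((n_trials, n_samples))   # e.g., O2 (right hemisphere)

def alpha_power(trials):
    """Mean 8-12 Hz power per trial, computed with Welch's method."""
    freqs, psd = welch(trials, fs=fs, nperseg=n_samples, axis=-1)
    band = (freqs >= 8) & (freqs <= 12)
    return psd[:, band].mean(axis=-1)

print(alpha_power(baseline_left).mean(), alpha_power(baseline_right).mean())
```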


2004 ◽  
Vol 92 (5) ◽  
pp. 3030-3042 ◽  
Author(s):  
Jay Hegdé ◽  
David C. Van Essen

The firing rate of visual cortical neurons typically changes substantially during a sustained visual stimulus. To assess whether, and to what extent, the information about shape conveyed by neurons in visual area V2 changes over the course of the response, we recorded the responses of V2 neurons in awake, fixating monkeys while presenting a diverse set of static shape stimuli within the classical receptive field. We analyzed the time course of various measures of responsiveness and stimulus-related response modulation at the level of individual cells and of the population. For a majority of V2 cells, the response modulation was maximal during the initial transient response (40–80 ms after stimulus onset). During the same period, the population response was relatively correlated, in that V2 cells tended to respond similarly to specific subsets of stimuli. Over the ensuing 80–100 ms, the signal-to-noise ratio of individual cells generally declined, but to a lesser degree than the evoked-response rate during the corresponding time bins, and the response profiles became decorrelated for many individual cells. Concomitantly, the population response became substantially decorrelated. Our results indicate that the information about stimulus shape evolves dynamically and relatively rapidly in V2 during static visual stimulation in ways that may contribute to form discrimination.
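
The decorrelation analysis can be illustrated schematically: correlate each cell's stimulus-response profile (firing rate per stimulus) in an early time bin with its profile in later bins and track how that correlation evolves. The simulated Poisson rates, bin counts, and bin choice below are placeholders, not the recorded V2 data or the authors' exact measures.

```python
# Schematic sketch of response-profile decorrelation over time for a simulated
# population: per-cell correlation between an early-bin profile and every other bin.
import numpy as np

rng = np.random.default_rng(4)
n_cells, n_stimuli, n_bins = 50, 48, 10          # e.g., ten 20-ms bins after stimulus onset
rates = rng.poisson(5.0, (n_cells, n_stimuli, n_bins)).astype(float)

early_bin = 2                                     # bin covering the initial transient
for b in range(n_bins):
    r = np.array([np.corrcoef(rates[c, :, early_bin], rates[c, :, b])[0, 1]
                  for c in range(n_cells)])
    print(f"bin {b}: mean profile correlation with early bin = {np.nanmean(r):.2f}")
```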


2021 ◽  
Author(s):  
Tao Yu ◽  
Shihui Han

Perceived cues signaling others' pain induce empathy, which in turn motivates altruistic behavior toward those who appear to be suffering. This perception-emotion-behavior reactivity is the core of human altruism but does not always occur in real-life situations. Here, by integrating behavioral and multimodal neuroimaging measures, we investigate the neural mechanisms underlying the functional role of beliefs of others' pain in modulating empathy and altruism. We show evidence that decreasing (or enhancing) beliefs of others' pain reduces (or increases) subjective estimation of others' painful emotional states and monetary donations to those who show pain expressions. Moreover, decreasing beliefs of others' pain attenuates neural responses to perceived cues signaling others' pain within 200 ms after stimulus onset and modulates neural responses to others' pain in the frontal cortices and temporoparietal junction. Our findings highlight beliefs of others' pain as a fundamental cognitive basis of human empathy and altruism and unravel the intermediate neural architecture.


Author(s):  
Tatiana Malevich ◽  
Antimo Buonocore ◽  
Ziad M. Hafed

Microsaccades have a steady rate of occurrence during maintained gaze fixation, which gets transiently modulated by abrupt sensory stimuli. Such modulation, characterized by a rapid reduction in microsaccade frequency followed by a stronger rebound phase of high microsaccade rate, is often described as the microsaccadic rate signature, owing to its stereotyped nature. Here we investigated the impacts of stimulus polarity (luminance increments or luminance decrements relative to background luminance) and size on the microsaccadic rate signature. We presented brief visual flashes consisting of large or small white or black stimuli over an otherwise gray image background. Both large and small stimuli caused robust early microsaccadic inhibition, but only small ones caused a subsequent increase in microsaccade frequency above the baseline microsaccade rate. Critically, small black stimuli were always associated with stronger modulations in microsaccade rate after stimulus onset than small white stimuli, particularly in the post-inhibition rebound phase of the microsaccadic rate signature. Because small stimuli were also associated with the expected direction oscillations toward and away from their locations of appearance, these stronger rate modulations in the rebound phase meant higher likelihoods of microsaccades opposite the black flash locations relative to the white flash locations. Our results demonstrate that the microsaccadic rate signature is sensitive to stimulus polarity, and they point to dissociable neural mechanisms underlying early microsaccadic inhibition after stimulus onset and the later microsaccadic rate rebound at longer times thereafter. These results also demonstrate early access of oculomotor control circuitry to sensory representations, particularly for momentarily inhibiting saccade generation.

New and noteworthy: Microsaccades are small saccades that occur during gaze fixation. Microsaccade rate is transiently reduced after sudden stimulus onsets, and then strongly rebounds before returning to baseline. We explored the influence of stimulus polarity (black versus white) on this "rate signature". We found that small black stimuli cause stronger microsaccadic modulations than white ones, but primarily in the rebound phase. This suggests dissociated neural mechanisms for microsaccadic inhibition and subsequent rebound in the microsaccadic rate signature.
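
A minimal sketch of how a microsaccadic rate signature is computed once microsaccade onset times are available: count onsets in a sliding window across trials and convert to a rate. The simulated onset times (with a crude "inhibition" gap) stand in for real detected microsaccades; actual detection (e.g., velocity-threshold algorithms) is a separate step not shown here.

```python
# Minimal sketch of a microsaccadic "rate signature": onset rate over time relative
# to stimulus onset, estimated across trials with a sliding window (simulated data).
import numpy as np

rng = np.random.default_rng(5)
n_trials = 300
# Simulated onset times (s) relative to stimulus onset at t = 0.
onsets = [rng.uniform(-0.5, 1.0, rng.poisson(2)) for _ in range(n_trials)]
onsets = [t[(t < 0.05) | (t > 0.25)] for t in onsets]      # mimic post-stimulus inhibition

window = 0.05                                              # 50-ms sliding window
centers = np.arange(-0.4, 0.9, 0.01)
rate = np.array([
    sum(np.sum((t >= c - window / 2) & (t < c + window / 2)) for t in onsets)
    / (n_trials * window)
    for c in centers
])                                                         # microsaccades per second

baseline = rate[centers < 0].mean()
dip = rate[(centers > 0.08) & (centers < 0.22)].mean()
print(f"baseline rate: {baseline:.2f}/s, post-stimulus dip: {dip:.2f}/s")
```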


2021 ◽  
Author(s):  
Yingying Huang ◽  
Frank Pollick ◽  
Ming Liu ◽  
Delong Zhang

Visual mental imagery and visual perception have been shown to share a hierarchical topological visual structure of neural representation. Meanwhile, many studies have reported a dissociation between mental imagery and perception in both the function and the structure of their neural substrates. However, we have limited knowledge about how the hierarchical visual cortex is involved in internally generated mental imagery and in perception driven by visual input. Here we used a dataset from previous fMRI research (Horikawa & Kamitani, 2017), which included a visual perception experiment and an imagery experiment with human participants. We trained two types of voxel-wise encoding models, based on Gabor features and on activity patterns of high visual areas, to predict activity in the early visual cortex (EVC, i.e., V1, V2, V3) during perception, and then evaluated the performance of these models during mental imagery. Our results showed that during both perception and imagery, activity in the EVC could be independently predicted by the Gabor features and by the activity of high visual areas via the encoding models, suggesting that perception and imagery share neural representations in the EVC. We further found that there is a Gabor-specific and a non-Gabor-specific neural response pattern to stimuli in the EVC, both of which are shared by perception and imagery. These findings provide insight into how visual perception and imagery share representations in the EVC.
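
To illustrate the voxel-wise encoding approach, the sketch below fits a ridge regression from stimulus features (standing in for Gabor filter outputs) to simulated voxel responses, then scores prediction accuracy per voxel on held-out data. Feature and voxel counts, the regularization strength, and the data itself are assumptions, not the study's actual models.

```python
# Sketch of a voxel-wise encoding model: ridge regression from stimulus features
# to voxel responses, fit on "perception" data and scored per voxel on held-out data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(6)
n_train, n_test, n_features, n_voxels = 1000, 50, 200, 300
features_train = rng.standard_normal((n_train, n_features))      # e.g., Gabor features
features_test = rng.standard_normal((n_test, n_features))
true_weights = 0.1 * rng.standard_normal((n_features, n_voxels))
voxels_train = features_train @ true_weights + rng.standard_normal((n_train, n_voxels))
voxels_test = features_test @ true_weights + rng.standard_normal((n_test, n_voxels))

model = Ridge(alpha=10.0).fit(features_train, voxels_train)
pred = model.predict(features_test)

# Encoding performance: correlation between predicted and measured response, per voxel.
r = np.array([np.corrcoef(pred[:, v], voxels_test[:, v])[0, 1] for v in range(n_voxels)])
print(f"median prediction accuracy across voxels: r = {np.median(r):.2f}")
```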


2017 ◽  
Author(s):  
Nicolas Burra ◽  
Dirk Kerzel ◽  
David Munoz ◽  
Didier Grandjean ◽  
Leonardo Ceravolo

Salient vocalizations, especially aggressive voices, are believed to attract attention due to an automatic threat detection system. However, studies assessing the temporal dynamics of auditory spatial attention to aggressive voices are missing. Using event-related potential markers of auditory spatial attention (N2ac and LPCpc), we show that attentional processing of threatening vocal signals is enhanced at two different stages of auditory processing. As early as 200 ms post stimulus onset, attentional orienting/engagement is enhanced for threatening as compared to happy vocal signals. Subsequently, as early as 400 ms post stimulus onset, the reorienting of auditory attention to the center of the screen (or disengagement from the target) is enhanced. This latter effect is consistent with the need to optimize perception by balancing the intake of stimulation from left and right auditory space. Our results extend the scope of theories from the visual to the auditory modality by showing that threatening stimuli also bias early spatial attention in the auditory modality. Although not the focus of the present work, we observed that the attentional enhancement was more pronounced in female than male participants.
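
The lateralized-component logic behind measures such as the N2ac can be sketched as a contralateral-minus-ipsilateral difference wave relative to the target side. The channel pair (e.g., FC5/FC6), epoch length, and simulated data below are illustrative assumptions, not the authors' montage or analysis.

```python
# Minimal sketch of a lateralized ERP difference wave: average contralateral minus
# ipsilateral activity relative to the target side, on simulated single-trial data.
import numpy as np

rng = np.random.default_rng(7)
fs = 500.0
n_trials, n_samples = 200, int(0.8 * fs)                   # 800-ms post-onset epochs
left_chan = rng.standard_normal((n_trials, n_samples))     # e.g., FC5 (left hemisphere)
right_chan = rng.standard_normal((n_trials, n_samples))    # e.g., FC6 (right hemisphere)
target_side = rng.choice(["left", "right"], n_trials)      # side of the attended voice

# Contralateral: channel opposite the target side; ipsilateral: same side as the target.
contra = np.where(target_side[:, None] == "left", right_chan, left_chan)
ipsi = np.where(target_side[:, None] == "left", left_chan, right_chan)
diff_wave = (contra - ipsi).mean(axis=0)                   # N2ac-like difference wave

t_ms = np.arange(n_samples) / fs * 1000
print(f"mean difference 200-300 ms: {diff_wave[(t_ms >= 200) & (t_ms < 300)].mean():.3f} (a.u.)")
```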


1998 ◽  
Vol 8 (2) ◽  
pp. 202-210 ◽  
Author(s):  
Steven A Hillyard ◽  
Wolfgang A Teder-Sälejärvi ◽  
Thomas F Münte
