Time course and shared neurocognitive mechanisms of mental imagery and visual perception

2020
Author(s): Martin Maier, Romy Frömer, Johannes Rost, Werner Sommer, Rasha Abdel Rahman

Abstract When we imagine an object and when we actually see that object, similar brain regions become active. Yet, the time course of neurocognitive mechanisms that support imagery is still largely unknown. The current view holds that imagery does not share early perceptual mechanisms, but starts with high-level visual representations. However, evidence of early shared mechanisms is difficult to obtain because imagery and perception tasks typically differ in visual input. We therefore tracked electrophysiological brain responses while fully controlling visual input, (1) comparing imagery and perception of objects with varying amounts of associated knowledge, and (2) comparing the time courses of successful and incomplete imagery. Imagery and perception were similarly influenced by knowledge already at early stages, revealing shared mechanisms during low-level visual processing. It follows that imagery is not merely perception in reverse; instead, both are active and constructive processes, based on shared mechanisms starting at surprisingly early stages.
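As a concrete illustration of the kind of comparison described above, the following minimal Python sketch simulates epoched EEG data and tests an early time-window amplitude difference between two hypothetical knowledge conditions. The sampling rate, component latency, and condition labels are assumptions for illustration, not the authors' parameters or pipeline.

```python
# Minimal illustrative sketch (not the authors' analysis): comparing early
# event-related responses between two hypothetical conditions, e.g. objects
# with rich vs. minimal associated knowledge.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sfreq = 500                                   # Hz, assumed sampling rate
times = np.arange(-0.1, 0.6, 1 / sfreq)       # epoch from -100 to 600 ms
n_trials = 80

def simulate(amp):
    """Noisy epochs with an early posterior component peaking around 120 ms."""
    erp = amp * np.exp(-((times - 0.12) ** 2) / (2 * 0.02 ** 2))
    return erp + rng.standard_normal((n_trials, times.size)) * 2.0

rich, minimal = simulate(1.5), simulate(1.0)  # hypothetical knowledge conditions

# Mean amplitude in an early window (100-150 ms), compared across conditions
win = (times >= 0.10) & (times <= 0.15)
t, p = stats.ttest_ind(rich[:, win].mean(axis=1), minimal[:, win].mean(axis=1))
print(f"early-window effect: t = {t:.2f}, p = {p:.3f}")
```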

2018
Vol 29 (8), pp. 3380-3389
Author(s): Timothy J Andrews, Ryan K Smith, Richard L Hoggart, Philip I N Ulrich, Andre D Gouws

Abstract Individuals from different social groups interpret the world in different ways. This study explores the neural basis of these group differences using a paradigm that simulates natural viewing conditions. Our aim was to determine whether group differences could be found in sensory regions involved in the perception of the world or were evident in higher-level regions that are important for the interpretation of sensory information. We measured brain responses from two groups of football supporters while they watched a video of matches between their teams. The time course of response was then compared between individuals supporting the same (within-group) or a different (between-group) team. We found high intersubject correlations in low-level and high-level regions of the visual brain; however, these regions did not show any group differences. Regions that showed higher correlations for individuals from the same group were found in a network of frontal and subcortical brain regions. The interplay between these regions suggests that a range of cognitive processes, from motor control to social cognition and reward, is important in the establishment of social groups. These results suggest that group differences are primarily reflected in regions involved in the evaluation and interpretation of the sensory input.
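The core of this approach is intersubject correlation of regional time courses, compared within and between groups. The following Python sketch illustrates that logic on synthetic data; the group sizes, time-course length, and signal structure are assumptions, not the study's data.

```python
# Illustrative sketch (not the authors' code): within- vs between-group
# intersubject correlation (ISC) of regional time courses.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_timepoints = 20, 300
group = np.array([0] * 10 + [1] * 10)                   # two supporter groups (assumed)
shared = rng.standard_normal(n_timepoints)              # stimulus-driven signal
group_signal = rng.standard_normal((2, n_timepoints))   # group-specific component
data = (shared
        + group_signal[group]
        + rng.standard_normal((n_subjects, n_timepoints)))

def mean_pairwise_corr(ts_a, ts_b=None):
    """Average Pearson correlation across subject pairs."""
    corrs = []
    if ts_b is None:                                    # within one group
        for i in range(len(ts_a)):
            for j in range(i + 1, len(ts_a)):
                corrs.append(np.corrcoef(ts_a[i], ts_a[j])[0, 1])
    else:                                               # between two groups
        for a in ts_a:
            for b in ts_b:
                corrs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(corrs))

within = np.mean([mean_pairwise_corr(data[group == g]) for g in (0, 1)])
between = mean_pairwise_corr(data[group == 0], data[group == 1])
print(f"within-group ISC:  {within:.3f}")
print(f"between-group ISC: {between:.3f}")   # a group effect shows as within > between
```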


2021
Author(s): David Acunzo, David Melcher

Visual processing mainly occurs during fixations, periods separated by saccadic eye movements, which necessitates close coordination between sensory and motor systems. It has been suggested that the intention to make a saccade can modulate neural activity, including predictive changes, suppression of peri-saccadic retinal input, and trans-saccadic integration. Consistent with this idea, modulations of neural activity around the time of saccades have been reported in non-human species, showing non-visually mediated, extraretinal responses in specific brain regions. In humans, however, peri-saccadic whole-brain activity has mainly been studied in the context of a perceptual task, making it difficult to disentangle activity related to the task, visual transients from retinal stimulation, and non-visual (saccade-related) responses. We measured magnetoencephalography (MEG) theta (3–7 Hz) and alpha (8–12 Hz) activity during voluntary horizontal saccades between two fixation points. To distinguish between visually and non-visually mediated activity, participants completed three tasks: voluntary saccades in near-darkness, fixation with the visual input shifted to simulate a saccade, and voluntary saccades in total darkness. Using correlational analyses, we found that the patterns of neural activity are consistent with contributions from two separate mechanisms, one related to saccades (non-visual/extraretinal) and the other linked to the processing of visual input at the beginning of the new fixation (visual/retinal). Changes in occipital alpha power and instantaneous frequency showed a similar time course in the near-dark and simulated-saccade conditions, suggesting an effect of visually evoked responses. In contrast, alterations in parieto-occipital theta power and phase clustering were consistent with a non-visually driven (extraretinal) mechanism, with similar multivariate patterns in the near-dark and full-darkness conditions. Some effects, such as theta phase reset and alterations in alpha power, showed separable contributions of both the saccade and the visual transient, with differing time courses. This combination of visual and non-visual mechanisms may support sensorimotor integration during active vision.
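Two of the measures referred to above, band-limited power and inter-trial phase clustering (ITPC) around saccade onset, can be sketched as follows on synthetic single-sensor data. The filter settings, epoch length, and simulated theta burst are assumptions for illustration, not the authors' MEG analysis.

```python
# Illustrative sketch (assumed settings, synthetic data): peri-saccadic band
# power and inter-trial phase clustering (ITPC) for one sensor.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(2)
sfreq = 600
times = np.arange(-0.3, 0.5, 1 / sfreq)        # peri-saccadic epoch
n_trials = 120
data = rng.standard_normal((n_trials, times.size))
# add a weak theta burst phase-locked to saccade onset (t = 0)
data += 0.5 * np.cos(2 * np.pi * 5 * times) * (times > 0)

def band(x, lo, hi):
    b, a = butter(4, [lo, hi], btype="bandpass", fs=sfreq)
    return filtfilt(b, a, x, axis=-1)

theta = hilbert(band(data, 3, 7), axis=-1)
alpha = hilbert(band(data, 8, 12), axis=-1)

alpha_power = np.mean(np.abs(alpha) ** 2, axis=0)             # trial-averaged power
itpc = np.abs(np.mean(np.exp(1j * np.angle(theta)), axis=0))  # phase clustering, 0..1
print("post-saccade theta ITPC:", itpc[times > 0].mean().round(3))
print("pre-saccade  theta ITPC:", itpc[times < 0].mean().round(3))
```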


2018
Vol 36 (1), pp. 14-20
Author(s): Lingmin Jin, Jinbo Sun, Ziliang Xu, Xuejuan Yang, Peng Liu, ...

Objective: To use a promising analytical method, namely intersubject synchronisation (ISS), to evaluate the brain activity associated with the instant effects of acupuncture and to compare the findings with traditional general linear model (GLM) methods. Methods: 30 healthy volunteers were recruited for this study. Block-designed manual acupuncture stimuli were delivered at SP6, and de qi sensations were measured after acupuncture stimulation. All subjects underwent functional MRI (fMRI) scanning during the acupuncture stimuli. The fMRI data were analysed separately with ISS and traditional GLM methods. Results: All subjects experienced de qi sensations. ISS analysis showed that the regions activated during acupuncture stimulation at SP6 fell mainly into five clusters based on their time courses. The time courses of clusters 1 and 2 were in line with the acupuncture stimulation pattern, and the active regions were mainly involved in the sensorimotor system and salience network. Clusters 3, 4 and 5 displayed time courses almost opposite to the stimulation pattern; the brain regions activated included the default mode network, the descending pain modulation pathway and visual cortices. GLM analysis indicated that the brain responses associated with the instant effects of acupuncture were largely implicated in sensory and motor processing and sensory integration. Conclusion: The ISS analysis considered the sustained effect of acupuncture and uncovered additional information not shown by GLM analysis. We suggest that ISS may be a suitable approach for investigating the brain responses associated with the instant effects of acupuncture.
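A schematic of the ISS-plus-clustering idea is given below on synthetic data: each region's synchronisation across subjects is estimated with a leave-one-out correlation, and synchronised regions are then grouped by the shape of their mean time course. The block pattern, threshold, and cluster count are illustrative assumptions, not the study's parameters.

```python
# Hypothetical sketch of intersubject synchronisation (ISS) followed by
# clustering of regional time courses (not the authors' code).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_subj, n_regions, n_tp = 30, 50, 200
block = (np.arange(n_tp) // 20) % 2            # on/off stimulation block pattern
templates = np.stack([block, -block])          # stimulus-like and "opposite" shapes
true_labels = rng.integers(0, 2, n_regions)
data = (templates[true_labels][None] * 1.0
        + rng.standard_normal((n_subj, n_regions, n_tp)))

# ISS per region: mean correlation of each subject with the leave-one-out average
iss = np.zeros(n_regions)
for r in range(n_regions):
    ts = data[:, r, :]
    for s in range(n_subj):
        loo = ts[np.arange(n_subj) != s].mean(axis=0)
        iss[r] += np.corrcoef(ts[s], loo)[0, 1] / n_subj

# Cluster the group-mean time courses of synchronised regions
sync = iss > 0.1                               # arbitrary illustrative threshold
mean_ts = data[:, sync, :].mean(axis=0)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(mean_ts)
print("regions per cluster:", np.bincount(labels))
```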


2012
Vol 24 (2), pp. 521-529
Author(s): Frank Oppermann, Uwe Hassler, Jörg D. Jescheniak, Thomas Gruber

The human cognitive system is highly efficient in extracting information from our visual environment. This efficiency is based on acquired knowledge that guides our attention toward relevant events and promotes the recognition of individual objects as they appear in visual scenes. The experience-based representation of such knowledge contains not only information about the individual objects but also about relations between them, such as the typical context in which individual objects co-occur. The present EEG study aimed at exploring the availability of such relational knowledge in the time course of visual scene processing, using oscillatory evoked gamma-band responses as a neural correlate for a currently activated cortical stimulus representation. Participants decided whether two simultaneously presented objects were conceptually coherent (e.g., mouse–cheese) or not (e.g., crown–mushroom). We obtained increased evoked gamma-band responses for coherent scenes compared with incoherent scenes beginning as early as 70 msec after stimulus onset within a distributed cortical network, including the right temporal, the right frontal, and the bilateral occipital cortex. This finding provides empirical evidence for the functional importance of evoked oscillatory activity in high-level vision beyond the visual cortex and, thus, gives new insights into the functional relevance of neuronal interactions. It also indicates the very early availability of experience-based knowledge that might be regarded as a fundamental mechanism for the rapid extraction of the gist of a scene.
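Evoked (phase-locked) gamma-band activity is conventionally isolated by averaging trials before estimating power, so that non-phase-locked activity cancels out. The sketch below illustrates this on synthetic data with a band-pass plus Hilbert estimate; the burst latency, band limits, and condition labels are assumptions, not the authors' pipeline.

```python
# Illustrative sketch (assumed, synthetic data): evoked gamma-band power,
# computed by averaging trials before power estimation.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(4)
sfreq = 500
times = np.arange(-0.2, 0.5, 1 / sfreq)
n_trials = 100
window = (times > 0.05) & (times < 0.15)         # latency of the simulated burst

def evoked_gamma_power(phase_locked):
    """Average trials first, then estimate 30-50 Hz power of the evoked signal."""
    trials = []
    for _ in range(n_trials):
        phase = 0.0 if phase_locked else rng.uniform(0, 2 * np.pi)
        burst = np.cos(2 * np.pi * 40 * times + phase) * window
        trials.append(burst + rng.standard_normal(times.size))
    evoked = np.mean(trials, axis=0)              # only phase-locked activity survives
    b, a = butter(4, [30, 50], btype="bandpass", fs=sfreq)
    return np.abs(hilbert(filtfilt(b, a, evoked))) ** 2

coherent = evoked_gamma_power(phase_locked=True)     # e.g. conceptually coherent scenes
incoherent = evoked_gamma_power(phase_locked=False)  # e.g. incoherent scenes
print("evoked gamma, coherent vs incoherent:",
      coherent[window].mean().round(3), incoherent[window].mean().round(3))
```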


2018
Author(s): Anthony Stigliani, Brianna Jeska, Kalanit Grill-Spector

ABSTRACT How do high-level visual regions process the temporal aspects of our visual experience? While the temporal sensitivity of early visual cortex has been studied with fMRI in humans, temporal processing in high-level visual cortex is largely unknown. By modeling neural responses with millisecond precision in separate sustained and transient channels, and introducing a flexible encoding framework that captures differences in neural temporal integration time windows and response nonlinearities, we predict fMRI responses across visual cortex for stimuli ranging from 33 ms to 20 s. Using this innovative approach, we discovered that lateral category-selective regions respond to visual transients associated with stimulus onsets and offsets but not to sustained visual information. Thus, lateral category-selective regions compute moment-to-moment visual transitions, but not stable features of the visual input. In contrast, ventral category-selective regions respond to both sustained and transient components of the visual input. Responses to sustained stimuli exhibit adaptation, whereas responses to transient stimuli are surprisingly larger for stimulus offsets than onsets. This large offset transient response may reflect a memory trace of the stimulus when it is no longer visible, whereas the onset transient response may reflect rapid processing of new items. Together, these findings reveal previously unconsidered, fundamental temporal mechanisms that distinguish visual streams in the human brain. Importantly, our results underscore the promise of modeling brain responses with millisecond precision to understand the underlying neural computations. AUTHOR SUMMARY How does the brain encode the timing of our visual experience? Using functional magnetic resonance imaging (fMRI) and a temporal encoding model with millisecond resolution, we discovered that visual regions in the lateral and ventral processing streams fundamentally differ in their temporal processing of the visual input. Regions in lateral temporal cortex process visual transients associated with stimulus onsets and offsets but not the unchanging aspects of the visual input. That is, they compute moment-to-moment changes in the visual input. In contrast, regions in ventral temporal cortex process both stable and transient components, with the former exhibiting adaptation. Surprisingly, in these ventral regions responses to stimulus offsets were larger than to onsets. We suggest that the former may reflect a memory trace of the stimulus when it is no longer visible, and the latter may reflect rapid processing of new items at stimulus onset. Together, these findings (i) reveal a fundamental temporal mechanism that distinguishes visual streams and (ii) highlight both the importance and utility of modeling brain responses with millisecond precision to understand the temporal dynamics of neural computations in the human brain.
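A minimal sketch of a two-channel (sustained plus transient) temporal encoding model is shown below: the stimulus time course feeds a sustained channel and an onset/offset transient channel, each convolved with a haemodynamic response function and weighted per region. The HRF form, weights, and normalisation are illustrative assumptions, not the authors' fitted model.

```python
# Minimal sketch of a sustained + transient temporal encoding model
# (assumed forms and weights, not the authors' implementation).
import numpy as np
from scipy.stats import gamma

dt = 0.001                                        # 1 ms resolution
t = np.arange(0, 30, dt)
stim = ((t > 5) & (t < 10)).astype(float)         # a 5 s stimulus

sustained = stim.copy()                           # tracks stimulus duration
transient = np.abs(np.diff(stim, prepend=0.0))    # impulses at onset and offset

# simple double-gamma haemodynamic response function (assumed form)
ht = np.arange(0, 25, dt)
hrf = gamma.pdf(ht, 6) - 0.35 * gamma.pdf(ht, 12)

def predict_bold(neural, weight):
    """Convolve a neural channel with the HRF; normalised for illustration."""
    bold = np.convolve(neural, hrf)[: t.size]
    return weight * bold / bold.max()

# hypothetical profiles: lateral ~ transient-only, ventral ~ sustained + transient
lateral = predict_bold(transient, 1.0)
ventral = predict_bold(sustained, 0.6) + predict_bold(transient, 0.4)
print("peak predicted response (lateral, ventral):",
      lateral.max().round(2), ventral.max().round(2))
```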


Author(s): Kübra Eroğlu, Temel Kayıkçıoğlu, Onur Osman

The aim of this study was to examine the effect of brightness, a perceptual property of visual stimuli, on brain responses obtained during visual processing of those stimuli. For this purpose, brain responses to changes in brightness were explored comparatively using emotional images (pleasant, unpleasant and neutral) with different luminance levels. Electroencephalography recordings from 12 electrode sites in 31 healthy participants were used. Power spectra were obtained from the recordings using the short-time Fourier transform, and a statistical analysis was performed on features extracted from these power spectra. Statistical findings obtained from the electrophysiological data were compared with those obtained from behavioral data. The results showed that the brightness of visual stimuli affected the power of brain responses depending on frequency, time and location. According to the statistically verified findings, the distinctive effect of brightness occurred in the parietal and occipital regions for all three types of stimuli. Accordingly, increasing the brightness of pleasant and neutral images increased the average power of responses in the parietal and occipital regions, whereas increasing the brightness of unpleasant images decreased the average power of responses in these regions. However, increasing the brightness of all three types of stimuli reduced the average power of frontal and central region responses (except in the 100-300 ms time window for unpleasant stimuli). The statistical results obtained for unpleasant images were in accordance with the behavioral data. The results also revealed that the brightness of visual stimuli can be represented by changes in the power of cortical activity. The main contribution of this research is its comprehensive examination of the brightness effect on brain activity for images with different emotional content, across frequency bands, time windows of visual processing and brain regions. The findings emphasized that the brightness of visual stimuli should be treated as an important parameter in studies using emotional images, such as image classification, emotion evaluation and neuromarketing.
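The basic feature used here, band power within a peri-stimulus time window computed from a short-time Fourier transform, can be sketched as follows. The channel, epoch length, and window settings are assumptions applied to synthetic data, not the study's recordings.

```python
# Illustrative sketch (not the authors' code): STFT of a single EEG channel and
# mean band power within a frequency band and peri-stimulus time window.
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(5)
sfreq = 250
eeg = rng.standard_normal(sfreq * 2)             # a 2 s post-stimulus epoch

f, t, Z = stft(eeg, fs=sfreq, nperseg=sfreq // 2, noverlap=sfreq // 4)
power = np.abs(Z) ** 2

def window_band_power(fmin, fmax, tmin, tmax):
    """Mean power within a frequency band and peri-stimulus time window."""
    fsel = (f >= fmin) & (f <= fmax)
    tsel = (t >= tmin) & (t <= tmax)
    return power[np.ix_(fsel, tsel)].mean()

# e.g. alpha-band power in an early window (values here are synthetic)
print("alpha power, 0.1-0.3 s:", window_band_power(8, 12, 0.1, 0.3).round(4))
```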


2019
Vol 9 (1)
Author(s): M. W. DiFrancesco, T. Van Dyk, M. Altaye, S. P. A. Drummond, D. W. Beebe

Abstract Neuroimaging studies of the Psychomotor Vigilance Task (PVT) have revealed brain regions involved in attention lapses in sleep-deprived and well-rested adults. Those studies have focused on individual brain regions, rather than integrated brain networks, and have overlooked adolescence, a period of ongoing brain development and endemic short sleep. This study used functional MRI (fMRI) and a contemporary analytic approach to assess the time-resolved peri-stimulus responses of key brain networks while adolescents complete the PVT, and to test for differences between attentive and inattentive periods and between short-sleep and well-rested states. Healthy 14–17-year-olds underwent a within-subjects randomized protocol including 5-night spans of extended versus short sleep. The PVT was performed during fMRI the morning after each sleep condition. Event-related independent component analysis (eICA) identified coactivating functional networks and corresponding time courses. Analysis of salient time-course characteristics tested the effects of sleep condition, lapses, and their interaction. Seven eICA networks were identified, supporting attention, executive control, motor, visual, and default-mode functions. Attention lapses, after either sleep manipulation, were accompanied by broadly increased response magnitudes post-stimulus and delayed peak responses in some networks. Well-circumscribed networks respond during the PVT in adolescents, with timing and intensity impacted by attentional lapses regardless of experimentally shortened or extended sleep.
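A much-simplified sketch of the event-related ICA idea is given below: peri-stimulus epochs are averaged and then decomposed into spatially independent networks, each with its own peri-stimulus time course. The dimensionality, component count, and synthetic data are illustrative assumptions, not the study's eICA implementation.

```python
# Simplified, hypothetical sketch of event-related ICA (eICA) on synthetic data.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(6)
n_trials, n_tp, n_voxels = 60, 16, 500           # peri-stimulus samples per trial
epochs = rng.standard_normal((n_trials, n_tp, n_voxels))
# embed two "networks" with distinct peri-stimulus responses
maps = rng.standard_normal((2, n_voxels))
responses = np.stack([np.sin(np.linspace(0, np.pi, n_tp)),
                      np.hanning(n_tp)])
epochs += np.einsum("kt,kv->tv", responses, maps)[None]

mean_epoch = epochs.mean(axis=0)                 # (time x voxels) event-related data
ica = FastICA(n_components=2, random_state=0)
timecourses = ica.fit_transform(mean_epoch)      # peri-stimulus time course per network
spatial_maps = ica.mixing_.T                     # (components x voxels) network maps
print("time course shape:", timecourses.shape, "| map shape:", spatial_maps.shape)
```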


2021
Vol 12
Author(s): Michela Balconi, Irene Venturella, Roberta Sebastiani, Laura Angioletti

Gaining a deeper understanding of consumers' brain responses during real-time in-store exploration could help retailers get much closer to the customer experience. To our knowledge, this is the first time the specific role of touch has been investigated with a neuroscientific approach during an in-store consumer experience within the field of sensory marketing. This study explores the presence of distinct cortical brain oscillations in consumers' brains while they navigate a store that provides a high level of sensory arousal and are either allowed or not allowed to touch products. A 16-channel wireless electroencephalogram (EEG) was applied to 23 healthy participants (mean age = 24.57 years, SD = 3.54) with an interest in cosmetics but naive to the store being explored. Subjects were assigned to one of two experimental conditions depending on whether they could or could not touch the products. Cortical oscillations were explored by means of power spectral analysis of the delta, theta, alpha, and beta frequency bands. Results highlighted the presence of delta, theta, and beta band activity within frontal brain regions during both sensory conditions. The absence of touch was experienced as a lack of perception requiring cognitive control, as reflected by left-lateralized delta and theta band activation, whereas a right-lateralized increase in beta band power in the touch condition was associated with sustained awareness of the sensory experience. Overall, the functional meaning of EEG cortical oscillations can help highlight implicit neurophysiological responses to tactile conditions and the importance of integrating touch into the consumer experience.
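The power spectral analysis referred to above amounts to estimating relative power per frequency band. A minimal sketch on synthetic single-channel data is shown below, with the band limits and Welch parameters as assumptions rather than the study's exact settings.

```python
# Minimal illustrative sketch (not the study's pipeline): relative delta, theta,
# alpha and beta power from one EEG channel, estimated with Welch's method.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

rng = np.random.default_rng(7)
sfreq = 128
eeg = rng.standard_normal(sfreq * 60)            # one minute of synthetic signal

freqs, psd = welch(eeg, fs=sfreq, nperseg=4 * sfreq)
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
broad = (freqs >= 1) & (freqs <= 30)
total = trapezoid(psd[broad], freqs[broad])      # broadband power for normalisation

for name, (lo, hi) in bands.items():
    sel = (freqs >= lo) & (freqs <= hi)
    rel = trapezoid(psd[sel], freqs[sel]) / total
    print(f"{name:>5} relative power: {rel:.2f}")
```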


2019
Author(s): Tanya Wen, John Duncan, Daniel J Mitchell

Abstract Task episodes consist of sequences of steps that are performed to achieve a goal. We used fMRI to examine neural representation of task identity, component items, and sequential position, focusing on two major cortical systems – the multiple-demand (MD) and default mode networks (DMN). Human participants (20 male, 22 female) learned six tasks each consisting of four steps. Inside the scanner, participants were cued which task to perform and then sequentially identified the target item of each step in the correct order. Univariate time-course analyses indicated that intra-episode progress was tracked by a tonically increasing global response, plus an increasing phasic step response specific to MD regions. Inter-episode boundaries evoked a widespread response at episode onset, plus a marked offset response specific to DMN regions. Representational similarity analysis was used to examine encoding of task identity and component steps. Both networks represented the content and position of individual steps, but the DMN preferentially represented task identity while the MD network preferentially represented step-level information. Thus, although both DMN and MD networks are sensitive to step-level and episode-level information in the context of hierarchical task performance, they exhibit dissociable profiles in terms of both temporal dynamics and representational content. The results suggest collaboration of multiple brain regions in control of multi-step behavior, with MD regions particularly involved in processing the detail of individual steps, and DMN adding representation of broad task context. Significance Statement Achieving one’s goals requires knowing what to do and when. Tasks are typically hierarchical, with smaller steps nested within overarching goals. For effective, flexible behavior, the brain must represent both levels. We contrast response time-courses and information content of two major cortical systems – the multiple-demand (MD) and default mode networks (DMN) – during multi-step task episodes. Both networks are sensitive to step-level and episode-level information, but with dissociable profiles. Intra-episode progress is tracked by tonically increasing global responses, plus MD-specific increasing phasic step responses. Inter-episode boundaries evoke widespread responses at episode onset, plus DMN-specific offset responses. Both networks encode content and position of individual steps, but the DMN and MD networks favor task identity and step-level information respectively.
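A schematic of the representational similarity analysis described here: the neural representational dissimilarity matrix (RDM) is compared against model RDMs coding task identity and step position. The sketch below uses synthetic patterns and hypothetical sizes, not the study's data or exact RSA procedure.

```python
# Schematic RSA sketch (synthetic data, assumed sizes): compare a neural RDM
# with model RDMs for task identity and step position.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(8)
n_tasks, n_steps, n_voxels = 6, 4, 100
tasks = np.repeat(np.arange(n_tasks), n_steps)    # 24 conditions
steps = np.tile(np.arange(n_steps), n_tasks)

# synthetic activity patterns carrying mostly step information (MD-like region)
step_patterns = rng.standard_normal((n_steps, n_voxels))
patterns = step_patterns[steps] + 0.5 * rng.standard_normal((tasks.size, n_voxels))

neural_rdm = pdist(patterns, metric="correlation")
task_rdm = pdist(tasks[:, None], metric="hamming")   # 0 if same task, else 1
step_rdm = pdist(steps[:, None], metric="hamming")   # 0 if same step, else 1

for name, model in [("task identity", task_rdm), ("step position", step_rdm)]:
    rho, _ = spearmanr(neural_rdm, model)
    print(f"RSA correlation with {name}: {rho:.2f}")
```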


2020
Vol 1 (1)
Author(s): Runnan Cao, Xin Li, Alexander Todorov, Shuo Wang

Abstract An important question in human face perception research is to understand whether the neural representation of faces is dynamically modulated by context. In particular, although there is a plethora of neuroimaging literature that has probed the neural representation of faces, few studies have investigated what low-level structural and textural facial features parametrically drive neural responses to faces and whether the representation of these features is modulated by the task. To answer these questions, we employed 2 task instructions when participants viewed the same faces. We first identified brain regions that parametrically encoded high-level social traits such as perceived facial trustworthiness and dominance, and we showed that these brain regions were modulated by task instructions. We then employed a data-driven computational face model with parametrically generated faces and identified brain regions that encoded low-level variation in the faces (shape and skin texture) that drove neural responses. We further analyzed the evolution of the neural feature vectors along the visual processing stream and visualized and explained these feature vectors. Together, our results showed a flexible neural representation of faces for both low-level features and high-level social traits in the human brain.
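The parametric encoding question above can be illustrated by regressing a region's responses onto the face model's shape and texture parameters. The sketch below does this on synthetic data; the feature dimensionality, weights, and region behaviour are assumptions, not the authors' computational face model.

```python
# Illustrative sketch (assumed, synthetic data): regressing a region's response
# onto parametric shape and texture features of a face model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(9)
n_faces, n_shape, n_texture = 300, 5, 5
shape = rng.standard_normal((n_faces, n_shape))       # face-model shape parameters
texture = rng.standard_normal((n_faces, n_texture))   # face-model texture parameters
features = np.hstack([shape, texture])

# synthetic region driven by two shape dimensions plus noise
response = shape[:, 0] * 0.8 - shape[:, 2] * 0.5 + rng.standard_normal(n_faces) * 0.5

model = LinearRegression().fit(features, response)
print("encoding R^2:", round(model.score(features, response), 2))
print("feature weights:", np.round(model.coef_, 2))
```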

