Shared neural mechanisms of visual perception and imagery

2019
Author(s):
Nadine Dijkstra
Sander Erik Bosch
Marcel van Gerven

For decades, the extent to which visual imagery relies on the same neural mechanisms as visual perception has been a topic of debate. Here, we review recent neuroimaging studies comparing these two forms of visual experience. Their results suggest that there is substantial overlap in neural processing during perception and imagery: neural representations of imagined and perceived stimuli are similar in visual, parietal and frontal cortex. Furthermore, perception and imagery seem to rely on similar top-down connectivity. The most prominent difference is the absence of bottom-up processing during imagery. These findings fit well with the idea that imagery and perception rely on similar emulation or prediction processes.

2019
Vol 2019 (1)
Author(s):
Erik L Meijs
Pim Mostert
Heleen A Slagter
Floris P de Lange
Simon van Gaal

Abstract: Subjective experience can be influenced by top-down factors, such as expectations and stimulus relevance. Recently, it has been shown that expectations can enhance the likelihood that a stimulus is consciously reported, but the neural mechanisms supporting this enhancement are still unclear. We manipulated stimulus expectations within the attentional blink (AB) paradigm using letters and combined visual psychophysics with magnetoencephalographic (MEG) recordings to investigate whether prior expectations may enhance conscious access by sharpening stimulus-specific neural representations. We further explored how stimulus-specific neural activity patterns are affected by the factors expectation, stimulus relevance and conscious report. First, we show that valid expectations about the identity of an upcoming stimulus increase the likelihood that it is consciously reported. Second, using a series of multivariate decoding analyses, we show that the identity of letters presented in and out of the AB can be reliably decoded from MEG data. Third, we show that early sensory stimulus-specific neural representations are similar for reported and missed target letters in the AB task (active report required) and an oddball task in which the letter was clearly presented but its identity was task-irrelevant. However, later sustained and stable stimulus-specific representations were uniquely observed when target letters were consciously reported (decision-dependent signal). Fourth, we show that global pre-stimulus neural activity biased perceptual decisions for a ‘seen’ response. Fifth and last, no evidence was obtained for the sharpening of sensory representations by top-down expectations. We discuss these findings in light of emerging models of perception and conscious report highlighting the role of expectations and stimulus relevance.
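
The multivariate decoding logic referred to in this abstract can be illustrated with a deliberately simplified sketch. All data below are synthetic and the classifier is a plain nearest-centroid rule; the actual study used real MEG recordings and time-resolved classifiers, so every name and dimension here is a hypothetical stand-in.

```python
# Toy sketch of multivariate "decoding" of letter identity from
# sensor-level activity patterns. Synthetic data only: each letter is
# assumed to evoke a distinct sensor pattern plus Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)

n_sensors = 20           # hypothetical number of MEG sensors
n_trials_per_class = 40  # trials per letter identity
letters = ["A", "B", "C"]

templates = {c: rng.normal(0, 1, n_sensors) for c in letters}
X, y = [], []
for c in letters:
    for _ in range(n_trials_per_class):
        X.append(templates[c] + rng.normal(0, 0.8, n_sensors))
        y.append(c)
X, y = np.array(X), np.array(y)

def nearest_centroid_fit(X, y):
    """Compute one mean activity pattern (centroid) per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    """Assign each trial to the class with the closest centroid."""
    classes = list(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1)
                      for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# Split trials into independent train/test halves, as any decoding
# analysis must, and evaluate accuracy against chance level.
idx = rng.permutation(len(y))
train, test = idx[::2], idx[1::2]
centroids = nearest_centroid_fit(X[train], y[train])
pred = nearest_centroid_predict(centroids, X[test])
accuracy = (pred == y[test]).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance = {1/len(letters):.2f})")
```

Above-chance test accuracy is what licenses the claim that stimulus identity is "reliably decoded" from the recordings.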


2016
Vol 46 (8)
pp. 1735-1747
Author(s):
M. M. van Ommen
M. van Beilen
F. W. Cornelissen
H. G. O. M. Smid
H. Knegtering
...

Background: Little is known about visual hallucinations (VH) in psychosis. We investigated the prevalence and the role of bottom-up and top-down processing in VH. The prevailing view is that VH are probably related to altered top-down processing, rather than to distorted bottom-up processing. Conversely, VH in Parkinson's disease are associated with impaired visual perception and attention, as proposed by the Perception and Attention Deficit (PAD) model. Auditory hallucinations (AH) in psychosis, however, are thought to be related to increased attention.

Method: Our retrospective database study included 1119 patients with non-affective psychosis and 586 controls. The Community Assessment of Psychic Experiences established the VH rate. Scores on visual perception tests [Degraded Facial Affect Recognition (DFAR), Benton Facial Recognition Task] and attention tests [Response Set-shifting Task, Continuous Performance Test-HQ (CPT-HQ)] were compared between 75 VH patients, 706 non-VH patients and 485 non-VH controls.

Results: The lifetime VH rate was 37%. The patient groups performed similarly on cognitive tasks; both groups showed worse perception (DFAR) than controls. Non-VH patients showed worse attention (CPT-HQ) than controls, whereas VH patients did not perform differently.

Conclusions: We did not find significant VH-related impairments in bottom-up processing or direct top-down alterations. However, the results suggest a relatively spared attentional performance in VH patients, whereas face perception and processing speed were equally impaired in both patient groups relative to controls. This would match better with the increased attention hypothesis than with the PAD model. Our finding that VH frequently co-occur with AH may support an increased attention-induced ‘hallucination proneness’.


Perception
10.1068/p5850
2007
Vol 36 (10)
pp. 1513-1521
Author(s):
Simon Lacey
Christine Campbell
K Sathian

The relationship between visually and haptically derived representations of objects is an important question in multisensory processing and, increasingly, in mental representation. We review evidence for the format and properties of these representations, and address possible theoretical models. We explore the relevance of visual imagery processes and highlight areas for further research, including the neglected question of asymmetric performance in the visuo-haptic cross-modal memory paradigm. We conclude that the weight of evidence suggests the existence of a multisensory representation, spatial in format, and flexibly accessible by both bottom-up and top-down inputs, although efficient comparison between modality-specific representations cannot entirely be ruled out.


2020
Author(s):
Tomoyasu Horikawa
Yukiyasu Kamitani

Summary: Visual image reconstruction from brain activity produces images whose features are consistent with the neural representations in the visual cortex given arbitrary visual instances [1–3], presumably reflecting the person’s visual experience. Previous reconstruction studies have been concerned either with how stimulus images are faithfully reconstructed or with whether mentally imagined contents can be reconstructed in the absence of external stimuli. However, many lines of vision research have demonstrated that even stimulus perception is shaped both by stimulus-induced processes and top-down processes. In particular, attention (or the lack of it) is known to profoundly affect visual experience [4–8] and brain activity [9–21]. Here, to investigate how top-down attention impacts the neural representation of visual images and the reconstructions, we use a state-of-the-art method (deep image reconstruction [3]) to reconstruct visual images from fMRI activity measured while subjects attend to one of two images superimposed with equally weighted contrasts. Deep image reconstruction exploits the hierarchical correspondence between the brain and a deep neural network (DNN) to translate (decode) brain activity into DNN features of multiple layers, and then create images that are consistent with the decoded DNN features [3, 22, 23]. Using the deep image reconstruction model trained on fMRI responses to single natural images, we decode brain activity during the attention trials. Behavioral evaluations show that the reconstructions resemble the attended rather than the unattended images. The reconstructions can be modeled by superimposed images with contrasts biased to the attended one, which are comparable to the appearance of the stimuli under attention measured in a separate session. Attentional modulations are found in a broad range of hierarchical visual representations and mirror the brain–DNN correspondence.
Our results demonstrate that top-down attention counters stimulus-induced responses and modulates neural representations to render reconstructions in accordance with subjective appearance. The reconstructions appear to reflect the content of visual experience and volitional control, opening new possibilities for brain-based communication and creation.
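
The two-stage logic of the reconstruction pipeline (first linearly decode feature values from brain activity, then optimize an image whose features match the decoded ones) can be caricatured with a purely linear toy model. Everything below is synthetic: the real method decodes features of multiple DNN layers and optimizes natural images, whereas here a single random linear map stands in for the feature extractor.

```python
# Minimal numerical sketch of two-stage image reconstruction:
# (1) ridge-regression decoder from voxels to features,
# (2) gradient descent on an image so its features match the decoded ones.
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_pixels, n_features = 50, 16, 8

# Hypothetical fixed "feature extractor" (stands in for one DNN layer).
W_feat = rng.normal(0, 1, (n_features, n_pixels))

# Simulated training set: images, their features, and noisy "fMRI"
# responses that are a linear function of the features.
imgs = rng.normal(0, 1, (200, n_pixels))
feats = imgs @ W_feat.T
W_brain = rng.normal(0, 1, (n_voxels, n_features))
fmri = feats @ W_brain.T + rng.normal(0, 0.1, (200, n_voxels))

# Stage 1: train a ridge-regression decoder from voxels to features.
lam = 1.0
A = fmri.T @ fmri + lam * np.eye(n_voxels)
W_dec = np.linalg.solve(A, fmri.T @ feats)   # (n_voxels, n_features)

# Stage 2: for a held-out stimulus, decode its features and then
# iteratively adjust a candidate image to minimize
# || features(img) - decoded_features ||^2.
true_img = rng.normal(0, 1, n_pixels)
test_fmri = (true_img @ W_feat.T) @ W_brain.T
decoded = test_fmri @ W_dec

img = np.zeros(n_pixels)
for _ in range(3000):
    grad = 2 * W_feat.T @ (W_feat @ img - decoded)
    img -= 0.005 * grad

err = np.linalg.norm(W_feat @ img - decoded)
print(f"feature mismatch after optimization: {err:.4f}")
```

The optimized image matches the decoded features closely, but is not uniquely determined by them (here 8 features constrain 16 pixels); in the actual method, a natural-image prior and multi-layer feature constraints narrow down the reconstruction.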


eLife
2017
Vol 6
Author(s):
Kendrick N Kay
Jason D Yeatman

The ability to read a page of text or recognize a person's face depends on category-selective visual regions in ventral temporal cortex (VTC). To understand how these regions mediate word and face recognition, it is necessary to characterize how stimuli are represented and how this representation is used in the execution of a cognitive task. Here, we show that the response of a category-selective region in VTC can be computed as the degree to which the low-level properties of the stimulus match a category template. Moreover, we show that during execution of a task, the bottom-up representation is scaled by the intraparietal sulcus (IPS), and that the level of IPS engagement reflects the cognitive demands of the task. These results provide an account of neural processing in VTC in the form of a model that addresses both bottom-up and top-down effects and quantitatively predicts VTC responses.
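
The computational claim in this abstract (a bottom-up template match that is multiplicatively scaled by task-dependent top-down input) can be sketched in a few lines. This is a toy illustration, not the authors' fitted model; the template, stimuli, and gain values are all invented for the example.

```python
# Toy sketch of a template-matching account of VTC responses:
# response = top-down gain (attributed to IPS) x rectified match
# between the stimulus's low-level properties and a category template.
import numpy as np

rng = np.random.default_rng(2)
n_pix = 64

# Hypothetical category template (e.g., a "word-like" spatial profile).
template = rng.normal(0, 1, n_pix)
template /= np.linalg.norm(template)

def vtc_response(stimulus, ips_gain):
    """Bottom-up template match, rectified, scaled by top-down gain."""
    match = max(0.0, float(stimulus @ template))
    return ips_gain * match

# A stimulus resembling the template, and an unrelated noise stimulus.
preferred = template * 3 + rng.normal(0, 0.1, n_pix)
nonpreferred = rng.normal(0, 1, n_pix)

r_easy = vtc_response(preferred, ips_gain=1.0)    # low task demand
r_hard = vtc_response(preferred, ips_gain=2.5)    # high task demand
r_noise = vtc_response(nonpreferred, ips_gain=1.0)
print(r_easy, r_hard, r_noise)
```

The key property is that the same bottom-up match yields a larger response under higher task demand, because the top-down term scales rather than adds to the stimulus-driven signal.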


2015
Vol 38
Author(s):
Terry Marks-Tarlow
Jaak Panksepp

Abstract: Lane et al. are right: Troublesome memories can be therapeutically recontextualized. Reconsolidation of negative/traumatic memories within the context of positive/prosocial affects can facilitate diverse psychotherapies. Although neural mechanisms remain poorly understood, we discuss how nonlinear dynamics of various positive affects, heavily controlled by primal subcortical networks, may be critical for optimal benefits.

