Prediction-error signals to violated expectations about person identity and head orientation are doubly-dissociated across dorsal and ventral visual stream regions

NeuroImage ◽  
2020 ◽  
Vol 206 ◽  
pp. 116325 ◽  
Author(s):  
Jonathan E. Robinson ◽  
Will Woods ◽  
Sumie Leung ◽  
Jordy Kaufman ◽  
Michael Breakspear ◽  
...  

Abstract

Predictive coding theories of perception suggest the importance of constantly updated internal models of the world in predicting future sensory inputs. One implication of such models is that cortical regions whose function is to resolve particular stimulus attributes should also signal prediction violations with respect to those same stimulus attributes. Previously, through carefully designed experiments, we have demonstrated early- to mid-latency EEG/MEG prediction-error signals in the dorsal visual stream to violated expectations about stimulus orientation/trajectory, with localisations consistent with cortical areas processing motion and orientation. Here we extend those methods to investigate predictive processes in the dorsal and ventral visual streams simultaneously. In this MEG study we employed a contextual trajectory paradigm that builds expectations using a series of image presentations. We created expectations about both face orientation and identity, either of which could subsequently be violated. Crucially, this paradigm allows us to parametrically test double dissociations between these different types of violations. The study identified double dissociations across violation type in the dorsal and ventral visual streams, such that the right fusiform gyrus showed greater evidence of prediction-error signals to Identity violations than to Orientation violations, whereas the left angular gyrus and postcentral gyrus showed the opposite pattern of results. Our results suggest comparable processes for error checking and context updating of high-level expectations instantiated across both perceptual streams. Perceptual prediction-error signalling is initiated in regions associated with the processing of the relevant stimulus properties.

Significance Statement

Visual processing occurs along ‘what’ and ‘where’ information streams that run, respectively, along the ventral and dorsal surfaces of the posterior brain. Predictive coding models of perception imply prediction-error detection processes that are instantiated at the level where particular stimulus attributes are parsed. This implies, for instance, that when considering face stimuli, signals arising from violated expectations about the person identity of the stimulus should localise to the ventral stream, whereas signals arising from violated expectations about head orientation should localise to the dorsal stream. We test this in a magnetoencephalography source localisation study. The analysis confirmed that prediction-error signals to identity versus head-orientation violations occur with similar latency but activate doubly dissociated brain regions along the ventral and dorsal processing streams.
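
A double dissociation of this kind is conventionally tested as a region-by-violation-type interaction. The sketch below is not the authors' analysis code; it uses simulated per-participant amplitudes as placeholders for source-localised MEG prediction-error estimates, simply to show the form of the interaction contrast.

```python
# Minimal sketch of a double-dissociation (region x violation-type) test.
# All amplitude values are simulated placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 20

# One value per subject per ROI/condition (arbitrary units): right fusiform
# responds more to identity violations, left angular gyrus to orientation.
fusiform_identity    = rng.normal(1.0, 0.4, n_subjects)
fusiform_orientation = rng.normal(0.3, 0.4, n_subjects)
angular_identity     = rng.normal(0.3, 0.4, n_subjects)
angular_orientation  = rng.normal(1.0, 0.4, n_subjects)

# The double dissociation is the difference-of-differences:
# (fusiform: identity - orientation) minus (angular: identity - orientation).
interaction = (fusiform_identity - fusiform_orientation) - \
              (angular_identity - angular_orientation)
t, p = stats.ttest_1samp(interaction, 0.0)
print(f"interaction t({n_subjects - 1}) = {t:.2f}, p = {p:.3g}")
```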


2019 ◽  
Author(s):  
Sushrut Thorat

A mediolateral gradation in neural responses to images spanning animals to artificial objects is observed in the ventral temporal cortex (VTC). Which information streams drive this organisation is an ongoing debate. Recently, in Proklova et al. (2016), the visual shape and category (“animacy”) dimensions of a set of stimuli were dissociated using a behavioural measure of visual feature information. fMRI responses revealed a neural cluster (the extra-visual animacy cluster, xVAC) that encoded category information unexplained by visual feature information, suggesting extra-visual contributions to the organisation of the ventral visual stream. We reassess these findings using Convolutional Neural Networks (CNNs) as models of the ventral visual stream. The visual features developed in the CNN layers can categorise the shape-matched stimuli from Proklova et al. (2016), in contrast to the behavioural measures used in that study. The category organisations in xVAC and VTC are explained to a large degree by the CNN visual feature differences, casting doubt on the suggestion that visual feature differences cannot account for the animacy organisation. To inform the debate further, we designed a set of stimuli with animal images to dissociate the animacy organisation driven by the CNN visual features from the degree of familiarity and agency (thoughtfulness and feelings). Preliminary results from a new fMRI experiment designed to understand the contribution of these non-visual features are presented.
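
The core step in this kind of analysis is extracting activations from a CNN layer and asking whether a linear readout can categorise animacy in shape-matched stimuli. The sketch below is illustrative only: it assumes a recent torchvision with an ImageNet-pretrained AlexNet, and the stimulus batch and labels are random stand-ins rather than the Proklova et al. (2016) images.

```python
# Sketch: decode animacy from one CNN layer's features (placeholder stimuli).
import torch
from torchvision import models
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

cnn = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

def layer_features(images, layer_index=10):
    """Return flattened activations from one convolutional-stage layer."""
    with torch.no_grad():
        x = images
        for i, module in enumerate(cnn.features):
            x = module(x)
            if i == layer_index:
                return x.flatten(start_dim=1).numpy()

# Placeholder batch: 48 "shape-matched" stimuli, half animate, half inanimate.
images = torch.randn(48, 3, 224, 224)   # stand-in for preprocessed stimulus images
labels = [1] * 24 + [0] * 24            # 1 = animate, 0 = inanimate

features = layer_features(images)
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, features, labels, cv=6)
print("cross-validated animacy decoding accuracy:", scores.mean())
```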


NeuroImage ◽  
2016 ◽  
Vol 128 ◽  
pp. 316-327 ◽  
Author(s):  
Marianna Boros ◽  
Jean-Luc Anton ◽  
Catherine Pech-Georgel ◽  
Jonathan Grainger ◽  
Marcin Szwed ◽  
...  

2018 ◽  
Author(s):  
Simona Monaco ◽  
Giulia Malfatti ◽  
Alessandro Zendron ◽  
Elisa Pellencin ◽  
Luca Turella

Abstract

Predictions of upcoming movements are based on several types of neural signals that span the visual, somatosensory, motor and cognitive systems. Thus far, pre-movement signals have been investigated while participants viewed the object to be acted upon. Here, we studied the contribution of information other than vision to the classification of preparatory signals for action, even in the absence of online visual information. We used functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) to test whether the neural signals evoked by visual, memory-based and somato-motor information can be reliably used to predict upcoming actions in areas of the dorsal and ventral visual streams during the preparatory phase preceding the action, while participants were lying still. Nineteen human participants (nine women) performed one of two actions towards an object with their eyes open or closed. Despite the well-known role of ventral stream areas in visual recognition tasks and the specialization of dorsal stream areas in somato-motor processes, we decoded action intention in areas of both streams based on visual, memory-based and somato-motor signals. Interestingly, we could reliably decode action intention in the absence of visual information based on neural activity evoked when visual information was available, and vice versa. Our results show a similar visual, memory and somato-motor representation of action planning in dorsal and ventral visual stream areas that allows predicting action intention across domains, regardless of the availability of visual information.
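
Decoding "across domains" amounts to cross-condition MVPA: train a classifier on trial patterns from one condition (eyes open) and test it on the other (eyes closed), and vice versa. The sketch below is not the authors' pipeline; the arrays are random placeholders standing in for trial-by-voxel estimates from a single ROI.

```python
# Sketch of cross-condition MVPA decoding with placeholder ROI patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_trials, n_voxels = 40, 200

# Placeholder trial-by-voxel patterns and action labels per condition.
X_open,   y_open   = rng.normal(size=(n_trials, n_voxels)), rng.integers(0, 2, n_trials)
X_closed, y_closed = rng.normal(size=(n_trials, n_voxels)), rng.integers(0, 2, n_trials)

clf = make_pipeline(StandardScaler(), LinearSVC())

# Train on the vision condition, test on the no-vision condition (and reverse)
# to ask whether the action-intention code generalises across visual availability.
acc_open_to_closed = clf.fit(X_open, y_open).score(X_closed, y_closed)
acc_closed_to_open = clf.fit(X_closed, y_closed).score(X_open, y_open)
print(acc_open_to_closed, acc_closed_to_open)
```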


2014 ◽  
Vol 14 (10) ◽  
pp. 985-985
Author(s):  
R. Lafer-Sousa ◽  
A. Kell ◽  
A. Takahashi ◽  
J. Feather ◽  
B. Conway ◽  
...  

eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Thomas SA Wallis ◽  
Christina M Funke ◽  
Alexander S Ecker ◽  
Leon A Gatys ◽  
Felix A Wichmann ◽  
...  

We subjectively perceive our visual field with high fidelity, yet peripheral distortions can go unnoticed and peripheral objects can be difficult to identify (crowding). Prior work showed that humans could not discriminate images synthesised to match the responses of a mid-level ventral visual stream model when information was averaged in receptive fields with a scaling of about half their retinal eccentricity. This result implicated ventral visual area V2, approximated ‘Bouma’s Law’ of crowding, and has subsequently been interpreted as a link between crowding zones, receptive field scaling, and our perceptual experience. However, this experiment never assessed natural images. We find that humans can easily discriminate real and model-generated images at V2 scaling, requiring scales at least as small as V1 receptive fields to generate metamers. We speculate that explaining why scenes look as they do may require incorporating segmentation and global organisational constraints in addition to local pooling.
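
The scaling factor at issue is the ratio of pooling-region size to retinal eccentricity, so pooling regions grow linearly with eccentricity. The values below (about 0.5 for a V2/Bouma-like model, 0.25 as a smaller V1-like comparison) are illustrative assumptions rather than the paper's fitted parameters.

```python
# Worked example: pooling-region size = scaling * eccentricity.
def pooling_region_deg(eccentricity_deg: float, scaling: float) -> float:
    """Pooling-region diameter (degrees of visual angle) at a given eccentricity."""
    return scaling * eccentricity_deg

for ecc in (2.0, 5.0, 10.0):
    v2_like = pooling_region_deg(ecc, 0.5)    # Bouma-like scaling
    v1_like = pooling_region_deg(ecc, 0.25)   # smaller, V1-like scaling
    print(f"{ecc:>4.1f} deg eccentricity: V2-like {v2_like:.2f} deg, V1-like {v1_like:.2f} deg")
```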


2019 ◽  
Vol 224 (9) ◽  
pp. 3291-3308 ◽  
Author(s):  
Simona Monaco ◽  
Giulia Malfatti ◽  
Alessandro Zendron ◽  
Elisa Pellencin ◽  
Luca Turella

2005 ◽  
Vol 28 (6) ◽  
pp. 737-757 ◽  
Author(s):  
Daniel Collerton ◽  
Elaine Perry ◽  
Ian McKeith

As many as two million people in the United Kingdom repeatedly see people, animals, and objects that have no objective reality. Hallucinations on the border of sleep, dementing illnesses, delirium, eye disease, and schizophrenia account for 90% of these. The remainder have rarer disorders. We review existing models of recurrent complex visual hallucinations (RCVH) in the awake person, including cortical irritation, cortical hyperexcitability and cortical release, top-down activation, misperception, dream intrusion, and interactive models. We provide evidence that these can neither fully account for the phenomenology of RCVH, nor for variations in the frequency of RCVH in different disorders. We propose a novel Perception and Attention Deficit (PAD) model for RCVH. A combination of impaired attentional binding and poor sensory activation of a correct proto-object, in conjunction with a relatively intact scene representation, biases perception to allow the intrusion of a hallucinatory proto-object into a scene perception. Incorporation of this image into a context-specific hallucinatory scene representation accounts for repetitive hallucinations. We suggest that these impairments are underpinned by disturbances in a lateral frontal cortex–ventral visual stream system. We show how the frequency of RCVH in different diseases is related to the coexistence of attentional and visual perceptual impairments; how attentional and perceptual processes can account for their phenomenology; and that diseases and other states with high rates of RCVH have cholinergic dysfunction in both frontal cortex and the ventral visual stream. Several tests of the model are indicated, together with a number of treatment options that it generates.

