Crossmodal Recruitment of the Ventral Visual Stream in Congenital Blindness

2012 ◽  
Vol 2012 ◽  
pp. 1-9 ◽  
Author(s):  
Maurice Ptito ◽  
Isabelle Matteau ◽  
Arthur Zhi Wang ◽  
Olaf B. Paulson ◽  
Hartwig R. Siebner ◽  
...  

We used functional MRI (fMRI) to test the hypothesis that blind subjects recruit the ventral visual stream during nonhaptic tactile-form recognition. Congenitally blind and blindfolded sighted control subjects were scanned after they had been trained over four consecutive days to perform a tactile-form recognition task with the tongue display unit (TDU). Both groups learned the task at the same rate. In line with our hypothesis, the fMRI data showed that during nonhaptic shape recognition, blind subjects activated large portions of the ventral visual stream, including the cuneus, precuneus, inferotemporal (IT) cortex, lateral occipital tactile vision area (LOtv), and fusiform gyrus. Control subjects activated area LOtv and the precuneus but not the cuneus, IT, or fusiform gyrus. These results indicate that congenitally blind subjects recruit key regions in the ventral visual pathway during nonhaptic tactile shape discrimination. The activation of LOtv by nonhaptic tactile shape processing in blind and sighted subjects adds further support to the notion that this area subserves an abstract or supramodal representation of shape. Together with our previous findings, our data suggest that the segregation of the efferent projections of the primary visual cortex into a dorsal and a ventral visual stream is preserved in individuals blind from birth.




2018 ◽  
Author(s):  
Diana C. Dima ◽  
Krish D. Singh

Humans can rapidly extract information from faces even in challenging viewing conditions, yet the neural representations supporting this ability are still not well understood. Here, we manipulated the presentation duration of backward-masked facial expressions and used magnetoencephalography (MEG) to investigate the computations underpinning rapid face processing. Multivariate analyses revealed two stages in face perception, with the ventral visual stream encoding facial features prior to facial configuration. When presentation time was reduced, the emergence of sustained featural and configural representations was delayed. Importantly, these representations explained behaviour during an expression recognition task. Together, these results describe the adaptable system linking visual features, brain and behaviour during face perception.



2019 ◽  
Author(s):  
Sushrut Thorat

A mediolateral gradation in neural responses for images spanning animals to artificial objects is observed in the ventral temporal cortex (VTC). Which information streams drive this organisation is an ongoing debate. Recently, in Proklova et al. (2016), the visual shape and category (“animacy”) dimensions in a set of stimuli were dissociated using a behavioural measure of visual feature information. fMRI responses revealed a neural cluster (extra-visual animacy cluster - xVAC) which encoded category information unexplained by visual feature information, suggesting extra-visual contributions to the organisation in the ventral visual stream. We reassess these findings using Convolutional Neural Networks (CNNs) as models for the ventral visual stream. The visual features developed in the CNN layers can categorise the shape-matched stimuli from Proklova et al. (2016) in contrast to the behavioural measures used in the study. The category organisations in xVAC and VTC are explained to a large degree by the CNN visual feature differences, casting doubt over the suggestion that visual feature differences cannot account for the animacy organisation. To inform the debate further, we designed a set of stimuli with animal images to dissociate the animacy organisation driven by the CNN visual features from the degree of familiarity and agency (thoughtfulness and feelings). Preliminary results from a new fMRI experiment designed to understand the contribution of these non-visual features are presented.
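The core analysis described above asks whether CNN layer features alone can separate the animacy categories in a shape-matched stimulus set. As a hedged illustration of that logic only (not the authors' actual pipeline; the feature vectors below are synthetic stand-ins for CNN layer activations), one can compare within- and between-category distances in feature space:

```python
import numpy as np

def category_discriminability(features, labels):
    """Mean between-category minus mean within-category pairwise
    Euclidean distance in a feature space. Positive values indicate
    that the features separate the two categories.

    features: (n_stimuli, n_features) array
    labels:   (n_stimuli,) array of 0/1 category labels
    """
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    # Full pairwise Euclidean distance matrix
    diffs = features[:, None, :] - features[None, :, :]
    dist = np.sqrt((diffs ** 2).sum(-1))
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    within = dist[same & off_diag].mean()
    between = dist[~same].mean()
    return between - within

# Toy example: two synthetic "activation" clusters standing in for
# animate vs. inanimate stimuli (illustrative parameters only)
rng = np.random.default_rng(0)
animate = rng.normal(0.0, 1.0, size=(10, 50))
inanimate = rng.normal(3.0, 1.0, size=(10, 50))
X = np.vstack([animate, inanimate])
y = np.array([0] * 10 + [1] * 10)
print(category_discriminability(X, y) > 0)  # clusters are separable
```

A positive index here plays the role that above-chance category decoding from CNN features plays in the study: it shows the feature space itself carries category information, before any appeal to extra-visual signals.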



2013 ◽  
Vol 220 (1) ◽  
pp. 205-219 ◽  
Author(s):  
Julian Caspers ◽  
Nicola Palomero-Gallagher ◽  
Svenja Caspers ◽  
Axel Schleicher ◽  
Katrin Amunts ◽  
...  


2010 ◽  
Vol 30 (49) ◽  
pp. 16601-16608 ◽  
Author(s):  
T. Egner ◽  
J. M. Monti ◽  
C. Summerfield


NeuroImage ◽  
2016 ◽  
Vol 128 ◽  
pp. 316-327 ◽  
Author(s):  
Marianna Boros ◽  
Jean-Luc Anton ◽  
Catherine Pech-Georgel ◽  
Jonathan Grainger ◽  
Marcin Szwed ◽  
...  


2018 ◽  
Author(s):  
Simona Monaco ◽  
Giulia Malfatti ◽  
Alessandro Zendron ◽  
Elisa Pellencin ◽  
Luca Turella

Predictions of upcoming movements are based on several types of neural signals that span the visual, somatosensory, motor and cognitive systems. Thus far, pre-movement signals have been investigated while participants viewed the object to be acted upon. Here, we studied the contribution of information other than vision to the classification of preparatory signals for action, even in the absence of online visual information. We used functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) to test whether the neural signals evoked by visual, memory-based and somato-motor information can be reliably used to predict upcoming actions in areas of the dorsal and ventral visual stream during the preparatory phase preceding the action, while participants were lying still. Nineteen human participants (nine women) performed one of two actions towards an object with their eyes open or closed. Despite the well-known role of ventral stream areas in visual recognition tasks and the specialization of dorsal stream areas in somato-motor processes, we decoded action intention in areas of both streams based on visual, memory-based and somato-motor signals. Interestingly, we could reliably decode action intention in the absence of visual information based on neural activity evoked when visual information was available, and vice versa. Our results show a similar visual, memory and somato-motor representation of action planning in dorsal and ventral visual stream areas that allows predicting action intention across domains, regardless of the availability of visual information.
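The MVPA decoding described above can be sketched in miniature. This is a minimal, hedged illustration of leave-one-out nearest-centroid pattern classification on synthetic data (real MVPA pipelines typically cross-validate across scanner runs and may use other classifiers; the voxel counts and noise levels here are arbitrary):

```python
import numpy as np

def decode_loo(patterns, labels):
    """Leave-one-out nearest-centroid decoding of multivoxel patterns.

    patterns: (n_trials, n_voxels) array of activity estimates
    labels:   (n_trials,) condition labels (e.g. 0 = action A, 1 = action B)
    Returns the fraction of left-out trials classified correctly.
    """
    patterns = np.asarray(patterns, dtype=float)
    labels = np.asarray(labels)
    correct = 0
    for i in range(len(labels)):
        train = np.ones(len(labels), dtype=bool)
        train[i] = False
        # Mean pattern (centroid) per condition in the training trials
        centroids = {c: patterns[train & (labels == c)].mean(axis=0)
                     for c in np.unique(labels[train])}
        # Assign the left-out trial to the nearest centroid
        pred = min(centroids,
                   key=lambda c: np.linalg.norm(patterns[i] - centroids[c]))
        correct += int(pred == labels[i])
    return correct / len(labels)

# Toy data: two conditions, each a fixed pattern plus trial-by-trial noise
rng = np.random.default_rng(1)
pattern_a = rng.normal(0, 1, 100)
pattern_b = rng.normal(0, 1, 100)
trials_a = pattern_a + rng.normal(0, 0.5, (12, 100))
trials_b = pattern_b + rng.normal(0, 0.5, (12, 100))
X = np.vstack([trials_a, trials_b])
y = np.array([0] * 12 + [1] * 12)
acc = decode_loo(X, y)
print(acc)  # well above the 0.5 chance level
```

Cross-domain decoding, as in the study, would train the classifier on trials from one condition (e.g. eyes open) and test it on trials from the other (eyes closed).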



2014 ◽  
Vol 14 (10) ◽  
pp. 985-985 ◽  
Author(s):  
R. Lafer-Sousa ◽  
A. Kell ◽  
A. Takahashi ◽  
J. Feather ◽  
B. Conway ◽  
...  


eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Thomas SA Wallis ◽  
Christina M Funke ◽  
Alexander S Ecker ◽  
Leon A Gatys ◽  
Felix A Wichmann ◽  
...  

We subjectively perceive our visual field with high fidelity, yet peripheral distortions can go unnoticed and peripheral objects can be difficult to identify (crowding). Prior work showed that humans could not discriminate images synthesised to match the responses of a mid-level ventral visual stream model when information was averaged in receptive fields with a scaling of about half their retinal eccentricity. This result implicated ventral visual area V2, approximated ‘Bouma’s Law’ of crowding, and has subsequently been interpreted as a link between crowding zones, receptive field scaling, and our perceptual experience. However, this experiment never assessed natural images. We find that humans can easily discriminate real and model-generated images at V2 scaling, requiring scales at least as small as V1 receptive fields to generate metamers. We speculate that explaining why scenes look as they do may require incorporating segmentation and global organisational constraints in addition to local pooling.
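The pooling model above averages image information in windows whose size grows linearly with retinal eccentricity, and the scaling factor (about 0.5 for V2-like 'Bouma' pooling, smaller for V1) sets how many distinct pooling regions tile the visual field. A minimal sketch, assuming abutting windows with diameter = scaling × eccentricity and an arbitrary foveal starting point (illustrative parameters, not the published model):

```python
import numpy as np

def pooling_radii(max_ecc_deg, scaling):
    """Eccentricities (deg) of abutting pooling windows whose diameter
    grows linearly with eccentricity: diameter = scaling * eccentricity.
    Returns the window centres from near fixation out to max_ecc_deg.
    """
    centres = []
    ecc = 0.5  # start just outside the fovea (assumed value, in degrees)
    while ecc < max_ecc_deg:
        centres.append(ecc)
        ecc += scaling * ecc  # next window abuts the current one
    return np.array(centres)

# Bouma-like V2 scaling (~0.5) vs. a smaller V1-like scaling (~0.25)
v2 = pooling_radii(20, 0.5)
v1 = pooling_radii(20, 0.25)
print(len(v1) > len(v2))  # smaller windows => more, finer pooling regions
```

The paper's finding corresponds to the smaller-scaling case: for natural scenes, metamers require pooling regions at least as fine as the V1-like tiling, not the coarser V2-like one.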



2019 ◽  
Vol 224 (9) ◽  
pp. 3291-3308 ◽  
Author(s):  
Simona Monaco ◽  
Giulia Malfatti ◽  
Alessandro Zendron ◽  
Elisa Pellencin ◽  
Luca Turella

