Dynamic representations of faces in the human ventral visual stream link visual features to behaviour

2018 ◽  
Author(s):  
Diana C. Dima ◽  
Krish D. Singh

Abstract
Humans can rapidly extract information from faces even in challenging viewing conditions, yet the neural representations supporting this ability are still not well understood. Here, we manipulated the presentation duration of backward-masked facial expressions and used magnetoencephalography (MEG) to investigate the computations underpinning rapid face processing. Multivariate analyses revealed two stages in face perception, with the ventral visual stream encoding facial features prior to facial configuration. When presentation time was reduced, the emergence of sustained featural and configural representations was delayed. Importantly, these representations explained behaviour during an expression recognition task. Together, these results describe the adaptable system linking visual features, brain and behaviour during face perception.
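A minimal sketch of the time-resolved multivariate decoding logic described above, assuming synthetic data in place of the actual MEG recordings; the trial counts, sensor dimensions, classifier choice, and injected "signal" are illustrative assumptions, not the authors' pipeline.

```python
# Time-resolved decoding: train a classifier at each time sample and track
# when class information (e.g. facial expression) becomes readable.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 64, 100   # hypothetical dimensions
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)              # two expression conditions

# Inject a weak class difference from sample 30 onward to mimic a
# representation emerging after stimulus onset.
X[y == 1, :10, 30:] += 0.4

clf = make_pipeline(StandardScaler(), LinearSVC())
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print("decoding first exceeds 60% at sample", int(np.argmax(accuracy > 0.6)))
```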


2019 ◽  
Author(s):  
Sushrut Thorat

A mediolateral gradation in neural responses to images spanning animals to artificial objects is observed in the ventral temporal cortex (VTC). Which information streams drive this organisation is an ongoing debate. Recently, Proklova et al. (2016) dissociated the visual shape and category ("animacy") dimensions in a set of stimuli using a behavioural measure of visual feature information. fMRI responses revealed a neural cluster (extra-visual animacy cluster, xVAC) which encoded category information unexplained by visual feature information, suggesting extra-visual contributions to the organisation of the ventral visual stream. We reassess these findings using convolutional neural networks (CNNs) as models of the ventral visual stream. Unlike the behavioural measures used in that study, the visual features developed in the CNN layers can categorise the shape-matched stimuli from Proklova et al. (2016). The category organisations in xVAC and VTC are explained to a large degree by differences in CNN visual features, casting doubt on the suggestion that visual feature differences cannot account for the animacy organisation. To inform the debate further, we designed a stimulus set of animal images to dissociate the animacy organisation driven by CNN visual features from the degree of familiarity and agency (thoughtfulness and feelings). Preliminary results from a new fMRI experiment designed to assess the contribution of these non-visual features are presented.
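The CNN analysis can be sketched roughly as below: extract activations from a pretrained network and test whether a linear readout separates animate from inanimate items. The random tensors stand in for the shape-matched stimulus set, and AlexNet is just one plausible choice of network, not necessarily the one used in the study.

```python
# Ask whether CNN layer features linearly separate animacy categories.
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

cnn = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

n_stimuli = 48
images = torch.rand(n_stimuli, 3, 224, 224)   # placeholders for real stimuli
labels = [0] * 24 + [1] * 24                  # animate vs. inanimate

with torch.no_grad():
    # Activations of the last convolutional block, flattened per image.
    feats = cnn.features(images).flatten(1).numpy()

acc = cross_val_score(LogisticRegression(max_iter=1000), feats, labels, cv=4)
print(f"cross-validated animacy decoding from CNN features: {acc.mean():.2f}")
```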



2017 ◽  
Author(s):  
Chris B Martin ◽  
Danielle Douglas ◽  
Rachel N Newsome ◽  
Louisa LY Man ◽  
Morgan D Barense

Abstract
A tremendous body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully-specified object concepts through the integration of their visual and conceptual features.
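The comparison logic can be illustrated with a small representational similarity sketch: correlate a region's neural dissimilarity matrix (RDM) with separate visual and conceptual model RDMs. All matrices below are synthetic, and the variable names are illustrative rather than the authors'.

```python
# Correlate a (simulated) neural RDM with visual and conceptual model RDMs.
import numpy as np
from scipy.spatial.distance import squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_objects = 40

def random_rdm():
    """A symmetric dissimilarity matrix with a zero diagonal."""
    m = rng.random((n_objects, n_objects))
    m = (m + m.T) / 2
    np.fill_diagonal(m, 0)
    return m

visual_rdm, conceptual_rdm = random_rdm(), random_rdm()

# An "integrative" region should track both models; simulate that here.
neural_rdm = 0.5 * visual_rdm + 0.5 * conceptual_rdm + 0.2 * random_rdm()

for name, model in [("visual", visual_rdm), ("conceptual", conceptual_rdm)]:
    rho, _ = spearmanr(squareform(neural_rdm), squareform(model))
    print(f"{name} model vs. neural RDM: rho = {rho:.2f}")
```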



2017 ◽  
Author(s):  
Radoslaw M. Cichy ◽  
Nikolaus Kriegeskorte ◽  
Kamila M. Jozwik ◽  
Jasper J.F. van den Bosch ◽  
Ian Charest

Abstract
Vision involves complex neuronal dynamics that link the sensory stream to behaviour. To capture the richness and complexity of the visual world and the behaviour it entails, we used an ecologically valid task with a rich set of real-world object images. We investigated how human brain activity, resolved in space with functional MRI and in time with magnetoencephalography, links the sensory stream to behavioural responses. We found that behaviour-related brain activity emerged rapidly in the ventral visual pathway, within 200 ms of stimulus onset. The link between stimuli, brain activity, and behaviour could not be accounted for by either category membership or visual features (as provided by an artificial deep neural network model). Our results identify behaviourally relevant brain activity during object vision, and suggest that object representations guiding behaviour are complex and cannot be explained by visual features or semantic categories alone. Our findings support the view that visual representations in the ventral visual stream need to be understood in terms of their relevance to behaviour, and they highlight the importance of complex behavioural assessment for human brain mapping.
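One way to operationalise the "not accounted for by visual features" claim is as a partial correlation: remove the DNN-feature contribution from both the brain and behaviour dissimilarities and ask whether a reliable link remains. The sketch below assumes synthetic condensed RDMs and a simple least-squares residualisation; the authors' actual statistics are not reproduced here.

```python
# Partial out a DNN-feature model from a brain-behaviour RDM correlation.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_pairs = 500                       # condition pairs (condensed RDM entries)

dnn = rng.standard_normal(n_pairs)                         # DNN model RDM
brain = 0.4 * dnn + rng.standard_normal(n_pairs)           # neural RDM
behaviour = 0.4 * dnn + 0.5 * brain + rng.standard_normal(n_pairs)

def residualise(x, covariate):
    """Remove the least-squares contribution of `covariate` from `x`."""
    beta = np.dot(x, covariate) / np.dot(covariate, covariate)
    return x - beta * covariate

rho_raw, _ = spearmanr(brain, behaviour)
rho_partial, _ = spearmanr(residualise(brain, dnn),
                           residualise(behaviour, dnn))
print(f"raw rho = {rho_raw:.2f}, DNN partialled out = {rho_partial:.2f}")
```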



2019 ◽  
Vol 116 (36) ◽  
pp. 17723-17728 ◽  
Author(s):  
J. S. H. Taylor ◽  
Matthew H. Davis ◽  
Kathleen Rastle

Reading involves transforming arbitrary visual symbols into sounds and meanings. This study interrogated the neural representations in ventral occipitotemporal cortex (vOT) that support this transformation process. Twenty-four adults learned to read 2 sets of 24 novel words that shared phonemes and semantic categories but were written in different artificial orthographies. Following 2 wk of training, participants read the trained words while neural activity was measured with functional MRI. Representational similarity analysis on item pairs from the same orthography revealed that right vOT and posterior regions of left vOT were sensitive to basic visual similarity. Left vOT encoded letter identity and representations became more invariant to position along a posterior-to-anterior hierarchy. Item pairs that shared sounds or meanings, but were written in different orthographies with no letters in common, evoked similar neural patterns in anterior left vOT. These results reveal a hierarchical, posterior-to-anterior gradient in vOT, in which representations of letters become increasingly invariant to position and are transformed to convey spoken language information.
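The posterior-to-anterior gradient implies two letter-coding models: a position-specific code, in which letters must match in the same slot, and a position-invariant code, in which shared letters count regardless of position. A toy comparison, with made-up items standing in for the artificial-orthography words:

```python
# Two toy similarity measures for written items: position-specific overlap
# versus position-invariant letter-set overlap.
from itertools import combinations

words = ["bam", "mab", "bat", "tab"]   # hypothetical trained items

def position_specific(w1, w2):
    return sum(a == b for a, b in zip(w1, w2)) / max(len(w1), len(w2))

def position_invariant(w1, w2):
    return len(set(w1) & set(w2)) / len(set(w1) | set(w2))

# "bam" vs "mab" share all letters but only one position: the invariant
# measure is 1.0 while the position-specific measure is 0.33.
for w1, w2 in combinations(words, 2):
    print(f"{w1}-{w2}: specific={position_specific(w1, w2):.2f}, "
          f"invariant={position_invariant(w1, w2):.2f}")
```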



eLife ◽  
2018 ◽  
Vol 7 ◽  
Author(s):  
Chris B Martin ◽  
Danielle Douglas ◽  
Rachel N Newsome ◽  
Louisa LY Man ◽  
Morgan D Barense

A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features.



2012 ◽  
Vol 2012 ◽  
pp. 1-9 ◽  
Author(s):  
Maurice Ptito ◽  
Isabelle Matteau ◽  
Arthur Zhi Wang ◽  
Olaf B. Paulson ◽  
Hartwig R. Siebner ◽  
...  

We used functional MRI (fMRI) to test the hypothesis that blind subjects recruit the ventral visual stream during nonhaptic tactile-form recognition. Congenitally blind and blindfolded sighted control subjects were scanned after they had been trained over four consecutive days to perform a tactile-form recognition task with the tongue display unit (TDU). Both groups learned the task at the same rate. In line with our hypothesis, the fMRI data showed that during nonhaptic shape recognition, blind subjects activated large portions of the ventral visual stream, including the cuneus, precuneus, inferotemporal (IT) cortex, lateral occipital tactile-visual area (LOtv), and fusiform gyrus. Control subjects activated area LOtv and the precuneus, but not the cuneus, IT cortex, or fusiform gyrus. These results indicate that congenitally blind subjects recruit key regions of the ventral visual pathway during nonhaptic tactile shape discrimination. The activation of LOtv by nonhaptic tactile shape processing in blind and sighted subjects adds further support to the notion that this area subserves an abstract or supramodal representation of shape. Together with our previous findings, our data suggest that the segregation of the efferent projections of the primary visual cortex into dorsal and ventral visual streams is preserved in individuals blind from birth.



2014 ◽  
Vol 111 (1) ◽  
pp. 91-102 ◽  
Author(s):  
Leyla Isik ◽  
Ethan M. Meyers ◽  
Joel Z. Leibo ◽  
Tomaso Poggio

The human visual system can rapidly recognize objects despite transformations that alter their appearance. The precise timing of when the brain computes neural representations that are invariant to particular transformations, however, has not been mapped in humans. Here we employ magnetoencephalography decoding analysis to measure how size- and position-invariant visual information develops in the ventral visual stream. With this method we can read out the identity of objects beginning as early as 60 ms. Size- and position-invariant visual information appear around 125 ms and 150 ms, respectively, and both develop in stages, with invariance to smaller transformations arising before invariance to larger transformations. Additionally, the magnetoencephalography sensor activity localizes to neural sources that lie in the most posterior occipital regions at the early decoding times and then shift toward temporal regions as invariant information develops. These results provide previously unknown latencies for key stages of invariant object recognition in humans, as well as new and compelling evidence for a feedforward hierarchical model of invariant object recognition in which invariance increases at each successive visual area along the ventral stream.
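Invariance of this kind is commonly tested with cross-condition generalization: train a decoder on objects at one size and test it at another, where above-chance transfer indicates a size-invariant code. A sketch with synthetic MEG-like patterns (all dimensions and effect sizes are assumptions):

```python
# Train on one stimulus size, test on another: successful transfer implies
# a size-invariant object representation.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n_trials, n_sensors = 100, 64
y = rng.integers(0, 2, n_trials)          # two object identities

def simulate(size_gain):
    """MEG-like sensor patterns whose object code survives a size change."""
    X = rng.standard_normal((n_trials, n_sensors))
    X[y == 1, :8] += 0.8 * size_gain      # shared object signal, rescaled
    return X

X_small, X_large = simulate(1.0), simulate(0.9)

clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_small, y)
print(f"train small, test large: accuracy = {clf.score(X_large, y):.2f}")
```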



2010 ◽  
Vol 30 (49) ◽  
pp. 16601-16608 ◽  
Author(s):  
T. Egner ◽  
J. M. Monti ◽  
C. Summerfield


NeuroImage ◽  
2016 ◽  
Vol 128 ◽  
pp. 316-327 ◽  
Author(s):  
Marianna Boros ◽  
Jean-Luc Anton ◽  
Catherine Pech-Georgel ◽  
Jonathan Grainger ◽  
Marcin Szwed ◽  
...  


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Ziqiang Wang ◽  
Xia Sun ◽  
Lijun Sun ◽  
Yuchun Huang

In many image classification applications, it is common to extract multiple visual features from different views to describe an image. Since different visual features have their own statistical properties and discriminative power for image classification, the conventional solution for multiview data is to concatenate the feature vectors into a single new feature vector. However, this simple concatenation strategy not only ignores the complementary nature of the different views but also suffers from the "curse of dimensionality." To address this problem, we propose a novel multiview subspace learning algorithm, named multiview discriminative geometry preserving projection (MDGPP), for feature extraction and classification. MDGPP not only preserves intraclass geometry and interclass discrimination information within a single view, but also exploits the complementary properties of different views to obtain a low-dimensional optimal consensus embedding via an alternating-optimization-based iterative algorithm. Experimental results on face recognition and facial expression recognition demonstrate the effectiveness of the proposed algorithm.
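MDGPP itself is not specified in this listing, so the sketch below shows only a two-view baseline in the same spirit: rather than concatenating feature vectors, learn a low-dimensional consensus embedding across views, here with CCA standing in for the paper's alternating-optimization procedure.

```python
# A consensus embedding across two feature views, as an alternative to
# naive concatenation (which here would yield 140 dimensions).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(4)
n_samples, latent_dim = 200, 5

z = rng.standard_normal((n_samples, latent_dim))       # shared structure
view1 = z @ rng.standard_normal((latent_dim, 60)) \
    + 0.1 * rng.standard_normal((n_samples, 60))
view2 = z @ rng.standard_normal((latent_dim, 80)) \
    + 0.1 * rng.standard_normal((n_samples, 80))

cca = CCA(n_components=latent_dim)
emb1, emb2 = cca.fit_transform(view1, view2)
consensus = (emb1 + emb2) / 2      # low-dimensional consensus embedding
print(consensus.shape)             # (200, 5)
```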


