ventral visual stream
Recently Published Documents

TOTAL DOCUMENTS: 133 (FIVE YEARS: 50)
H-INDEX: 23 (FIVE YEARS: 4)

2021, pp. 1-16
Author(s): Tao He, David Richter, Zhiguo Wang, Floris P. de Lange

Abstract: Both spatial and temporal context play an important role in visual perception and behavior. Humans can extract statistical regularities from both forms of context to help process the present and to construct expectations about the future. Numerous studies have found reduced neural responses to expected stimuli compared with unexpected stimuli, for both spatial and temporal regularities, yet it is largely unclear whether and how these two forms of context interact. In the current fMRI study, 33 human volunteers were exposed to pairs of object stimuli that could be expected or surprising in terms of their spatial and temporal context. We found reliable, independent contributions of both spatial and temporal context in modulating the neural response: responses to stimuli in expected contexts were suppressed, relative to unexpected contexts, throughout the ventral visual stream. These results suggest that spatial and temporal context may aid sensory processing in a similar fashion, providing evidence on how different types of context jointly modulate perceptual processing.
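
The factorial logic behind this result can be illustrated with a short analysis sketch. The snippet below is a hypothetical reconstruction, not the study's pipeline: it simulates per-subject mean responses for the four combinations of spatial and temporal expectation and tests one main effect with a one-sample t-test on the condition differences.

```python
# Hypothetical sketch of the 2x2 design implied by the abstract. All condition
# values are simulated; the study's actual data and pipeline are not reproduced.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 33

# Simulated mean BOLD response per condition (one value per subject), with
# mild suppression for expected stimuli along both context dimensions.
base = rng.normal(1.0, 0.1, size=n_subjects)
resp = {
    ("spatial_exp", "temporal_exp"): base - 0.10 + rng.normal(0, 0.05, n_subjects),
    ("spatial_exp", "temporal_unexp"): base - 0.05 + rng.normal(0, 0.05, n_subjects),
    ("spatial_unexp", "temporal_exp"): base - 0.05 + rng.normal(0, 0.05, n_subjects),
    ("spatial_unexp", "temporal_unexp"): base + rng.normal(0, 0.05, n_subjects),
}

# Main effect of spatial expectation: unexpected minus expected, averaged
# over the temporal conditions; positive values indicate suppression.
spatial_diff = (
    (resp[("spatial_unexp", "temporal_exp")] + resp[("spatial_unexp", "temporal_unexp")]) / 2
    - (resp[("spatial_exp", "temporal_exp")] + resp[("spatial_exp", "temporal_unexp")]) / 2
)
t, p = stats.ttest_1samp(spatial_diff, 0.0)
print(f"spatial expectation suppression: t={t:.2f}, p={p:.4f}")
```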


2021, Vol 12 (1)
Author(s): Seungdae Baek, Min Song, Jaeson Jang, Gwangsu Kim, Se-Bum Paik

Abstract: Face-selective neurons are observed in the primate visual pathway and are considered the basis of face detection in the brain. However, it has been debated whether this neuronal selectivity can arise innately or whether it requires training through visual experience. Here, using a hierarchical deep neural network model of the ventral visual stream, we suggest a mechanism by which face-selectivity arises in the complete absence of training. We found that units selective to faces emerge robustly in randomly initialized networks and that these units reproduce many characteristics observed in monkeys. This innate selectivity also enables the untrained network to perform face-detection tasks. Intriguingly, we observed that units selective to various non-face objects can also arise innately in untrained networks. Our results imply that the random feedforward connections in early, untrained deep neural networks may be sufficient for initializing primitive visual selectivity.
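
A minimal sketch of this kind of selectivity analysis is shown below, under assumed inputs: unit responses of an untrained convolutional network to face versus non-face image sets are compared with a d-prime selectivity index. The random image tensors are placeholders standing in for real stimulus sets, and the network choice and threshold are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torchvision.models as models

# Randomly initialized convolutional features: no pretrained weights, no training.
net = models.alexnet(weights=None).features.eval()

faces = torch.rand(32, 3, 224, 224)    # placeholder "face" images
objects = torch.rand(32, 3, 224, 224)  # placeholder "non-face" images

with torch.no_grad():
    r_face = net(faces).flatten(1)     # unit responses, shape (n_images, n_units)
    r_obj = net(objects).flatten(1)

# d' selectivity index per unit: standardized difference of mean responses.
mu_f, mu_o = r_face.mean(0), r_obj.mean(0)
var = 0.5 * (r_face.var(0) + r_obj.var(0))
d_prime = (mu_f - mu_o) / (var.sqrt() + 1e-8)

# Units with d' above some threshold would be labeled face-selective.
print("candidate face-selective units:", int((d_prime > 0.5).sum()))
```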


2021, Vol 12 (1)
Author(s): Irina Higgins, Le Chang, Victoria Langston, Demis Hassabis, Christopher Summerfield, ...

Abstract: To better understand how the brain perceives faces, it is important to know what objective drives learning in the ventral visual stream. To answer this question, we model neural responses to faces in the macaque inferotemporal (IT) cortex with a deep self-supervised generative model, β-VAE, which disentangles sensory data into interpretable latent factors, such as gender or age. Our results demonstrate a strong correspondence between the generative factors discovered by β-VAE and those coded by single IT neurons, beyond that found for the baselines, including the handcrafted state-of-the-art model of face perception, the Active Appearance Model, and deep classifiers. Moreover, β-VAE is able to reconstruct novel face images using signals from just a handful of cells. Together, our results imply that optimising the disentangling objective leads to representations that closely resemble those in IT at the single-unit level, pointing to disentangling as a plausible learning objective for the visual brain.
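
The β-VAE objective itself is compact: a standard VAE loss with the KL term upweighted by β, which pressures the latent factors toward disentangling. The sketch below is a generic illustration of that objective, not the paper's code; the encoder/decoder sizes and data are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar

def beta_vae_loss(x, x_hat, mu, logvar, beta=4.0):
    recon = F.mse_loss(x_hat, x, reduction="sum")                  # reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())   # KL to unit Gaussian
    return recon + beta * kl  # beta > 1 encourages disentangled latents

x = torch.rand(16, 784)            # placeholder image batch
model = BetaVAE()
print(beta_vae_loss(x, *model(x)).item())
```

Matching the resulting latents to single neurons then reduces to a correlation analysis between latent activations and recorded responses over a shared stimulus set.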


Author(s): Maya L. Rosen, Lucy A. Lurie, Kelly A. Sambrook, Andrew N. Meltzoff, Katie A. McLaughlin

2021, Vol 21 (9), pp. 2809
Author(s): Daniel Guest, Emily Allen, Yihan Wu, Thomas Naselaris, Michael Arcaro, ...

2021
Author(s): Javier Orlandi, Mohammad Abdolrahmani, Ryo Aoki, Dmitry Lyamzin, Andrea Benucci

Abstract: Choice information appears in multi-area brain networks mixed with sensory, motor, and cognitive variables. In the posterior cortex (traditionally implicated in decision computations), the presence, strength, and area specificity of choice signals are highly variable, limiting a cohesive understanding of their computational significance. Examining mesoscale activity in the mouse posterior cortex during a visual task, we found that choice signals defined a decision variable in a low-dimensional embedding space, with a prominent contribution along the ventral visual stream. Their subspace was near-orthogonal to concurrently represented sensory and motor-related activations and was modulated by task difficulty and by the animals' attention state. A recurrent neural network trained with the animals' choices revealed an equivalent decision variable whose context-dependent dynamics agreed with those of the neural data. Our results demonstrate an independent, multi-area decision variable in the posterior cortex, controlled by task features and cognitive demands and possibly linked to contextual-inference computations in dynamic animal–environment interactions.
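
The near-orthogonality claim can be made concrete with principal angles between subspaces, a standard way to quantify how independent two population-activity subspaces are. The snippet below is a hypothetical illustration with simulated bases, not the paper's analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 200

# Orthonormal bases for two activity subspaces (e.g., a 3-D choice subspace
# and a 5-D sensory/motor subspace), here drawn at random for illustration.
choice_basis, _ = np.linalg.qr(rng.normal(size=(n_neurons, 3)))
motor_basis, _ = np.linalg.qr(rng.normal(size=(n_neurons, 5)))

# Principal angles: the singular values of the cross-projection matrix are
# the cosines of the angles between the two subspaces.
cosines = np.linalg.svd(choice_basis.T @ motor_basis, compute_uv=False)
angles_deg = np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))
print("principal angles (deg):", np.round(angles_deg, 1))
# Angles near 90 degrees indicate near-orthogonal representations.
```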


2021
Author(s): Hayley E Pickering, Jessica L Peters, Sheila Crewther

The literature examining the role of visual memory in vocabulary development during childhood is limited, despite it being well established that preverbal infants rely on their visual abilities to form memories and learn new words. Hence, this systematic review and meta-analysis took a cognitive neuroscience perspective to examine the association between visual memory and vocabulary development, including moderators such as age and task selection, in neurotypical children aged 2 to 12 years. Visual memory tasks were classified as spatio-temporal span tasks, visuo-perceptual or spatial concurrent array tasks, and executive judgment tasks. Visuo-perceptual concurrent array tasks, expected to rely on ventral visual stream processing, showed a moderate association with vocabulary, whereas tasks measuring spatio-temporal spans, expected to depend on dorsal visual stream processing, and executive judgments (central executive) showed only weak correlations with vocabulary. These findings have important implications for health professionals and researchers interested in language, as they can support the development of more targeted language-learning interventions that engage ventral visual stream processing.
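
For readers unfamiliar with how such per-task pooled associations are computed, the core step is combining per-study correlations on the Fisher z scale. The sketch below shows simple fixed-effect, inverse-variance pooling with entirely made-up study values; it is a generic illustration, not the review's data or its exact (moderator-adjusted) model.

```python
import numpy as np

r = np.array([0.42, 0.35, 0.28, 0.51])  # per-study correlations (hypothetical)
n = np.array([60, 45, 80, 30])          # per-study sample sizes (hypothetical)

z = np.arctanh(r)        # Fisher z transform stabilizes the variance
w = n - 3                # inverse of var(z) = 1 / (n - 3)
z_pooled = np.sum(w * z) / np.sum(w)
r_pooled = np.tanh(z_pooled)            # back-transform to a correlation
print(f"pooled correlation: {r_pooled:.2f}")
```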


2021, Vol 11 (8), pp. 1004
Author(s): Jingwei Li, Chi Zhang, Linyuan Wang, Penghui Ding, Lulu Hu, ...

Visual encoding models are important computational tools for understanding how information is processed along the visual stream. Many improved visual encoding models have been developed from the perspective of model architecture and learning objective, but most are limited to supervised learning. Taking an unsupervised-learning view instead, this paper used a pre-trained neural network to construct a visual encoding model based on contrastive self-supervised learning for the ventral visual stream, as measured by functional magnetic resonance imaging (fMRI). We first extracted features using a ResNet50 model pre-trained with contrastive self-supervised learning (the ResNet50-CSL model), then trained a linear regression model for each voxel, and finally calculated the prediction accuracy for each voxel. Compared with a ResNet50 model pre-trained on a supervised classification task, the ResNet50-CSL model achieved equal or even slightly better encoding performance in multiple visual cortical areas. Moreover, the ResNet50-CSL model forms hierarchical representations of input visual stimuli, similar to the hierarchical information processing of the human visual cortex. Our experimental results suggest that an encoding model based on contrastive self-supervised learning can compete with supervised models, and that contrastive self-supervised learning is an effective way to extract human-brain-like representations.
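
The voxel-wise pipeline described here (features, then a linear map per voxel, then held-out prediction accuracy) is schematic enough to sketch. In the snippet below the feature and fMRI matrices are random placeholders, and ridge regression stands in for the unspecified linear model; it illustrates the pipeline's shape, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_train, n_test, n_feat, n_voxels = 800, 200, 512, 100

X_train = rng.normal(size=(n_train, n_feat))    # e.g., ResNet50-CSL features
X_test = rng.normal(size=(n_test, n_feat))
Y_train = rng.normal(size=(n_train, n_voxels))  # voxel responses per stimulus
Y_test = rng.normal(size=(n_test, n_voxels))

# Fit one linear map per voxel (Ridge handles multi-output targets directly).
model = Ridge(alpha=1.0).fit(X_train, Y_train)
Y_pred = model.predict(X_test)

# Prediction accuracy per voxel: Pearson r between predicted and measured
# responses on held-out stimuli.
acc = np.array([np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1]
                for v in range(n_voxels)])
print(f"mean voxel accuracy: {acc.mean():.3f}")
```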


2021, pp. JN-RM-2137-20
Author(s): Viola Mocz, Maryam Vaziri-Pashkam, Marvin Chun, Yaoda Xu
