Distributed context-dependent choice information in mouse posterior cortex

Author(s):  
Javier Orlandi ◽  
Mohammad Abdolrahmani ◽  
Ryo Aoki ◽  
Dmitry Lyamzin ◽  
Andrea Benucci

Abstract Choice information appears in multi-area brain networks mixed with sensory, motor, and cognitive variables. In the posterior cortex—traditionally implicated in decision computations—the presence, strength, and area specificity of choice signals are highly variable, limiting a cohesive understanding of their computational significance. Examining the mesoscale activity in the mouse posterior cortex during a visual task, we found that choice signals defined a decision variable in a low-dimensional embedding space with a prominent contribution along the ventral visual stream. Their subspace was near-orthogonal to concurrently represented sensory and motor-related activations, with modulations by task difficulty and by the animals’ attention state. A recurrent neural network trained with animals’ choices revealed an equivalent decision variable whose context-dependent dynamics agreed with that of the neural data. Our results demonstrated an independent, multi-area decision variable in the posterior cortex, controlled by task features and cognitive demands, possibly linked to contextual inference computations in dynamic animal–environment interactions.
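
The subspace result above can be illustrated with a short sketch: estimate a choice axis and a stimulus axis with linear decoders and measure the angle between them, with near-90° angles indicating near-orthogonal representations. This is a hypothetical illustration on stand-in data, not the authors' pipeline; the array names and shapes (X, choice, stimulus) are assumptions.

```python
# Hypothetical sketch: estimate a "choice axis" and a "stimulus axis" from
# population activity with linear decoders, then check near-orthogonality.
# All data below are random stand-ins, not the study's recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_units = 400, 50
X = rng.normal(size=(n_trials, n_units))   # trial x unit activity (stand-in)
choice = rng.integers(0, 2, n_trials)      # animal's left/right choice per trial
stimulus = rng.integers(0, 2, n_trials)    # stimulus category per trial

def decoding_axis(X, y):
    """Unit-norm weight vector of a linear decoder for labels y."""
    w = LogisticRegression(max_iter=1000).fit(X, y).coef_.ravel()
    return w / np.linalg.norm(w)

choice_axis = decoding_axis(X, choice)
stim_axis = decoding_axis(X, stimulus)

# ~90 degrees would indicate near-orthogonal choice and sensory subspaces,
# as reported in the abstract.
angle = np.degrees(np.arccos(np.clip(np.abs(choice_axis @ stim_axis), 0, 1)))
print(f"choice-stimulus axis angle: {angle:.1f} deg")
```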


2021 ◽  
Author(s):  
Javier G. Orlandi ◽  
Mohammad Abdolrahmani ◽  
Ryo Aoki ◽  
Dmitry R. Lyamzin ◽  
Andrea Benucci

Choice information appears in the brain as distributed signals with top-down and bottom-up components that together support decision-making computations. In sensory and associative cortical regions, the presence of choice signals, their strength, and area specificity are known to be elusive and changeable, limiting a cohesive understanding of their computational significance. In this study, examining the mesoscale activity in mouse posterior cortex during a complex visual discrimination task, we found that broadly distributed choice signals defined a decision variable in a low-dimensional embedding space of multi-area activations, particularly along the ventral visual stream. The subspace they defined was near-orthogonal to concurrently represented sensory and motor-related activations, and it was modulated by task difficulty and contextually by the animals’ attention state. To mechanistically relate choice representations to decision-making computations, we trained recurrent neural networks with the animals’ choices and found an equivalent decision variable whose context-dependent dynamics agreed with that of the neural data. In conclusion, our results demonstrated an independent decision variable broadly represented in the posterior cortex, controlled by task features and cognitive demands. Its dynamics reflected decision computations, possibly linked to context-dependent feedback signals used for probabilistic-inference computations in variable animal-environment interactions.
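
The RNN approach can be sketched in a few lines: train a small rate network to report a binary choice from noisy evidence and read a scalar decision variable out of its hidden state. The architecture, sizes, and ideal-observer targets below are illustrative assumptions, not the paper's model (which was trained on the animals' actual choices).

```python
# Illustrative sketch (not the paper's model): a small RNN trained to
# report binary choices from noisy stimulus evidence. A "decision
# variable" is then read out from the hidden-state dynamics.
import torch
import torch.nn as nn

torch.manual_seed(0)
T, B = 50, 128  # timesteps per trial, batch size

class ChoiceRNN(nn.Module):
    def __init__(self, n_hidden=64):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, 1)  # scalar decision variable

    def forward(self, x):
        h, _ = self.rnn(x)                     # (B, T, n_hidden)
        return self.readout(h).squeeze(-1), h  # decision variable over time

model = ChoiceRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(500):
    coherence = torch.randn(B, 1, 1) * 0.5      # signed stimulus strength
    x = coherence + 0.5 * torch.randn(B, T, 1)  # noisy evidence stream
    target = (coherence.squeeze() > 0).float()  # stand-in for animal choices
    dv, h = model(x)
    loss = loss_fn(dv[:, -1], target)           # train end-of-trial decision
    opt.zero_grad()
    loss.backward()
    opt.step()

# dv[:, t] traces the decision variable's dynamics across the trial.
```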


2021 ◽  
Author(s):  
Talia Konkle ◽  
George A Alvarez

Anterior regions of the ventral visual stream have substantial information about object categories, prompting theories that category-level forces are critical for shaping visual representation. The strong correspondence between category-supervised deep neural networks and ventral stream representation supports this view, but does not provide a viable learning model, as these deepnets rely upon millions of labeled examples. Here we present a fully self-supervised model which instead learns to represent individual images, where views of the same image are embedded nearby in a low-dimensional feature space, distinctly from other recently encountered views. We find category information implicitly emerges in the feature space, and critically that these models achieve parity with category-supervised models in predicting the hierarchical structure of brain responses across the human ventral visual stream. These results provide computational support for learning instance-level representation as a viable goal of the ventral stream, offering an alternative to the category-based framework that has been dominant in visual cognitive neuroscience.
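
The instance-level objective can be sketched with an InfoNCE-style contrastive loss, in which two views of the same image are positives and the other images in the batch act as the recently encountered negatives. This is a generic illustration of the objective class, not the authors' exact model or augmentation pipeline.

```python
# Sketch of an instance-level contrastive loss (InfoNCE-style): two views
# of each image are embedded nearby, apart from other images in the batch.
import torch
import torch.nn.functional as F

def instance_contrastive_loss(z1, z2, temperature=0.1):
    """z1, z2: (N, d) embeddings of two views of the same N images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature    # (N, N) scaled cosine similarities
    targets = torch.arange(z1.size(0))  # matching view = positive pair
    return F.cross_entropy(logits, targets)

# Usage with stand-in encoder outputs:
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
print(instance_contrastive_loss(z1, z2).item())
```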


2019 ◽  
Author(s):  
Sushrut Thorat

A mediolateral gradation in neural responses to images spanning animals to artificial objects is observed in the ventral temporal cortex (VTC). Which information streams drive this organisation is an ongoing debate. Recently, in Proklova et al. (2016), the visual shape and category (“animacy”) dimensions in a set of stimuli were dissociated using a behavioural measure of visual feature information. fMRI responses revealed a neural cluster (the extra-visual animacy cluster, xVAC) that encoded category information unexplained by visual feature information, suggesting extra-visual contributions to the organisation of the ventral visual stream. We reassess these findings using convolutional neural networks (CNNs) as models of the ventral visual stream. The visual features developed in the CNN layers can categorise the shape-matched stimuli from Proklova et al. (2016), unlike the behavioural measures used in that study. The category organisations in xVAC and VTC are explained to a large degree by the CNN visual feature differences, casting doubt on the suggestion that visual feature differences cannot account for the animacy organisation. To inform the debate further, we designed a set of stimuli with animal images to dissociate the animacy organisation driven by the CNN visual features from the degree of familiarity and agency (thoughtfulness and feelings). Preliminary results from a new fMRI experiment designed to understand the contribution of these non-visual features are presented.
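
The CNN analysis can be sketched as a linear readout of category from a pretrained network's features. The backbone (a torchvision resnet18) and the random stand-in stimuli below are assumptions, not the study's networks or images.

```python
# Sketch: test whether CNN-layer features linearly separate animate vs
# inanimate stimuli. Backbone and "stimuli" are stand-ins for the study's.
import torch
from torchvision.models import resnet18
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

model = resnet18(weights="IMAGENET1K_V1").eval()  # downloads pretrained weights
backbone = torch.nn.Sequential(*list(model.children())[:-1])  # drop final fc

images = torch.randn(80, 3, 224, 224)  # stand-in stimulus set
labels = [0] * 40 + [1] * 40           # animate / inanimate labels

with torch.no_grad():
    feats = backbone(images).flatten(1).numpy()  # (80, 512) layer features

# Cross-validated decoding accuracy of category from visual features alone
acc = cross_val_score(LogisticRegression(max_iter=2000), feats, labels, cv=5)
print(f"category decoding accuracy: {acc.mean():.2f}")
```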


NeuroImage ◽  
2016 ◽  
Vol 128 ◽  
pp. 316-327 ◽  
Author(s):  
Marianna Boros ◽  
Jean-Luc Anton ◽  
Catherine Pech-Georgel ◽  
Jonathan Grainger ◽  
Marcin Szwed ◽  
...  

2021 ◽  
Author(s):  
Xiaohan Zhang ◽  
Shenquan Liu ◽  
Zhe Sage Chen

Abstract Prefrontal cortex plays a prominent role in performing flexible cognitive functions and working memory, yet the underlying computational principles remain poorly understood. Here we trained a rate-based recurrent neural network (RNN) to explore how context rules are encoded, maintained across a seconds-long mnemonic delay, and subsequently used in a context-dependent decision-making task. The trained networks reproduced key features observed experimentally in the prefrontal cortex (PFC) of rodents and monkeys, such as mixed selectivity, sparse representations, sequential neuronal activity, and rotation dynamics. To uncover the high-dimensional neural dynamical system, we further proposed a geometric framework to quantify and visualize population coding and sensory integration in a temporally defined manner. We employed dynamic epoch-wise principal component analysis (PCA) to define multiple task-specific subspaces and task-related axes, and computed the angles between the task-related axes and these subspaces. In low-dimensional neural representations, the trained RNN first encoded the context cues in a cue-specific subspace, then maintained the cue information in a stable low-activity state persisting through the delay epoch, and finally formed line attractors for sensory integration through low-dimensional neural trajectories that guided decision making. We demonstrated via intensive computer simulations that the geometric manifolds encoding the context information were robust to varying degrees of weight perturbation in both space and time. Overall, our analysis framework provides clear geometric interpretations and quantification of information coding, maintenance, and integration, yielding new insight into the computational mechanisms of context-dependent computation.
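
The epoch-wise subspace comparison described above can be sketched directly: fit PCA separately on activity from two task epochs and compute principal angles between the resulting subspaces, e.g. with scipy.linalg.subspace_angles. The random activity matrices below are stand-ins for real or simulated population data.

```python
# Sketch of an epoch-wise subspace comparison: fit PCA per task epoch and
# compute principal angles between the two subspaces. Data are stand-ins.
import numpy as np
from scipy.linalg import subspace_angles
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
cue_epoch = rng.normal(size=(300, 100))    # (timepoints x units) in cue epoch
delay_epoch = rng.normal(size=(300, 100))  # same units in delay epoch

k = 5                                      # subspace dimensionality
U_cue = PCA(k).fit(cue_epoch).components_.T      # (units, k) orthonormal basis
U_delay = PCA(k).fit(delay_epoch).components_.T

# Principal angles near 90 deg indicate near-orthogonal epoch subspaces;
# angles near 0 deg indicate a shared, stable coding subspace.
angles = np.degrees(subspace_angles(U_cue, U_delay))
print(angles)
```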


2018 ◽  
Author(s):  
Simona Monaco ◽  
Giulia Malfatti ◽  
Alessandro Zendron ◽  
Elisa Pellencin ◽  
Luca Turella

Abstract Predictions of upcoming movements are based on several types of neural signals that span the visual, somatosensory, motor, and cognitive systems. Thus far, pre-movement signals have been investigated while participants viewed the object to be acted upon. Here, we studied the contribution of information other than vision to the classification of preparatory signals for action, even in the absence of online visual information. We used functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) to test whether the neural signals evoked by visual, memory-based, and somato-motor information can reliably predict upcoming actions in areas of the dorsal and ventral visual streams during the preparatory phase preceding the action, while participants lay still. Nineteen human participants (nine women) performed one of two actions towards an object with their eyes open or closed. Despite the well-known role of ventral-stream areas in visual recognition tasks and the specialization of dorsal-stream areas in somato-motor processes, we decoded action intention in areas of both streams based on visual, memory-based, and somato-motor signals. Interestingly, we could reliably decode action intention in the absence of visual information based on neural activity evoked when visual information was available, and vice versa. Our results show a similar visual, memory, and somato-motor representation of action planning in dorsal and ventral visual stream areas, which allows predicting action intention across domains regardless of the availability of visual information.
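
The cross-domain decoding logic can be sketched as train-on-one-condition, test-on-the-other MVPA. The voxel patterns and labels below are random stand-ins, and the classifier choice (a linear SVM, common in MVPA but not confirmed by the abstract) is an assumption.

```python
# Sketch of cross-condition decoding: train a classifier on preparatory
# activity from eyes-open trials, test on eyes-closed trials, and vice
# versa. Voxel patterns here are random stand-ins.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_trials, n_voxels = 60, 200
X_open = rng.normal(size=(n_trials, n_voxels))    # eyes-open voxel patterns
X_closed = rng.normal(size=(n_trials, n_voxels))  # eyes-closed voxel patterns
y = rng.integers(0, 2, n_trials)                  # intended action per trial

clf = SVC(kernel="linear").fit(X_open, y)  # train where vision is available
print("open -> closed accuracy:", clf.score(X_closed, y))

clf = SVC(kernel="linear").fit(X_closed, y)  # and the reverse direction
print("closed -> open accuracy:", clf.score(X_open, y))
```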


2014 ◽  
Vol 14 (10) ◽  
pp. 985-985 ◽
Author(s):  
R. Lafer-Sousa ◽  
A. Kell ◽  
A. Takahashi ◽  
J. Feather ◽  
B. Conway ◽  
...  

eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Thomas SA Wallis ◽  
Christina M Funke ◽  
Alexander S Ecker ◽  
Leon A Gatys ◽  
Felix A Wichmann ◽  
...  

We subjectively perceive our visual field with high fidelity, yet peripheral distortions can go unnoticed and peripheral objects can be difficult to identify (crowding). Prior work showed that humans could not discriminate images synthesised to match the responses of a mid-level ventral visual stream model when information was averaged in receptive fields with a scaling of about half their retinal eccentricity. This result implicated ventral visual area V2, approximated ‘Bouma’s Law’ of crowding, and has subsequently been interpreted as a link between crowding zones, receptive field scaling, and our perceptual experience. However, this experiment never assessed natural images. We find that humans can easily discriminate real and model-generated images at V2 scaling, requiring scales at least as small as V1 receptive fields to generate metamers. We speculate that explaining why scenes look as they do may require incorporating segmentation and global organisational constraints in addition to local pooling.
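
The scaling factors at issue can be made concrete with a one-line relation: under Bouma-style scaling, pooling-window diameter grows linearly with retinal eccentricity, diameter ≈ s × eccentricity. The sketch below uses the commonly cited illustrative values s ≈ 0.5 (V2-like, roughly Bouma's law) and s ≈ 0.25 (V1-like); treat the exact numbers as assumptions rather than the paper's fitted values.

```python
# Sketch of eccentricity-dependent pooling: window diameter scales
# linearly with eccentricity, diameter = s * eccentricity (degrees).
import numpy as np

def pooling_diameter(eccentricity_deg, scale):
    """Pooling-window diameter (deg) at a given retinal eccentricity (deg)."""
    return scale * eccentricity_deg

ecc = np.array([2.0, 5.0, 10.0, 20.0])  # example eccentricities (deg)
for scale, label in [(0.5, "V2-like, ~Bouma (s=0.5)"), (0.25, "V1-like (s=0.25)")]:
    print(label, pooling_diameter(ecc, scale))
```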

