Fast responses to images of animate and inanimate objects in the nonhuman primate amygdala

2020 · Vol 10 (1)
Author(s): E. Cleeren, I. D. Popivanov, W. Van Paesschen, Peter Janssen

Abstract: Visual information reaches the amygdala through the various stages of the ventral visual stream. There is, however, evidence that a fast subcortical pathway for the processing of emotional visual input exists. To explore the presence of this pathway in primates, we recorded local field potentials in the amygdala of four rhesus monkeys during a passive fixation task showing images of ten object categories. Additionally, in one of the monkeys we also obtained multi-unit spiking activity during the same task. We observed remarkably fast medium- and high-gamma responses in the amygdala of all four monkeys. These responses were selective for the different stimulus categories, showed within-category selectivity, and peaked as early as 60 ms after stimulus onset. Multi-unit responses in the amygdala lagged the gamma responses by about 40 ms. These observations thus add further evidence that selective visual information reaches the amygdala of nonhuman primates through a very fast route.

2018
Author(s): Simona Monaco, Giulia Malfatti, Alessandro Zendron, Elisa Pellencin, Luca Turella

Abstract: Predictions of upcoming movements are based on several types of neural signals that span the visual, somatosensory, motor and cognitive systems. Thus far, pre-movement signals have been investigated while participants viewed the object to be acted upon. Here, we studied the contribution of information other than vision to the classification of preparatory signals for action, even in the absence of online visual information. We used functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) to test whether the neural signals evoked by visual, memory-based and somato-motor information can be reliably used to predict upcoming actions in areas of the dorsal and ventral visual stream during the preparatory phase preceding the action, while participants were lying still. Nineteen human participants (nine women) performed one of two actions towards an object with their eyes open or closed. Despite the well-known role of ventral stream areas in visual recognition tasks and the specialization of dorsal stream areas in somato-motor processes, we decoded action intention in areas of both streams based on visual, memory-based and somato-motor signals. Interestingly, we could reliably decode action intention in the absence of visual information based on neural activity evoked when visual information was available, and vice versa. Our results show a similar visual, memory and somato-motor representation of action planning in dorsal and ventral visual stream areas that allows action intention to be predicted across domains, regardless of the availability of visual information.
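The cross-condition decoding logic (train a classifier on one condition, test on the other) can be sketched with a minimal nearest-centroid MVPA on synthetic voxel patterns. The data, the two "actions", and the classifier choice are all stand-ins, not the study's actual stimuli or pipeline (which typically uses SVMs on real fMRI patterns).

```python
import numpy as np

rng = np.random.default_rng(1)

def nearest_centroid_fit(X, y):
    # One mean pattern (centroid) per class label.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    labels = np.array(sorted(centroids))
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return labels[dists.argmin(axis=0)]

# Toy voxel patterns: each of two actions has a shared pattern component
# across the eyes-open ("vision") and eyes-closed ("memory") conditions.
n_trials, n_vox = 40, 50
action = rng.integers(0, 2, size=n_trials)        # 0 / 1 = two hypothetical actions
pattern = rng.standard_normal((2, n_vox))         # one pattern per action
X_vision = pattern[action] + 0.5 * rng.standard_normal((n_trials, n_vox))
X_memory = pattern[action] + 0.5 * rng.standard_normal((n_trials, n_vox))

# Cross-condition decoding: train on vision trials, test on memory trials.
centroids = nearest_centroid_fit(X_vision, action)
cross_acc = (nearest_centroid_predict(centroids, X_memory) == action).mean()
```

Above-chance `cross_acc` here is the toy analogue of the paper's finding that intention decoding transfers across visual and non-visual domains.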


2020 · Vol 10 (9) · pp. 602
Author(s): Yibo Cui, Chi Zhang, Kai Qiao, Linyuan Wang, Bin Yan, ...

Representation invariance plays a significant role in the performance of deep convolutional neural networks (CNNs) and in human visual information processing across a variety of complicated image-based tasks. However, considerable confusion remains concerning the representation invariance mechanisms of these two sophisticated systems. To investigate their relationship under common conditions, we proposed a representation invariance analysis approach based on data augmentation. First, the original image library was expanded by data augmentation. The representation invariances of CNNs and the ventral visual stream were then studied by comparing the similarities of the corresponding layer features of CNNs, and the prediction performance of visual encoding models based on functional magnetic resonance imaging (fMRI), before and after data augmentation. Our experimental results suggest that the architecture of CNNs, the combination of convolutional and fully connected layers, underlies their representation invariance. Remarkably, we found that representation invariance is present at all successive stages of the ventral visual stream. Hence, an internal correspondence in representation invariance between CNNs and the human visual system was revealed. Our study promotes the advancement of invariant representation in computer vision and a deeper comprehension of the representation invariance mechanisms of human visual information processing.
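The core comparison (layer features for an original image versus its augmented version) can be sketched as below. The "network" here is just random projections with ReLU, a stand-in for a real CNN; the augmentation (horizontal flip), the layer sizes, and the similarity measure are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def layer_features(img, weights):
    # Forward pass through random-projection "layers" with ReLU,
    # collecting each layer's feature vector.
    x = img.ravel()
    feats = []
    for W in weights:
        x = np.maximum(W @ x, 0.0)
        feats.append(x.copy())
    return feats

img = rng.random((8, 8))
aug = np.fliplr(img)                 # one augmentation: horizontal flip
dims, weights, d_in = [64, 32, 16], [], img.size
for d in dims:
    weights.append(rng.standard_normal((d, d_in)) / np.sqrt(d_in))
    d_in = d

# Per-layer similarity between original and augmented features: higher
# similarity at a layer indicates more invariance to this augmentation.
sims = [cosine(f, g) for f, g in
        zip(layer_features(img, weights), layer_features(aug, weights))]
```

A real analysis would replace the random projections with trained CNN activations and average similarities over a whole augmented image library, then compare against fMRI encoding-model performance as the abstract describes.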


2018 · Vol 120 (3) · pp. 926-941
Author(s): Dzmitry A. Kaliukhovich, Hans Op de Beeck

Similar to primates, visual cortex in rodents appears to be organized into two distinct hierarchical streams. However, little is known about how visual information is processed along those streams in rodents. In this study, we examined how repetition suppression and the position and clutter tolerance of neuronal representations evolve along the putative ventral visual stream in rats. To address this question, we recorded multiunit spiking activity in primary visual cortex (V1) and the more downstream visual laterointermediate (LI) area of head-restrained Long-Evans rats. We employed a paradigm reminiscent of the continuous carry-over design used in human neuroimaging. In both areas, stimulus repetition attenuated the early phase of the neuronal response to the repeated stimulus, with this response suppression being greater in area LI. Furthermore, stimulus preferences were more similar across positions (position tolerance) in area LI than in V1, even though the absolute responses in both areas were very sensitive to changes in position. In contrast, the neuronal representations in both areas were equally good at tolerating the presence of limited visual clutter, as modeled by the presentation of a single flank stimulus. When probing tolerance of the neuronal representations with stimulus-specific adaptation, we detected no position tolerance in either examined brain area, whereas, on the contrary, we revealed clutter tolerance in both areas. Overall, our data demonstrate similarities and discrepancies in the processing of visual information along the ventral visual stream of rodents and primates. Moreover, our results urge caution in using neuronal adaptation to probe tolerance of neuronal representations.
NEW & NOTEWORTHY Rodents are emerging as a popular animal model that complements primates for studying higher-level visual functions. Similar to findings in primates, we demonstrate greater repetition suppression and position tolerance of the neuronal representations in the downstream laterointermediate area of Long-Evans rats compared with primary visual cortex. However, we report no difference in the degree of clutter tolerance between the areas. These findings provide additional evidence for hierarchical processing of visual stimuli in rodents.
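The two quantities compared across V1 and LI can be sketched on toy spike counts: a contrast-style suppression index for repetition suppression, and a correlation of stimulus preferences across positions for position tolerance. The index formula, the Poisson firing rates, and the stimulus set are illustrative assumptions, not the paper's exact measures.

```python
import numpy as np

rng = np.random.default_rng(3)

def suppression_index(first, repeated):
    # (first - repeated) / (first + repeated) on mean rates;
    # positive values indicate repetition suppression.
    f, r = float(np.mean(first)), float(np.mean(repeated))
    return (f - r) / (f + r)

# Toy spike counts: the response to the repeated stimulus is attenuated.
first = rng.poisson(20, size=30)      # spikes to the first presentation
repeated = rng.poisson(14, size=30)   # attenuated response to the repeat
si = suppression_index(first, repeated)

# Position tolerance: correlate stimulus preferences across two positions.
# A high correlation means the ranking of stimuli is preserved even if
# absolute responses change with position.
prefs = np.array([5, 10, 15, 20, 25, 30], dtype=float)   # six toy stimuli
resp_pos1 = rng.poisson(prefs)
resp_pos2 = rng.poisson(0.8 * prefs)   # scaled responses, same ranking
tolerance = float(np.corrcoef(resp_pos1, resp_pos2)[0, 1])
```

In the paper's terms, LI would show a larger `si` and a higher `tolerance` than V1, while clutter tolerance (not sketched here) did not differ between areas.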


2017
Author(s): Radoslaw M. Cichy, Nikolaus Kriegeskorte, Kamila M. Jozwik, Jasper J.F. van den Bosch, Ian Charest

Abstract: Vision involves complex neuronal dynamics that link the sensory stream to behaviour. To capture the richness and complexity of the visual world and the behaviour it entails, we used an ecologically valid task with a rich set of real-world object images. We investigated how human brain activity, resolved in space with functional MRI and in time with magnetoencephalography, links the sensory stream to behavioural responses. We found that behaviour-related brain activity emerged rapidly in the ventral visual pathway within 200 ms of stimulus onset. The link between stimuli, brain activity, and behaviour could not be accounted for by either category membership or visual features (as provided by an artificial deep neural network model). Our results identify behaviourally relevant brain activity during object vision, and suggest that object representations guiding behaviour are complex and can be explained by neither visual features nor semantic categories alone. Our findings support the view that visual representations in the ventral visual stream need to be understood in terms of their relevance to behaviour, and highlight the importance of complex behavioural assessment for human brain mapping.
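Linking brain activity to behaviour across stimuli is commonly done with representational similarity analysis (RSA): build a representational dissimilarity matrix (RDM) for each measurement and correlate their upper triangles. The sketch below assumes synthetic "brain" and "behaviour" data sharing a latent structure; it illustrates the RSA machinery, not this study's specific fMRI/MEG fusion pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)

def rdm(patterns):
    # Representational dissimilarity matrix: 1 - Pearson r between rows.
    return 1.0 - np.corrcoef(patterns)

def upper(m):
    # Off-diagonal upper triangle, the part RSA actually compares.
    return m[np.triu_indices_from(m, k=1)]

def spearman(a, b):
    # Spearman correlation as Pearson correlation of ranks
    # (ties are vanishingly unlikely with float dissimilarities).
    return float(np.corrcoef(a.argsort().argsort(), b.argsort().argsort())[0, 1])

# Synthetic stand-ins: brain patterns and behavioural responses for
# n_obj objects share a common latent structure plus independent noise.
n_obj, n_feat = 10, 40
latent = rng.standard_normal((n_obj, n_feat))
brain = latent + 0.5 * rng.standard_normal((n_obj, n_feat))
behaviour = latent + 0.5 * rng.standard_normal((n_obj, n_feat))

rho = spearman(upper(rdm(brain)), upper(rdm(behaviour)))
```

A positive `rho` is the toy analogue of behaviour-related structure in brain activity; the study's claim is that this shared structure is not reducible to category labels or deep-network features.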


2021
Author(s): Talia Konkle, George A. Alvarez

Anterior regions of the ventral visual stream have substantial information about object categories, prompting theories that category-level forces are critical for shaping visual representation. The strong correspondence between category-supervised deep neural networks and ventral stream representation supports this view, but does not provide a viable learning model, as these deepnets rely upon millions of labeled examples. Here we present a fully self-supervised model which instead learns to represent individual images, where views of the same image are embedded nearby in a low-dimensional feature space, distinctly from other recently encountered views. We find that category information implicitly emerges in the feature space and, critically, that these models achieve parity with category-supervised models in predicting the hierarchical structure of brain responses across the human ventral visual stream. These results provide computational support for learning instance-level representation as a viable goal of the ventral stream, offering an alternative to the category-based framework that has been dominant in visual cognitive neuroscience.
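The instance-level objective described here (views of the same image embedded nearby, away from other recent views) is typically an InfoNCE-style contrastive loss. Below is a minimal numpy sketch of that objective class on synthetic embeddings; the temperature, dimensions, and names are assumptions, and the actual model in the paper involves a full trained network.

```python
import numpy as np

rng = np.random.default_rng(5)

def instance_contrastive_loss(za, zb, tau=0.1):
    """InfoNCE-style loss over two augmented views per image: each view's
    positive is the other view of the same image; all other views in the
    batch are negatives. Inputs are assumed L2-normalised, shape (N, d)."""
    z = np.concatenate([za, zb])                        # (2N, d)
    sim = z @ z.T / tau                                 # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                      # exclude self-similarity
    n = za.shape[0]
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-logprob[np.arange(2 * n), pos].mean())

def normalise(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Embeddings where the two views of each image sit close together should
# score a much lower loss than unrelated random embeddings.
base = rng.standard_normal((16, 8))
good_loss = instance_contrastive_loss(
    normalise(base + 0.05 * rng.standard_normal((16, 8))),
    normalise(base + 0.05 * rng.standard_normal((16, 8))))
rand_loss = instance_contrastive_loss(
    normalise(rng.standard_normal((16, 8))),
    normalise(rng.standard_normal((16, 8))))
```

Minimising this loss needs no category labels, which is the paper's point: category structure can emerge as a by-product of pulling views of each individual image together.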


2017
Author(s): Mikio Inagaki, Ichiro Fujita

Summary: The amygdala plays a critical role in detecting potential danger through sensory input [1, 2]. In the primate visual system, a subcortical pathway through the superior colliculus and the pulvinar is thought to provide the amygdala with rapid and coarse visual information about facial emotions [3–6]. A recent electrophysiological study in human patients supported this hypothesis by showing that intracranial event-related potentials discriminated fearful faces from other faces very quickly (within ∼74 ms) [7]. However, several aspects of the hypothesis remain debatable [8]. Critically, evidence for short-latency, emotion-selective responses from individual amygdala neurons is lacking [9–12], and even if this type of response existed, how it might contribute to stimulus detection is unclear. Here, we addressed these issues in the monkey amygdala and found that ensemble responses of single neurons carry robust information about emotional faces, especially threatening ones, within ∼50 ms after stimulus onset. A similar rapid response was not found in the temporal cortex, from which the amygdala receives cortical inputs [13], suggesting a subcortical origin. Additionally, we found that the rapid amygdala response contained excitatory and suppressive components. The early excitatory component might be useful for quickly sending signals to downstream areas. In contrast, the rapid suppressive component sharpened the rising phase of the later, sustained excitatory input (presumably from the temporal cortex) and might therefore improve the processing of emotional faces over time. We thus propose that these two amygdala responses, both originating from the subcortical pathway, play dual roles in threat detection.


2014 · Vol 111 (1) · pp. 91-102
Author(s): Leyla Isik, Ethan M. Meyers, Joel Z. Leibo, Tomaso Poggio

The human visual system can rapidly recognize objects despite transformations that alter their appearance. The precise timing of when the brain computes neural representations that are invariant to particular transformations, however, has not been mapped in humans. Here we employ magnetoencephalography decoding analysis to measure the dynamics of size- and position-invariant visual information development in the ventral visual stream. With this method we can read out the identity of objects beginning as early as 60 ms. Size- and position-invariant visual information appear around 125 ms and 150 ms, respectively, and both develop in stages, with invariance to smaller transformations arising before invariance to larger transformations. Additionally, the magnetoencephalography sensor activity localizes to neural sources that are in the most posterior occipital regions at the early decoding times and then move temporally as invariant information develops. These results provide previously unknown latencies for key stages of invariant object recognition in humans, as well as new and compelling evidence for a feed-forward hierarchical model of invariant object recognition where invariance increases at each successive visual area along the ventral stream.
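Time-resolved decoding of this kind trains a classifier at every timepoint and asks when accuracy first exceeds chance. The sketch below does this on synthetic sensor data with a known 60 ms information onset; the data, the split-half nearest-class-mean decoder, and the 0.9 threshold are all illustrative assumptions, not the study's MEG pipeline.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy sensor data: (trials, sensors, timepoints) at 1 kHz; object identity
# becomes decodable only from 60 ms onward. All numbers are synthetic.
n_tr, n_sens, n_t = 60, 20, 100
labels = rng.integers(0, 2, n_tr)
X = rng.standard_normal((n_tr, n_sens, n_t))
pattern = rng.standard_normal(n_sens)
onset = 60  # ms (= samples at 1 kHz)
signs = np.where(labels == 1, 1.0, -1.0)
X[:, :, onset:] += signs[:, None, None] * pattern[None, :, None]

def decode_at(t):
    # Split-half nearest-class-mean decoding at a single timepoint:
    # train on even trials, test on odd trials.
    tr, te = slice(0, n_tr, 2), slice(1, n_tr, 2)
    Xt = X[:, :, t]
    m0 = Xt[tr][labels[tr] == 0].mean(axis=0)
    m1 = Xt[tr][labels[tr] == 1].mean(axis=0)
    pred = (np.linalg.norm(Xt[te] - m1, axis=1)
            < np.linalg.norm(Xt[te] - m0, axis=1)).astype(int)
    return (pred == labels[te]).mean()

acc = np.array([decode_at(t) for t in range(n_t)])
first_decodable = int(np.argmax(acc > 0.9))  # first clearly above-chance time
```

The decoding-onset latency recovered this way is the toy analogue of the 60 ms identity readout and the later invariance latencies reported in the abstract.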


2019
Author(s): Sushrut Thorat

A mediolateral gradation in neural responses to images spanning animals to artificial objects is observed in the ventral temporal cortex (VTC). Which information streams drive this organisation is an ongoing debate. Recently, Proklova et al. (2016) dissociated the visual shape and category ("animacy") dimensions in a set of stimuli using a behavioural measure of visual feature information. fMRI responses revealed a neural cluster (the extra-visual animacy cluster, xVAC) that encoded category information unexplained by visual feature information, suggesting extra-visual contributions to the organisation of the ventral visual stream. We reassess these findings using convolutional neural networks (CNNs) as models of the ventral visual stream. The visual features developed in the CNN layers can categorise the shape-matched stimuli from Proklova et al. (2016), in contrast to the behavioural measures used in that study. The category organisations in xVAC and VTC are explained to a large degree by the CNN visual feature differences, casting doubt on the suggestion that visual feature differences cannot account for the animacy organisation. To inform the debate further, we designed a set of stimuli with animal images to dissociate the animacy organisation driven by the CNN visual features from the degree of familiarity and agency (thoughtfulness and feelings). Preliminary results from a new fMRI experiment designed to understand the contribution of these non-visual features are presented.
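The key test (can a linear readout of layer features categorise shape-matched stimuli by animacy?) can be sketched with a cross-validated least-squares readout on synthetic features. The feature matrix, the single weak "animacy" dimension, and the classifier are stand-ins; the actual analysis reads out real CNN activations for the Proklova et al. (2016) stimuli.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic stand-in: 48 stimuli, 10 "CNN features", with a weak animacy
# signal carried by one feature dimension. All values are made up.
n_stim, n_feat = 48, 10
animacy = np.repeat([0, 1], n_stim // 2)
features = rng.standard_normal((n_stim, n_feat))
features[:, 0] += np.where(animacy == 1, 1.5, -1.5)

def loo_accuracy(X, y):
    # Least-squares linear readout, leave-one-out cross-validated.
    X1 = np.hstack([X, np.ones((len(y), 1))])  # append a bias column
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        w, *_ = np.linalg.lstsq(X1[mask], 2.0 * y[mask] - 1.0, rcond=None)
        hits += int((X1[i] @ w > 0) == bool(y[i]))
    return hits / len(y)

acc = loo_accuracy(features, animacy)
```

Above-chance `acc` from visual features alone is the toy analogue of the abstract's claim that CNN features can categorise stimuli the behavioural shape measure treats as matched.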


1999 · Vol 11 (3) · pp. 300-311
Author(s): Edmund T. Rolls, Martin J. Tovée, Stefano Panzeri

Backward masking can potentially provide evidence of the time needed for visual processing, a fundamental constraint that must be incorporated into computational models of vision. Although backward masking has been extensively used psychophysically, there is little direct evidence of the effects of visual masking on neuronal responses. Investigating the effects of a backward masking paradigm on the responses of neurons in the temporal visual cortex, we have shown that the response of the neurons is interrupted by the mask. Under conditions in which humans can just identify the stimulus, with stimulus onset asynchronies (SOAs) of 20 msec, neurons in macaques respond to their best stimulus for approximately 30 msec. We now quantify the information that is available from the responses of single neurons under backward masking conditions when two to six faces were shown. We show that the information available is greatly decreased as the mask is brought closer to the stimulus. The decrease is more marked than the decrease in firing rate, because it is the selective part of the firing that is especially attenuated by the mask, not the spontaneous firing, and also because the neuronal response is more variable at short SOAs. However, even at the shortest SOA of 20 msec, the information available is on average 0.1 bits. This compares to 0.3 bits with only the 16-msec target stimulus shown, and a typical value for such neurons of 0.4 to 0.5 bits with a 500-msec stimulus. The results thus show that considerable information is available from neuronal responses even under backward masking conditions that allow the neurons to have their main response in only 30 msec. This provides evidence of how rapid the processing of visual information is in a cortical area and a fundamental constraint for understanding how cortical information processing operates.
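Information "in bits" from single-neuron responses is typically a mutual-information estimate between stimulus identity and (binned) spike counts. The sketch below uses the plug-in estimator on synthetic Poisson responses to show how a mask that shrinks rate selectivity shrinks the information; the firing rates, bin width, and two-stimulus setup are illustrative, and real analyses at these trial counts need bias correction.

```python
import numpy as np

rng = np.random.default_rng(7)

def mutual_information_bits(stim, resp):
    # Plug-in estimate of I(S;R) in bits from the joint histogram of
    # stimulus labels and discretised responses.
    s_vals, r_vals = np.unique(stim), np.unique(resp)
    joint = np.zeros((s_vals.size, r_vals.size))
    for i, s in enumerate(s_vals):
        for j, r in enumerate(r_vals):
            joint[i, j] = np.mean((stim == s) & (resp == r))
    marg = joint.sum(axis=1, keepdims=True) @ joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / marg[nz])).sum())

# Two face stimuli; at a short SOA the mask attenuates the selective part
# of the firing, shrinking the rate difference between the two faces.
stim = rng.integers(0, 2, 2000)
spikes_long_soa = rng.poisson(np.where(stim == 0, 5, 15))   # well separated
spikes_short_soa = rng.poisson(np.where(stim == 0, 8, 11))  # masked, overlapping
bins = np.arange(0, 30, 3)
mi_long = mutual_information_bits(stim, np.digitize(spikes_long_soa, bins))
mi_short = mutual_information_bits(stim, np.digitize(spikes_short_soa, bins))
```

With two equiprobable stimuli the information is capped at 1 bit, which makes the abstract's 0.1-bit (short SOA) versus 0.4-0.5-bit (long stimulus) values easy to place on the scale.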

