Reconstructing feedback representations in ventral visual pathway with a generative adversarial autoencoder

2020 ◽  
Author(s):  
Haider Al-Tahan ◽  
Yalda Mohsenzadeh

Abstract
While vision evokes a dense network of feedforward and feedback neural processes in the brain, visual processes are primarily modeled with feedforward hierarchical neural networks, leaving the computational role of feedback processes poorly understood. Here, we developed a generative autoencoder neural network model and adversarially trained it on a categorically diverse data set of images. We hypothesized that the feedback processes in the ventral visual pathway can be represented by reconstruction of the visual information performed by the generative model. We compared representational similarity of the activity patterns in the proposed model with temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) visual brain responses. The proposed generative model identified two segregated neural dynamics in the visual brain: a temporal hierarchy of processes transforming low-level visual information into high-level semantics in the feedforward sweep, and a temporally later dynamic of inverse processes reconstructing low-level visual information from a high-level latent representation in the feedback sweep. Our results add to previous studies on neural feedback processes by presenting new insight into the algorithmic function and the information carried by the feedback processes in the ventral visual pathway.

Author summary
It has been shown that the ventral visual cortex consists of a dense network of regions with feedforward and feedback connections. The feedforward path processes visual inputs along a hierarchy of cortical areas that starts in early visual cortex (an area tuned to low-level features, e.g. edges/corners) and ends in inferior temporal cortex (an area that responds to higher-level categorical contents, e.g. faces/objects). The feedback connections, in turn, modulate neuronal responses in this hierarchy by broadcasting information from higher to lower areas. In recent years, deep neural network models trained on object recognition tasks have achieved human-level performance and shown activation patterns similar to those of the visual brain. In this work, we developed a generative neural network model that consists of encoding and decoding sub-networks. By comparing this computational model with the human brain's temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) response patterns, we found that the encoder processes resemble the brain's feedforward processing dynamics and the decoder shares similarity with the brain's feedback processing dynamics. These results provide algorithmic insight into the spatiotemporal dynamics of feedforward and feedback processes in biological vision.
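The encoder-decoder architecture described in the abstract can be made concrete with a minimal sketch. The following PyTorch code is an illustration only: the layer sizes, latent dimensionality, input resolution, and the choice of an image-space discriminator are assumptions for exposition, not the authors' published configuration.

```python
# Minimal sketch of a generative adversarial autoencoder (PyTorch).
# All layer sizes and the 64x64 input resolution are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):          # feedforward sweep: image -> latent code
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):          # feedback sweep: latent code -> reconstruction
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

class Discriminator(nn.Module):    # adversary judging real vs. reconstructed images
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 1),
        )
    def forward(self, x):
        return self.net(x)
```

Training such a model would alternate a pixel reconstruction loss for the encoder-decoder pair with the usual adversarial losses for the discriminator; encoder layer activations then stand in for the feedforward sweep and decoder layer activations for the feedback sweep when compared with brain responses.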

2021 ◽  
Vol 17 (3) ◽  
pp. e1008775
Author(s):  
Haider Al-Tahan ◽  
Yalda Mohsenzadeh

While vision evokes a dense network of feedforward and feedback neural processes in the brain, visual processes are primarily modeled with feedforward hierarchical neural networks, leaving the computational role of feedback processes poorly understood. Here, we developed a generative autoencoder neural network model and adversarially trained it on a categorically diverse data set of images. We hypothesized that the feedback processes in the ventral visual pathway can be represented by reconstruction of the visual information performed by the generative model. We compared representational similarity of the activity patterns in the proposed model with temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) visual brain responses. The proposed generative model identified two segregated neural dynamics in the visual brain: a temporal hierarchy of processes transforming low-level visual information into high-level semantics in the feedforward sweep, and a temporally later dynamic of inverse processes reconstructing low-level visual information from a high-level latent representation in the feedback sweep. Our results add to previous studies on neural feedback processes by presenting new insight into the algorithmic function and the information carried by the feedback processes in the ventral visual pathway.
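The comparison method named in this abstract, representational similarity analysis, reduces to correlating representational dissimilarity matrices (RDMs) computed from model activations and from brain responses. A minimal sketch with simulated data; the stimulus count and feature dimensions below are illustrative assumptions:

```python
# Minimal sketch of representational similarity analysis (RSA).
# Shapes are illustrative: n_stim stimuli, arbitrary feature dimensions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condensed representational dissimilarity matrix (1 - Pearson r)."""
    return pdist(patterns, metric="correlation")

rng = np.random.default_rng(0)
n_stim = 92
model_acts = rng.normal(size=(n_stim, 4096))   # e.g. one model layer's activations
brain_resp = rng.normal(size=(n_stim, 300))    # e.g. MEG pattern at one time point

# Similarity between model and brain representational geometry:
rho, p = spearmanr(rdm(model_acts), rdm(brain_resp))
print(f"model-brain RSA: Spearman rho = {rho:.3f} (p = {p:.3g})")
```

Sliding the brain RDM across MEG time points yields the temporal dynamics described in the abstract; computing it per fMRI region or searchlight yields the spatial mapping.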


2018 ◽  
Vol 30 (11) ◽  
pp. 1590-1605 ◽  
Author(s):  
Alex Clarke ◽  
Barry J. Devereux ◽  
Lorraine K. Tyler

Object recognition requires dynamic transformations of low-level visual inputs to complex semantic representations. Although this process depends on the ventral visual pathway, we lack an incremental account from low-level inputs to semantic representations and the mechanistic details of these dynamics. Here we combine computational models of vision with semantics and test the output of the incremental model against patterns of neural oscillations recorded with magnetoencephalography in humans. Representational similarity analysis showed visual information was represented in low-frequency activity throughout the ventral visual pathway, and semantic information was represented in theta activity. Furthermore, directed connectivity showed visual information travels through feedforward connections, whereas visual information is transformed into semantic representations through feedforward and feedback activity, centered on the anterior temporal lobe. Our research highlights that the complex transformations between visual and semantic information are driven by feedforward and recurrent dynamics, resulting in object-specific semantics.
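The frequency-resolved analysis pairs that RSA logic with band-limited signals: neural patterns are first decomposed into frequency bands, and each band's representational geometry is compared with the vision and semantic models. A minimal sketch of the band-limiting step; the sampling rate, band edges, filter order, and data shapes are illustrative assumptions:

```python
# Minimal sketch: band-limited power patterns for frequency-resolved RSA.
# Sampling rate, band edges, and data shapes are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                                  # Hz, assumed sampling rate
bands = {"low": (1, 4), "theta": (4, 8), "alpha": (8, 12)}

def band_power(data, lo, hi, fs):
    """Mean envelope power per trial/sensor within one frequency band."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, data, axis=-1)
    envelope = np.abs(hilbert(filtered, axis=-1))
    return envelope.mean(axis=-1)           # -> (n_trials, n_sensors)

rng = np.random.default_rng(1)
meg = rng.normal(size=(92, 306, 500))       # trials x sensors x time samples

# One pattern matrix per band; each would feed the RDM/RSA step sketched above.
patterns = {name: band_power(meg, lo, hi, fs) for name, (lo, hi) in bands.items()}
```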


2018 ◽  
Author(s):  
Alex Clarke ◽  
Barry J. Devereux ◽  
Lorraine K. Tyler

Abstract
Object recognition requires dynamic transformations of low-level visual inputs to complex semantic representations. While this process depends on the ventral visual pathway (VVP), we lack an incremental account from low-level inputs to semantic representations, and the mechanistic details of these dynamics. Here we combine computational models of vision with semantics, and test the output of the incremental model against patterns of neural oscillations recorded with MEG in humans. Representational Similarity Analysis showed visual information was represented in alpha activity throughout the VVP, and semantic information was represented in theta activity. Furthermore, informational connectivity showed visual information travels through feedforward connections, while visual information is transformed into semantic representations through feedforward and feedback activity, centered on the anterior temporal lobe. Our research highlights that the complex transformations between visual and semantic information are driven by feedforward and recurrent dynamics, resulting in object-specific semantics.


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
J. Brendan Ritchie ◽  
Hans Op de Beeck

Abstract
A large number of neuroimaging studies have shown that information about object category can be decoded from regions of the ventral visual pathway. One question is how this information might be functionally exploited in the brain. In an attempt to help answer this question, some studies have adopted a neural distance-to-bound approach and shown that distance to a classifier decision boundary through neural activation space can be used to predict reaction times (RTs) on animacy categorization tasks. However, these experiments have not controlled for possible visual confounds, such as shape, in their stimulus design. In the present study we sought to determine whether, when animacy and shape properties are orthogonal, neural distance in low- and high-level visual cortex would predict categorization RTs, and whether a combination of animacy and shape distance might predict RTs when categories crisscrossed the two stimulus dimensions and so were not linearly separable. In line with previous results, we found that RTs correlated with neural distance, but only for animate stimuli, with similar, though weaker, asymmetric effects for the shape and crisscrossing tasks. Taken together, these results suggest there is potential to expand the neural distance-to-bound approach to other divisions beyond animacy and object category.
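The neural distance-to-bound approach is straightforward to express: fit a linear classifier on activation patterns, take each stimulus' signed distance to the decision hyperplane, and correlate those distances with behavioral RTs. A minimal sketch with simulated data; the pattern dimensionality, stimulus counts, and the choice of a linear SVM are illustrative assumptions:

```python
# Minimal sketch of the neural distance-to-bound approach.
# Data are simulated; dimensionality and classifier choice are assumptions.
import numpy as np
from sklearn.svm import LinearSVC
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_stim, n_voxels = 48, 200
patterns = rng.normal(size=(n_stim, n_voxels))    # neural activation patterns
animacy = np.repeat([0, 1], n_stim // 2)          # animate vs. inanimate labels
rts = rng.uniform(0.4, 1.0, size=n_stim)          # behavioral reaction times (s)

clf = LinearSVC(C=1.0, max_iter=10000).fit(patterns, animacy)
dist = clf.decision_function(patterns)            # signed decision values,
                                                  # proportional to boundary distance

# Prediction: stimuli farther from the boundary are categorized faster,
# so |distance| should correlate negatively with RT.
rho, p = spearmanr(np.abs(dist), rts)
print(f"distance-RT correlation: rho = {rho:.3f} (p = {p:.3g})")
```

In practice the distances would be computed on held-out cross-validation folds rather than on the training data; the sketch omits that step for brevity.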


2018 ◽  
Author(s):  
J. Brendan Ritchie ◽  
Hans Op de Beeck

A large number of neuroimaging studies have shown that information about object category can be decoded from regions of the ventral visual pathway. One question is how this information might be functionally exploited in the brain. In an attempt to answer this question, some studies have adopted a neural distance-to-bound approach and shown that distance to a classifier decision boundary through neural activation space can be used to predict reaction times (RTs) on animacy categorization tasks. However, these experiments have not controlled for possible visual confounds, such as shape, in their stimulus design. In the present study we sought to determine whether, when animacy and shape properties are orthogonal, neural distance in low- and high-level visual cortex would predict categorization RTs. We also investigated whether a combination of animacy and shape distance might predict RTs when categories crisscrossed the two stimulus dimensions and so were not linearly separable. In line with previous results, we found that RTs correlated with neural distance, but only for animate stimuli, with similar, though weaker, asymmetric effects for the shape and crisscrossing tasks. Taken together, these results suggest there is potential to expand the neural distance-to-bound approach to other divisions beyond animacy and object category.


Author(s):  
Richard Stone ◽  
Minglu Wang ◽  
Thomas Schnieders ◽  
Esraa Abdelall

Human-robot interaction systems are increasingly being integrated into industrial, commercial, and emergency service agencies. It is critical that human operators understand and trust automation when these systems support and even make important decisions. The following study focused on a human-in-the-loop telerobotic system performing a reconnaissance operation. Twenty-four subjects were divided into groups based on level of automation (Low-Level Automation (LLA) and High-Level Automation (HLA)). Results indicated a significant difference in hit rate between the low and high levels of control when a permanent error occurred. In the LLA group, the type of error had a significant effect on hit rate. In general, the high level of automation outperformed the low level, especially when it was more reliable, suggesting that subjects in the HLA group could rely on the automation to perform the task more effectively and more accurately.
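The group comparison reported here reduces to contrasting per-subject hit rates between the LLA and HLA groups. A minimal sketch of one such contrast with simulated data; the group sizes, trial counts, and the choice of an independent-samples t-test are illustrative assumptions (the original analysis may well have used a different design, e.g. an ANOVA across error types):

```python
# Minimal sketch: compare hit rates between automation groups.
# Group sizes, trial counts, and the test choice are illustrative assumptions.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)
n_trials = 20
hits_lla = rng.binomial(n_trials, 0.65, size=12) / n_trials  # hit rate per LLA subject
hits_hla = rng.binomial(n_trials, 0.80, size=12) / n_trials  # hit rate per HLA subject

t, p = ttest_ind(hits_hla, hits_lla)
print(f"HLA vs. LLA hit rate: t = {t:.2f}, p = {p:.3g}")
```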


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yunjun Nam ◽  
Takayuki Sato ◽  
Go Uchida ◽  
Ekaterina Malakhova ◽  
Shimon Ullman ◽  
...  

Abstract
Humans recognize individual faces regardless of variation in the facial view. The view-tuned face neurons in the inferior temporal (IT) cortex are regarded as the neural substrate for view-invariant face recognition. This study approximated the visual features encoded by these neurons as combinations of local orientations and colors originating from natural image fragments. The resultant features reproduced the preference of these neurons for particular facial views. We also found that faces of one identity were separable from the faces of other identities in a space where each axis represented one of these features. These results suggested that view-invariant face representation is established by combining view-sensitive visual features. The face representation with these features suggested that, with respect to view-invariant face representation, the seemingly complex and deeply layered ventral visual pathway can be approximated by a shallow network comprising layers of low-level processing for local orientations and colors (V1/V2-level) and layers that detect particular sets of low-level elements derived from natural image fragments (IT-level).
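The shallow two-stage account proposed here (V1/V2-level filters for local orientations, followed by IT-level detectors for natural-image fragments) can be sketched directly. The filter parameters, the grayscale simplification (color channels omitted for brevity), and the random stand-in fragments below are illustrative assumptions:

```python
# Minimal sketch of the proposed two-stage approximation:
# stage 1: V1/V2-level local orientation energy (Gabor filter bank);
# stage 2: IT-level detectors matching natural-image fragment templates.
# Filter parameters and stand-in data are illustrative assumptions.
import numpy as np
from scipy.signal import fftconvolve

def gabor(theta, size=15, sigma=3.0, wavelength=6.0):
    """One oriented Gabor kernel."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def stage1(image, n_orient=8):
    """Orientation-energy maps, one per filter orientation."""
    thetas = np.linspace(0, np.pi, n_orient, endpoint=False)
    return np.stack([np.abs(fftconvolve(image, gabor(t), mode="same")) for t in thetas])

def stage2(energy, fragments):
    """Fragment-detector responses: best match of each template anywhere in the maps."""
    return np.array([
        max(fftconvolve(energy[o], frag, mode="valid").max()
            for o in range(energy.shape[0]))
        for frag in fragments
    ])

rng = np.random.default_rng(3)
face = rng.normal(size=(64, 64))                          # stand-in grayscale face image
fragments = [rng.normal(size=(9, 9)) for _ in range(20)]  # stand-in fragment templates
features = stage2(stage1(face), fragments)                # one feature vector per image
```

Identity separability would then be assessed in the space spanned by these fragment-detector features, one axis per fragment.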


2019 ◽  
Vol 31 (6) ◽  
pp. 821-836 ◽  
Author(s):  
Elliot Collins ◽  
Erez Freud ◽  
Jana M. Kainerstorfer ◽  
Jiaming Cao ◽  
Marlene Behrmann

Although shape perception is primarily considered a function of the ventral visual pathway, previous research has shown that both dorsal and ventral pathways represent shape information. Here, we examine whether the shape-selective electrophysiological signals observed in dorsal cortex are a product of the connectivity to ventral cortex or are independently computed. We conducted multiple EEG studies in which we manipulated the input parameters of the stimuli so as to bias processing to either the dorsal or ventral visual pathway. Participants viewed displays of common objects with shape information parametrically degraded across five levels. We measured shape sensitivity by regressing the amplitude of the evoked signal against the degree of stimulus scrambling. Experiment 1, which included grayscale versions of the stimuli, served as a benchmark establishing the temporal pattern of shape processing during typical object perception. These stimuli evoked broad and sustained patterns of shape sensitivity beginning as early as 50 msec after stimulus onset. In Experiments 2 and 3, we calibrated the stimuli such that visual information was delivered primarily through parvocellular inputs, which mainly project to the ventral pathway, or through koniocellular inputs, which mainly project to the dorsal pathway. In both of these experiments, shape sensitivity was observed, but in spatiotemporal configurations distinct from each other and from that elicited by grayscale inputs. Of particular interest, shape selectivity emerged earlier in the koniocellular condition than in the parvocellular condition. These findings support the conclusion that the dorsal pathway computes object shape independently of the ventral pathway.
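The shape-sensitivity measure described above, regressing evoked amplitude on scrambling level at each time point, amounts to a per-timepoint linear fit. A minimal sketch with simulated single-channel data; the trial counts, time points, and the crude onset criterion are illustrative assumptions:

```python
# Minimal sketch of the shape-sensitivity measure: regress evoked amplitude
# on scrambling level at each time point. Data shapes are illustrative.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(4)
n_levels, n_trials, n_times = 5, 40, 300
scramble = np.repeat(np.arange(n_levels), n_trials)    # 5 scrambling levels
eeg = rng.normal(size=(n_levels * n_trials, n_times))  # single-channel evoked amplitudes

# Slope of amplitude vs. scrambling per time point; a reliably nonzero
# slope marks a window of shape sensitivity.
slopes = np.array([linregress(scramble, eeg[:, t]).slope for t in range(n_times)])

# Crude onset estimate: first time point where the slope clears a threshold.
onset = np.argmax(np.abs(slopes) > 2 * slopes.std())
```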


2007 ◽  
Vol 33 (2-3) ◽  
pp. 433-456 ◽  
Author(s):  
Adam J. Kolber

A neurologist with abdominal pain goes to see a gastroenterologist for treatment. The gastroenterologist asks the neurologist where it hurts. The neurologist replies, “In my head, of course.” Indeed, while we can feel pain throughout much of our bodies, pain signals undergo most of their processing in the brain. Using neuroimaging techniques like functional magnetic resonance imaging (“fMRI”) and positron emission tomography (“PET”), researchers have more precisely identified brain regions that enable us to experience physical pain. Certain regions of the brain's cortex, for example, increase in activation when subjects are exposed to painful stimuli. Furthermore, the amount of activation increases with the intensity of the painful stimulus. These findings suggest that we may be able to gain insight into the amount of pain a particular person is experiencing by non-invasively imaging his brain.

Such insight could be particularly valuable in the courtroom, where we often have no definitive medical evidence to prove or disprove claims about the existence and extent of pain symptoms.

