Invariant Representations: Recently Published Documents

TOTAL DOCUMENTS: 146 (last five years: 42)
H-INDEX: 25 (last five years: 5)

Author(s): Xinyue Xu, Kai Lv, Xingye Dong, Sheng Han, Youfang Lin

2021
Author(s): Laura Bella Naumann, Joram Keijser, Henning Sprekeler

Sensory systems reliably process incoming stimuli in spite of changes in context. Most recent models attribute this context invariance to the extraction of increasingly complex sensory features in hierarchical feedforward networks. Here, we study how context-invariant representations can be established by feedback rather than feedforward processing. We show that feedforward neural networks modulated by feedback can dynamically generate invariant sensory representations. The required feedback can be implemented as a slow and spatially diffuse gain modulation. The invariance is not present at the level of individual neurons but emerges only at the population level. Mechanistically, the feedback modulation dynamically reorients the manifold of neural activity and thereby maintains an invariant neural subspace in spite of contextual variations. Our results highlight the importance of population-level analyses for understanding the role of feedback in flexible sensory processing.
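As a rough illustration of the idea, the toy simulation below (not the authors' model) assumes a single, spatially diffuse gain shared by the whole hidden population and a simple norm-matching feedback rule, with "context" reduced to a global scaling of the input. Under these assumptions, individual hidden units respond differently across contexts, but the slowly adapting gain keeps a fixed linear readout of the population approximately invariant.

```python
# Minimal sketch (illustrative assumptions, not the paper's model): a feedforward
# layer whose units are multiplied by one slow, diffuse feedback gain.  "Context"
# is a global input scaling; the gain is updated slowly so that the population
# activity norm stays near a target, which keeps a fixed readout invariant.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 20, 100
W = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_hidden, n_in))      # feedforward weights
readout = rng.normal(scale=1.0 / np.sqrt(n_hidden), size=n_hidden)    # fixed linear readout

tau = 50.0          # slow time constant of the feedback loop (in samples)
target_norm = 1.0   # population activity level the feedback tries to maintain

def step(x, gain):
    """One feedforward pass with multiplicative gain modulation and a slow gain update."""
    h = np.maximum(0.0, gain * (W @ x))                                # gain-modulated hidden activity
    gain += (target_norm / (np.linalg.norm(h) + 1e-8) * gain - gain) / tau
    return h, gain

stimulus = rng.normal(size=n_in)
for context_scale in (0.5, 1.0, 2.0):          # "context" = overall input scaling
    g = 1.0
    for _ in range(500):                       # let the slow feedback settle
        h, g = step(context_scale * stimulus, g)
    print(f"context {context_scale:.1f}: readout = {readout @ h:+.3f}")
```

With the gain settled, the readout printed for the three context scalings is essentially the same, even though every hidden unit's firing rate differs across contexts.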


2021
Vol. 21 (9), pp. 2141
Author(s): Elahe' Yargholi, GA Hossein-zadeh, Maryam Vaziri-Pashkam

Author(s): Victor Bouvier, Philippe Very, Clément Chastagnol, Myriam Tami, Céline Hudelot

Domain-invariant representations (IR) have drastically improved the transferability of representations from a labelled source domain to a new, unlabelled target domain. Unsupervised Domain Adaptation (UDA) in the presence of label shift, however, remains an open problem. To address it, we present a bound on the target risk that incorporates both weights and invariant representations. Our theoretical analysis highlights the role of inductive bias in aligning distributions across domains. We illustrate it on standard benchmarks by proposing a new learning procedure for UDA. We observe empirically that a weak inductive bias makes adaptation robust to label shift. Developing stronger inductive biases is a promising direction for new UDA algorithms.
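As a rough sketch of how weights and invariant representations can enter a single UDA objective, the snippet below combines a class-reweighted source classification loss (the weights are meant to compensate for label shift) with a domain loss that pushes the feature extractor toward domain-invariant features. This is my own simplification for illustration, not the authors' procedure; the architectures, class weights, and data are placeholders, and a real implementation would train the domain discriminator adversarially (e.g. with a gradient-reversal layer or alternating updates).

```python
# Conceptual sketch only: one UDA training step with (i) a source classification
# loss re-weighted by assumed class ratios w[y] ~ p_target(y) / p_source(y), and
# (ii) a domain loss over encoder features.  All modules and data are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n_feat, n_hidden, n_classes = 32, 64, 3

encoder = nn.Sequential(nn.Linear(n_feat, n_hidden), nn.ReLU())
classifier = nn.Linear(n_hidden, n_classes)
domain_disc = nn.Linear(n_hidden, 1)              # source-vs-target discriminator
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()) + list(domain_disc.parameters()),
    lr=1e-3)

class_weights = torch.tensor([0.5, 1.0, 2.0])     # hypothetical label-shift weights
x_src = torch.randn(16, n_feat)
y_src = torch.randint(0, n_classes, (16,))
x_tgt = torch.randn(16, n_feat)

def uda_step(lambda_adv=1.0):
    optimizer.zero_grad()
    z_src, z_tgt = encoder(x_src), encoder(x_tgt)

    # Weighted source classification loss: weights compensate for label shift.
    per_example = F.cross_entropy(classifier(z_src), y_src, reduction="none")
    cls_loss = (class_weights[y_src] * per_example).mean()

    # Domain loss: here just the discriminator objective.  A full method would train
    # the encoder adversarially; this sketch only shows how the two terms combine.
    d_src, d_tgt = domain_disc(z_src), domain_disc(z_tgt)
    dom_loss = F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src)) \
             + F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt))

    loss = cls_loss + lambda_adv * dom_loss
    loss.backward()
    optimizer.step()
    return float(loss)

print(uda_step())
```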


2021
Vol. 15
Author(s): Edmund T. Rolls

First, neurophysiological evidence for the learning of invariant representations in the inferior temporal visual cortex is described. This includes object and face representations with invariance for position, size, lighting, view, and morphological transforms in the temporal lobe visual cortex; global object motion in the cortex in the superior temporal sulcus; and spatial view representations in the hippocampus that are invariant with respect to eye position, head direction, and place. Second, computational mechanisms that enable the brain to learn these invariant representations are proposed. For the ventral visual system, one key adaptation is the use of information available in the statistics of the environment in slow unsupervised learning to learn transform-invariant representations of objects. This contrasts with deep supervised learning in artificial neural networks, which uses training with thousands of exemplars forced into different categories by neuronal teachers. Similar slow learning principles apply to the learning of global object motion in the dorsal visual system leading to the cortex in the superior temporal sulcus. The learning rule that has been explored in VisNet is an associative rule with a short-term memory trace. The feedforward architecture has four stages, with convergence from stage to stage. This type of slow learning is implemented in the brain in hierarchically organized competitive neuronal networks with convergence from stage to stage, and with only 4-5 stages in the hierarchy. Slow learning is also shown to help the learning of coordinate transforms using gain modulation in the dorsal visual system extending into the parietal cortex and retrosplenial cortex. Representations are learned that are in allocentric spatial view coordinates of locations in the world and that are independent of eye position, head direction, and the place where the individual is located. This enables hippocampal spatial view cells to use idiothetic (self-motion) signals for navigation when the view details are obscured for short periods.
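The associative trace rule mentioned above can be sketched in a few lines. The toy, single-layer illustration below is my simplification, not VisNet itself: the post-synaptic activity is low-pass filtered into a short-term memory trace, y_bar(t) = (1 - eta) * y(t) + eta * y_bar(t-1), and the weight update is associative with that trace, delta_w = alpha * y_bar(t) * x(t). The top-k competition and weight normalization used here are crude stand-ins for the competitive networks of the full four-stage model.

```python
# Toy sketch of a trace-rule layer (simplified; parameter names and the
# normalization scheme are assumptions).  Each object is shown as a short temporal
# sequence of its transforms; the trace y_bar links successive transforms of the
# same object, so output neurons learn transform-invariant responses.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 64, 16
W = rng.random((n_out, n_in))
W /= np.linalg.norm(W, axis=1, keepdims=True)      # keep weight vectors unit length

alpha, eta = 0.05, 0.8                             # learning rate and trace decay

def train_sequence(transforms):
    """Present the transforms of one object in temporal order (trace rule)."""
    global W
    y_bar = np.zeros(n_out)                        # short-term memory trace
    for x in transforms:                           # successive transforms of the same object
        y = W @ x
        y = np.where(y >= np.sort(y)[-4], y, 0.0)  # crude competition: keep top-4 units
        y_bar = (1.0 - eta) * y + eta * y_bar      # update the trace
        W += alpha * np.outer(y_bar, x)            # associative update with the trace
        W /= np.linalg.norm(W, axis=1, keepdims=True)

# Example: two "objects", each seen under five toy transforms (circular shifts).
for _ in range(20):                                # training epochs
    for obj in range(2):
        base = rng.random(n_in)
        transforms = [np.roll(base, shift) for shift in range(5)]
        train_sequence(transforms)
```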


2021
pp. 1-1
Author(s): Dawei Zhou, Nannan Wang, Chunlei Peng, Yi Yu, Xi Yang, ...
