Active efficient coding explains the development of binocular vision and its failure in amblyopia

2020 · Vol. 117 (11) · pp. 6156-6162
Author(s): Samuel Eckmann, Lukas Klimmasch, Bertram E. Shi, Jochen Triesch

The development of vision during the first months of life is an active process that comprises the learning of appropriate neural representations and the learning of accurate eye movements. While it has long been suspected that the two learning processes are coupled, there is still no widely accepted theoretical framework describing this joint development. Here, we propose a computational model of the development of active binocular vision to fill this gap. The model is based on a formulation of the active efficient coding theory, which proposes that eye movements as well as stimulus encoding are jointly adapted to maximize the overall coding efficiency. Under healthy conditions, the model self-calibrates to perform accurate vergence and accommodation eye movements. It exploits disparity cues to deduce the direction of defocus, which leads to coordinated vergence and accommodation responses. In a simulated anisometropic case, where the refraction power of the two eyes differs, an amblyopia-like state develops in which the foveal region of one eye is suppressed due to inputs from the other eye. After correcting for refractive errors, the model can only reach healthy performance levels if receptive fields are still plastic, in line with findings on a critical period for binocular vision development. Overall, our model offers a unifying conceptual framework for understanding the development of binocular vision.
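The joint adaptation of encoding and behavior described in this abstract can be made concrete with a small sketch. The snippet below is a minimal illustration and not the authors' implementation: it assumes a sparse-coding encoder for binocular patches and a simple action-value update whose reward is a coding-efficiency proxy (the negative reconstruction error). All names, constants, and the toy stimulus generator are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

PATCH = 8                      # patch edge length per eye
N_BASIS = 64                   # number of binocular basis functions
DICT = rng.standard_normal((2 * PATCH * PATCH, N_BASIS))
DICT /= np.linalg.norm(DICT, axis=0)

def encode(x, k=10):
    """Greedy sparse code: keep only the k strongest projections (illustrative)."""
    a = DICT.T @ x
    a[np.argsort(np.abs(a))[:-k]] = 0.0
    return a

def reconstruction_error(x, a):
    return float(np.sum((x - DICT @ a) ** 2))

def binocular_patch(disparity):
    """Hypothetical stimulus: the right-eye patch is the left-eye patch shifted by `disparity`."""
    left = rng.standard_normal((PATCH, PATCH))
    right = np.roll(left, disparity, axis=1)
    return np.concatenate([left.ravel(), right.ravel()])

ACTIONS = [-2, -1, 0, 1, 2]    # candidate vergence commands (changes in disparity)
q_values = np.zeros(len(ACTIONS))

disparity = 3
for step in range(2000):
    # Epsilon-greedy choice of a vergence command.
    i = rng.integers(len(ACTIONS)) if rng.random() < 0.1 else int(np.argmax(q_values))
    disparity = int(np.clip(disparity + ACTIONS[i], -3, 3))
    # Observe the resulting binocular input and encode it.
    x = binocular_patch(disparity)
    a = encode(x)
    # Reward: coding-efficiency proxy = negative reconstruction error.
    reward = -reconstruction_error(x, a)
    # A full model would condition the action values on the encoded state;
    # this tabular bandit only shows how the reward signal flows.
    q_values[i] += 0.05 * (reward - q_values[i])
    # Hebbian-like dictionary update toward the residual (toy learning rule).
    residual = x - DICT @ a
    DICT += 0.01 * np.outer(residual, a)
    DICT /= np.linalg.norm(DICT, axis=0)
```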

2019
Author(s): Samuel Eckmann, Lukas Klimmasch, Bertram E. Shi, Jochen Triesch

The development of vision during the first months of life is an active process that comprises the learning of appropriate neural representations and the learning of accurate eye movements. While it has long been suspected that the two learning processes are coupled, there is still no widely accepted theoretical framework describing this joint development. Here we propose a computational model of the development of active binocular vision to fill this gap. The model is based on a new formulation of the Active Efficient Coding theory, which proposes that eye movements, as well as stimulus encoding, are jointly adapted to maximize the overall coding efficiency. Under healthy conditions, the model self-calibrates to perform accurate vergence and accommodation eye movements. It exploits disparity cues to deduce the direction of defocus, which leads to coordinated vergence and accommodation responses. In a simulated anisometropic case, where the refraction power of the two eyes differs, an amblyopia-like state develops, in which the foveal region of one eye is suppressed due to inputs from the other eye. After correcting for refractive errors, the model can only reach healthy performance levels if receptive fields are still plastic, in line with findings on a critical period for binocular vision development. Overall, our model offers a unifying conceptual framework for understanding the development of binocular vision.

Significance Statement: Brains must operate in an energy-efficient manner. The efficient coding hypothesis states that sensory systems achieve this by adapting neural representations to the statistics of sensory input signals. Importantly, however, these statistics are shaped by the organism’s behavior and how it samples information from the environment. Therefore, optimal performance requires jointly optimizing neural representations and behavior, a theory called Active Efficient Coding. Here we test the plausibility of this theory by proposing a computational model of the development of binocular vision. The model explains the development of accurate binocular vision under healthy conditions. In the case of refractive errors, however, the model develops an amblyopia-like state and suggests conditions for successful treatment.
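The anisometropic condition discussed above can be simulated by degrading one eye's input. A common simplification, and the one assumed in this sketch (not necessarily the authors' exact procedure), is to approximate defocus with a Gaussian blur whose width grows with the refractive error; the linear diopters-to-sigma mapping and all names below are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def defocus(image, diopters, gain=1.5):
    """Approximate defocus blur with a Gaussian kernel.
    The mapping from refractive error (diopters) to blur width is a
    hypothetical linear one chosen for illustration only."""
    sigma = gain * abs(diopters)
    return gaussian_filter(image, sigma) if sigma > 0 else image

def anisometropic_pair(image, left_error=0.0, right_error=3.0):
    """Return a binocular input pair in which the right eye is out of focus."""
    return defocus(image, left_error), defocus(image, right_error)

# Example: a random test image; in a full model this would be the rendered scene.
rng = np.random.default_rng(1)
scene = rng.standard_normal((64, 64))
left, right = anisometropic_pair(scene)
```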


2017
Author(s): Lukas Klimmasch, Alexander Lelais, Alexander Lichtenstein, Bertram E. Shi, Jochen Triesch

We present a model for the autonomous learning of active binocular vision using a recently developed biomechanical model of the human oculomotor system. The model is formulated in the Active Efficient Coding (AEC) framework, a recent generalization of classic efficient coding theories to active perception. The model simultaneously learns how to efficiently encode binocular images and how to generate accurate vergence eye movements that facilitate efficient encoding of the visual input. In order to resolve the redundancy problem arising from the actuation of the eyes through antagonistic muscle pairs, we consider the metabolic costs associated with eye movements. We show that the model successfully learns to trade off vergence accuracy against the associated metabolic costs, producing high-fidelity vergence eye movements obeying Sherrington’s law of reciprocal innervation.
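The trade-off between vergence accuracy and metabolic cost described above can be written as a single scalar objective. The sketch below assumes the reward is the negative reconstruction error minus a weighted sum of squared activations of the antagonistic muscle pair; the squared-activation form and the weighting constant are illustrative assumptions, not the paper's exact cost.

```python
import numpy as np

def metabolic_cost(medial_rectus, lateral_rectus):
    """Hypothetical effort term: squared innervation summed over the antagonistic pair."""
    return medial_rectus ** 2 + lateral_rectus ** 2

def reward(reconstruction_error, medial_rectus, lateral_rectus, effort_weight=0.1):
    """Coding efficiency minus metabolic cost (illustrative formulation)."""
    return -reconstruction_error - effort_weight * metabolic_cost(medial_rectus, lateral_rectus)

# Sherrington's law of reciprocal innervation: when one muscle of the pair is
# driven more strongly, its antagonist should relax. Under the cost above, any
# co-contraction (both activations high) is penalized without changing eye
# position, so the learned solution favors reciprocal commands.
print(reward(0.5, medial_rectus=0.8, lateral_rectus=0.05))   # reciprocal innervation
print(reward(0.5, medial_rectus=0.8, lateral_rectus=0.75))   # co-contraction, lower reward
```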


2020
Author(s): Lukas Klimmasch, Johann Schneider, Alexander Lelais, Bertram E. Shi, Jochen Triesch

The development of binocular vision is an active learning process comprising the development of disparity-tuned neurons in visual cortex and the establishment of precise vergence control of the eyes. We present a computational model for the learning and self-calibration of active binocular vision based on the Active Efficient Coding framework, an extension of classic efficient coding ideas to active perception. Under normal rearing conditions, the model develops disparity-tuned neurons and precise vergence control, allowing it to correctly interpret random dot stereograms. Under altered rearing conditions modeled after neurophysiological experiments, the model qualitatively reproduces key experimental findings on changes in binocularity and disparity tuning. Furthermore, the model makes testable predictions regarding how altered rearing conditions impede the learning of precise vergence control. Finally, the model predicts a surprising new effect: impaired vergence control affects the statistics of orientation tuning in visual cortical neurons.
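Random dot stereograms of the kind used to probe such a model can be generated in a few lines. The sketch below produces a left/right pair in which a central square carries a horizontal disparity against a zero-disparity background; the parameter names and the refilling of the disoccluded strip are illustrative choices, not the paper's exact stimuli.

```python
import numpy as np

def random_dot_stereogram(size=64, patch=24, disparity=2, density=0.5, seed=0):
    """Left/right dot images whose central square is shifted by `disparity` pixels."""
    rng = np.random.default_rng(seed)
    left = (rng.random((size, size)) < density).astype(float)
    right = left.copy()
    lo, hi = (size - patch) // 2, (size + patch) // 2
    # Shift the central patch horizontally in the right eye's image.
    right[lo:hi, lo:hi] = np.roll(left[lo:hi, lo:hi], disparity, axis=1)
    # Refill the disoccluded strip with fresh random dots so no monocular cue remains.
    if disparity > 0:
        right[lo:hi, lo:lo + disparity] = (rng.random((patch, disparity)) < density).astype(float)
    return left, right

left, right = random_dot_stereogram(disparity=3)
```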


2021 · Vol. 118 (39) · pp. e2105115118
Author(s): Na Young Jun, Greg D. Field, John Pearson

Many sensory systems utilize parallel ON and OFF pathways that signal stimulus increments and decrements, respectively. These pathways consist of ensembles or grids of ON and OFF detectors spanning sensory space. Yet, encoding by opponent pathways raises a question: How should grids of ON and OFF detectors be arranged to optimally encode natural stimuli? We investigated this question using a model of the retina guided by efficient coding theory. Specifically, we optimized spatial receptive fields and contrast response functions to encode natural images given noise and constrained firing rates. We find that the optimal arrangement of ON and OFF receptive fields exhibits a transition between aligned and antialigned grids. The preferred phase depends on detector noise and the statistical structure of the natural stimuli. These results reveal that noise and stimulus statistics produce qualitative shifts in neural coding strategies and provide theoretical predictions for the configuration of opponent pathways in the nervous system.
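The optimization described above, receptive fields and rectifying response functions tuned to encode natural input under noise and a firing-rate budget, can be illustrated with a toy one-dimensional version. The sketch below scores an aligned and an antialigned arrangement of ON and OFF units with the same efficient-coding-style objective (linear decoding error plus a firing-rate penalty). It is a didactic stand-in for the paper's analysis, with arbitrary parameters and smoothed white noise in place of natural images, so it is not expected to reproduce the paper's quantitative results.

```python
import numpy as np

N_PIX, N_UNITS, N_SAMPLES = 40, 8, 4000

def gaussian_rf(center, width=2.0):
    x = np.arange(N_PIX)
    rf = np.exp(-0.5 * ((x - center) / width) ** 2)
    return rf / np.linalg.norm(rf)

def objective(offset, noise_sd=0.2, rate_weight=0.05):
    """Linear-decoding MSE plus a firing-rate cost for ON/OFF grids whose
    OFF centers are shifted by `offset` pixels relative to the ON grid."""
    rng = np.random.default_rng(1)            # same stimuli and noise for every offset
    spacing = N_PIX / (N_UNITS // 2)
    on_centers = spacing * (np.arange(N_UNITS // 2) + 0.5)
    off_centers = on_centers + offset
    rfs = np.array([gaussian_rf(c) for c in np.concatenate([on_centers, off_centers])])
    signs = np.array([1.0] * (N_UNITS // 2) + [-1.0] * (N_UNITS // 2))
    # Spatially correlated 1D "natural" stimuli: smoothed white noise.
    stim = rng.standard_normal((N_SAMPLES, N_PIX))
    kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
    stim = np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"), 1, stim)
    # Half-wave rectified responses with additive detector noise.
    drive = (stim @ rfs.T) * signs
    resp = np.maximum(drive, 0.0) + noise_sd * rng.standard_normal((N_SAMPLES, N_UNITS))
    # Optimal linear readout of the stimulus from the population response.
    readout, *_ = np.linalg.lstsq(resp, stim, rcond=None)
    mse = np.mean((stim - resp @ readout) ** 2)
    return mse + rate_weight * np.mean(np.maximum(drive, 0.0))

print("aligned grids    :", objective(offset=0.0))
print("antialigned grids:", objective(offset=5.0))   # OFF grid shifted by half a spacing
```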


eLife · 2021 · Vol. 10
Author(s): Lukas Klimmasch, Johann Schneider, Alexander Lelais, Maria Fronius, Bertram Emil Shi, ...

The development of binocular vision is an active learning process comprising the development of disparity-tuned neurons in visual cortex and the establishment of precise vergence control of the eyes. We present a computational model for the learning and self-calibration of active binocular vision based on the Active Efficient Coding framework, an extension of classic efficient coding ideas to active perception. Under normal rearing conditions with naturalistic input, the model develops disparity-tuned neurons and precise vergence control, allowing it to correctly interpret random dot stereograms. Under altered rearing conditions modeled after neurophysiological experiments, the model qualitatively reproduces key experimental findings on changes in binocularity and disparity tuning. Furthermore, the model makes testable predictions regarding how altered rearing conditions impede the learning of precise vergence control. Finally, the model predicts a surprising new effect: impaired vergence control affects the statistics of orientation tuning in visual cortical neurons.
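Changes in binocularity are typically quantified per neuron from its monocular response strengths. The sketch below computes a simple binocularity index in [-1, 1] (-1 purely left-driven, +1 purely right-driven, 0 balanced binocular); this particular formula is a common convention and an assumption here, not necessarily the exact measure used in the paper.

```python
import numpy as np

def binocularity_index(resp_left, resp_right, eps=1e-9):
    """(R_right - R_left) / (R_right + R_left): -1 = purely left-driven,
    +1 = purely right-driven, 0 = balanced binocular (illustrative convention)."""
    resp_left = np.asarray(resp_left, dtype=float)
    resp_right = np.asarray(resp_right, dtype=float)
    return (resp_right - resp_left) / (resp_right + resp_left + eps)

# Example: monocular responses of three model neurons.
print(binocularity_index([1.0, 0.2, 0.9], [1.0, 0.8, 0.1]))
# -> approximately [ 0.   0.6  -0.8 ]
```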


2022
Author(s): Divyansh Gupta, Wiktor Mlynarski, Olga Symonova, Jan Svaton, Maximilian Joesch

Visual systems have adapted to the structure of natural stimuli. In the retina, center-surround receptive fields (RFs) of retinal ganglion cells (RGCs) appear to efficiently encode natural sensory signals. Conventionally, it has been assumed that natural scenes are isotropic and homogeneous; thus, the RF properties are expected to be uniform across the visual field. However, natural scene statistics such as luminance and contrast are not uniform and vary significantly across elevation. Here, by combining theory and novel experimental approaches, we demonstrate that this inhomogeneity is exploited by RGC RFs across the entire retina to increase coding efficiency. We formulated three predictions derived from efficient coding theory: (i) optimal RFs should strengthen their surround from the dimmer ground to the brighter sky, (ii) RFs should simultaneously decrease their center size, and (iii) RFs centered at the horizon should have a marked surround asymmetry due to a stark contrast drop-off. To test these predictions, we developed a new method to image high-resolution RFs of thousands of RGCs in individual retinas. We found that the RF properties match the theoretical predictions and consistently change their shape from the dorsal to the ventral retina, with a distinct shift in the RF surround at the horizon. These effects are observed across RGC subtypes, which were thought to represent visual space homogeneously, indicating that functional retinal streams share common adaptations to visual scenes. Our work shows that RFs of mouse RGCs exploit the non-uniform, panoramic structure of natural scenes at a previously unappreciated scale to increase coding efficiency.
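Center-surround receptive fields of the kind measured here are commonly summarized with a difference-of-Gaussians model. The sketch below parameterizes such an RF and lets the surround weight and center size vary with elevation in the direction of the paper's predictions (stronger surround and smaller center toward the brighter, sky-viewing part of the retina); the specific functional form of that dependence, and all constants, are illustrative assumptions rather than fitted values from the paper.

```python
import numpy as np

def dog_rf(x, y, center_sd, surround_sd, surround_weight):
    """Difference-of-Gaussians receptive field evaluated on a pixel grid."""
    r2 = x ** 2 + y ** 2
    center = np.exp(-r2 / (2 * center_sd ** 2)) / (2 * np.pi * center_sd ** 2)
    surround = np.exp(-r2 / (2 * surround_sd ** 2)) / (2 * np.pi * surround_sd ** 2)
    return center - surround_weight * surround

def rf_at_elevation(elevation):
    """Hypothetical elevation dependence (elevation in [0, 1], 0 = ground, 1 = sky):
    the surround strengthens and the center shrinks toward the brighter sky."""
    center_sd = 3.0 - 1.0 * elevation           # smaller center toward the sky
    surround_weight = 0.4 + 0.4 * elevation     # stronger surround toward the sky
    y, x = np.mgrid[-15:16, -15:16].astype(float)
    return dog_rf(x, y, center_sd, surround_sd=3 * center_sd,
                  surround_weight=surround_weight)

ground_rf = rf_at_elevation(0.0)
sky_rf = rf_at_elevation(1.0)
```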


1991 · Vol. 6 (1) · pp. 3-13
Author(s): James T. McIlwain

This paper reviews evidence that the superior colliculus (SC) of the midbrain represents visual direction and certain aspects of saccadic eye movements in the distribution of activity across a population of cells. Accurate and precise eye movements appear to be mediated, in part at least, by cells of the SC that have large sensory receptive fields and/or discharge in association with a range of saccades. This implies that visual points or saccade targets are represented by patches rather than points of activity in the SC. Perturbation of the pattern of collicular discharge by focal inactivation modifies saccade amplitude and direction in a way consistent with distributed coding. Several models have been advanced to explain how such a code might be implemented in the colliculus. Evidence related to these hypotheses is examined and continuing uncertainties are identified.
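Distributed coding of saccade metrics by a population of broadly tuned collicular cells is often illustrated with a weighted-average (population-vector) readout: each active cell "votes" for its preferred saccade vector in proportion to its firing, and the votes are averaged. The sketch below implements that readout for a hypothetical patch of activity; it corresponds to the vector-averaging class of models the review discusses, not to any single model's exact formulation, and the map layout and tuning widths are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical map of preferred saccade vectors (amplitude in deg, direction in rad).
n_cells = 200
preferred_amp = rng.uniform(2.0, 30.0, n_cells)
preferred_dir = rng.uniform(-np.pi / 2, np.pi / 2, n_cells)
pref_x = preferred_amp * np.cos(preferred_dir)
pref_y = preferred_amp * np.sin(preferred_dir)

def population_activity(target, width=0.35):
    """Gaussian 'patch' of activity centered on the cells whose preferred
    vector is closest to the target (distances taken in Cartesian coordinates)."""
    tx, ty = target[0] * np.cos(target[1]), target[0] * np.sin(target[1])
    d2 = (pref_x - tx) ** 2 + (pref_y - ty) ** 2
    return np.exp(-d2 / (2 * (width * target[0]) ** 2))

def vector_average_readout(activity):
    """Weighted average of preferred vectors: the population's saccade estimate."""
    x = np.sum(activity * pref_x) / np.sum(activity)
    y = np.sum(activity * pref_y) / np.sum(activity)
    return np.hypot(x, y), np.arctan2(y, x)   # decoded amplitude and direction

target = (10.0, 0.3)                          # 10 deg amplitude, 0.3 rad up-right
amp, direction = vector_average_readout(population_activity(target))
# Focal inactivation can be mimicked by zeroing the activity of a subset of cells;
# the decoded vector then shifts away from their preferred vectors, as in the
# perturbation experiments the review describes.
```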

