Active Binocular Vision
Recently Published Documents

TOTAL DOCUMENTS: 15 (FIVE YEARS: 7)
H-INDEX: 3 (FIVE YEARS: 1)

eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Lukas Klimmasch ◽  
Johann Schneider ◽  
Alexander Lelais ◽  
Maria Fronius ◽  
Bertram Emil Shi ◽  
...  

The development of binocular vision is an active learning process comprising the development of disparity-tuned neurons in visual cortex and the establishment of precise vergence control of the eyes. We present a computational model for the learning and self-calibration of active binocular vision based on the Active Efficient Coding framework, an extension of classic efficient coding ideas to active perception. Under normal rearing conditions with naturalistic input, the model develops disparity-tuned neurons and precise vergence control, allowing it to correctly interpret random dot stereograms. Under altered rearing conditions modeled after neurophysiological experiments, the model qualitatively reproduces key experimental findings on changes in binocularity and disparity tuning. Furthermore, the model makes testable predictions regarding how altered rearing conditions impede the learning of precise vergence control. Finally, the model predicts a surprising new effect: impaired vergence control alters the statistics of orientation tuning in visual cortical neurons.
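The disparity readout in the model emerges from learned disparity-tuned neurons within the Active Efficient Coding framework; a much cruder, classic way to illustrate what such a readout computes is cross-correlating left and right image patches over candidate horizontal shifts. The sketch below is purely illustrative and is not the paper's method; the function name `estimate_disparity` and its parameters are our own.

```python
import numpy as np

def estimate_disparity(left, right, max_d=8):
    """Toy disparity readout: try horizontal shifts of the right patch
    and return the shift with the highest normalized correlation
    against the left patch (shifts wrap around via np.roll)."""
    best_d, best_score = 0, -np.inf
    for d in range(-max_d, max_d + 1):
        shifted = np.roll(right, d, axis=1)
        l = left - left.mean()
        s = shifted - shifted.mean()
        score = (l * s).sum() / (np.linalg.norm(l) * np.linalg.norm(s) + 1e-9)
        if score > best_score:
            best_d, best_score = d, score
    return best_d
```

A learned code, by contrast, represents many disparities simultaneously in a population of tuned units rather than committing to a single argmax.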


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5271
Author(s):  
Di Fan ◽  
Yanyang Liu ◽  
Xiaopeng Chen ◽  
Fei Meng ◽  
Xilong Liu ◽  
...  

Three-dimensional (3D) triangulation based on active binocular vision has a growing number of applications in computer vision and robotics. An active binocular vision system with non-fixed cameras needs to calibrate the stereo extrinsic parameters online to perform 3D triangulation, and the accuracy of the stereo extrinsic parameters and of the disparity has a significant impact on triangulation precision. To reduce this impact, we propose a novel eye-gaze-based 3D triangulation method that does not use the stereo extrinsic parameters directly. Instead, we use visual servoing to drive both cameras to gaze at a 3D spatial point P, so that P lies on each camera's optical axis. We can then obtain the 3D coordinates of P from the intersection of the two optical axes of the cameras. We have performed experiments on our robotic bionic eyes to compare against previous disparity-based work, the integrated two-pose calibration (ITPC) method. The experiments show that our method achieves results comparable to ITPC.
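In practice the two optical axes of a gazing system rarely intersect exactly, so the "intersection" is usually computed as the midpoint of the shortest segment between the two axes. A minimal sketch of that computation, assuming known optical centers and gaze directions (the function name `triangulate_gaze` is ours, not from the paper):

```python
import numpy as np

def triangulate_gaze(c1, d1, c2, d2):
    """Midpoint of the closest-approach segment between two optical axes.

    c1, c2: camera optical centers; d1, d2: gaze directions.
    Solves for the parameters t1, t2 minimizing |(c1 + t1*d1) - (c2 + t2*d2)|."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    r = c2 - c1
    p = d1 @ d2                # cosine of the angle between the axes
    q1, q2 = d1 @ r, d2 @ r
    denom = 1.0 - p * p        # goes to 0 when the axes are parallel
    t1 = (q1 - p * q2) / denom
    t2 = (p * q1 - q2) / denom
    return (c1 + t1 * d1 + c2 + t2 * d2) / 2
```

For axes that do intersect, the midpoint coincides with the intersection point; the residual segment length also gives a cheap fixation-quality measure.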


2020 ◽  
Vol 117 (11) ◽  
pp. 6156-6162
Author(s):  
Samuel Eckmann ◽  
Lukas Klimmasch ◽  
Bertram E. Shi ◽  
Jochen Triesch

The development of vision during the first months of life is an active process that comprises the learning of appropriate neural representations and the learning of accurate eye movements. While it has long been suspected that the two learning processes are coupled, there is still no widely accepted theoretical framework describing this joint development. Here, we propose a computational model of the development of active binocular vision to fill this gap. The model is based on a formulation of the Active Efficient Coding theory, which proposes that eye movements as well as stimulus encoding are jointly adapted to maximize the overall coding efficiency. Under healthy conditions, the model self-calibrates to perform accurate vergence and accommodation eye movements. It exploits disparity cues to deduce the direction of defocus, which leads to coordinated vergence and accommodation responses. In a simulated anisometropic case, where the refraction power of the two eyes differs, an amblyopia-like state develops in which the foveal region of one eye is suppressed due to inputs from the other eye. After correcting for refractive errors, the model can only reach healthy performance levels if receptive fields are still plastic, in line with findings on a critical period for binocular vision development. Overall, our model offers a unifying conceptual framework for understanding the development of binocular vision.


2020 ◽  
Author(s):  
Lukas Klimmasch ◽  
Johann Schneider ◽  
Alexander Lelais ◽  
Bertram E. Shi ◽  
Jochen Triesch

Abstract: The development of binocular vision is an active learning process comprising the development of disparity-tuned neurons in visual cortex and the establishment of precise vergence control of the eyes. We present a computational model for the learning and self-calibration of active binocular vision based on the Active Efficient Coding framework, an extension of classic efficient coding ideas to active perception. Under normal rearing conditions, the model develops disparity-tuned neurons and precise vergence control, allowing it to correctly interpret random dot stereograms. Under altered rearing conditions modeled after neurophysiological experiments, the model qualitatively reproduces key experimental findings on changes in binocularity and disparity tuning. Furthermore, the model makes testable predictions regarding how altered rearing conditions impede the learning of precise vergence control. Finally, the model predicts a surprising new effect: impaired vergence control alters the statistics of orientation tuning in visual cortical neurons.


2019 ◽  
Author(s):  
Samuel Eckmann ◽  
Lukas Klimmasch ◽  
Bertram E. Shi ◽  
Jochen Triesch

The development of vision during the first months of life is an active process that comprises the learning of appropriate neural representations and the learning of accurate eye movements. While it has long been suspected that the two learning processes are coupled, there is still no widely accepted theoretical framework describing this joint development. Here we propose a computational model of the development of active binocular vision to fill this gap. The model is based on a new formulation of the Active Efficient Coding theory, which proposes that eye movements, as well as stimulus encoding, are jointly adapted to maximize the overall coding efficiency. Under healthy conditions, the model self-calibrates to perform accurate vergence and accommodation eye movements. It exploits disparity cues to deduce the direction of defocus, which leads to coordinated vergence and accommodation responses. In a simulated anisometropic case, where the refraction power of the two eyes differs, an amblyopia-like state develops, in which the foveal region of one eye is suppressed due to inputs from the other eye. After correcting for refractive errors, the model can only reach healthy performance levels if receptive fields are still plastic, in line with findings on a critical period for binocular vision development. Overall, our model offers a unifying conceptual framework for understanding the development of binocular vision.

Significance Statement: Brains must operate in an energy-efficient manner. The efficient coding hypothesis states that sensory systems achieve this by adapting neural representations to the statistics of sensory input signals. Importantly, however, these statistics are shaped by the organism's behavior and how it samples information from the environment. Therefore, optimal performance requires jointly optimizing neural representations and behavior, a theory called Active Efficient Coding. Here we test the plausibility of this theory by proposing a computational model of the development of binocular vision. The model explains the development of accurate binocular vision under healthy conditions. In the case of refractive errors, however, the model develops an amblyopia-like state and suggests conditions for successful treatment.


2017 ◽  
Author(s):  
Lukas Klimmasch ◽  
Alexander Lelais ◽  
Alexander Lichtenstein ◽  
Bertram E. Shi ◽  
Jochen Triesch

Abstract: We present a model for the autonomous learning of active binocular vision using a recently developed biomechanical model of the human oculomotor system. The model is formulated in the Active Efficient Coding (AEC) framework, a recent generalization of classic efficient coding theories to active perception. The model simultaneously learns how to efficiently encode binocular images and how to generate accurate vergence eye movements that facilitate efficient encoding of the visual input. To resolve the redundancy problem arising from the actuation of the eyes through antagonistic muscle pairs, we consider the metabolic costs associated with eye movements. We show that the model successfully learns to trade off vergence accuracy against the associated metabolic costs, producing high-fidelity vergence eye movements obeying Sherrington's law of reciprocal innervation.
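The redundancy problem can be illustrated with a toy objective: an antagonistic muscle pair can produce any net rotation at infinitely many co-contraction levels, but adding a metabolic cost on total activation selects the solution without co-contraction, in the spirit of Sherrington's law. The sketch below uses a brute-force grid search and invented names (`muscle_commands`, `cost_weight`); it is not the paper's learning model, which discovers this trade-off through reinforcement learning.

```python
import numpy as np

def muscle_commands(target_rotation, cost_weight=0.1):
    """Toy redundancy resolution for an antagonistic muscle pair.

    Net rotation ~ (agonist - antagonist); the metabolic cost penalizes
    total activation (agonist + antagonist), so the optimal solution
    avoids wasteful co-contraction."""
    acts = np.linspace(0.0, 1.0, 101)   # non-negative activation grid
    best = None
    for a in acts:          # agonist activation
        for b in acts:      # antagonist activation
            err = (a - b - target_rotation) ** 2
            cost = cost_weight * (a + b)
            obj = err + cost
            if best is None or obj < best[0]:
                best = (obj, a, b)
    return best[1], best[2]
```

With the cost term active, the antagonist activation drops to zero and the agonist alone produces (slightly less than) the target rotation; with `cost_weight=0`, any pair with the right difference would be equally good.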

