Non-uniform weighting of local motion inputs underlies dendritic computation in the fly visual system

2018 ◽  
Vol 8 (1) ◽  
Author(s):  
Ohad Dan ◽  
Elizabeth Hopp ◽  
Alexander Borst ◽  
Idan Segev
1989 ◽  
Vol 1 (1) ◽  
pp. 92-103 ◽  
Author(s):  
H. Taichi Wang ◽  
Bimal Mathur ◽  
Christof Koch

Computing motion on the basis of the time-varying image intensity is a difficult problem for both artificial and biological vision systems. We show how gradient models, a well-known class of motion algorithms, can be implemented within the magnocellular pathway of the primate's visual system. Our cooperative algorithm computes optical flow in two steps. In the first stage, assumed to be located in primary visual cortex, local motion is measured while spatial integration occurs in the second stage, assumed to be located in the middle temporal area (MT). The final optical flow is extracted in this second stage using population coding, such that the velocity is represented by the vector sum of neurons coding for motion in different directions. Our theory, relating the single-cell to the perceptual level, accounts for a number of psychophysical and electrophysiological observations and illusions.
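The population-coding readout described in this abstract (velocity represented by the vector sum of direction-tuned neurons) can be sketched as follows. The cosine tuning curve and the number of model MT neurons are illustrative assumptions for the demo, not parameters taken from the paper.

```python
import numpy as np

def decode_population(direction_rad, speed, n_neurons=8):
    """Decode motion direction from a vector sum of direction-tuned neurons."""
    preferred = np.linspace(0.0, 2.0 * np.pi, n_neurons, endpoint=False)
    # Cosine tuning, half-wave rectified so firing rates stay non-negative.
    rates = speed * np.clip(np.cos(preferred - direction_rad), 0.0, None)
    # Velocity = vector sum of preferred-direction vectors weighted by rate.
    vx = np.sum(rates * np.cos(preferred))
    vy = np.sum(rates * np.sin(preferred))
    return np.arctan2(vy, vx)

decoded = decode_population(np.deg2rad(45.0), speed=1.0)
```

With tuning symmetric about the true direction, the vector sum recovers that direction exactly; with asymmetric sampling or noise the readout degrades gracefully, which is part of the appeal of population codes.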


1989 ◽  
Vol 146 (1) ◽  
pp. 115-139 ◽  
Author(s):  
C. Koch ◽  
H. T. Wang ◽  
B. Mathur

Computing motion on the basis of the time-varying image intensity is a difficult problem for both artificial and biological vision systems. We will show how one well-known gradient-based computer algorithm for estimating visual motion can be implemented within the primate's visual system. This relaxation algorithm computes the optical flow field by minimizing a variational functional of a form commonly encountered in early vision, and is performed in two steps. In the first stage, local motion is computed, while in the second stage spatial integration occurs. Neurons in the second stage represent the optical flow field via a population-coding scheme, such that the vector sum of all neurons at each location codes for the direction and magnitude of the velocity at that location. The resulting network maps onto the magnocellular pathway of the primate visual system, in particular onto cells in the primary visual cortex (V1) as well as onto cells in the middle temporal area (MT). Our algorithm mimics a number of psychophysical phenomena and illusions (perception of coherent plaids, motion capture, motion coherence) as well as electrophysiological recordings. Thus, a single unifying principle ‘the final optical flow should be as smooth as possible’ (except at isolated motion discontinuities) explains a large number of phenomena and links single-cell behavior with perception and computational theory.
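The smoothness principle quoted in this abstract is the one made famous by Horn and Schunck (1981), and the two-stage relaxation it describes can be sketched with the standard iterative update below. The regularization weight `alpha`, the grid, and the iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

def horn_schunck(Ix, Iy, It, alpha=1.0, n_iter=100):
    """Relax toward the flow minimizing data error + alpha^2 * smoothness."""
    u = np.zeros_like(Ix)
    v = np.zeros_like(Ix)
    for _ in range(n_iter):
        # Neighborhood average of the current flow (periodic boundaries).
        u_bar = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                        + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        v_bar = 0.25 * (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                        + np.roll(v, 1, 1) + np.roll(v, -1, 1))
        # Project the averaged flow back onto the local brightness-constancy
        # constraint Ix*u + Iy*v + It = 0.
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

# Uniform horizontal gradient translating right at 0.5 px/frame:
Ix = np.ones((8, 8))
Iy = np.zeros((8, 8))
It = -0.5 * np.ones((8, 8))
u, v = horn_schunck(Ix, Iy, It)
```

The two stages of the abstract map directly onto this sketch: the data term plays the role of local (V1-like) motion measurement, and the neighborhood averaging plays the role of spatial (MT-like) integration.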


Vision ◽  
2019 ◽  
Vol 3 (4) ◽  
pp. 64 ◽  
Author(s):  
Martin Lages ◽  
Suzanne Heron

Like many predators, humans have forward-facing eyes that are set a short distance apart so that an extensive region of the visual field is seen from two different points of view. The human visual system can establish a three-dimensional (3D) percept from the projection of images into the left and right eye. How the visual system integrates local motion and binocular depth in order to accomplish 3D motion perception is still under investigation. Here, we propose a geometric-statistical model that combines noisy velocity constraints with a spherical motion prior to solve the aperture problem in 3D. In two psychophysical experiments, it is shown that instantiations of this model can explain how human observers disambiguate 3D line motion direction behind a circular aperture. We discuss the implications of our results for the processing of motion and dynamic depth in the visual system.
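A reduced 2-D analogue of the geometric-statistical idea in this abstract (the paper itself works in 3-D with a spherical motion prior) combines a Gaussian likelihood along the aperture constraint line n·v = c with a zero-mean Gaussian "slow motion" prior; the MAP velocity then solves a small linear system. The noise parameters below are illustrative assumptions.

```python
import numpy as np

def map_velocity(normal, normal_speed, sigma_like=0.1, sigma_prior=1.0):
    """MAP velocity for one aperture constraint plus a slow-motion prior."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    # Normal equations of the quadratic objective
    # (n.v - c)^2 / sigma_like^2 + |v|^2 / sigma_prior^2.
    A = np.outer(n, n) / sigma_like**2 + np.eye(2) / sigma_prior**2
    b = n * normal_speed / sigma_like**2
    return np.linalg.solve(A, b)

# The prior shrinks the estimate toward zero along the grating normal:
v_map = map_velocity([1.0, 0.0], normal_speed=1.0)
```

For a single constraint the estimate lies along the normal, scaled by 1/(1 + sigma_like²/sigma_prior²), which is the familiar Bayesian shrinkage toward slow motion.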


2012 ◽  
Vol 23 (12) ◽  
pp. 1534-1541 ◽  
Author(s):  
Zhicheng Lin ◽  
Sheng He

The visual system is intelligent—it is capable of recovering a coherent surface from an incomplete one, a feat known as perceptual completion or filling in. Traditionally, it has been assumed that surface features are interpolated in a way that resembles the fragmented parts. Using displays featuring four circular apertures, we showed in the study reported here that a distinct completed feature (horizontal motion) arises from local ones (oblique motions)—we term this process emergent filling in. Adaptation to emergent filling-in motion generated a dynamic motion aftereffect that was not due to spreading of local motion from the isolated apertures. The filling-in motion aftereffect occurred in both modal and amodal completions, and it was modulated by selective attention. These findings highlight the importance of high-level interpolation processes in filling in and are consistent with the idea that during emergent filling in, the more cognitive-symbolic processes in later areas (e.g., the middle temporal visual area and the lateral occipital complex) provide important feedback signals to guide more isomorphic processes in earlier areas (V1 and V2).


Author(s):  
Max R. Dürsteler ◽  
Erika N. Lorincz

When we fixate the center of a rotating three-dimensional structure, such as a physically rotating wheel made out of sectors whose stereo cues are encoded by a static random-dot “texture,” a rather striking global motion illusion occurs: the rotating three-dimensional wheel appears to stand still (stereo rotation standstill). Even when using a dynamic (flickering) random-dot texture, it is still impossible to gain a percept of smooth rotation. However, local motion can still be clearly perceived. When the random-dot texture “overlaying” the wheel is also rotating, the concealed wheel is perceived as rotating at the same velocity as the texture, regardless of its own velocity (stereo rotation capture). Stereo complex motion standstill and capture are shown to occur for other categories of complex motion, such as expanding, contracting, and spiraling motions, thus providing evidence for a dominance of luminance inputs over stereo inputs to complex motion detectors in our visual system.


2008 ◽  
Vol 276 (1658) ◽  
pp. 861-869 ◽  
Author(s):  
Peter Neri

The human visual system is remarkably sensitive to stimuli conveying actions, for example the fighting action between two agents. A central unresolved question is whether each agent is processed as a whole in one stage, or as subparts (e.g. limbs) that are assembled into an agent at a later stage. We measured the perceptual impact of perturbing an agent either by scrambling individual limbs while leaving the relationship between limbs unaffected or conversely by scrambling the relationship between limbs while leaving individual limbs unaffected. Our measurements differed for the two conditions, providing conclusive evidence against a one-stage model. The results were instead consistent with a two-stage processing pathway: an early bottom-up stage where local motion signals are integrated to reconstruct individual limbs (arms and legs), and a subsequent top-down stage where limbs are combined to represent whole agents.


2019 ◽  
Vol 121 (5) ◽  
pp. 1787-1797 ◽  
Author(s):  
David Souto ◽  
Jayesha Chudasama ◽  
Dirk Kerzel ◽  
Alan Johnston

Smooth pursuit eye movements (pursuit) are used to minimize the retinal motion of moving objects. During pursuit, the pattern of motion on the retina carries not only information about the object movement but also reafferent information about the eye movement itself. The latter arises from the retinal flow of the stationary world in the direction opposite to the eye movement. To extract the global direction of motion of the tracked object and stationary world, the visual system needs to integrate ambiguous local motion measurements (i.e., the aperture problem). Unlike the tracked object, the stationary world’s global motion is entirely determined by the eye movement and thus can be approximately derived from motor commands sent to the eye (i.e., from an efference copy). Because retinal motion opposite to the eye movement is dominant during pursuit, different motion integration mechanisms might be used for retinal motion in the same direction and opposite to pursuit. To investigate motion integration during pursuit, we tested direction discrimination of a brief change in global object motion. The global motion stimulus was a circular array of small static apertures within which one-dimensional gratings moved. We found increased coherence thresholds and a qualitatively different reflexive ocular tracking for global motion opposite to pursuit. Both effects suggest reduced sampling of motion opposite to pursuit, which results in an impaired ability to extract coherence in motion signals in the reafferent direction. We suggest that anisotropic motion integration is an adaptation to asymmetric retinal motion patterns experienced during pursuit eye movements. NEW & NOTEWORTHY This study provides a new understanding of how the visual system achieves coherent perception of an object’s motion while the eyes themselves are moving. The visual system integrates local motion measurements to create a coherent percept of object motion. 
An analysis of perceptual judgments and reflexive eye movements to a brief change in an object’s global motion confirms that the visual and oculomotor systems pick fewer samples to extract global motion opposite to the eye movement.
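The aperture problem posed by this stimulus (1-D gratings drifting behind small static apertures) can be sketched as an intersection-of-constraints computation: each grating specifies only its normal velocity component, n_i·v = c_i, and two or more orientations pin down the global velocity as the least-squares intersection of the constraint lines. The stimulus values below are made up for illustration.

```python
import numpy as np

def intersection_of_constraints(normals, normal_speeds):
    """Least-squares global velocity from local normal-component constraints."""
    N = np.asarray(normals, dtype=float)
    N = N / np.linalg.norm(N, axis=1, keepdims=True)  # unit normals
    c = np.asarray(normal_speeds, dtype=float)
    v, *_ = np.linalg.lstsq(N, c, rcond=None)
    return v

# Three gratings consistent with a single global velocity of (2, 1):
v_global = intersection_of_constraints(
    [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
    [2.0, 1.0, 3.0 / np.sqrt(2.0)])
```

The study's anisotropy finding amounts to saying that during pursuit the visual system feeds fewer reliable local samples into this kind of pooling for motion opposite to the eye movement.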


2005 ◽  
Vol 93 (4) ◽  
pp. 2240-2253 ◽  
Author(s):  
Holger G. Krapp ◽  
Fabrizio Gabbiani

The lobula giant movement detector (LGMD) in the locust visual system and its target neuron, the descending contralateral movement detector (DCMD), respond to approaching objects looming on a collision course with the animal. They thus provide a good model to study the cellular and network mechanisms underlying the sensitivity to this specific class of behaviorally relevant stimuli. We determined over an entire locust eye the density distribution of optical axes describing the spatial organization of local inputs to the visual system and compared it with the sensitivity distribution of the LGMD/DCMD to local motion stimuli. The density of optical axes peaks in the equatorial region of the frontal eye. Local motion sensitivity, however, peaks in the equatorial region of the caudolateral visual field and only correlates positively with the dorso-ventral density of optical axes. On local stimulation, both the velocity tuning and the response latency of the LGMD/DCMD depend on stimulus position within the visual field. Spatial and temporal integration experiments in which several local motion stimuli were activated either simultaneously or at fixed delays reveal that the LGMD processes local motion in a strongly sublinear way. Thus the neuron's integration properties seem to depend on several factors including its dendritic morphology, the local characteristics of afferent fiber inputs, and inhibition mediated by different pathways or by voltage-gated conductances. Our study shows that the selectivity of this looming sensitive neuron to approaching objects relies on more complex biophysical mechanisms than previously thought.
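"Strongly sublinear" spatial integration, as measured here, means that the response to several simultaneous local motion stimuli falls short of the sum of the responses to each stimulus alone. A compressive square-root output stage is one simple way to produce that signature; it is a toy stand-in for illustration, not a model of the LGMD's actual dendritic and inhibitory mechanisms.

```python
import numpy as np

def compressive_response(local_drives):
    """Toy output nonlinearity: compressive (square-root) summation."""
    return float(np.sqrt(np.sum(local_drives)))

alone = compressive_response([4.0]) + compressive_response([4.0])  # stimuli presented separately
together = compressive_response([4.0, 4.0])                        # same stimuli presented together
# together < alone: the joint response is sublinear in its inputs.
```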


2020 ◽  
Author(s):  
Samson Chengetanai ◽  
Adhil Bhagwandin ◽  
Mads F. Bertelsen ◽  
Therese Hård ◽  
Patrick R. Hof ◽  
...  
