A Computational Model of the Perceived Velocity of Moving Plaids

Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 87-87
Author(s):  
I Lamouret ◽  
V Cornilleau-Pérès ◽  
J Droulez

Local motion detection mechanisms generally lead to one component of the optic flow becoming indeterminate. One way to solve this ‘aperture problem’ is to compute the optic flow which minimises some smoothing constraint. With iterative schemes the computed velocity array is suboptimal relative to the constraint until the process has converged. Under the original assumption that the iteration rate is sufficiently low to allow the perception of suboptimal flows at short stimulus durations, iterative gradient models give an accurate description of biases in the perception of tilted line velocity. We examine whether this approach can be applied to moving sinusoidal plaids. Our simulations are in agreement with a number of psychophysical results on both speed and direction perception. In particular we show that the effect of stimulus duration on the perceived direction of type II plaids [Yo and Wilson, 1992 Vision Research 32(1)] can be accounted for without recourse to second-order mechanisms. The effects of contrast and component directions on the evolution rate of this bias are well reproduced. The model also successfully describes the effect of spatial frequency, and data obtained with gratings. These results suggest that iterative gradient schemes can model the dynamics of interactions between local velocity detectors, as revealed by psychophysical experiments with lines and plaids.
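The iterative gradient idea can be illustrated with a toy sketch: gradient descent on the squared component-constraint error for a plaid. The normals, normal speeds, and iteration rate below are made-up demo values (not the authors' parameters), chosen so the intersection-of-constraints (IOC) velocity falls outside the arc spanned by the component normals, i.e. a type II configuration; early iterates point near the vector average of the component normal flows, reproducing the direction bias at short durations.

```python
import numpy as np

# Hypothetical type II plaid: unit normals n1, n2 and normal-flow
# speeds s1, s2 (demo values only).
n1, s1 = np.array([np.cos(0.2), np.sin(0.2)]), 1.0
n2, s2 = np.array([np.cos(0.9), np.sin(0.9)]), 2.0

v = np.zeros(2)      # velocity estimate
rate = 0.1           # slow iteration rate -> perceivable suboptimal flows
directions = []
for t in range(200):
    # gradient step on the constraint error  sum_i (n_i . v - s_i)^2
    grad = (n1 @ v - s1) * n1 + (n2 @ v - s2) * n2
    v = v - rate * grad
    directions.append(np.arctan2(v[1], v[0]))

# Early iterates point along the vector average of the component
# normal flows; only at convergence does v reach the IOC solution.
v_ioc = np.linalg.solve(np.stack([n1, n2]), np.array([s1, s2]))
```

With these values the first iterate points at roughly 0.67 rad while the IOC direction is roughly 1.29 rad, so the perceived direction drifts toward the true one as iterations (stimulus duration) accumulate.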

2001 ◽  
Vol 85 (2) ◽  
pp. 724-734 ◽  
Author(s):  
Holger G. Krapp ◽  
Roland Hengstenberg ◽  
Martin Egelhaaf

Integrating binocular motion information tunes wide-field direction-selective neurons in the fly optic lobe to respond preferentially to specific optic flow fields. This is shown by measuring the local preferred directions (LPDs) and local motion sensitivities (LMSs) at many positions within the receptive fields of three types of anatomically identifiable lobula plate tangential neurons: the three horizontal system (HS) neurons, the two centrifugal horizontal (CH) neurons, and three heterolateral connecting elements. The latter impart to two of the HS and to both CH neurons a sensitivity to motion from the contralateral visual field. Thus in two HS neurons and both CH neurons, the response field comprises part of the ipsi- and contralateral visual hemispheres. The distributions of LPDs within the binocular response fields of each neuron show marked similarities to the optic flow fields created by particular types of self-movements of the fly. Based on the characteristic distributions of local preferred directions and motion sensitivities within the response fields, the functional role of the respective neurons in the context of behaviorally relevant processing of visual wide-field motion is discussed.
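The comparison between measured local preferred directions and self-motion flow fields rests on the standard spherical motion-field equations: rotation induces distance-independent tangential flow, translation induces distance-scaled flow radiating from the heading direction. A minimal sketch of those equations (our own illustration, not the authors' analysis code):

```python
import numpy as np

def rotational_flow(d, omega):
    """Image motion at unit viewing direction d induced by self-rotation
    omega: purely tangential and independent of scene distance."""
    return -np.cross(omega, d)

def translational_flow(d, t, dist=1.0):
    """Image motion at unit viewing direction d induced by self-translation
    t: the component of -t orthogonal to d, scaled by inverse distance."""
    return -(t - (t @ d) * d) / dist

# e.g. local flow during yaw rotation vs forward translation
d = np.array([np.sin(0.5), 0.0, np.cos(0.5)])          # viewing direction
yaw = rotational_flow(d, np.array([0.0, 1.0, 0.0]))    # yaw rotation
fwd = translational_flow(d, np.array([0.0, 0.0, 1.0])) # forward translation
```

Both flow vectors are tangent to the viewing sphere, and translational flow vanishes at the heading direction (the focus of expansion), which is what makes the distributions of local preferred directions diagnostic of particular self-movements.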


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 303-303
Author(s):  
K A Owen ◽  
J E Raymond ◽  
P Thompson

Local motion signals can be pooled to detect the direction of coherent motion in random-dot kinematograms (RDKs) having a high proportion of random noise. Noise type does not appear to affect direction discrimination in such displays (Scase et al, 1996 Vision Research 36 2579 – 2586). We have observed that RDKs with low coherence yet an obvious global direction appear to move slower than similar RDKs with high coherence. Using judgements of relative speed between RDKs containing different proportions of noise in a 2AFC paradigm we have quantified this effect and sought to determine if the type of noise influences perceived velocity. Levels of coherence in all dot patterns were well above the thresholds for directional judgements. Dots were assigned as ‘Noise vs Signal’ randomly on each frame of the RDK. Noise dots were either of type ‘random position’ or of type ‘random walk’. Position noise dots were randomly repositioned within the area of the display on each frame and had an isotropic distribution of directions and variable speeds. Random-walk dots moved at the same speed on successive frames (their displacement matched to that of the signal dots) but in a randomly chosen direction. The two noise types yielded statistically different results. In RDKs containing random-walk noise, decreasing the coherence of the display (30% signal, 70% noise) reduced perceived velocity (on average to 0.75 of the actual velocity), while increasing the coherence of the display increased perceived velocity until at high coherence levels (80% signal, 20% noise) the perceived velocity approximated the veridical velocity (on average 0.96). The proportion of position noise in a display had no effect on perceived velocity. These basic results are discussed in relation to current models of motion detectors and velocity perception.
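The two noise types can be made concrete with a per-frame dot update. This is a sketch under our own assumptions (rightward signal direction, arbitrary display size and dot count; not the authors' stimulus code), but it follows the rules stated above: dots are reassigned as signal or noise on every frame, random-walk dots match the signal displacement in magnitude only, and random-position dots are simply relocated.

```python
import numpy as np

rng = np.random.default_rng(1)

def step_rdk(pos, coherence, speed, extent, noise="random_walk"):
    """One frame update for an (n, 2) array of dot positions.
    Dots are reassigned as signal vs noise on each frame."""
    n = len(pos)
    signal = rng.random(n) < coherence
    new = pos.copy()
    # signal dots: common direction (rightward here) at fixed speed
    new[signal] += np.array([speed, 0.0])
    noise_idx = ~signal
    if noise == "random_walk":
        # same displacement magnitude as signal dots, random direction
        theta = rng.uniform(0.0, 2 * np.pi, noise_idx.sum())
        new[noise_idx] += speed * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    else:  # "random_position": relocate anywhere in the display,
        # giving an isotropic direction distribution and variable speeds
        new[noise_idx] = rng.uniform(0.0, extent, (noise_idx.sum(), 2))
    return new

pos = rng.uniform(0.0, 100.0, (200, 2))
pos2 = step_rdk(pos, coherence=0.3, speed=1.0, extent=100.0)
```

Note that in the random-walk display every dot moves the same distance per frame, so any slowing of perceived velocity at low coherence cannot be attributed to a speed artefact in the stimulus itself.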


2004 ◽  
Vol 4 (8) ◽  
pp. 609-609
Author(s):  
J. Duijnhouwer ◽  
J. A. Beintema ◽  
R. J. A. Wezel ◽  
A. V. Berg

Vision ◽  
2019 ◽  
Vol 3 (4) ◽  
pp. 64
Author(s):  
Martin Lages ◽  
Suzanne Heron

Like many predators, humans have forward-facing eyes that are set a short distance apart so that an extensive region of the visual field is seen from two different points of view. The human visual system can establish a three-dimensional (3D) percept from the projection of images into the left and right eye. How the visual system integrates local motion and binocular depth in order to accomplish 3D motion perception is still under investigation. Here, we propose a geometric-statistical model that combines noisy velocity constraints with a spherical motion prior to solve the aperture problem in 3D. In two psychophysical experiments, it is shown that instantiations of this model can explain how human observers disambiguate 3D line motion direction behind a circular aperture. We discuss the implications of our results for the processing of motion and dynamic depth in the visual system.
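The paper's model is geometric-statistical in 3D with a spherical motion prior; as a 2D illustration of the same principle (a noisy aperture constraint combined with a prior via MAP estimation, with hypothetical noise parameters of our choosing, not the authors' fit), one can write:

```python
import numpy as np

def map_velocity(n, s, sigma_obs, sigma_prior):
    """MAP velocity for a single aperture constraint n.v = s (unit n)
    observed with Gaussian noise, under a zero-mean Gaussian prior.
    Minimises (n.v - s)^2 / sigma_obs^2 + |v|^2 / sigma_prior^2."""
    A = np.outer(n, n) / sigma_obs**2 + np.eye(2) / sigma_prior**2
    b = s * n / sigma_obs**2
    return np.linalg.solve(A, b)

n = np.array([1.0, 0.0])   # edge normal behind the aperture
v = map_velocity(n, s=2.0, sigma_obs=0.1, sigma_prior=1.0)
# v lies along the normal, shrunk toward zero by the prior
```

The constraint alone leaves one velocity component free; the prior selects a unique estimate (here along the normal and slightly slower than the measured normal speed), which is how a prior disambiguates line motion behind an aperture.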


i-Perception ◽  
2017 ◽  
Vol 8 (3) ◽  
pp. 204166951770820 ◽  
Author(s):  
Diederick C. Niehorster ◽  
Li Li

How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., flow parsing gain) in various scenarios to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments; and that increasing self-motion and object motion speed did not alter flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion or object motion speeds. These results can be used to inform and validate computational models of flow parsing.
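The flow parsing account described above amounts to subtracting a fraction (the gain) of the local self-motion flow from the object's retinal motion; with gain below unity the subtraction is incomplete, and the nulling motion needed to cancel the residual percept measures the gain. A minimal sketch of that arithmetic (illustrative values, not the authors' data):

```python
import numpy as np

def perceived_scene_motion(retinal_obj, flow_at_obj, gain):
    """Flow-parsing account: subtract a fraction (`gain`) of the
    self-motion flow at the object's location from the object's
    retinal motion to recover scene-relative motion."""
    return retinal_obj - gain * flow_at_obj

# A physically stationary object has retinal motion equal to the local
# self-motion flow; with sub-unity gain a residual percept remains.
flow = np.array([2.0, 0.0])   # deg/s self-motion flow at the object (demo value)
gain = 0.7                    # hypothetical sub-unity flow parsing gain
residual = perceived_scene_motion(flow, flow, gain)   # (1 - gain) * flow
```

The constancy of the gain across speeds reported above means this single scalar, rather than a speed-dependent function, summarises the incompleteness of the subtraction.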


2006 ◽  
Vol 23 (1) ◽  
pp. 115-126 ◽  
Author(s):  
IAN R. WINSHIP ◽  
DOUGLAS R.W. WYLIE

Neurons sensitive to optic flow patterns have been recorded in the olivo-vestibulocerebellar pathway and extrastriate visual cortical areas in vertebrates, and in the visual neuropile of invertebrates. The complex spike activity (CSA) of Purkinje cells in the vestibulocerebellum (VbC) responds best to patterns of optic flow that result from either self-rotation or self-translation. Previous studies have suggested that these neurons have a receptive-field (RF) structure that “approximates” the preferred optic flowfield with a “bipartite” organization. Contrasting this, studies in invertebrate species indicate that optic flow sensitive neurons are precisely tuned to their preferred flowfield, such that the local motion sensitivities and local preferred directions within their RFs precisely match the local motion in that region of the preferred flowfield. In this study, CSA in the VbC of pigeons was recorded in response to a set of complex computer-generated optic flow stimuli, similar to those used in previous studies of optic flow neurons in primate extrastriate visual cortex, to test whether the receptive field was of a precise or bipartite organization. We found that these RFs were not precisely tuned to optic flow patterns. Rather, we conclude that these neurons have a bipartite RF structure that approximates the preferred optic flowfield by pooling motion subunits of only a few different direction preferences.


2008 ◽  
Author(s):  
Alessandro Becciu ◽  
Hans C. van Assen ◽  
Luc Florack ◽  
Bart J. Janssen ◽  
Bart ter Haar Romeny

Heart disease impairs the functioning of the cardiac muscle and is among the leading causes of death worldwide. Optic flow methods are essential tools to assess and quantify the contraction of the cardiac walls, but are hampered by the aperture problem. Harmonic phase (HARP) techniques measure the phase in magnetic resonance (MR) tagged images. Due to the regular geometry, patterns generated by a combination of HARPs and sine HARPs represent a suitable framework to extract landmark features. In this paper we introduce a new aperture-problem-free method to study cardiac motion by tracking multi-scale features such as maxima, minima, saddles and corners on HARP and sine HARP tagged images.
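The core of HARP extraction is demodulation at the tag frequency followed by low-pass filtering: the phase of the isolated harmonic moves with the tissue and so provides trackable landmarks. A 1D sketch with made-up tag frequency and displacement (not the paper's multi-scale feature tracker):

```python
import numpy as np

# 1D HARP sketch: a tagged signal is a cosine at tag frequency k,
# shifted by the tissue displacement u (demo values below).
k, u = 2.0, 0.3
x = np.arange(0.0, 200.0, 0.1)
signal = np.cos(k * (x - u))            # tissue displaced by u

# Demodulate at the tag frequency, then low-pass to suppress the
# counter-rotating term and isolate the harmonic peak.
analytic = signal * np.exp(-1j * k * x)
kernel = np.hanning(61)
kernel /= kernel.sum()
smooth = np.convolve(analytic, kernel, mode="same")
phase = np.angle(smooth)                # approx. -k * u away from edges
```

Because the harmonic phase equals minus the tag frequency times the local displacement, tracking its isophase features (and, in 2D, extrema and saddles of HARP and sine HARP images) recovers motion without the aperture ambiguity of raw intensity flow.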


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 84-84
Author(s):  
W H A Beaudot

A neuromorphic model of the retino-cortical motion processing stream is proposed which incorporates both feedforward and feedback mechanisms. The feedforward stream consists of motion integration from the retina to the MT area. Retinal spatiotemporal filtering provides X-like and Y-like visual inputs with band-pass characteristics to the V1 area (Beaudot, 1996 Perception 25 Supplement, 30 – 31). V1 direction-selective cells respond to local motion resulting from nonlinear interactions between retinal inputs. MT direction-selective cells respond to global motion resulting from spatial convergence and temporal integration of V1 signals. This feedforward stream provides a fine representation of local motion in V1 and a coarse representation of global motion in MT. However, it is unable to deal with the aperture problem. Solving this problem requires the addition of local constraints related to both smoothness and discontinuity of coherent motion, as well as some minimisation techniques to obtain the optimal solution. We propose a plausible neural substrate for this computation by incorporating excitatory intracortical feedback in V1 and its modulation by reciprocal connections from MT. The underlying enhancement or depression of V1 responses according to the strength of MT responses reflects changes in the spatiotemporal properties of the V1 receptive fields. This mechanism induces a dynamic competition between local and global motion representations in V1. On convergence of these dynamics, responses of V1 direction-selective cells provide a fine representation of ‘true’ motion, thus solving the aperture problem and allowing a figure - ground segregation based on coherent motion. The model is compatible with recent anatomical, physiological, and psychophysical evidence [Bullier et al, 1996 Journal de Physiologie (Paris) 90 217 – 220].

