Accuracy and Tuning of Flow Parsing for Visual Perception of Object Motion During Self-Motion

i-Perception ◽  
2017 ◽  
Vol 8 (3) ◽  
pp. 204166951770820
Author(s):  
Diederick C. Niehorster ◽  
Li Li

How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., the flow parsing gain) in various scenarios to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments, and that increasing self-motion and object motion speed did not alter the flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion and object motion speeds. These results can be used to inform and validate computational models of flow parsing.
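For concreteness, here is a minimal sketch of how a flow parsing gain could be computed from a retinal motion nulling measurement, assuming the gain is defined as the fraction of the self-motion-induced retinal component that the visual system subtracts. The function and numbers below are illustrative, not taken from the paper:

```python
def flow_parsing_gain(self_motion_component, nulling_velocity):
    """Estimate the flow parsing gain from a nulling measurement.

    self_motion_component : retinal velocity (deg/s) the probe would have
        if it were stationary in the scene during the simulated self-motion.
    nulling_velocity : retinal velocity (deg/s) at which the observer
        reports the probe as stationary in the scene.

    Gain = 1 means the self-motion component is fully subtracted;
    gain < 1 means the subtraction is incomplete.
    """
    return nulling_velocity / self_motion_component

# Illustrative numbers: simulated self-motion adds 5 deg/s to the probe's
# retinal motion, but perceived scene-relative motion is nulled at 4.2 deg/s.
print(flow_parsing_gain(5.0, 4.2))  # 0.84, i.e., a gain below unity
```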

2017 ◽  
Vol 17 (10) ◽  
pp. 427
Author(s):  
Mingyang Xie ◽  
Diederick Niehorster ◽  
Markus Lappe ◽  
Li Li

i-Perception ◽  
10.1068/if742 ◽  
2012 ◽  
Vol 3 (9) ◽  
pp. 742-742
Author(s):  
Diederick C Niehorster ◽  
Li Li

Vision ◽  
2019 ◽  
Vol 3 (2) ◽  
pp. 13
Author(s):  
Pearl Guterman ◽  
Robert Allison

When the head is tilted, an objectively vertical line viewed in isolation is typically perceived as tilted. We explored whether this shift also occurs when viewing global motion displays perceived as either object-motion or self-motion. Observers viewed the displays while standing and while lying left side down: (1) a static line, (2) a random-dot display of 2-D (planar) global motion, or (3) a random-dot display of 3-D (volumetric) global motion. On each trial, the line orientation or motion direction was tilted from the gravitational vertical, and observers indicated whether the tilt was clockwise or counter-clockwise from the perceived vertical. Psychometric functions were fit to the data and shifts in the point of subjective verticality (PSV) were measured. When the whole body was tilted, the perceived tilt of both a static line and the direction of optic flow was biased in the direction of the body tilt, demonstrating the so-called A-effect. However, we found significantly larger shifts for the static line than for volumetric global motion, as well as larger shifts for volumetric than for planar displays. The A-effect was larger when the motion was experienced as self-motion than when it was experienced as object-motion. Discrimination thresholds were also lower (i.e., judgments were more precise) in the self-motion than in the object-motion conditions. The different magnitudes of the A-effect for the line and motion conditions, and for object- and self-motion, may reflect differences in how idiotropic (body) and vestibular signals are combined, particularly in the case of vection, which occurs despite visual-vestibular conflict.
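A sketch of the fitting step described above, under the common assumption that the psychometric function is a cumulative Gaussian: the PSV is the 50% point of the fitted function, and its spread gives the discrimination threshold. The data here are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Tilt of the line/motion direction from gravitational vertical (deg);
# negative = counter-clockwise, positive = clockwise.
tilts = np.array([-12, -8, -4, 0, 4, 8, 12], dtype=float)
# Hypothetical proportion of "clockwise" responses at each tilt.
p_cw = np.array([0.02, 0.08, 0.25, 0.55, 0.85, 0.96, 0.99])

def psychometric(x, psv, sigma):
    """Cumulative Gaussian: P("clockwise") as a function of tilt."""
    return norm.cdf(x, loc=psv, scale=sigma)

(psv, sigma), _ = curve_fit(psychometric, tilts, p_cw, p0=[0.0, 5.0])
print(f"PSV = {psv:.2f} deg (shift from true vertical), "
      f"threshold (sigma) = {sigma:.2f} deg")
```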


2021 ◽  
Author(s):  
Zhe-Xin Xu ◽  
Gregory C DeAngelis

There are two distinct sources of retinal image motion: motion of objects in the world and movement of the observer. When an object moves in a scene while the eyes also move, estimating object motion in world coordinates requires a coordinate transformation that combines smooth eye movement signals with retinal motion. Interactions between retinal and eye velocity signals have also been suggested to generate depth selectivity from motion parallax (MP) in the macaque middle temporal (MT) area. We explored whether the nature of the interaction between eye and retinal velocities in MT neurons favors one of these two possibilities, or a mixture of both. We analyzed responses of MT neurons to retinal and eye velocities in a viewing context in which the observer translates laterally while maintaining visual fixation on a world-fixed target. In this scenario, the depth of an object can be inferred from the ratio of retinal velocity to eye velocity, according to the motion-pursuit law. Previous studies have shown that MT responses to retinal motion are gain-modulated by the direction of eye movement, suggesting a potential mechanism for depth tuning from MP. However, our analysis of the joint tuning profile for retinal and eye velocities reveals that some MT neurons show a partial coordinate transformation toward head coordinates. We formalized a series of computational models to predict neural spike trains as well as selectivity for depth, and used factorial model comparison to quantify the relative importance of each model component. For many MT neurons, the data are equally well explained by gain modulation or by a partial coordinate transformation toward head coordinates, although some responses can only be well fit by the coordinate transform model. Our results highlight the potential role of MT neurons in representing multiple higher-level sensory variables, including depth from MP and object motion in the world.
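The two candidate mechanisms can be written down directly. In a gain modulation scheme, eye velocity multiplicatively scales a retinal velocity tuning curve; in a (partial) coordinate transformation, a weighted copy of eye velocity shifts the input to the tuning curve toward head-centered velocity. The motion-pursuit law relates depth to the ratio of retinal to eye velocity (approximately, for small ratios). A sketch with illustrative tuning parameters, not fit to any data:

```python
import numpy as np

def gaussian_tuning(v, pref=4.0, width=3.0):
    """Illustrative Gaussian velocity tuning curve (spikes/s)."""
    return 40.0 * np.exp(-0.5 * ((v - pref) / width) ** 2)

def gain_modulation_model(retinal_v, eye_v, k=0.1):
    # Eye velocity multiplicatively scales the response to retinal motion.
    return (1.0 + k * eye_v) * gaussian_tuning(retinal_v)

def coordinate_transform_model(retinal_v, eye_v, w=0.5):
    # Partial (w < 1) shift toward head coordinates: the neuron is tuned
    # to retinal velocity plus a weighted copy of eye velocity.
    return gaussian_tuning(retinal_v + w * eye_v)

# Approximate motion-pursuit law: depth relative to fixation scales with
# the ratio of retinal velocity to eye (pursuit) velocity.
def depth_from_motion_pursuit(retinal_v, eye_v, fixation_dist=1.0):
    return fixation_dist * retinal_v / eye_v

print(depth_from_motion_pursuit(1.0, 5.0))  # 0.2 of the fixation distance
```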


2012 ◽  
Vol 108 (3) ◽  
pp. 794-801
Author(s):  
Velia Cardin ◽  
Lara Hemsworth ◽  
Andrew T. Smith

The extraction of optic flow cues is fundamental for successful locomotion. During forward motion, the focus of expansion (FoE), in conjunction with knowledge of eye position, indicates the direction in which the individual is heading. Cortical regions involved in estimating heading are therefore expected to be sensitive to this feature. To characterize cortical sensitivity to the location of the FoE or, more generally, the center of flow (CoF) during visually simulated self-motion, we carried out a functional MRI (fMRI) adaptation experiment in several human visual cortical areas thought to be sensitive to optic flow parameters, namely V3A, V6, MT/V5, and MST. In each trial, two optic flow patterns were presented sequentially, with the CoF located in either the same or different positions. With an adaptation design, an area sensitive to heading direction should respond more strongly to a pair of stimuli with different CoFs than to a pair with the same CoF. Our results show such release from adaptation in areas MT/V5 and MST, and to a lesser extent in V3A, suggesting the involvement of these areas in the processing of heading direction. The effect could not be explained by differences in local motion or by attentional capture, and it was not observed to a significant extent in area V6 or in control area V1. The different patterns of responses in MST and V6, areas that are both involved in the processing of egomotion in macaques and humans, suggest distinct roles in the processing of visual cues for self-motion.
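The adaptation logic lends itself to a simple summary index: release from adaptation means a larger response to different-CoF pairs than to same-CoF pairs. A sketch with hypothetical per-area responses (the numbers are made up to illustrate the computation, not the paper's data):

```python
def adaptation_index(resp_different, resp_same):
    """Release from adaptation: positive values mean the area responds more
    when the two flow patterns have different centers of flow (CoF),
    i.e., it is sensitive to CoF location."""
    return (resp_different - resp_same) / (resp_different + resp_same)

# Hypothetical mean BOLD responses (arbitrary units) per area:
# (different-CoF pair, same-CoF pair).
areas = {"MT/V5": (1.30, 1.05), "MST": (1.25, 1.00),
         "V3A": (1.10, 1.02), "V6": (1.04, 1.03), "V1": (1.00, 1.00)}
for name, (diff, same) in areas.items():
    print(f"{name}: adaptation index = {adaptation_index(diff, same):+.3f}")
```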


2019 ◽  
Vol 121 (5) ◽  
pp. 1787-1797
Author(s):  
David Souto ◽  
Jayesha Chudasama ◽  
Dirk Kerzel ◽  
Alan Johnston

Smooth pursuit eye movements (pursuit) are used to minimize the retinal motion of moving objects. During pursuit, the pattern of motion on the retina carries not only information about the object's movement but also reafferent information about the eye movement itself. The latter arises from the retinal flow of the stationary world in the direction opposite to the eye movement. To extract the global direction of motion of the tracked object and the stationary world, the visual system needs to integrate ambiguous local motion measurements (i.e., solve the aperture problem). Unlike the tracked object's motion, the stationary world's global motion is entirely determined by the eye movement and thus can be approximately derived from the motor commands sent to the eye (i.e., from an efference copy). Because retinal motion opposite to the eye movement is dominant during pursuit, different motion integration mechanisms might be used for retinal motion in the same direction as and opposite to pursuit. To investigate motion integration during pursuit, we tested direction discrimination of a brief change in global object motion. The global motion stimulus was a circular array of small static apertures within which one-dimensional gratings moved. We found increased coherence thresholds and qualitatively different reflexive ocular tracking for global motion opposite to pursuit. Both effects suggest reduced sampling of motion opposite to pursuit, which impairs the ability to extract coherent motion signals in the reafferent direction. We suggest that anisotropic motion integration is an adaptation to the asymmetric retinal motion patterns experienced during pursuit eye movements. NEW & NOTEWORTHY This study provides a new understanding of how the visual system achieves coherent perception of an object's motion while the eyes themselves are moving. The visual system integrates local motion measurements to create a coherent percept of object motion. An analysis of perceptual judgments and reflexive eye movements to a brief change in an object's global motion confirms that the visual and oculomotor systems pick up fewer samples to extract global motion opposite to the eye movement.
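The integration step described here, recovering a global 2-D velocity from 1-D grating motions, can be illustrated with an intersection-of-constraints solve: each aperture constrains the global velocity v only along its grating normal n_i (n_i · v = s_i), and a least-squares solution across apertures recovers the global motion. A sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
true_v = np.array([3.0, 0.0])  # global object velocity (deg/s), rightward

# Each small aperture sees a 1-D grating: only the velocity component along
# the grating's normal is measurable (the aperture problem).
angles = rng.uniform(0, np.pi, size=40)             # grating normal orientations
normals = np.column_stack([np.cos(angles), np.sin(angles)])
normal_speeds = normals @ true_v + rng.normal(0, 0.2, size=40)  # noisy n_i . v

# Intersection of constraints: least-squares solve of n_i . v = s_i.
v_hat, *_ = np.linalg.lstsq(normals, normal_speeds, rcond=None)
print(v_hat)  # recovers approximately [3.0, 0.0]
```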

