Two-frame frequency-based estimation of local motion parallax direction in 3D cluttered scenes

Author(s):  
V. Couture ◽  
M. S. Langer ◽  
A. Caine ◽  
R. Mann

Perception ◽  
1992 ◽  
Vol 21 (6) ◽  
pp. 813-823 ◽  
Author(s):  
Margo Eyeson-Annan ◽  
Brian Brown

The importance for mobility performance of the rate of presentation of visual information, binocular versus monocular vision, the use of multiple rather than single reference points, and local motion parallax was investigated in two experiments. In each experiment ten subjects walked a triangular mobility course in a totally darkened room; the only visible targets were light emitting diodes (LEDs), mounted on poles, at the apices of the triangle. The LEDs were mounted so that one or two could be used in a trial; if two were used, the distance between them was varied horizontally (in experiment 1) and vertically (in experiment 2). The subjects walked around the course under a range of conditions, including two ‘optimal trials’ in full light. The LEDs were flashed for 1 ms at frequencies of 0.5, 1 and 5 Hz in experiment 1 and at 1 and 5 Hz in experiment 2. Mobility was measured with the use of an ultrasonic locator system which measured the subject's position on the course 10 times per second. The mean velocity of the subject in traversing the course was significantly reduced when the flash rate was slower, when the subject had one eye occluded, or when there was only one LED on the pole; when the spacing between the LEDs was varied, either vertically or horizontally, performance was unaffected. These results imply that the frequency of updating of visual information is important in determining mobility performance, as are binocular cues, but that local motion parallax is not important. The number of LEDs on each pole had a significant effect on mobility performance: an ‘object’ (two lights) gave more information than a point reference.



eLife ◽  
2015 ◽  
Vol 4 ◽  
Author(s):  
Adhira Sunkara ◽  
Gregory C DeAngelis ◽  
Dora E Angelaki

As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues.
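The dissociation described above rests on a basic property of retinal flow: under observer translation the image velocity of a point scales with inverse depth (motion parallax), while under rotation it does not. A minimal sketch of the standard pinhole flow model (Longuet-Higgins & Prazdny), not the authors' neural model, makes this explicit:

```python
def retinal_flow(x, y, Z, T, omega, f=1.0):
    """Instantaneous image velocity (u, v) of a scene point at depth Z,
    viewed through a pinhole camera with focal length f, for observer
    translation T = (Tx, Ty, Tz) and rotation omega = (wx, wy, wz).
    The translational term scales with 1/Z (motion parallax); the
    rotational term is depth-independent, which is what makes a purely
    visual dissociation of translation from rotation possible."""
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    # translational (parallax) component: depends on depth Z
    u_t = (-f * Tx + x * Tz) / Z
    v_t = (-f * Ty + y * Tz) / Z
    # rotational component: independent of Z
    u_r = wx * x * y / f - wy * (f + x * x / f) + wz * y
    v_r = wx * (f + y * y / f) - wy * x * y / f - wz * x
    return (u_t + u_r, v_t + v_r)
```

Evaluating the same image location at two depths under pure rotation gives identical flow, whereas under pure translation the flow halves when depth doubles.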



Author(s):  
Fan Guo ◽  
Jin Tang ◽  
Beiji Zou

Recent advances in 3D technology have increased the importance of stereoscopic content creation and processing. Converting existing 2D videos into 3D videos is therefore very important for the growing 3D market. The most difficult task in 2D-to-3D video conversion is estimating a depth map from single-view frame images. Thus, in this paper, we propose a novel motion-based 2D-to-3D video conversion method. The method first determines the motion type using optical flow estimation. Then, different depth estimation processes are performed based on the motion type. For global motion, depth from motion parallax provides the final depth map. For local motion, depth from a template together with a bilateral filter is used to produce the depth map. Finally, the left- and right-view images are synthesized to generate realistic stereoscopic results for viewers. During this process, the visual artifacts of the synthesized virtual views are effectively eliminated by recovering the separation and loss of foreground objects. A comparative study and quantitative evaluation against other conversion methods demonstrate that better overall quality may be obtained with the proposed method.
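The first stage of the pipeline above, deciding between global (camera) and local (object) motion from an optical flow field, could be sketched as below. The agreement threshold and the depth-from-parallax rule (nearer points move faster under camera motion, so depth is taken as inversely proportional to flow magnitude, up to scale) are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def classify_motion(flow, global_ratio=0.7):
    """Classify a dense optical-flow field of shape (H, W, 2) as
    'global' (camera) or 'local' (object) motion. Heuristic: if most
    flow vectors agree with the dominant (median) flow, treat the
    motion as global. `global_ratio` is a hypothetical threshold."""
    median = np.median(flow.reshape(-1, 2), axis=0)
    # per-pixel deviation from the dominant motion
    dev = np.linalg.norm(flow - median, axis=2)
    agree = np.mean(dev < 0.25 * (np.linalg.norm(median) + 1e-6))
    return "global" if agree >= global_ratio else "local"

def depth_from_parallax(flow):
    """For global (camera) motion, motion parallax implies nearer points
    move faster across the image, so relative depth can be taken as
    inversely proportional to flow magnitude (up to scale)."""
    mag = np.linalg.norm(flow, axis=2)
    return 1.0 / (mag + 1e-6)
```

A uniform flow field (every pixel moving the same way) classifies as global; a field split between opposing directions classifies as local.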



2016 ◽  
Author(s):  
Marco Munderloh

The detection of moving objects in aerial video sequences is a common application in safety and environmental monitoring. The challenge is the non-static camera, which moves together with the aerial vehicle. To detect local changes due to the movement of ground objects in such a scenario, the displacements of image pixels resulting from the motion of the camera need to be compensated. The most common method is to use a projective transformation and assume the observed scene to be planar. However, this is only valid for very high altitudes; otherwise it fails and results in falsely detected local motion. This work addresses the problem in two ways. After analyzing the error resulting from motion parallax, two detectors for moving objects in non-planar scenes are presented: one based on a motion parallax model and one on a smooth optical flow approach. Following this, a motion compensation method for non-planar scenes is presented, allowing the use of image-difference-based methods ...
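The projective-transformation compensation that the work criticizes can be sketched as follows. A homography maps each pixel of the previous frame to its predicted position in the current frame under the planar-scene assumption; residual displacement after this warp is attributed to object motion or, in non-planar scenes, partly to motion parallax. Estimating `H` from point correspondences (e.g. via RANSAC) is assumed here, not shown:

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a 3x3 projective transformation (homography).
    Under the planar-scene assumption this predicts where each pixel of
    the previous frame lands in the current frame."""
    pts = np.asarray(pts, dtype=float)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # back to inhomogeneous coords

# A pure image translation of (5, -3) written as a homography:
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
```

For scene points off the assumed plane, the true displacement deviates from this prediction by a parallax term, which is exactly the error the cited analysis quantifies.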





2013 ◽  
Vol 24 (3) ◽  
pp. 175 ◽  
Author(s):  
Qian WANG ◽  
Jimin LIANG ◽  
Zejun HU


2020 ◽  
Vol 38 (5) ◽  
pp. 395-405
Author(s):  
Luca Battaglini ◽  
Federica Mena ◽  
Clara Casco

Background: To study motion perception, a stimulus consisting of a field of small, moving dots is often used. Generally, some of the dots coherently move in the same direction (signal) while the rest move randomly (noise). A percept of global coherent motion (CM) results when many different local motion signals are combined. CM computation is a complex process that requires the integrity of the middle-temporal area (MT/V5), and there is evidence that increasing the number of dots presented in the stimulus makes such computation more efficient. Objective: In this study, we explored whether anodal transcranial direct current stimulation (tDCS) over MT/V5 would increase individual performance in a CM task at a low signal-to-noise ratio (SNR, i.e. a low percentage of coherent dots) and with a target consisting of a large number of moving dots (high dot numerosity, e.g. >250 dots) relative to low dot numerosity (<60 dots), which would indicate that tDCS favours the integration of local motion signals into a single global percept (global motion). Method: Participants were asked to perform a CM detection task (two-interval forced-choice, 2IFC) while they received anodal, cathodal, or sham stimulation on three different days. Results: Our findings showed no effect of cathodal tDCS with respect to the sham condition. Instead, anodal tDCS improved performance, but mostly when dot numerosity was high (>400 dots), presumably by promoting efficient global motion processing. Conclusions: The present study suggests that tDCS may be used under appropriate stimulus conditions (low SNR and high dot numerosity) to boost global motion processing efficiency, and may be useful to empower clinical protocols to treat visual deficits.
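The signal/noise dot stimulus described in the Background can be sketched generically as below. This is an illustrative implementation of the stimulus class (a random-dot kinematogram), not the authors' exact code; parameter names are assumptions:

```python
import numpy as np

def step_dots(pos, coherence, direction, speed, rng, field=1.0):
    """Advance a coherent-motion dot stimulus by one frame. A `coherence`
    fraction of the dots (the signal) moves along `direction` (radians);
    the remainder (the noise) move in random directions. `pos` has shape
    (n_dots, 2); positions wrap within a [0, field) square."""
    n = len(pos)
    n_signal = int(round(coherence * n))
    angles = rng.uniform(0.0, 2.0 * np.pi, size=n)
    angles[:n_signal] = direction  # signal dots share one direction
    vel = speed * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return (pos + vel) % field
```

Lowering `coherence` lowers the SNR of the display, and varying `n` manipulates dot numerosity, the two stimulus factors the study crosses with stimulation condition.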







Author(s):  
Wangwang Zhu ◽  
Xi Zhang ◽  
Baixuan Zhao ◽  
Shiwei Peng ◽  
Pengfei Guo ◽  
...  

