Effects of Observer's Head-motion Direction on Integration of Motion Parallax and Binocular Disparity for Large Depth

Author(s):  
Yasuaki Tamada ◽  
Masayuki Sato


Perception ◽  
1994 ◽  
Vol 23 (11) ◽  
pp. 1301-1312 ◽  
Author(s):  
John Predebon ◽  
Jacob Steven Woolley

The familiar-size cue to perceived depth was investigated in five experiments. The stimuli were stationary familiar objects viewed monocularly under otherwise completely darkened visual conditions. Perceived depth was measured directly with the method of verbal report and indirectly with the head-motion procedure. Although the familiar-size cue influenced verbal reports of the distances of the objects, it did not determine perceived depth as assessed with the head-motion procedure. These findings support the claim that familiar size is not a major determinant of perceived depth, and that cognitive or nonperceptual factors mediate the effects of familiar size on direct reports of depth and distance. Possible reasons for the failure of familiar size to influence the head-motion-derived measures of perceived depth are discussed with particular emphasis on the role of motion parallax in determining perceptions of depth and relative distance.
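The familiar-size cue rests on simple viewing geometry: if an object's physical size is known, the distance it implies follows from the visual angle it subtends. A minimal sketch of that relation (the 9 cm size and 1° angle are hypothetical illustration values, not taken from the study):

```python
import math

def familiar_size_distance(physical_size_m, visual_angle_deg):
    """Distance implied by the familiar-size cue: an object of
    known size S subtending visual angle theta lies at
    D = S / (2 * tan(theta / 2))."""
    theta = math.radians(visual_angle_deg)
    return physical_size_m / (2 * math.tan(theta / 2))

# A familiar 9 cm object subtending 1 degree implies a distance
# of roughly 5.2 m; on the account above, verbal reports track
# this implied distance while the head-motion measure does not.
implied = familiar_size_distance(0.09, 1.0)
```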


2006 ◽  
Vol 46 (17) ◽  
pp. 2636-2644 ◽  
Author(s):  
Mark F. Bradshaw ◽  
Paul B. Hibbard ◽  
Andrew D. Parton ◽  
David Rose ◽  
Keith Langley

2018 ◽  
Author(s):  
Reuben Rideaux ◽  
William J Harrison

Discerning objects from their surrounds (i.e., figure-ground segmentation) in a way that guides adaptive behaviours is a fundamental task of the brain. Neurophysiological work has revealed a class of cells in the macaque visual cortex that may be ideally suited to support this neural computation: border-ownership cells (Zhou, Friedman, & von der Heydt, 2000). These orientation-tuned cells appear to respond conditionally to the borders of objects. A behavioural correlate supporting the existence of these cells in humans was demonstrated using two-dimensional luminance defined objects (von der Heydt, Macuda, & Qiu, 2005). However, objects in our natural visual environments are often signalled by complex cues, such as motion and depth order. Thus, for border-ownership systems to effectively support figure-ground segmentation and object depth ordering, they must have access to information from multiple depth cues with strict depth order selectivity. Here we measure in humans (of both sexes) border-ownership-dependent tilt aftereffects after adapting to figures defined by either motion parallax or binocular disparity. We find that both depth cues produce a tilt aftereffect that is selective for figure-ground depth order. Further, we find the effects of adaptation are transferable between cues, suggesting that these systems may combine depth cues to reduce uncertainty (Bülthoff & Mallot, 1988). These results suggest that border-ownership mechanisms have strict depth order selectivity and access to multiple depth cues that are jointly encoded, providing compelling psychophysical support for their role in figure-ground segmentation in natural visual environments.

Significance Statement: Segmenting a visual object from its surrounds is a critical function that may be supported by “border-ownership” neural systems that conditionally respond to object borders. Psychophysical work indicates these systems are sensitive to objects defined by luminance contrast. To effectively support figure-ground segmentation, however, neural systems supporting border-ownership must have access to information from multiple depth cues and depth order selectivity. We measured border-ownership-dependent tilt aftereffects to figures defined by either motion parallax or binocular disparity and found aftereffects for both depth cues. These effects were transferable between cues, but selective for figure-ground depth order. Our results suggest that the neural systems supporting figure-ground segmentation have strict depth order selectivity and access to multiple depth cues that are jointly encoded.
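The cue-combination idea cited in this abstract (Bülthoff & Mallot, 1988) is commonly modeled as inverse-variance weighting, under which the fused estimate is never noisier than the best single cue. A minimal sketch assuming Gaussian noise; the particular depth estimates and noise levels are hypothetical:

```python
def combine_cues(estimates, sigmas):
    """Reliability-weighted (inverse-variance) cue combination:
    each cue gets weight w_i proportional to 1/sigma_i**2, and the
    combined variance is 1 / sum(1/sigma_i**2), which is always at
    most the variance of the most reliable single cue."""
    weights = [1.0 / s**2 for s in sigmas]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_sigma = (1.0 / total) ** 0.5
    return fused, fused_sigma

# Disparity says 1.0 m (sigma 0.1); parallax says 1.2 m (sigma 0.2).
# The fused estimate sits nearer the more reliable cue.
depth, sigma = combine_cues([1.0, 1.2], [0.1, 0.2])
```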


2019 ◽  
Author(s):  
Paul Linton

Since Kepler (1604) and Descartes (1638), ‘vergence’ (the angular rotation of the eyes) has been thought of as one of our most important absolute distance cues. But vergence has never been tested as an absolute distance cue divorced from obvious confounding cues such as binocular disparity. In this article we control for these confounding cues for the first time by gradually manipulating vergence, and find that observers fail to accurately judge distance from vergence. We consider a number of different interpretations of these results, and argue that the most principled response to these results is to question the general effectiveness of vergence as an absolute distance cue. Given other absolute distance cues (such as motion parallax and vertical disparities) are limited in application, this poses a real challenge to our contemporary understanding of visual scale.
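For reference, the geometry that makes vergence a candidate absolute distance cue: the angle between the two lines of sight is fixed by fixation distance and interocular separation, so in principle distance can be read off the angle. A minimal sketch (the 6.3 cm interpupillary distance is a typical assumed value, not from the paper):

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.063):
    """Vergence angle required to fixate at a given distance:
    theta = 2 * atan(IPD / (2 * D))."""
    return math.degrees(2 * math.atan(ipd_m / (2 * distance_m)))

def distance_from_vergence(theta_deg, ipd_m=0.063):
    """Inverse relation: D = IPD / (2 * tan(theta / 2))."""
    return ipd_m / (2 * math.tan(math.radians(theta_deg) / 2))

# Fixating at 0.5 m demands about 7.2 deg of vergence, but at 2 m
# only about 1.8 deg, so the signal flattens rapidly with distance.
near = vergence_angle_deg(0.5)
far = vergence_angle_deg(2.0)
```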


Author(s):  
Yuichi Sakano ◽  
Yurina Kitaura ◽  
Kyoko Hasegawa ◽  
Roberto Lopez-Gulliver ◽  
Liang Li ◽  
...  

Transparent visualization is used in many fields because it can show not only the frontmost object but also other important objects behind it. Although in many situations it is important that the 3D structure of a transparently visualized scene be perceived as it is simulated, little is known quantitatively about how such transparent 3D structures are perceived. To address this question, in the present study we conducted a psychophysical experiment in which observers reported the perceived depth magnitude of a transparent object in medical images presented on a multi-view 3D display. For the visualization, we employed the stochastic point-based rendering (SPBR) method, which was developed recently as a technique for efficient transparent rendering. The perceived depth of the transparent object was smaller than the simulated depth. We found, however, that this depth underestimation can be alleviated to some extent by (1) applying the luminance gradient inherent in the SPBR method, (2) employing high opacities, and (3) introducing binocular disparity and motion parallax produced by a multi-view 3D display.
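The simulated depth that observers underestimated is set by standard disparity geometry: under the small-angle approximation, a relative disparity η at viewing distance D specifies a depth interval of roughly D²·η/IPD. A minimal sketch of that relation (the 10 arcmin disparity, 1 m distance, and 6.3 cm IPD are hypothetical illustration values):

```python
import math

def depth_from_disparity(relative_disparity_rad, viewing_distance_m,
                         ipd_m=0.063):
    """Small-angle approximation: a relative disparity eta (radians)
    viewed at distance D specifies a depth interval of about
    delta_d = D**2 * eta / IPD."""
    return viewing_distance_m**2 * relative_disparity_rad / ipd_m

# 10 arcmin of relative disparity at 1 m specifies ~4.6 cm of depth.
eta = math.radians(10 / 60)
dd = depth_from_disparity(eta, 1.0)
```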


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 99-99 ◽  
Author(s):  
M F Bradshaw ◽  
B De Bruyn ◽  
R A Eagle ◽  
A D Parton

The use of binocular disparity and motion parallax information was compared in three different psychophysical tasks for which natural viewing and physical stimuli were used. Natural viewing may be an important factor in interpreting experiments which have addressed the ability to use disparity and parallax both separately and in combination (see Frisby et al, 1996 Perception 25 129–154). The stimuli consisted of configurations of three bright LEDs carefully aligned in the horizontal meridian and presented in darkness. The distance of the middle LED (flashing at 5 Hz) could be adjusted along the midline in accordance with the tasks, which included: (i) a depth nulling task, (ii) a depth matching task, and (iii) a shape task (matching the base/height ratio of a triangle). Each task was performed at two viewing distances (1.5 and 3.0 m) and under four different viewing conditions: (i) monocular-static, (ii) monocular-moving, (iii) binocular-static, and (iv) binocular-moving. Note that the tasks differ in their dependence on viewing distance, and the available cues for viewing distance differ between viewing conditions. Four observers made ten settings in each condition at each distance. As expected, observers performed poorly (in both bias and accuracy) in all tasks in the monocular-static condition. Nulling was accurate in the other viewing conditions (no estimate of viewing distance is required). Performance was best in the matching task (which requires only the ratio of viewing distances); although binocular-static performance was significantly better than monocular-moving performance in this and in the shape task (which requires absolute distance), there was no additional improvement in the binocular-moving condition. The results show that observers can recover structure accurately from parallax or disparity information in real-world stimuli.
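The point that the tasks differ in their dependence on viewing distance can be made concrete: disparity-specified depth scales with the square of viewing distance while angular size scales linearly, so a shape (depth-to-height) judgment made with a misestimated distance is off by the ratio of assumed to true distance. A minimal sketch using the study's two viewing distances (the 10 cm object dimensions and 6.3 cm IPD are hypothetical):

```python
def perceived_shape_ratio(true_depth_m, true_height_m, true_distance_m,
                          assumed_distance_m, ipd_m=0.063):
    """How misestimating viewing distance distorts a shape judgment:
    disparity-specified depth scales with D**2 while angular-size-
    specified height scales with D, so the recovered depth/height
    ratio is (assumed/true distance) times the physical ratio."""
    disparity = true_depth_m * ipd_m / true_distance_m**2  # eta = dd*I/D**2
    visual_angle = true_height_m / true_distance_m         # small-angle size
    depth = assumed_distance_m**2 * disparity / ipd_m
    height = assumed_distance_m * visual_angle
    return (depth / height) / (true_depth_m / true_height_m)

# Assuming 1.5 m when the stimulus is really at 3.0 m halves the
# recovered depth/height ratio, while a nulling task (zero depth)
# would be unaffected by the same misestimate.
ratio_error = perceived_shape_ratio(0.1, 0.1, 3.0, 1.5)
```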

