Binocular Disparity and Head-Up Displays

Author(s):  
Christopher P. Gibson

Collimation errors present in displays such as the head-up display (HUD) produce disparity between the observer's retinal images and thereby alter the apparent spatial location of the display. In some instances this can give rise to visual discomfort. Psychophysical methods were used to examine the sensitivity and the tolerances of the visual system to binocular disparity in HUDs. It was shown that, when left to their own devices, subjects preferred a small positive disparity to exist between the HUD and the outside world, and that even small amounts of negative disparity can have a disturbing perceptual effect. The effect is discussed in relation to the contradictory depth cues which can exist in this kind of electro-optical display.
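As a rough illustration of the geometry behind such collimation errors (not taken from the study itself), the disparity introduced when nominally collimated HUD symbology is imaged at a finite distance rather than at optical infinity can be approximated from the interocular separation and the apparent distance; the 65 mm interocular value and 20 m apparent distance below are assumed for the example.

```python
import math

def collimation_disparity_arcmin(ipd_m: float, apparent_dist_m: float) -> float:
    """Approximate binocular disparity (in arcminutes) of symbology imaged
    at a finite apparent distance, relative to a zero-disparity target at
    optical infinity. Small-angle approximation: disparity ~ IPD / distance."""
    disparity_rad = ipd_m / apparent_dist_m
    return math.degrees(disparity_rad) * 60.0

# Example: a 65 mm interocular distance with symbology imaged at 20 m
# instead of infinity yields roughly 11 arcmin of disparity.
print(round(collimation_disparity_arcmin(0.065, 20.0), 1))
```

Halving the collimation error (doubling the apparent distance) halves the disparity, which is why tolerances tighten sharply as symbology approaches the observer.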

2021 ◽  
Author(s):  
HyungGoo Kim ◽  
Dora Angelaki ◽  
Gregory DeAngelis

Detecting objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood. Human behavioral studies suggest that the visual system may identify local conflicts between motion parallax and binocular disparity cues to depth, and may use these signals to detect moving objects. We describe a novel mechanism for performing this computation based on neurons in macaque area MT with incongruent depth tuning for binocular disparity and motion parallax cues. Neurons with incongruent tuning respond selectively to scene-relative object motion, and their responses are predictive of perceptual decisions when animals are trained to detect a moving object during self-motion. This finding establishes a novel functional role for neurons with incongruent tuning for multiple depth cues.


Perception ◽  
10.1068/p3342 ◽  
2003 ◽  
Vol 32 (2) ◽  
pp. 131-153 ◽  
Author(s):  
Makoto Ichikawa ◽  
Takahiko Kimura ◽  
Hiroyuki Egusa ◽  
Makiko Nakatsuka ◽  
Jun Amano ◽  
...  

For 35 to 39 days, four observers continuously wore left–right reversing spectacles, which pseudoscopically reverse the order of binocular disparity and the direction of convergence. In three tests, we investigated how the visual system copes with the transformation of depth and distance information due to the reversing spectacles. In stereogram observation, after a few days of wearing the spectacles, the observers sometimes perceived a depth order opposite to the one they had perceived in the pre-spectacle-wearing period. Monocular depth cues contributed more to depth perception in the spectacle-wearing period than they did in the pre-spectacle-wearing period. While perceived distance decreased significantly during the spectacle-wearing period, we found no evidence of adaptive change in distance perception. The results indicate that the visual system adapts itself to the transformed situation not only by changing the processing of disparity but also by changing the relative efficiency of each cue in determining apparent depth.


Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 115-115 ◽  
Author(s):  
K Okajima ◽  
M Takase ◽  
S Takahashi

Two colours can be perceived at one location on overlapping planes only when the front plane is transparent. This phenomenon suggests that colour information processing is not independent of depth information processing and vice versa. To investigate the interaction between colour and depth channels, we used colour stimuli and binocular parallax to identify the conditions for transparency. Each stimulus, presented on a CRT to one eye, consisted of a centre patch and a surround. Binocular disparity was set so that the centre patch could be seen behind the surround. However, the surround appeared to be behind the centre patch when it was perceived as an opaque plane. We examined several combinations of basic colours for the centre patch and surround. The surround luminance was constant at 1.0 cd m−2 and the luminance of the centre was varied. Subjects used the apparent depth of the surround to report whether or not transparency occurred. The results show two types of transparency: ‘bright-centre transparency’ and ‘dark-centre transparency’. We found that the range of centre luminances which yielded transparency depends on the combination of centre and surround colours, ie influences of brightness and colour opponency were found. We conclude that there is interaction between colour and depth channels in the visual system.


1992 ◽  
Vol 4 (4) ◽  
pp. 573-589 ◽  
Author(s):  
Daniel Kersten ◽  
Heinrich H. Bülthoff ◽  
Bennett L. Schwartz ◽  
Kenneth J. Kurtz

It is well known that the human visual system can reconstruct depth from simple random-dot displays given binocular disparity or motion information. This fact has lent support to the notion that stereo and structure from motion systems rely on low-level primitives derived from image intensities. In contrast, the judgment of surface transparency is often considered to be a higher-level visual process that, in addition to pictorial cues, utilizes stereo and motion information to separate the transparent from the opaque parts. We describe a new illusion and present psychophysical results that question this sequential view by showing that depth from transparency and opacity can override the bias to see rigid motion. The brain's computation of transparency may involve a two-way interaction with the computation of structure from motion.


2005 ◽  
Vol 93 (1) ◽  
pp. 620-626 ◽  
Author(s):  
Jay Hegdé ◽  
David C. Van Essen

Disparity tuning in visual cortex has been shown using a variety of stimulus types that contain stereoscopic depth cues. It is not known whether different stimuli yield similar disparity tuning curves. We studied whether cells in visual area V4 of the macaque show similar disparity tuning profiles when the same set of disparity values was tested using bars or dynamic random dot stereograms, which are among the most commonly used stimuli for this purpose. In a majority of V4 cells (61%), the shape of the disparity tuning profile differed significantly for the two stimulus types. The two sets of stimuli yielded statistically indistinguishable disparity tuning profiles for only a small minority (6%) of V4 cells. These results indicate that disparity tuning in V4 is stimulus-dependent. Given that bar stimuli contain two-dimensional (2-D) shape cues and random dot stereograms do not, our results also indicate that V4 cells represent 2-D shape and binocular disparity in an interdependent fashion, revealing an unexpected complexity in the analysis of depth and three-dimensional shape.


2007 ◽  
Vol 24 (2) ◽  
pp. 207-215 ◽  
Author(s):  
YING ZHANG ◽  
VERONICA S. WEINER ◽  
WARREN M. SLOCUM ◽  
PETER H. SCHILLER

A stimulus display was devised that enabled us to examine how effectively monkeys and humans can process shading and disparity cues for depth perception. The display allowed us to present these cues separately, in concert, and in conflict with each other. An oddity discrimination task was used. Humans as well as monkeys were able to utilize both shading and disparity cues, but shading cues were more effectively processed by humans. Humans and monkeys performed better and faster when the two cues were presented conjointly rather than singly. Performance was significantly degraded when the two cues were presented in conflict with each other, suggesting that these cues are processed interactively at higher levels in the visual system. The fact that monkeys can effectively utilize depth information derived from shading and disparity indicates that they are a good animal model for the study of the neural mechanisms that underlie the processing of these two depth cues.


2018 ◽  
Author(s):  
Reuben Rideaux ◽  
William J Harrison

ABSTRACT
Discerning objects from their surrounds (i.e., figure-ground segmentation) in a way that guides adaptive behaviours is a fundamental task of the brain. Neurophysiological work has revealed a class of cells in the macaque visual cortex that may be ideally suited to support this neural computation: border-ownership cells (Zhou, Friedman, & von der Heydt, 2000). These orientation-tuned cells appear to respond conditionally to the borders of objects. A behavioural correlate supporting the existence of these cells in humans was demonstrated using two-dimensional luminance defined objects (von der Heydt, Macuda, & Qiu, 2005). However, objects in our natural visual environments are often signalled by complex cues, such as motion and depth order. Thus, for border-ownership systems to effectively support figure-ground segmentation and object depth ordering, they must have access to information from multiple depth cues with strict depth order selectivity. Here we measure in humans (of both sexes) border-ownership-dependent tilt aftereffects after adapting to figures defined by either motion parallax or binocular disparity. We find that both depth cues produce a tilt aftereffect that is selective for figure-ground depth order. Further, we find the effects of adaptation are transferable between cues, suggesting that these systems may combine depth cues to reduce uncertainty (Bülthoff & Mallot, 1988). These results suggest that border-ownership mechanisms have strict depth order selectivity and access to multiple depth cues that are jointly encoded, providing compelling psychophysical support for their role in figure-ground segmentation in natural visual environments.
SIGNIFICANCE STATEMENT
Segmenting a visual object from its surrounds is a critical function that may be supported by “border-ownership” neural systems that conditionally respond to object borders. Psychophysical work indicates these systems are sensitive to objects defined by luminance contrast.
To effectively support figure-ground segmentation, however, neural systems supporting border-ownership must have access to information from multiple depth cues and depth order selectivity. We measured border-ownership-dependent tilt aftereffects to figures defined by either motion parallax or binocular disparity and found aftereffects for both depth cues. These effects were transferable between cues, but selective for figure-ground depth order. Our results suggest that the neural systems supporting figure-ground segmentation have strict depth order selectivity and access to multiple depth cues that are jointly encoded.


2019 ◽  
Author(s):  
Guido Maiello ◽  
Manuela Chessa ◽  
Peter J. Bex ◽  
Fabio Solari

Abstract
The human visual system is foveated: we can see fine spatial details in central vision, whereas resolution is poor in our peripheral visual field, and this loss of resolution follows an approximately logarithmic decrease. Additionally, our brain organizes visual input in polar coordinates. Therefore, the image projection occurring between retina and primary visual cortex can be mathematically described by the log-polar transform. Here, we test and model how this space-variant visual processing affects how we process binocular disparity, a key component of human depth perception. We observe that the fovea preferentially processes disparities at fine spatial scales, whereas the visual periphery is tuned for coarse spatial scales, in line with the naturally occurring distributions of depths and disparities in the real world. We further show that the visual system integrates disparity information across the visual field in a near-optimal fashion. We develop a foveated, log-polar model that mimics the processing of depth information in primary visual cortex and that can process disparity directly in the cortical domain representation. This model takes real images as input and recreates the observed topography of disparity sensitivity in man. Our findings support the notion that our foveated, binocular visual system has been moulded by the statistics of our visual environment.
Author summary
We investigate how humans perceive depth from binocular disparity at different spatial scales and across different regions of the visual field. We show that small changes in disparity-defined depth are detected best in central vision, whereas peripheral vision best captures the coarser structure of the environment. We also demonstrate that depth information extracted from different regions of the visual field is combined into a unified depth percept. 
We then construct an image-computable model of disparity processing that takes into account how our brain organizes the visual input at our retinae. The model operates directly in cortical image space, and neatly accounts for human depth perception across the visual field.
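A minimal sketch of the log-polar mapping invoked above, assuming the simplest form log(r) (the cortical model in the paper is more elaborate, e.g. Schwartz-style log(z + a) mappings): it shows why equal ratios of eccentricity map to equal cortical distances, i.e. the approximately logarithmic loss of resolution with eccentricity.

```python
import math

def to_log_polar(x: float, y: float, r_min: float = 1.0) -> tuple:
    """Map a retinal point (x, y) to toy log-polar cortical coordinates
    (log eccentricity, polar angle). r_min clamps the foveal singularity
    at r = 0, where log(r) diverges."""
    r = max(math.hypot(x, y), r_min)
    return math.log(r), math.atan2(y, x)

# Doubling eccentricity always moves the same cortical distance (log 2),
# so a fixed cortical patch covers ever-larger retinal areas in the periphery.
d_near = to_log_polar(2, 0)[0] - to_log_polar(1, 0)[0]
d_far = to_log_polar(8, 0)[0] - to_log_polar(4, 0)[0]
print(math.isclose(d_near, d_far))
```

Under this mapping, a disparity detector of fixed cortical size is tuned to fine scales near the fovea and coarse scales in the periphery, consistent with the topography of disparity sensitivity reported above.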


2012 ◽  
Vol 12 (9) ◽  
pp. 39-39 ◽  
Author(s):  
C. Quaia ◽  
B. Sheliga ◽  
L. Optican ◽  
B. Cumming
