Analysis of an autostereoscopic display: the perceptual range of the three-dimensional visual fields and saliency of static depth cues

Author(s):  
Paul Havig ◽  
John McIntire ◽  
Rhoshonda McGruder
2020 ◽  
Vol 3 (1) ◽  
pp. 10501-1-10501-9
Author(s):  
Christopher W. Tyler

Abstract
For the visual world in which we operate, the core issue is to conceptualize how its three-dimensional structure is encoded through the neural computation of multiple depth cues and their integration into a unitary depth structure. One approach to this issue is the full Bayesian model of scene understanding, but this is shown to require selection from the implausibly large number of possible scenes. An alternative approach is to propagate the implied depth structure solution for the scene through the “belief propagation” algorithm on general probability distributions. However, a more efficient model of local slant propagation is developed as an alternative.

The overall depth percept must be derived from the combination of all available depth cues, but a simple linear summation rule across, say, a dozen different depth cues would massively overestimate the perceived depth in the scene in cases where each cue alone provides a close-to-veridical depth estimate. On the other hand, a Bayesian averaging or “modified weak fusion” model for depth cue combination does not provide for the observed enhancement of perceived depth from weak depth cues. Thus, the current models do not account for the empirical properties of perceived depth from multiple depth cues.

The present analysis shows that these problems can be addressed by an asymptotic, or hyperbolic Minkowski, approach to cue combination. With appropriate parameters, this first-order rule gives strong summation for a few depth cues, but the effect of an increasing number of cues beyond that remains too weak to account for the available degree of perceived depth magnitude. Finally, an accelerated asymptotic rule is proposed to match the empirical strength of perceived depth as measured, with appropriate behavior for any number of depth cues.
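The three combination rules contrasted in this abstract can be sketched numerically. A minimal sketch follows; the exponent `p`, the equal weighting, and the unit cue values are illustrative assumptions, not parameters from the paper.

```python
# Sketch of three depth-cue combination rules. All numbers here are
# illustrative assumptions, not values fitted in the paper.

def linear_sum(cues):
    # Simple linear summation: overestimates total depth when each
    # cue alone already signals close-to-veridical depth.
    return sum(cues)

def weighted_average(cues):
    # "Modified weak fusion"-style averaging (equal weights here):
    # never exceeds the strongest cue, so adding weak cues cannot
    # enhance perceived depth.
    return sum(cues) / len(cues)

def minkowski(cues, p=3.0):
    # Hyperbolic-Minkowski-style rule: strong summation for a few
    # cues, asymptotically weak growth as more cues are added.
    return sum(c ** p for c in cues) ** (1.0 / p)

cues = [1.0] * 4  # four cues, each signalling the same unit depth
print(linear_sum(cues))              # 4.0
print(weighted_average(cues))        # 1.0
print(round(minkowski(cues), 3))     # 1.587 (i.e., 4 ** (1/3))
```

With four identical unit cues the linear rule quadruples the depth estimate, averaging leaves it unchanged, and the Minkowski rule gives a moderate boost that grows ever more slowly with additional cues, which is the qualitative behavior the abstract describes.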


2020 ◽  
Vol 6 (2) ◽  
pp. eaay6036 ◽  
Author(s):  
R. C. Feord ◽  
M. E. Sumner ◽  
S. Pusdekar ◽  
L. Kalra ◽  
P. T. Gonzalez-Bellido ◽  
...  

The camera-type eyes of vertebrates and cephalopods exhibit remarkable convergence, but it is currently unknown whether the mechanisms for visual information processing in these brains, which exhibit wildly disparate architecture, are also shared. To investigate stereopsis in a cephalopod species, we affixed “anaglyph” glasses to cuttlefish and used a three-dimensional perception paradigm. We show that (i) cuttlefish have also evolved stereopsis (i.e., the ability to extract depth information from the disparity between left and right visual fields); (ii) when stereopsis information is intact, the time and distance covered before striking at a target are shorter; (iii) stereopsis in cuttlefish works differently from that of vertebrates, as cuttlefish can extract stereopsis cues from anticorrelated stimuli. These findings demonstrate that although there is convergent evolution in depth computation, cuttlefish stereopsis is likely afforded by a different algorithm than in humans, and not just a different implementation.


2021 ◽  
Vol 17 (1) ◽  
pp. 20200478
Author(s):  
Job Aben ◽  
Johannes Signer ◽  
Janne Heiskanen ◽  
Petri Pellikka ◽  
Justin M. J. Travis

Animal spatial behaviour is often presumed to reflect responses to visual cues. However, inference of behaviour in relation to the environment is challenged by the lack of objective methods to identify the information that is effectively available to an animal from a given location. In general, animals are assumed to have unconstrained information on the environment within a detection circle of a certain radius (the perceptual range; PR). However, visual cues are only available up to the first physical obstruction within an animal's PR, making information availability a function of an animal's location within the physical environment (the effective visual perceptual range; EVPR). By using LiDAR data and viewshed analysis, we modelled forest birds' EVPRs at each step along a movement path. We found that the EVPR was on average 0.063% that of an unconstrained PR and, by applying a step-selection analysis, that individuals are 1.55 times more likely to move to a tree within their EVPR than to an equivalent tree outside it. This demonstrates that behavioural choices can be substantially impacted by the characteristics of an individual's EVPR and highlights that inferences made from movement data may be improved by accounting for the EVPR.
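The PR/EVPR distinction can be illustrated with a toy two-dimensional line-of-sight check on an occupancy grid: a target within the detection radius still falls outside the EVPR if an obstruction lies on the sight line. The grid, radius, and helper functions below are hypothetical simplifications, not the paper's LiDAR viewshed analysis.

```python
# Toy 2-D illustration of the effective visual perceptual range (EVPR):
# a target inside the detection radius (the PR) is only "visible" if no
# occupied cell blocks the line of sight. Grid and radius are made up.
import math

def line_cells(x0, y0, x1, y1):
    """Grid cells along the segment (a simple Bresenham walk)."""
    cells, dx, dy = [], abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err, x, y = dx - dy, x0, y0
    while (x, y) != (x1, y1):
        cells.append((x, y))
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    cells.append((x1, y1))
    return cells

def visible(grid, origin, target, radius):
    """Target is in the EVPR if within `radius` of `origin` and no
    occupied cell (grid value 1) lies strictly between the two."""
    if math.dist(origin, target) > radius:
        return False
    between = line_cells(*origin, *target)[1:-1]
    return all(grid[y][x] == 0 for x, y in between)

grid = [[0, 0, 0, 0],
        [0, 1, 0, 0],   # a single obstruction at (1, 1)
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
print(visible(grid, (0, 0), (3, 0), 5))  # clear sight line -> True
print(visible(grid, (0, 0), (2, 2), 5))  # blocked by (1, 1) -> False
```

Both targets sit inside the unconstrained PR (radius 5), yet only one is in the EVPR, which is the sense in which the EVPR can be a small fraction of the PR.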


2005 ◽  
Vol 93 (1) ◽  
pp. 620-626 ◽  
Author(s):  
Jay Hegdé ◽  
David C. Van Essen

Disparity tuning in visual cortex has been shown using a variety of stimulus types that contain stereoscopic depth cues. It is not known whether different stimuli yield similar disparity tuning curves. We studied whether cells in visual area V4 of the macaque show similar disparity tuning profiles when the same set of disparity values was tested using bars or dynamic random dot stereograms, which are among the most commonly used stimuli for this purpose. In a majority of V4 cells (61%), the shape of the disparity tuning profile differed significantly for the two stimulus types. The two sets of stimuli yielded statistically indistinguishable disparity tuning profiles for only a small minority (6%) of V4 cells. These results indicate that disparity tuning in V4 is stimulus-dependent. Given that bar stimuli contain two-dimensional (2-D) shape cues whereas random dot stereograms do not, our results also indicate that V4 cells represent 2-D shape and binocular disparity in an interdependent fashion, revealing an unexpected complexity in the analysis of depth and three-dimensional shape.


1994 ◽  
Vol 19 (12) ◽  
pp. 901 ◽  
Author(s):  
G. P. Nordin ◽  
J. H. Kulick ◽  
M. Jones ◽  
P. Nasiatka ◽  
R. G. Lindquist ◽  
...  
