Disparity-tuned channels of the human visual system

1993 ◽  
Vol 10 (4) ◽  
pp. 585-596 ◽  
Author(s):  
Lawrence K. Cormack ◽  
Scott B. Stevenson ◽  
Clifton M. Schor

Traditionally, it has been thought that the processing of binocular disparity for the perception of stereoscopic depth is accomplished via three types of disparity-selective channels – “near,” “far,” and “tuned.” More recent evidence challenges this notion. We have derived disparity-tuning functions psychophysically using a subthreshold summation (i.e. low-level masking) technique. We measured correlation-detection thresholds for dynamic random-element stereograms containing either one or two surfaces in depth. The resulting disparity-tuning functions show an opponent-type profile, indicating the presence of inhibition between disparity-tuned units in the visual system. Moreover, there is clear inhibition between disparities of the same sign, obviating a strict adherence to near-far opponency. These results compare favorably with tuning functions derived psychophysically using an adaptation technique, and with the tuning profiles from published single-unit recordings. Our results suggest a continuum of overlapping disparity-tuned channels, which is consistent with recent physiological evidence as well as models based on other psychophysical data.

1992 ◽  
Vol 4 (4) ◽  
pp. 573-589 ◽  
Author(s):  
Daniel Kersten ◽  
Heinrich H. Bülthoff ◽  
Bennett L. Schwartz ◽  
Kenneth J. Kurtz

It is well known that the human visual system can reconstruct depth from simple random-dot displays given binocular disparity or motion information. This fact has lent support to the notion that stereo and structure from motion systems rely on low-level primitives derived from image intensities. In contrast, the judgment of surface transparency is often considered to be a higher-level visual process that, in addition to pictorial cues, utilizes stereo and motion information to separate the transparent from the opaque parts. We describe a new illusion and present psychophysical results that question this sequential view by showing that depth from transparency and opacity can override the bias to see rigid motion. The brain's computation of transparency may involve a two-way interaction with the computation of structure from motion.


Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 62-62 ◽  
Author(s):  
A Grodon ◽  
M Fahle

Some features of complex visual displays are analysed effortlessly and in parallel by the human visual system, without requiring scrutiny. Examples of such features are changes of luminance, colour, orientation, and movement. We measured thresholds as well as reaction times for the detection of abrupt spatial changes in luminance in the presence of luminance gradients, in order to evaluate the ability of the system to ignore such gradients. Stimuli were presented on a 20-inch monitor under control of a Silicon Graphics workstation. Luminance was calibrated by means of a photometer (Minolta). We presented between 4 and 14 rectangles simultaneously on a homogeneous dark background. Rectangles were arranged on an incomplete, imaginary circle around the fixation point and luminance changed stepwise from one rectangle to the next. Five observers had to indicate whether all luminance steps between the rectangles were subjectively equal or whether one luminance step was larger. Detection thresholds were determined for the larger step as a function of the small steps (‘base step size’) by means of an adaptive staircase procedure. The smallest luminance steps were detected when the base step size was zero and when only a few rectangles were presented. Thresholds increased slightly with the number of rectangles displayed simultaneously, and to a greater extent (by up to a factor of 2) with increasing base step size. The results of all observers improved significantly through practice, by about a factor of 2. We conclude that the visual system is unable to completely eliminate gradients of luminance and to isolate sharp transitions in luminance.
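The adaptive staircase mentioned above is a standard threshold-estimation procedure. The abstract does not specify which rule was used, so the sketch below illustrates a generic 2-down/1-up staircase (which converges near the 70.7%-correct point); the rule, step size, reversal count, and simulated observer are all illustrative assumptions, not the authors' parameters.

```python
import random

def two_down_one_up_staircase(detect, start=1.0, step=0.1, floor=0.0, n_reversals=8):
    """Generic 2-down/1-up adaptive staircase.

    `detect(level)` returns True if the observer detected the stimulus at
    `level`. Two consecutive detections lower the level (harder); any miss
    raises it (easier). The threshold is estimated as the mean level at the
    direction reversals.
    """
    level, correct_streak, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if detect(level):
            correct_streak += 1
            if correct_streak == 2:          # two in a row -> step down
                correct_streak = 0
                if direction == +1:          # direction changed: record reversal
                    reversals.append(level)
                direction = -1
                level = max(floor, level - step)
        else:                                # miss -> step up
            correct_streak = 0
            if direction == -1:              # direction changed: record reversal
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals) / len(reversals)   # threshold estimate

# Simulated observer whose detection probability grows with stimulus level
# (a hypothetical psychometric function, for illustration only).
random.seed(1)
threshold = two_down_one_up_staircase(lambda lvl: random.random() < min(1.0, lvl / 0.5))
```

Real experiments would interleave several such staircases and fit a psychometric function to the trial data rather than averaging reversals, but the convergence logic is the same.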


2012 ◽  
Vol 12 (9) ◽  
pp. 39-39
Author(s):  
C. Quaia ◽  
B. Sheliga ◽  
L. Optican ◽  
B. Cumming

Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 230-230
Author(s):  
B Lee

In 1972 Horace Barlow (“Single units and sensation: a neuron doctrine for perceptual psychology?” Perception 1 371–394) proposed a set of dogmas to guide vision scientists in interpreting neurophysiological data. The 20th anniversary of ECVP is an appropriate occasion to ask if single-unit recordings have really helped us understand the visual system. The answer may be affirmative, but interpreting single-unit data has proved to be much more of a challenge than was anticipated in that early and optimistic era of single-unit recording. I review data from retinal and cortical experiments to illustrate this thesis, and ask if Barlow's dogmas are still relevant to current visual neuroscience.


Perception ◽  
1996 ◽  
Vol 25 (4) ◽  
pp. 381-398 ◽  
Author(s):  
J Farley Norman ◽  
James T Todd

The ability of observers to discriminate depth and orientation differences between separated local regions on object surfaces was examined. The objects were defined by many optical sources of information simultaneously, including shading, texture, motion, and binocular disparity. Despite the full-cue nature of the displays, the observers' performance was relatively poor, with Weber fractions ranging from 10% to 40%. The Weber fractions were considerably lower for discriminations of surface-orientation differences than for similar discriminations of depth differences. The ability of observers to discriminate surface-orientation differences was approximately invariant over the separation of the regions in the projected image. In contrast, the ability to discriminate depth differences was highly influenced by the amount of image separation. This qualitative difference between the perception of depth intervals and surface-orientation differences suggests that knowledge of depths and orientations may be represented separately within the human visual system.
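The Weber fractions quoted above are simply the ratio of the just-discriminable difference to the base magnitude of the quantity being judged. A trivial sketch (the 2 cm / 10 cm values below are made-up numbers for illustration, not data from the study):

```python
def weber_fraction(threshold_difference, base_value):
    """Weber fraction: smallest discriminable change divided by the base magnitude."""
    return threshold_difference / base_value

# E.g. a just-noticeable depth difference of 2 cm on a 10 cm depth interval
# gives a Weber fraction of 0.2, i.e. 20% -- within the 10-40% range reported.
wf = weber_fraction(2.0, 10.0)
```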


2010 ◽  
Vol 114 (7) ◽  
pp. 758-773 ◽  
Author(s):  
A. Benoit ◽  
A. Caplier ◽  
B. Durette ◽  
J. Herault

2016 ◽  
Vol 283 (1830) ◽  
pp. 20160383 ◽  
Author(s):  
Alexander A. Muryy ◽  
Roland W. Fleming ◽  
Andrew E. Welchman

Visually identifying glossy surfaces can be crucial for survival (e.g. ice patches on a road), yet estimating gloss is computationally challenging for both human and machine vision. Here, we demonstrate that human gloss perception exploits some surprisingly simple binocular fusion signals, which are likely available early in the visual cortex. In particular, we show that the unusual disparity gradients and vertical offsets produced by reflections create distinctive ‘proto-rivalrous’ (barely fusible) image regions that are a critical indicator of gloss. We find that manipulating the gradients and vertical components of binocular disparities yields predictable changes in material appearance. Removing or occluding proto-rivalrous signals makes surfaces look matte, while artificially adding such signals to images makes them appear glossy. This suggests that the human visual system has internalized the idiosyncratic binocular fusion characteristics of glossy surfaces, providing a straightforward means of estimating surface attributes using low-level image signals.

