Shape from Shading. I: Surface Curvature and Orientation

Perception ◽  
1994 ◽  
Vol 23 (2) ◽  
pp. 169-189 ◽  
Author(s):  
Alan Johnston ◽  
Peter J Passmore

The human visual system makes effective use of shading alone in recovering the shape of objects. Pictures of sculptures are readily interpreted—a situation where shading provides virtually the sole cue to shape. However, shading has been considered a poor cue to depth in comparison with retinal disparity and kinetic cues. Curvature discrimination thresholds were measured with the use of a surface-alignment task for a range of surface curvatures from 0.16 cm−1 to 1.06 cm−1. Weber fractions were around 0.1, demonstrating considerable precision in this task. Weber fractions did not vary substantially as a function of surface curvature. Rotation of the light source around the line of sight had no effect on curvature discrimination but rotation towards the viewer increased discrimination thresholds. In contrast, slant discrimination declined with rotation of the light-source vector towards the viewpoint. When a band-limited random grey-level texture was mapped onto the sphere, curvature discrimination thresholds increased gradually as a function of texture contrast, even though texture and shading provided consistent cues to depth. Adding texture also increased slant discrimination thresholds, demonstrating that texture can act as a source of noise in shape-from-shading tasks. The psychophysical findings have been used to evaluate whether current algorithms for shape from shading in computer vision could serve as models of human three-dimensional shape analysis and to highlight low-level intramodular interactions between depth cues. It is demonstrated that, in the case of surfaces defined by shading, curvature descriptions are primary and do not depend upon the prior encoding of surface orientation, and Koenderink's local-shape index is suggested as an alternative intermediate representation of surface shape in the human visual system.
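The local-shape index suggested in the final sentence has a simple closed form. As an illustrative sketch (standard differential geometry, not code from the paper), it can be computed from a surface point's two principal curvatures:

```python
import math

def shape_index(k1, k2):
    """Koenderink's shape index S in [-1, 1], computed from the
    principal curvatures k1 >= k2. S = -1 is a spherical cup,
    0 a symmetric saddle, +1 a spherical cap; the planar case
    (k1 = k2 = 0) is undefined."""
    if k1 < k2:
        k1, k2 = k2, k1
    if k1 == 0 and k2 == 0:
        raise ValueError("shape index is undefined for a plane")
    return (2.0 / math.pi) * math.atan2(k1 + k2, k1 - k2)

# A convex sphere of radius r has k1 = k2 = 1/r, so S = +1 for any r:
# the index captures the type of shape, while the magnitude of the
# curvatures (the "curvedness") is factored out -- one reason it is a
# candidate intermediate representation independent of absolute scale.
```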


Perception ◽  
1994 ◽  
Vol 23 (2) ◽  
pp. 191-200 ◽  
Author(s):  
Alan Johnston ◽  
Peter J Passmore

Pattern-acuity tasks have provided valuable information about the precision with which the visual system can make judgments about relative spatial position in two-dimensional images. However, outside the laboratory the visual system is habitually faced with the more difficult task of making positional judgments within a three-dimensional spatial environment. Thus our perceptual systems for representing surface shape also need to support the recovery of the location and disposition of features in a three-dimensional space. An investigation of the precision of three-dimensional position judgments in two spatial-judgment tasks, arc length bisection along geodesics and geodesic alignment, is reported. The spatial-judgment tasks were defined with reference to a sphere rendered by means of ray-casting techniques. The presence of shading and texture cues had no effect on discrimination thresholds in either task. Observers' constant errors were generally less than the just noticeable distance, demonstrating that the observers can perform these positional judgment tasks without substantial bias. It is argued that there is no explicit computation of arc length on the basis of shading and texture information and that surface-orientation information cannot be used as a reference in geodesic-alignment tasks. The results raise questions about the utility of a representation of surface orientation in the human visual system.
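Both judgment tasks are defined on great circles of a sphere. As a hedged illustration of the underlying geometry (these are standard spherical-geometry formulas, not taken from the paper), geodesic arc length and the bisecting point can be computed from unit position vectors:

```python
import math

def arc_length(p, q, r=1.0):
    """Geodesic (great-circle) distance between points p and q on a
    sphere of radius r, each given as a unit 3-vector."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q))))
    return r * math.acos(dot)

def geodesic_midpoint(p, q):
    """Point bisecting the geodesic from p to q (unit vectors, not
    antipodal): normalise the chord midpoint back onto the sphere."""
    m = [(a + b) / 2.0 for a, b in zip(p, q)]
    n = math.sqrt(sum(c * c for c in m))
    return [c / n for c in m]
```

An ideal observer performing the bisection task would place the probe point so that the two sub-arcs returned by `arc_length` are equal; the abstract's claim is that human observers achieve this without bias, yet without explicitly computing arc length from shading or texture.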



2006 ◽  
Vol 73 (10) ◽  
pp. 712 ◽  
Author(s):  
N. N. Krasil'nikov ◽  
E. P. Mironenko ◽  
O. I. Krasil'nikova




2006 ◽  
Vol 23 (3-4) ◽  
pp. 311-321 ◽  
Author(s):  
HUSEYIN BOYACI ◽  
KATJA DOERSCHNER ◽  
JACQUELINE L. SNYDER ◽  
LAURENCE T. MALONEY

Researchers studying surface color perception have typically used stimuli that consist of a small number of matte patches (real or simulated) embedded in a plane perpendicular to the line of sight (a “Mondrian,” Land & McCann, 1971). Reliable estimation of the color of a matte surface is a difficult if not impossible computational problem in such limited scenes (Maloney, 1999). In more realistic, three-dimensional scenes the difficulty of the problem increases, in part, because the effective illumination incident on the surface (the light field) now depends on surface orientation and location. We review recent work in multiple laboratories that examines (1) the degree to which the human visual system discounts the light field in judging matte surface lightness and color and (2) what illuminant cues the visual system uses in estimating the flow of light in a scene.



Perception ◽  
2019 ◽  
Vol 48 (6) ◽  
pp. 500-514
Author(s):  
Yuki Kobayashi ◽  
Kazunori Morikawa

The human visual system can extract information on surface reflectance (lightness) from light intensity; light intensity, however, confounds reflectance with illumination. We hypothesized that, to solve this lightness problem, the visual system draws on the internally held prior assumption that illumination falls from above. Experiment 1 showed that an upward-facing surface is perceived to be darker than a downward-facing surface, consistent with our hypothesis. Experiment 2 showed the same result in the absence of explicit illumination cues. No corresponding effect of a light-from-the-left prior assumption was observed in Experiment 3. The upward- and downward-facing surface stimuli in Experiments 1 and 2 did not differ in two-dimensional configuration or three-dimensional structure, so perceived lightness appears to have been shaped by the observers' prior assumption that illumination always falls from above. Previous studies have not accounted for this illusory effect, and the finding provides additional insight into lightness perception.



1989 ◽  
Vol 1 (3) ◽  
pp. 324-333 ◽  
Author(s):  
Masud Husain ◽  
Stefan Treue ◽  
Richard A. Andersen

Although it is appreciated that humans can use a number of visual cues to perceive the three-dimensional (3-D) shape of an object, for example, luminance, orientation, binocular disparity, and motion, the exact mechanisms employed are not known (DeYoe and Van Essen 1988). An important approach to understanding the computations performed by the visual system is to develop algorithms (Marr 1982) or neural network models (Lehky and Sejnowski 1988; Siegel 1987) that are capable of computing shape from specific cues in the visual image. In this study we investigated the ability of observers to see the 3-D shape of an object using motion cues, so-called structure from motion (SFM). We measured human performance in a two-alternative forced-choice task using novel dynamic random-dot stimuli with limited point lifetimes. We show that the human visual system integrates motion information spatially and temporally (across several point lifetimes) as part of the process for computing SFM. We conclude that SFM algorithms must include surface interpolation to account for human performance. Our experiments also provide evidence that local velocity information, and not position information derived from discrete views of the image (as proposed by some algorithms), is used by the human visual system to solve the SFM problem.
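The claim that local velocity carries the depth signal has a simple geometric basis: under orthographic projection of an object rotating about a vertical axis, the horizontal image velocity of each point is proportional to that point's depth. A minimal sketch (the rotating-object geometry here is an illustrative assumption, not the paper's actual stimulus):

```python
import math

def image_velocity(x, z, omega, dt=1e-4):
    """Horizontal image velocity of a point with image position x and
    depth z on an object rotating about the vertical axis at omega
    rad/s, under orthographic projection (central finite difference)."""
    def project(t):
        # Rotation about the y-axis moves the point's x-coordinate.
        return x * math.cos(omega * t) + z * math.sin(omega * t)
    return (project(dt) - project(-dt)) / (2 * dt)

# At t = 0 the velocity equals omega * z exactly, so the instantaneous
# velocity field is a scaled depth map of the visible surface -- which
# is why velocity-based SFM needs no correspondence of discrete views.
```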



i-Perception ◽  
10.1068/if646 ◽  
2012 ◽  
Vol 3 (9) ◽  
pp. 646-646
Author(s):  
Jisoo Hong ◽  
Keehoon Hong ◽  
Byoungho Lee


2016 ◽  
Vol 283 (1826) ◽  
pp. 20160062 ◽  
Author(s):  
Sarah Zylinski ◽  
D. Osorio ◽  
Sonke Johnsen

Humans use shading as a cue to three-dimensional form by combining low-level information about light intensity with high-level knowledge about objects and the environment. Here we examine how cuttlefish Sepia officinalis respond to light and shadow in shading the white square (WS) feature of their body pattern. Cuttlefish display the WS in the presence of pebble-like objects, and they can shade it to render the appearance of surface curvature to a human observer, which might benefit camouflage. We tested how they colour the WS on visual backgrounds containing two-dimensional circular stimuli, some of which were shaded to suggest surface curvature, whereas others were uniformly coloured or divided into dark and light semicircles. WS shading, measured by lateral asymmetry, was greatest when the animal rested on a background of shaded circles or three-dimensional hemispheres, and weaker on plain white circles or black/white semicircles. In addition, shading was enhanced when light fell from the lighter side of the shaded stimulus, as expected for real convex surfaces. Thus the cuttlefish acts as if it perceives surface curvature from shading and takes account of the direction of illumination. However, the direction of WS shading is insensitive to the directions of background shading and illumination; instead the cuttlefish tend to turn to face the light source.



2020 ◽  
pp. bmjmilitary-2020-001493
Author(s):  
Bonnie Noeleen Posselt ◽  
M Winterbottom

Visual standards for military aviators were historically set in the 1920s, with requirements based on the visual systems of aircraft of the time, and these standards have changed very little despite significant advances in aircraft technology. Helmet-mounted displays (HMDs) today enable pilots to keep their head out of the cockpit while flying and can be monocular, biocular or binocular in design. With next-generation binocular HMDs, flight data can be displayed in three-dimensional stereo to declutter the information presented, improving search times and potentially improving overall performance further. However, these new visually demanding technologies place previously unconsidered stresses on the human visual system. As such, new medical vision standards may be required for military aircrew, along with improved testing methods to accurately characterise stereo acuity.



2014 ◽  
Vol 886 ◽  
pp. 374-377
Author(s):  
Yan Wu ◽  
Qi Li

Display technology for stereoscopic imaging has improved considerably, and image quality has risen with technical progress, but comparatively little effort has gone into reducing visual fatigue. This study investigated one way to reduce the visual fatigue caused by three-dimensional images. Static random-dot stereograms (RDS) were used as stimuli. Each subject's performance was recorded at disparities of 3.27', 6.54', 8.18', 11.45', 14.72', 17.99', 21.26', and 24.53'. Results showed that reaction times were consistently longer for uncrossed disparities than for crossed disparities. For crossed disparities, the human visual system was most sensitive to images with a disparity of 6.54'; for uncrossed disparities, it was most sensitive at 8.18'.
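The arcminute disparities above can be related to simulated depth with the standard small-angle approximation. A hedged sketch: the viewing distance and interocular separation below are assumed typical values, not parameters reported in the abstract.

```python
import math

ARCMIN = math.pi / (180.0 * 60.0)   # radians per arcminute

def depth_offset(disparity_arcmin, viewing_distance=0.6, iod=0.063):
    """Approximate depth offset (metres) from the fixation plane for a
    small binocular disparity: delta_d ~ disparity * D**2 / a, with the
    disparity in radians, D the viewing distance and a the interocular
    distance (both assumed values here). Valid for small disparities."""
    return disparity_arcmin * ARCMIN * viewing_distance ** 2 / iod

# At these assumed parameters the 6.54' (crossed) and 8.18' (uncrossed)
# disparities that were easiest to judge correspond to roughly 1.1 cm
# and 1.4 cm of simulated depth in front of / behind the screen.
```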


