Surface color perception in three-dimensional scenes

2006 · Vol 23 (3-4) · pp. 311-321
Author(s): Huseyin Boyaci, Katja Doerschner, Jacqueline L. Snyder, Laurence T. Maloney

Researchers studying surface color perception have typically used stimuli that consist of a small number of matte patches (real or simulated) embedded in a plane perpendicular to the line of sight (a “Mondrian,” Land & McCann, 1971). Reliable estimation of the color of a matte surface is a difficult if not impossible computational problem in such limited scenes (Maloney, 1999). In more realistic, three-dimensional scenes the difficulty of the problem increases, in part, because the effective illumination incident on the surface (the light field) now depends on surface orientation and location. We review recent work in multiple laboratories that examines (1) the degree to which the human visual system discounts the light field in judging matte surface lightness and color and (2) what illuminant cues the visual system uses in estimating the flow of light in a scene.
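The orientation dependence described above can be sketched with a toy Lambertian model (an illustration of the principle, not the authors' model; the light direction, 60-degree tilt, and reflectance values are arbitrary assumptions):

```python
import numpy as np

def effective_illumination(normal, light_dir, intensity=1.0):
    """Irradiance on a matte (Lambertian) surface: depends on orientation."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    return intensity * max(0.0, float(n @ l))

def estimate_reflectance(luminance, normal, assumed_light_dir, intensity=1.0):
    """'Discounting the light field': divide out the estimated irradiance."""
    e = effective_illumination(normal, assumed_light_dir, intensity)
    return luminance / e if e > 0 else float("nan")

light = (0.0, 0.0, 1.0)                                 # overhead light (+z)
facing = (0.0, 0.0, 1.0)                                # normal toward the light
oblique = (0.0, np.sin(np.pi / 3), np.cos(np.pi / 3))   # tilted 60 degrees away

# Equal reflectance (0.5) yields unequal luminances at the two orientations...
lum_facing = 0.5 * effective_illumination(facing, light)
lum_oblique = 0.5 * effective_illumination(oblique, light)

# ...but discounting the orientation-dependent irradiance recovers the same
# reflectance (0.5) for both surfaces:
r_facing = estimate_reflectance(lum_facing, facing, light)
r_oblique = estimate_reflectance(lum_oblique, oblique, light)
```

Equal-reflectance surfaces at different orientations send different luminances to the eye; only a system that estimates and divides out the local irradiance recovers a constant surface color.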

Perception
2019 · Vol 48 (6) · pp. 500-514
Author(s): Yuki Kobayashi, Kazunori Morikawa

The human visual system can extract information about surface reflectance (lightness) from light intensity; intensity, however, confounds reflectance and illumination. We hypothesized that, to solve this lightness problem, the visual system exploits an internally held prior assumption that illumination falls from above. Experiment 1 showed that an upward-facing surface is perceived as darker than a downward-facing surface, supporting our hypothesis. Experiment 2 showed the same result in the absence of explicit illumination cues. No corresponding effect of a light-from-the-left prior was observed in Experiment 3. Because the upward- and downward-facing surface stimuli in Experiments 1 and 2 differed in neither two-dimensional configuration nor three-dimensional structure, the difference in perceived lightness appears to reflect observers' prior assumption that illumination always comes from above. Other studies have not accounted for this illusory effect, and the finding provides additional insight into lightness perception.
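A minimal sketch of how a light-from-above prior could produce the reported asymmetry, assuming a hypothetical directional-plus-ambient irradiance model (the prior direction, ambient term, and 45-degree tilts are illustrative choices, not the authors' model):

```python
import numpy as np

def assumed_irradiance(normal, light_dir=(0.0, 0.0, 1.0), ambient=0.2):
    """Irradiance under a light-from-above prior (+z is up).
    The directional-plus-ambient model and its constants are illustrative."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    return ambient + max(0.0, float(n @ l))

# Two patches with identical retinal luminance; the viewer looks along -y,
# and each surface normal is tilted 45 degrees toward or away from "up":
upward   = (0.0, np.cos(np.radians(45.0)),  np.sin(np.radians(45.0)))
downward = (0.0, np.cos(np.radians(45.0)), -np.sin(np.radians(45.0)))
luminance = 0.3

# Inferred reflectance = luminance / assumed irradiance:
r_up   = luminance / assumed_irradiance(upward)    # smaller -> looks darker
r_down = luminance / assumed_irradiance(downward)  # larger  -> looks lighter
```

Under the prior, the upward-facing patch is assumed to receive more light, so the same luminance is attributed to a lower reflectance, i.e. it is perceived as darker.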


2021 · Vol 12 (1)
Author(s): Xiaohua Feng, Liang Gao

Cameras with extreme speeds are enabling technologies in both fundamental and applied sciences. However, existing ultrafast cameras cannot cope with extended three-dimensional scenes and fall short for non-line-of-sight imaging, which requires a long sequence of time-resolved two-dimensional data. Current non-line-of-sight imagers therefore need to perform extensive scanning in the spatial and/or temporal dimension, restricting their use to imaging only static or slowly moving objects. To address these long-standing challenges, we present ultrafast light field tomography (LIFT), a transient imaging strategy that offers a temporal sequence of over 1,000 frames and enables highly efficient light field acquisition, allowing snapshot capture of the complete four-dimensional space and time. With LIFT, we demonstrated three-dimensional imaging of light-in-flight phenomena with <10 picosecond resolution and non-line-of-sight imaging at a 30 Hz video rate. Furthermore, we showed how LIFT can benefit from deep learning for improved and accelerated image formation. LIFT may facilitate broad adoption of time-resolved methods in various disciplines.
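The tomographic idea behind LIFT can be conveyed with a toy example: each 1D sensor readout is treated as a line-integral projection of a 2D scene, and the scene is recovered from a handful of such projections. The unfiltered backprojection below is an illustration of that general principle only, not the paper's reconstruction algorithm; the 33×33 scene and 8 angles are arbitrary choices:

```python
import numpy as np

def rotate_nn(img, angle):
    """Nearest-neighbour rotation about the image centre (toy helper)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    ys, xs = np.indices(img.shape)
    c, s = np.cos(angle), np.sin(angle)
    y = c * (ys - cy) - s * (xs - cx) + cy   # inverse mapping
    x = s * (ys - cy) + c * (xs - cx) + cx
    yi, xi = np.rint(y).astype(int), np.rint(x).astype(int)
    ok = (yi >= 0) & (yi < h) & (xi >= 0) & (xi < w)
    out = np.zeros_like(img)
    out[ok] = img[yi[ok], xi[ok]]
    return out

def project(img, angle):
    """One 1D line-integral measurement of a 2D scene."""
    return rotate_nn(img, angle).sum(axis=0)

def backproject(projections, angles, shape):
    """Unfiltered backprojection: smear each 1D profile back and sum."""
    recon = np.zeros(shape)
    for p, a in zip(projections, angles):
        recon += rotate_nn(np.tile(p, (shape[0], 1)), -a)
    return recon / len(angles)

# Toy scene: a single bright point at the centre of a 33x33 field.
scene = np.zeros((33, 33))
scene[16, 16] = 1.0
angles = np.linspace(0, np.pi, 8, endpoint=False)
sinogram = [project(scene, a) for a in angles]
recon = backproject(sinogram, angles, scene.shape)
peak = np.unravel_index(recon.argmax(), recon.shape)  # the point is recovered
```

Because every projection is acquired in parallel on a 1D sensor, a time-resolved detector can record one such sinogram per time bin, which is what makes snapshot space-time acquisition possible.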


1989 · Vol 1 (3) · pp. 324-333
Author(s): Masud Husain, Stefan Treue, Richard A. Andersen

Although it is appreciated that humans can use a number of visual cues to perceive the three-dimensional (3-D) shape of an object, for example, luminance, orientation, binocular disparity, and motion, the exact mechanisms employed are not known (DeYoe and Van Essen 1988). An important approach to understanding the computations performed by the visual system is to develop algorithms (Marr 1982) or neural network models (Lehky and Sejnowski 1988; Siegel 1987) that are capable of computing shape from specific cues in the visual image. In this study we investigated the ability of observers to see the 3-D shape of an object using motion cues, so-called structure-from-motion (SFM). We measured human performance in a two-alternative forced-choice task using novel dynamic random-dot stimuli with limited point lifetimes. We show that the human visual system integrates motion information spatially and temporally (across several point lifetimes) as part of the process for computing SFM. We conclude that SFM algorithms must include surface interpolation to account for human performance. Our experiments also provide evidence that local velocity information, and not position information derived from discrete views of the image (as proposed by some algorithms), is used to solve the SFM problem by the human visual system.
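The role of local velocity can be illustrated with the classic rotating-cylinder geometry under orthographic projection: for rotation about a vertical axis, horizontal image velocity is proportional to depth, so depth is recoverable from velocity alone. This sketch uses arbitrary parameters and idealized noiseless dots, not the authors' limited-lifetime stimuli:

```python
import numpy as np

rng = np.random.default_rng(0)
n_dots, omega, radius = 200, 0.5, 1.0           # arbitrary demo parameters
theta = rng.uniform(0.0, 2.0 * np.pi, n_dots)   # dot angles on the cylinder
y = rng.uniform(-1.0, 1.0, n_dots)              # vertical position (unchanged by rotation)

x = radius * np.cos(theta)   # horizontal image position (orthographic projection)
z = radius * np.sin(theta)   # depth, unknown to the observer
vx = -omega * z              # horizontal image velocity: dx/dt = -omega * z

# Local velocity alone recovers depth (given the rotation rate), with no need
# to match discrete positions across views:
z_hat = -vx / omega
```

With limited point lifetimes no dot survives long enough to track across many frames, which is why human performance in the study implicates spatio-temporal integration of these local velocity signals over an interpolated surface.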


2003 · Vol 26 (1) · pp. 38-39
Author(s): Laurence T. Maloney

Byrne & Hilbert propose that color can be identified with explicit properties of physical surfaces. I argue that this claim must be qualified to take into account the constraints needed to make recovery of surface color information possible. When these constraints are satisfied, a biological visual system can establish a correspondence between perceived surface color and specific surface properties.


2011 · pp. 280-307
Author(s): Laurence T. Maloney, Holly E. Gerhard, Huseyin Boyaci, Katja Doerschner

2020 · pp. bmjmilitary-2020-001493
Author(s): Bonnie Noeleen Posselt, M Winterbottom

Visual standards for military aviators were historically set in the 1920s, with requirements based on the visual systems of aircraft at the time, and these standards have changed very little despite significant advances in aircraft technology. Helmet-mounted displays (HMDs) today enable pilots to keep their head out of the cockpit while flying and can be monocular, biocular or binocular in design. With next-generation binocular HMDs, flight data can be displayed in three-dimensional stereo to declutter the information presented, improving search times and potentially improving overall performance. However, these new visually demanding technologies place previously unconsidered stresses on the human visual system. As such, new medical vision standards may be required for military aircrew, along with improved testing methods to accurately characterise stereo acuity.
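Why stereo presentation pushes toward finer acuity standards follows from the small-angle disparity geometry, η ≈ a·Δd/d² (a = interpupillary distance). A sketch with illustrative numbers (65 mm IPD, 20 arcsec threshold, 5 m distance; these are assumed values, not figures from the article):

```python
import math

def disparity_arcsec(ipd_m, dist_m, delta_m):
    """Relative binocular disparity, small-angle approximation:
    eta ~= ipd * delta_d / d^2 (radians), returned in arcseconds."""
    eta_rad = ipd_m * delta_m / dist_m ** 2
    return math.degrees(eta_rad) * 3600.0

# Depth offset needed to reach a 20-arcsec stereo threshold at 5 m,
# for a 65 mm interpupillary distance (illustrative numbers):
a, d, thresh = 0.065, 5.0, 20.0
delta = (thresh / 3600.0) * (math.pi / 180.0) * d ** 2 / a   # metres, ~0.037
```

The detectable depth step here is roughly 3.7 cm and grows with the square of viewing distance, so small individual differences in stereo acuity translate into large differences in usable stereo depth on a display.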


2014 · Vol 886 · pp. 374-377
Author(s): Yan Wu, Qi Li

Display technology has improved considerably in stereoscopic imaging, and image quality has risen with technical progress, but comparatively little effort has gone into reducing visual fatigue. This study investigated one way to reduce visual fatigue caused by three-dimensional images. Static random-dot stereograms (RDS) were used as stimuli. Each subject's performance was recorded at disparities of 3.27', 6.54', 8.18', 11.45', 14.72', 17.99', 21.26', and 24.53'. Results showed that reaction times were consistently longer for uncrossed disparities than for crossed disparities. For crossed disparities, the human visual system was most sensitive to images with a disparity of 6.54'; for uncrossed disparities, it was most sensitive to images with a disparity of 8.18'.
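A sketch of how such stimuli can be constructed: convert an arcminute disparity to an on-screen pixel shift and build a static RDS with a disparate central square. The viewing distance, pixel density, and image size are assumed values, not the study's parameters:

```python
import numpy as np

def arcmin_to_pixels(disparity_arcmin, viewing_distance_cm, pixels_per_cm):
    """Convert an angular disparity to an on-screen pixel shift
    (small-angle approximation; display parameters are assumptions)."""
    shift_cm = viewing_distance_cm * np.radians(disparity_arcmin / 60.0)
    return shift_cm * pixels_per_cm

def make_rds(size=128, shift_px=4, rng=None):
    """Static random-dot stereogram: the right image repeats the left,
    with a central square shifted horizontally by the disparity.
    The sign of shift_px selects crossed vs. uncrossed disparity."""
    if rng is None:
        rng = np.random.default_rng(0)
    left = (rng.random((size, size)) < 0.5).astype(float)
    right = left.copy()
    lo, hi = size // 4, 3 * size // 4
    right[lo:hi, lo:hi] = np.roll(left[lo:hi, lo:hi], shift_px, axis=1)
    return left, right

# e.g. a 6.54' disparity viewed at 60 cm on a 40 px/cm display:
px = arcmin_to_pixels(6.54, 60.0, 40.0)
left, right = make_rds(shift_px=int(round(px)))
```

When fused, the shifted square appears in front of (crossed) or behind (uncrossed) the surround, which is what lets reaction time be measured as a function of disparity magnitude and sign.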

