Effect of Color Difference between Stereo Pair Images on Depth Perception from Binocular Disparity

Author(s):  
Masahiro Ishii ◽  
Mika Hoshiyama
2008 ◽  
Vol 25 (3) ◽  
pp. 361-364 ◽  
Author(s):  
SANG WOOK HONG ◽  
STEVEN K. SHEVELL

An open question in color rivalry is whether alternation between two colors is caused by a difference in receptoral stimulation or a difference in the neural representation of color appearance. This question was examined with binocular rivalry between physically identical lights that differed in appearance due to chromatic induction. Perceptual alternation was measured between gratings of the same chromaticity; each one was presented within a different patterned surround that caused the gratings, one to each eye, to appear unequal in hue because of chromatic induction. The gratings were presented dichoptically with binocular disparity so the rivalrous gratings appeared in front of the surround. Perceptual alternation in hue was found for the two physically identical chromaticities. Stereoscopic depth also was perceived, corroborating binocular neural combination despite color rivalry (Treisman, 1962). The results show that color rivalry is resolved after color-appearance shifts caused by chromatic context, and that color rivalry does not require competing unequal cone excitations from the rivalrous stimuli.


2000 ◽  
Vol 44 (21) ◽  
pp. 3-500
Author(s):  
Jing-Long Wu ◽  
Kazuyoshi Tsukamoto

The interaction between binocular disparity and occlusion in human depth perception was measured using random-dot stimuli. The experimental results suggest that when binocular disparity is set to an appropriate value, depth information is obtained mainly from the disparity cue, whereas when the occlusion ratio exceeds a certain value, depth information is obtained from the occlusion cue. Based on these results, we propose a method for generating images with depth information in a head-mounted display (HMD) when the binocular disparity and occlusion cues are used concurrently.
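The cue-selection rule the abstract describes can be sketched as a simple decision function. The threshold values below are illustrative placeholders, not the measured constants from the experiments:

```python
def dominant_depth_cue(disparity_arcmin: float, occlusion_ratio: float,
                       disparity_range=(5.0, 60.0),
                       occlusion_threshold=0.5) -> str:
    """Return which cue is expected to drive depth perception.

    Placeholder rule: occlusion dominates above a ratio threshold;
    otherwise disparity dominates when set within a usable range.
    """
    if occlusion_ratio > occlusion_threshold:
        return "occlusion"
    if disparity_range[0] <= disparity_arcmin <= disparity_range[1]:
        return "binocular disparity"
    return "ambiguous"

print(dominant_depth_cue(20.0, 0.1))  # binocular disparity
print(dominant_depth_cue(20.0, 0.8))  # occlusion
```

In an HMD rendering pipeline, such a rule could decide per scene region which cue to emphasize when both are available.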


2019 ◽  
Author(s):  
Guido Maiello ◽  
Manuela Chessa ◽  
Peter J. Bex ◽  
Fabio Solari

The human visual system is foveated: we see fine spatial detail in central vision, whereas resolution in the peripheral visual field is poor, and this loss of resolution follows an approximately logarithmic decrease. Additionally, our brain organizes visual input in polar coordinates, so the image projection between retina and primary visual cortex can be described mathematically by the log-polar transform. Here, we test and model how this space-variant visual processing affects the processing of binocular disparity, a key component of human depth perception. We observe that the fovea preferentially processes disparities at fine spatial scales, whereas the visual periphery is tuned to coarse spatial scales, in line with the naturally occurring distributions of depths and disparities in the real world. We further show that the visual system integrates disparity information across the visual field in a near-optimal fashion. We develop a foveated, log-polar model that mimics the processing of depth information in primary visual cortex and can process disparity directly in the cortical domain representation. The model takes real images as input and recreates the observed topography of disparity sensitivity in humans. Our findings support the notion that our foveated, binocular visual system has been moulded by the statistics of our visual environment.

Author summary: We investigate how humans perceive depth from binocular disparity at different spatial scales and across different regions of the visual field. We show that small changes in disparity-defined depth are detected best in central vision, whereas peripheral vision best captures the coarser structure of the environment. We also demonstrate that depth information extracted from different regions of the visual field is combined into a unified depth percept. We then construct an image-computable model of disparity processing that takes into account how our brain organizes the visual input at our retinae. The model operates directly in cortical image space and neatly accounts for human depth perception across the visual field.
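The log-polar mapping between retinal and cortical coordinates described above can be sketched in a few lines. The fovea radius and the clamping inside the fovea are illustrative assumptions, not the paper's fitted model parameters:

```python
import numpy as np

def to_log_polar(x, y, fovea_radius=1.0):
    """Map retinal coordinates (x, y) to cortical coordinates
    (xi, eta) = (log eccentricity, polar angle).

    Inside the fovea (r < fovea_radius) the mapping is clamped
    so that xi stays non-negative.
    """
    r = np.hypot(x, y)      # eccentricity from the fixation point
    eta = np.arctan2(y, x)  # polar angle
    xi = np.log(np.maximum(r, fovea_radius) / fovea_radius)
    return xi, eta

# Equal steps in cortical xi correspond to exponentially growing
# retinal eccentricity, capturing the logarithmic loss of resolution
# toward the periphery.
for xi_step in (0.0, 1.0, 2.0, 3.0):
    print(xi_step, np.exp(xi_step))
```

Under this mapping, a fixed-size cortical filter bank analyzes fine scales near the fovea and progressively coarser scales in the periphery, which is the space-variant behavior the study measures.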


2004 ◽  
Vol 21 (3) ◽  
pp. 373-376 ◽  
Author(s):  
STEVEN K. SHEVELL ◽  
DINGCAI CAO

Chromatic assimilation is a shift toward the color of nearby light. Several studies conclude that a neural process contributes to assimilation but the neural locus remains in question. Some studies posit a peripheral process, such as retinal receptive-field organization, while others claim the neural mechanism follows depth perception, figure/ground segregation, or perceptual grouping. The experiments here tested whether assimilation depends on a neural process that follows stereoscopic depth perception. By introducing binocular disparity, the test field judged in color was made to appear in a different depth plane than the light that induced assimilation. The chromaticity and spatial frequency of the inducing light, and the chromaticity of the test light, were varied. Chromatic assimilation was found with all inducing-light sizes and chromaticities, but the magnitude of assimilation did not depend on the perceived relative depth planes of the test and inducing fields. We found no evidence to support the view that chromatic assimilation depends on a neural process that follows binocular combination of the two eyes' signals.


1996 ◽  
Vol 58 (2) ◽  
pp. 271-282 ◽  
Author(s):  
Makoto Ichikawa ◽  
Shinya Saida

Perception ◽  
1993 ◽  
Vol 22 (8) ◽  
pp. 971-984 ◽  
Author(s):  
Makoto Ichikawa ◽  
Hiroyuki Egusa

The plasticity of binocular depth perception was investigated. Six subjects wore left-right reversing spectacles continuously for 10 or 11 days. On looking through the spectacles, the relation between the direction of physical depth (convex or concave) and the direction of binocular disparity (crossed or uncrossed) was reversed, but other depth cues did not change. When subjects observed stereograms through a haploscope and were asked to judge the direction of perceived depth, the directional relation between perceived depth and disparity was reversed both in the two line-contoured stereograms and in the random-dot stereogram in the middle of the wearing period, but the normal relation often returned late in the wearing period. When subjects observed two objects while wearing the spectacles and were asked which appeared the nearer, veridical depth perception increased as the wearing-time passed. These results indicate that the visual transformation reversing the direction of binocular disparity causes changes both in binocular stereopsis and in processes integrating different depth cues.


Author(s):  
K. Zhou ◽  
B. Gorte ◽  
R. Lindenbergh ◽  
E. Widyaningrum

Change detection is an essential step in locating areas where an old model should be updated. With its high density and accuracy, LiDAR data is often used to create 3D city models; however, updating LiDAR data at state or national level often takes years. Very high resolution (VHR) images, with their high update rate, are therefore an option for change detection. This paper provides a novel and efficient approach to pixel-based building change detection between past LiDAR data and new VHR images. The approach aims notably at reducing false alarms of changes near edges. For this purpose, LiDAR data is used to supervise the process of finding stereo pairs and to derive the changes directly. The paper proposes to derive three possible heights (and thus three DSMs) by exploiting planar segments from the LiDAR data. Near edges, the up to three possible heights are transformed into discrete disparities, and an optimal disparity is selected from a reasonable, computationally efficient range centered on them. If the optimal disparity is selected but the stereo pair found is still wrong, a change has been detected. A Markov random field (MRF) with built-in edge awareness from the images is designed to find the optimal disparity. By segmenting the pixels into plane and edge segments, the global optimization problem is split into many local ones, which makes the optimization very efficient. Using this optimization and a consecutive occlusion-consistency check, changes are derived from stereo pairs with high color difference. The algorithm is tested on changes in an urban area in the city of Amersfoort, the Netherlands. The two test cases show that the algorithm is indeed efficient: the optimized disparity images have sharp edges aligned with those of the images, and false alarms of changes near or on edges and occlusions are largely reduced.
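The core idea of searching a small disparity range around LiDAR-derived candidates can be roughly illustrated per pixel. This sketch omits the paper's MRF smoothness term, edge awareness, and occlusion check, and all names and values are hypothetical:

```python
import numpy as np

def select_disparity(left, right, candidates, search=2):
    """For each pixel, pick the disparity near one of the
    LiDAR-derived candidate disparities that minimizes the
    left-right intensity difference (per-pixel only; no MRF).
    """
    h, w = left.shape
    best_d = np.zeros((h, w), dtype=int)
    best_cost = np.full((h, w), np.inf)
    for d0 in candidates:  # up to three candidates from the three DSMs
        for d in range(d0 - search, d0 + search + 1):
            shifted = np.roll(right, d, axis=1)  # align right to left
            cost = np.abs(left - shifted)
            mask = cost < best_cost
            best_cost[mask] = cost[mask]
            best_d[mask] = d
    return best_d, best_cost

# Pixels whose best matching cost stays high (large color difference
# even at the optimal disparity) are flagged as candidate changes.
```

In the paper's full method, this search is regularized by the MRF and followed by the occlusion-consistency check before declaring a change.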


2008 ◽  
pp. 239-276
Author(s):  
Ian P. Howard ◽  
Brian J. Rogers
