Seeing Depth with One Eye and Pictorial Space

2021 ◽  
Author(s):  
Paul Linton

In my second post, I questioned whether the integration of pictorial cues and binocular disparity occurs at the level of perception. In this third post, I push the argument further by questioning whether pictorial cues contribute to 3D vision at all.

1992 ◽  
Vol 4 (4) ◽  
pp. 573-589 ◽  
Author(s):  
Daniel Kersten ◽  
Heinrich H. Bülthoff ◽  
Bennett L. Schwartz ◽  
Kenneth J. Kurtz

It is well known that the human visual system can reconstruct depth from simple random-dot displays given binocular disparity or motion information. This fact has lent support to the notion that stereo and structure from motion systems rely on low-level primitives derived from image intensities. In contrast, the judgment of surface transparency is often considered to be a higher-level visual process that, in addition to pictorial cues, utilizes stereo and motion information to separate the transparent from the opaque parts. We describe a new illusion and present psychophysical results that question this sequential view by showing that depth from transparency and opacity can override the bias to see rigid motion. The brain's computation of transparency may involve a two-way interaction with the computation of structure from motion.


Perception ◽  
10.1068/p3222 ◽  
2001 ◽  
Vol 30 (7) ◽  
pp. 855-865 ◽  
Author(s):  
Leo Poom

A new visual phenomenon, inter-attribute illusory (completed) contours, is demonstrated. Contour completions are perceived between any combination of spatially separate pairs of inducing elements (Kanizsa-like ‘pacman’ figures) defined either by pictorial cues (luminance contrast or offset gratings), temporal contrast (motion, second-order motion, or ‘phantom’ contours), or binocular-disparity contrast. In the first experiment, observers reported the perceived occurrence of contour completion for all pair combinations of inducing elements. In the second experiment, they rated the perceived clarity of the completed contours. Both methods generated similar results: contour completions were perceived even though the inducing elements were defined by different attributes. Ratings of inter-attribute clarity were no lower than in either of the two corresponding intra-attribute conditions and appeared to be the average of these two ratings. The results provide evidence for the existence of attribute-invariant Gestalt processes and, on a mechanistic level, indicate that the completion process operates on attribute-invariant contour detectors.


2016 ◽  
Vol 4 (1-2) ◽  
pp. 73-105 ◽  
Author(s):  
M.W.A. Wijntjes ◽  
A. Füzy ◽  
M.E.S. Verheij ◽  
T. Deetman ◽  
S.C. Pont

At the start of the 20th century, Moritz von Rohr invented the synopter: a device that removes the 3D depth cues arising from binocular disparity and vergence. In the absence of these cues, the observer is less aware of the physical flatness of the picture, which results in a surprisingly enhanced depth impression of pictorial space, historically known as the ‘plastic effect’. In this paper we present a practical design for producing a synopter and explore which elements of a painting influence the plastic effect. In the first experiment we showed 22 different paintings to a total of 35 observers and found that they rated the synoptic effect rather consistently across the various paintings. Subsequent analyses indicated that at least three pictorial cues were relevant to the synoptic effect: figure–ground contrast, compositional depth, and shadows. In the second experiment we used manipulated pictures in which we tried to strengthen or weaken these cues, and in all three cases we found at least one effect that confirmed our hypothesis. We also found substantial individual differences: some observers experienced little effect, while others were very surprised by it. A stereoacuity test revealed that these differences could not be attributed to how well disparities are detected. Lastly, we informally tested our newly designed synopter in museums and found similarly idiosyncratic appraisals; the device also turned out to facilitate discussions among visitors.
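
The abstract reports that observers rated the synoptic effect consistently across paintings but does not name the statistic used. As a purely illustrative sketch, one common way to quantify inter-observer consistency is Cronbach's alpha computed over an observers × paintings rating matrix; the 35 × 22 shape below matches the abstract, but the simulated data, the 1–7 rating scale, and the choice of alpha itself are assumptions, not the authors' analysis.

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for an (observers x paintings) rating matrix.

    Treats each painting as an 'item' and asks how consistently
    observers order the paintings by synoptic effect.
    """
    n_items = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)      # variance per painting
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of observer sums
    return (n_items / (n_items - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical data: 35 observers rating 22 paintings on a 1-7 scale.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(35, 22)).astype(float)
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```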


Author(s):  
Joanna Ganczarek ◽  
Vezio Ruggieri ◽  
Marta Olivetti Belardinelli ◽  
Daniele Nardi

2021 ◽  
pp. 103775
Author(s):  
Tuan-Tang Le ◽  
Trung-Son Le ◽  
Yu-Ru Chen ◽  
Joel Vidal ◽  
Chyi-Yeu Lin

Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 54
Author(s):  
Peng Liu ◽  
Zonghua Zhang ◽  
Zhaozong Meng ◽  
Nan Gao

Depth estimation is a crucial component of many 3D vision applications. Monocular depth estimation is attracting increasing interest because of its flexibility and extremely low system requirements, but its inherently ill-posed and ambiguous nature still leads to unsatisfactory estimation results. This paper proposes a new deep convolutional neural network for monocular depth estimation. The network applies joint attention feature distillation and a wavelet-based loss function to recover the depth information of a scene. Two improvements are made over previous methods. First, we combine feature distillation with joint attention mechanisms to boost the discriminability of feature modulation: the network extracts hierarchical features using a progressive feature distillation and refinement strategy and aggregates them with a joint attention operation. Second, we adopt a wavelet-based loss function for network training, which makes the loss more effective by capturing more structural detail. Experimental results on challenging indoor and outdoor benchmark datasets verify the proposed method’s superiority over current state-of-the-art methods.
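
The abstract names a wavelet-based loss but does not spell out its form. As a minimal sketch (not the authors' implementation), a single-level 2D Haar decomposition can be written directly in PyTorch and used to penalise errors in the high-frequency detail bands, where structural detail lives; the L1 pairing and the detail_weight parameter are assumptions.

```python
import torch
import torch.nn.functional as F

def haar_dwt(x: torch.Tensor):
    """Single-level 2D Haar transform of a (B, 1, H, W) map (H, W even).

    Returns the low-pass band and three high-pass (detail) bands,
    which carry the edge/structure information a wavelet loss emphasises.
    """
    a = x[:, :, 0::2, 0::2]  # top-left of each 2x2 block
    b = x[:, :, 0::2, 1::2]  # top-right
    c = x[:, :, 1::2, 0::2]  # bottom-left
    d = x[:, :, 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2
    lh = (a - b + c - d) / 2
    hl = (a + b - c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, lh, hl, hh

def wavelet_loss(pred: torch.Tensor, gt: torch.Tensor,
                 detail_weight: float = 1.0) -> torch.Tensor:
    """L1 on the depth map plus L1 on Haar detail bands (assumed form)."""
    loss = F.l1_loss(pred, gt)
    _, lh_p, hl_p, hh_p = haar_dwt(pred)
    _, lh_g, hl_g, hh_g = haar_dwt(gt)
    return loss + detail_weight * (
        F.l1_loss(lh_p, lh_g) + F.l1_loss(hl_p, hl_g) + F.l1_loss(hh_p, hh_g)
    )

# Hypothetical usage with random tensors standing in for network output.
pred = torch.rand(4, 1, 64, 64, requires_grad=True)
gt = torch.rand(4, 1, 64, 64)
wavelet_loss(pred, gt).backward()
```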


Author(s):  
Yi Liu ◽  
Ming Cong ◽  
Hang Dong ◽  
Dong Liu

Purpose: This paper proposes a new method based on three-dimensional (3D) vision technologies and human-skill-integrated deep learning to solve assembly positioning tasks such as peg-in-hole.

Design/methodology/approach: A hybrid camera configuration provides global and local views. In eye-in-hand mode, 3D vision in the global view guides the peg into contact with the hole plate; once the peg touches the workpiece surface, eye-to-hand mode provides the local view to accomplish peg-hole positioning based on a trained CNN.

Findings: Assembly positioning experiments showed that the proposed method successfully distinguished the target hole from other holes of the same size using the CNN. The robot planned its motion from the depth images and a human-skill guideline, and the final positioning precision was sufficient for the robot to carry out force-controlled assembly.

Practical implications: The developed framework can have an important impact on the robotic assembly positioning process and can be combined with existing force-guided assembly technology to build a complete autonomous assembly solution.

Originality/value: This paper proposes a new approach to robotic assembly positioning based on 3D vision technologies and human-skill-integrated deep learning. A dual-camera swapping mode provides visual feedback throughout the assembly motion-planning process. The proposed workpiece positioning method offers effective disturbance rejection, autonomous motion planning, and improved overall performance with depth-image feedback, while the human-skill-integrated peg-hole positioning method avoids perceptual aliasing of the target and supports successive motion decisions during assembly manipulation.
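
The abstract describes a coarse-to-fine, two-stage loop (eye-in-hand global guidance until surface contact, then eye-to-hand CNN refinement) without giving code. The Python sketch below mirrors only that control structure; the function names, the toy motion model, and the stubbed CNN correction are hypothetical stand-ins, not the paper's method.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float  # height above the workpiece surface

def global_peg_offset(pose: Pose) -> Pose:
    """Eye-in-hand 3D vision, global view: coarse step toward the hole plate."""
    return Pose(-pose.x * 0.5, -pose.y * 0.5, -pose.z)  # toy model

def local_cnn_offset(pose: Pose) -> Pose:
    """Eye-to-hand local view: CNN-predicted in-plane correction (stubbed)."""
    return Pose(-pose.x, -pose.y, 0.0)

def positioning(pose: Pose, contact_z: float = 0.0) -> Pose:
    # Stage 1: eye-in-hand global view drives the peg down to surface contact.
    while pose.z > contact_z + 1e-3:
        step = global_peg_offset(pose)
        pose = Pose(pose.x + step.x, pose.y + step.y,
                    max(contact_z, pose.z + step.z))
    # Stage 2: eye-to-hand local view refines peg-hole alignment via the CNN.
    step = local_cnn_offset(pose)
    return Pose(pose.x + step.x, pose.y + step.y, pose.z)

print(positioning(Pose(5.0, -3.0, 20.0)))  # ends at the hole centre in this toy
```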

