Reference Frame Effects on Shape Perception in Two versus Three Dimensions

Perception ◽  
1988 ◽  
Vol 17 (2) ◽  
pp. 147-163 ◽  
Author(s):  
Stephen Palmer ◽  
Edward Simone ◽  
Paul Kube

Three experiments tested whether the Gestalt effect of configural orientation on shape perception operates on two-dimensional (2-D) or three-dimensional (3-D) representations of space. It is known that gravitationally defined squares and diamonds take longer to discriminate in diagonal arrays than in horizontal or vertical arrays. The first experiment shows that this interference effect decreases dramatically in magnitude when pictorial depth information is added so that subjects perceive the target shapes in different depth planes. The second experiment shows that this difference is not due to the relative size of the target shapes or to occlusion of a background plane. The final experiment shows that the difference is not due to linear perspective information or merely to perception of the target figures in a 3-D scene. The overall pattern of results supports the position that this configural reference frame effect arises primarily when the elements of the configuration are coplanar, and that the principal organization underlying it is the structure of the perceived 3-D environment rather than that of the 2-D image. In all three experiments, however, there is also a small interference effect in the noncoplanar 3-D conditions. This might be due either to some aspect of reference frame selection operating on the 2-D image representation or to subjects failing to see depth in the 3-D stimuli on some proportion of the trials.

1997 ◽  
Vol 6 (5) ◽  
pp. 513-531 ◽  
Author(s):  
R. Troy Surdick ◽  
Elizabeth T. Davis ◽  
Robert A. King ◽  
Larry F. Hodges

The ability to simulate distance effectively and accurately in virtual and augmented reality systems is a challenge currently facing research and development. To examine this issue, we separately tested each of seven visual depth cues (relative brightness, relative size, relative height, linear perspective, foreshortening, texture gradient, and stereopsis) as well as a condition in which all seven cues were present and simultaneously provided distance information in a simulated display. The viewing distances were 1 and 2 m. In developing simulated displays to convey distance and depth, three questions arise. First, which cues provide effective depth information (so that only a small change in the depth cue results in a perceived change in depth)? Second, which cues provide accurate depth information (so that two physically equidistant objects are perceived as equidistant)? Finally, how do the effectiveness and accuracy of these depth cues change as a function of viewing distance? Ten college-aged subjects were tested with each depth-cue condition at both viewing distances, using a method-of-constant-stimuli procedure and a modified Wheatstone stereoscopic display. The perspective cues (linear perspective, foreshortening, and texture gradient) were found to be more effective than the other depth cues, while relative brightness was vastly inferior in effectiveness. Moreover, relative brightness, relative height, and relative size all decreased significantly in effectiveness with an increase in viewing distance. The depth cues did not differ in accuracy at either viewing distance. Finally, some subjects had difficulty rapidly perceiving the distance information provided by stereopsis, but no subjects had difficulty effectively and accurately perceiving distance with the perspective information used in our experiment. A second experiment demonstrated that a previously stereo-anomalous subject could be trained to perceive stereoscopic depth in a binocular display. We conclude that perspective cues may be more important in simulated displays than the other depth cues tested because they are the most effective and accurate cues at both viewing distances, can be easily perceived by all subjects, and can be readily incorporated into simpler displays (e.g., biocular HMDs) as well as more complex ones (e.g., binocular or see-through HMDs).
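The abstract's notions of cue "effectiveness" and "accuracy" map naturally onto the slope and bias of a psychometric function fitted to method-of-constant-stimuli data. The sketch below illustrates that analysis under generic assumptions (a cumulative-Gaussian model and invented response proportions); it is not the authors' analysis code.

# A minimal sketch of a method-of-constant-stimuli analysis, assuming a
# cumulative-Gaussian psychometric model; the data are invented for
# illustration and are not from the study.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Depth offsets (cm) of the test object relative to the reference, and the
# proportion of "test is farther" responses at each offset (hypothetical).
offsets = np.array([-6.0, -4.0, -2.0, 0.0, 2.0, 4.0, 6.0])
p_farther = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

def psychometric(x, pse, sigma):
    """Cumulative Gaussian: pse = point of subjective equality, sigma = spread."""
    return norm.cdf(x, loc=pse, scale=sigma)

(pse, sigma), _ = curve_fit(psychometric, offsets, p_farther, p0=(0.0, 2.0))

effectiveness = 1.0 / sigma   # steeper function -> smaller cue change yields a perceived depth change
constant_error = pse          # accuracy: how far the PSE deviates from true equidistance (0)
print(f"PSE = {pse:.2f} cm, sigma = {sigma:.2f} cm, effectiveness ~ {effectiveness:.2f}")

On this reading, a more effective cue has a steeper psychometric function (smaller sigma), and a more accurate cue has a PSE closer to physical equidistance.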


Author(s):  
Harvey S. Smallman ◽  
Mark St. John ◽  
Michael B. Cowen

Despite the increasing prevalence of three-dimensional (3-D) perspective views of scenes, there remain a number of concerns about their utility, particularly for precise relative-position tasks. Here, we empirically measure and then mathematically model the biases in participants' perceptual reconstruction of perspective views. Participants reconstructed the length of 10 test posts scattered across a 3-D scene to match the physical length of a reference post. The test posts were all oriented along the X, Y, or Z cardinal directions of 3-D space. Four viewing angles, from 90 degrees (“2-D”) down to 22.5 degrees (“3-D”), were used. Matches systematically underestimated the compression of distances into the scene (Y) and systematically overestimated the compression of height (Z). A simple computational model is developed to account for the results; it posits that linear perspective (which operates only in X) is inappropriately used to scale matching lengths in all three dimensions of space. The model suggests a novel account of the systematic underestimation of egocentric distances in the real world.
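As a rough illustration of the geometry behind the reported biases (not the authors' actual model), the sketch below approximates how posts along the three cardinal axes foreshorten at a given viewing elevation, and caricatures the key idea that a single perspective-derived scale factor gets applied to all three axes. The orthographic compression formulas and the single-scale assumption are simplifications introduced here for clarity.

# A rough, hypothetical sketch of the geometry behind the reported biases,
# using an orthographic approximation; not the authors' computational model.
import numpy as np

def projected_length(true_len, axis, elevation_deg):
    """Approximate on-screen length of a post of length true_len oriented
    along axis 'X' (left-right), 'Y' (into the scene), or 'Z' (height),
    viewed from an elevation of elevation_deg (90 = straight down)."""
    t = np.radians(elevation_deg)
    if axis == "X":
        return true_len                  # lateral extent is not foreshortened
    if axis == "Y":
        return true_len * np.sin(t)      # depth axis shrinks as the view flattens
    if axis == "Z":
        return true_len * np.cos(t)      # height shrinks as the view becomes top-down
    raise ValueError(axis)

def single_scale_reconstruction(image_len, scale):
    """The abstract's key idea, caricatured: one perspective-derived scale
    factor is (mis)applied to every axis when judging 3-D length."""
    return image_len * scale

# Example: at a 22.5-degree view, a 10-unit Y post projects much shorter than
# a 10-unit Z post, yet a single scale factor treats both identically.
for axis in "XYZ":
    print(axis, round(projected_length(10.0, axis, 22.5), 2))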


Perception ◽  
1994 ◽  
Vol 23 (4) ◽  
pp. 453-470 ◽  
Author(s):  
Glyn W Humphreys ◽  
Nicole Keulers ◽  
Nick Donnelly

Evidence from visual-search experiments is discussed indicating that there is spatially parallel encoding based on three-dimensional (3-D) spatial relations between complex image features. In one paradigm, subjects had to detect an odd part of cube-like figures formed by the grouping of corner junctions. Performance with cube-like figures was unaffected by the number of corner junctions present, though it was affected when the corners did not configure into a cube. The data suggest that junctions can be grouped to form 3-D shapes in a spatially parallel manner. Further, performance with cube-like figures was more robust to noncollinearity between junctions than was performance when junctions grouped to form two-dimensional planes. In the second paradigm, subjects searched for targets defined by their size. Performance was affected by a size illusion induced by linear-perspective cues from local background neighbourhoods. Search was more efficient when the size illusion was consistent with the real size difference between targets and nontargets, and less efficient when the illusion was inconsistent with it. This last result occurred even though search was little affected by display size in a control condition. We suggest that early, parallel visual processes are influenced by 3-D spatial relations between visual elements, that grouping based on 3-D spatial relations is relatively robust to noncollinearity between junctions, and that, at least in some circumstances, 3-D relations dominate those coded in two dimensions.
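The claim that performance was "unaffected by the number of corner junctions" is the standard visual-search diagnostic of spatially parallel processing: a near-zero slope of response time against display size. A minimal illustration of that computation, with invented response times, is sketched below.

# Minimal sketch of how search efficiency is usually quantified: the slope of
# mean response time against display size. Data here are invented.
import numpy as np

display_sizes = np.array([1, 4, 8, 12])               # number of items on screen
mean_rt_ms = np.array([620.0, 628.0, 635.0, 641.0])   # hypothetical mean RTs

slope, intercept = np.polyfit(display_sizes, mean_rt_ms, 1)
print(f"search slope ~ {slope:.1f} ms/item")  # near-zero slope -> spatially parallel search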


1993 ◽  
Vol 4 (2) ◽  
pp. 93-98 ◽  
Author(s):  
Virginia M. Gunderson ◽  
Albert Yonas ◽  
Patricia L. Sargent ◽  
Kimberly S. Grant-Webster

The studies described here are the first to demonstrate that a nonhuman primate species is capable of responding to pictorial depth information during infancy. In two experiments, pigtailed macaque (Macaca nemestrina) infants were tested for responsivity to the pictorial depth cues of texture gradient/linear perspective and relative size. The procedures were adapted from human studies and are based on the proclivity of infants to reach more frequently to closer objects than to objects that are farther away. The stimulus displays included two equidistant objects that, when viewed monocularly, appear separated in space because of an illusion created by pictorial depth cues. When presented with these displays, animals reached significantly more often to the apparently closer objects under monocular conditions than under binocular conditions. These findings suggest that infant macaques are sensitive to pictorial depth information, the implication being that this ability has ancient phylogenetic origins and is not learned from exposure to the conventions of Western art.


Author(s):  
Sree Shankar ◽  
Rahul Rai

Primary among all the activities involved in conceptual design is freehand sketching. There have been significant efforts in recent years to enable digital design methods that leverage humans' sketching skills. Conventional sketch-based digital interfaces are built on two-dimensional touch-based devices like sketchers and drawing pads. The transition from two-dimensional to three-dimensional (3-D) digital sketch interfaces represents the latest trend in developing new interfaces that embody intuitiveness and human–human interaction characteristics. In this paper, we outline a novel screenless 3-D sketching system. The system uses a noncontact depth-sensing RGB-D camera for user input. Only depth information (no RGB information) is used in the framework. The system tracks the user's palm during the sketching process and converts the data into a 3-D sketch. Because the generated data are noisy, making sense of what is sketched is facilitated through a beautification process suited to 3-D sketches. To evaluate the performance of the system and the beautification scheme, user studies were performed with multiple participants for both single-stroke and multistroke sketching scenarios.
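The pipeline the abstract describes (depth-only input, palm tracking, stroke accumulation, beautification) could look roughly like the sketch below. The nearest-blob centroid tracker and moving-average smoother are deliberately simple stand-ins chosen for brevity, not the authors' algorithms, and get_depth_frame() is a hypothetical camera interface.

# A schematic sketch of a depth-only 3-D sketching loop. The palm tracker and
# the "beautification" step are simple stand-ins, not the paper's algorithms,
# and get_depth_frame() is a hypothetical camera interface.
import numpy as np

def track_palm(depth_frame, near_band_mm=80):
    """Estimate the palm position as the centroid of the pixels closest to
    the camera (a crude heuristic, assumed here for illustration)."""
    valid = depth_frame > 0
    z_min = depth_frame[valid].min()
    mask = valid & (depth_frame < z_min + near_band_mm)
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean(), depth_frame[mask].mean()])

def beautify(stroke, window=5):
    """Toy beautification: moving-average smoothing of the raw 3-D polyline."""
    stroke = np.asarray(stroke)
    kernel = np.ones(window) / window
    return np.column_stack([np.convolve(stroke[:, d], kernel, mode="valid")
                            for d in range(3)])

def sketch_loop(get_depth_frame, n_frames=200):
    """Accumulate one stroke while the user sketches, then clean it up."""
    raw = [track_palm(get_depth_frame()) for _ in range(n_frames)]
    return beautify(raw)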


Author(s):  
R. Troy Surdick ◽  
Elizabeth T. Davis ◽  
Robert A. King ◽  
Gregory M. Corso ◽  
Alexander Shapiro ◽  
...  

We tested seven visual depth cues (relative brightness, relative size, relative height, linear perspective, foreshortening, texture gradient, and stereopsis) at viewing distances of one and two meters to answer two questions. First, which cues provide effective depth information (i.e., only a small change in the depth cue results in a noticeable change in perceived depth)? Second, how does the effectiveness of these depth cues change as a function of viewing distance? Six college-aged subjects were tested with each depth cue at both viewing distances, using a method-of-constant-stimuli procedure and a modified Wheatstone stereoscopic display. Accuracy of the perceptual match settings was very high for all cues (mean constant errors were near zero), and no cue was significantly more or less accurate than any other. The effectiveness of the perspective cues (linear perspective, foreshortening, and texture gradient) was superior to that of the other depth cues, while the effectiveness of relative brightness was vastly inferior. Moreover, stereopsis, among the more effective cues at one meter, was significantly less effective at two meters. These results have theoretical implications for models of human spatial perception and practical implications for the design and development of 3-D virtual environments.


2013 ◽  
Vol 109 (3) ◽  
pp. 873-888 ◽  
Author(s):  
Jeffrey S. Taube ◽  
Sarah S. Wang ◽  
Stanley Y. Kim ◽  
Russell J. Frohardt

Many species navigate in three dimensions and are required to maintain accurate orientation while moving in an Earth vertical plane. Here we explored how head direction (HD) cells in the rat anterodorsal thalamus responded when rats locomoted along a 360° spiral track that was positioned vertically within the room at the N, S, E, or W location. Animals were introduced into the vertical plane either through passive placement (experiment 1) or by allowing them to run up a 45° ramp from the floor to the vertically positioned platform (experiment 2). In both experiments HD cells maintained direction-specific firing in the vertical plane with firing properties that were indistinguishable from those recorded in the horizontal plane. Interestingly, however, the cells' preferred directions were linked to different aspects of the animal's environment and depended on how the animal transitioned into the vertical plane. When animals were passively placed onto the vertical surface, the cells switched from using the room (global cues) as a reference frame to using the vertically positioned platform (local cues) as a reference frame, independent of where the platform was located. In contrast, when animals self-locomoted into the vertical plane, the cells' preferred directions remained anchored to the three-dimensional room coordinates and their activity could be accounted for by a simple 90° rotation of the floor's horizontal coordinate system to the vertical plane. These findings highlight the important role that active movement signals play for maintaining and updating spatial orientation when moving in three dimensions.
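The statement that vertical-plane firing "could be accounted for by a simple 90° rotation of the floor's horizontal coordinate system" corresponds to rotating the reference frame about the horizontal axis where floor and wall meet. The sketch below illustrates that mapping for an arbitrary preferred direction; it depicts only the geometry, not the authors' analysis.

# Illustration of mapping a head-direction cell's preferred direction from the
# floor onto a vertical wall by a 90-degree rotation about the wall-floor
# junction (here taken as the x axis). Not the authors' analysis code.
import numpy as np

def rotate_about_x(v, deg):
    """Rotate a 3-D vector about the x axis by deg degrees."""
    t = np.radians(deg)
    R = np.array([[1, 0, 0],
                  [0, np.cos(t), -np.sin(t)],
                  [0, np.sin(t),  np.cos(t)]])
    return R @ v

# A preferred direction of 30 degrees (from x) in the floor plane (z = 0)...
pd_floor = np.array([np.cos(np.radians(30)), np.sin(np.radians(30)), 0.0])
# ...maps onto the wall plane after folding the floor frame up by 90 degrees.
pd_wall = rotate_about_x(pd_floor, 90)
print(np.round(pd_wall, 3))   # the former y component becomes a vertical (z) component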


2009 ◽  
Vol 102 (2) ◽  
pp. 805-816 ◽  
Author(s):  
Rajan Bhattacharyya ◽  
Sam Musallam ◽  
Richard A. Andersen

Performing a visually guided reach requires the ability to perceive the egocentric distance of a target in three-dimensional space. Previous studies have shown that the parietal reach region (PRR) encodes the two-dimensional location of frontoparallel targets in an eye-centered reference frame. To investigate how a reach target is represented in three dimensions, we recorded the spiking activity of PRR neurons from two rhesus macaques trained to fixate and perform memory reaches to targets at different depths. Reach and fixation targets were configured to explore whether neural activity directly reflects egocentric distance as the amplitude of the required motor command, which is the absolute depth of the target, or rather the relative depth of the target with reference to fixation depth. We show that planning activity in PRR represents the depth of the reach target as a function of disparity and fixation depth, the spatial parameters important for encoding the depth of a reach goal in an eye-centered reference frame. The strength of modulation by disparity is maintained across fixation depth. Fixation depth gain modulates disparity tuning while preserving the location of peak tuning features in PRR neurons. The results show that individual PRR neurons code depth with respect to the fixation point, that is, in eye-centered coordinates. However, because the activity is gain modulated by vergence angle, the absolute depth can be decoded from the population activity.
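The coding scheme described here (disparity tuning whose amplitude, but not its peak, is scaled by vergence, from which absolute depth can be recovered) is an instance of a gain-field model. The toy model below uses invented tuning parameters and an assumed interocular distance to show how fixation depth follows from vergence geometry and how a multiplicative gain leaves the shape of disparity tuning intact; it is illustrative, not a fit to the reported data.

# Toy gain-field model of depth coding in eye-centered coordinates, with
# invented parameters; illustrative only, not a fit to the recorded neurons.
import numpy as np

IOD_CM = 6.0  # assumed interocular distance

def fixation_distance_cm(vergence_deg):
    """Small-angle geometry: fixation distance from the vergence angle."""
    return IOD_CM / np.radians(vergence_deg)

def firing_rate(disparity_deg, vergence_deg,
                pref_disparity=0.5, sigma=0.8, gain_slope=0.3, baseline=5.0):
    """Gaussian disparity tuning whose amplitude (not its peak location) is
    scaled by a vergence-dependent gain, as in a gain-field model."""
    tuning = np.exp(-0.5 * ((disparity_deg - pref_disparity) / sigma) ** 2)
    gain = 1.0 + gain_slope * vergence_deg
    return baseline + 20.0 * gain * tuning

# The same disparity tuning curve, multiplicatively rescaled at two fixation depths.
disparities = np.linspace(-2, 2, 9)
for verg in (2.0, 4.0):   # vergence angles (deg) -> roughly 172 cm and 86 cm fixation
    print(round(fixation_distance_cm(verg)), np.round(firing_rate(disparities, verg), 1))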


Author(s):  
J. A. Eades ◽  
A. E. Smith ◽  
D. F. Lynch

It is quite simple (in the transmission electron microscope) to obtain convergent-beam patterns from the surface of a bulk crystal. The beam is focussed onto the surface at near grazing incidence (figure 1) and, if the surface is flat, the appropriate pattern is obtained in the diffraction plane (figure 2). Such patterns are potentially valuable for the characterization of surfaces, just as normal convergent-beam patterns are valuable for the characterization of crystals. There are, however, several important ways in which reflection diffraction from surfaces differs from the more familiar electron diffraction in transmission. Geometry: In reflection diffraction, because of the surface, it is not possible to describe the specimen as periodic in three dimensions, nor is it possible to associate diffraction with a conventional three-dimensional reciprocal lattice.


1997 ◽  
Vol 84 (1) ◽  
pp. 176-178 ◽  
Author(s):  
Frank O'Brien

The author's population density index (PDI) model is extended to three-dimensional distributions. A derived formula is presented that allows for the calculation of the lower and upper bounds of density in three-dimensional space for any finite lattice.

