Border-ownership-dependent tilt aftereffect for shape defined by binocular disparity and motion parallax

2018 ◽  
Author(s):  
Reuben Rideaux ◽  
William J Harrison

ABSTRACT
Discerning objects from their surrounds (i.e., figure-ground segmentation) in a way that guides adaptive behaviours is a fundamental task of the brain. Neurophysiological work has revealed a class of cells in the macaque visual cortex that may be ideally suited to support this neural computation: border-ownership cells (Zhou, Friedman, & von der Heydt, 2000). These orientation-tuned cells appear to respond conditionally to the borders of objects. A behavioural correlate supporting the existence of these cells in humans was demonstrated using two-dimensional luminance-defined objects (von der Heydt, Macuda, & Qiu, 2005). However, objects in our natural visual environments are often signalled by complex cues, such as motion and depth order. Thus, for border-ownership systems to effectively support figure-ground segmentation and object depth ordering, they must have access to information from multiple depth cues with strict depth order selectivity. Here we measure in humans (of both sexes) border-ownership-dependent tilt aftereffects after adapting to figures defined by either motion parallax or binocular disparity. We find that both depth cues produce a tilt aftereffect that is selective for figure-ground depth order. Further, we find the effects of adaptation are transferable between cues, suggesting that these systems may combine depth cues to reduce uncertainty (Bülthoff & Mallot, 1988). These results suggest that border-ownership mechanisms have strict depth order selectivity and access to multiple depth cues that are jointly encoded, providing compelling psychophysical support for their role in figure-ground segmentation in natural visual environments.

SIGNIFICANCE STATEMENT
Segmenting a visual object from its surrounds is a critical function that may be supported by “border-ownership” neural systems that conditionally respond to object borders. Psychophysical work indicates these systems are sensitive to objects defined by luminance contrast. To effectively support figure-ground segmentation, however, neural systems supporting border-ownership must have access to information from multiple depth cues and depth order selectivity. We measured border-ownership-dependent tilt aftereffects to figures defined by either motion parallax or binocular disparity and found aftereffects for both depth cues. These effects were transferable between cues, but selective for figure-ground depth order. Our results suggest that the neural systems supporting figure-ground segmentation have strict depth order selectivity and access to multiple depth cues that are jointly encoded.

2019 ◽  
Vol 121 (5) ◽  
pp. 1917-1923 ◽  
Author(s):  
Reuben Rideaux ◽  
William J. Harrison

Discerning objects from their surrounds (i.e., figure-ground segmentation) in a way that guides adaptive behaviors is a fundamental task of the brain. Neurophysiological work has revealed a class of cells in the macaque visual cortex that may be ideally suited to support this neural computation: border ownership cells (Zhou H, Friedman HS, von der Heydt R. J Neurosci 20: 6594–6611, 2000). These orientation-tuned cells appear to respond conditionally to the borders of objects. A behavioral correlate supporting the existence of these cells in humans was demonstrated with two-dimensional luminance-defined objects (von der Heydt R, Macuda T, Qiu FT. J Opt Soc Am A Opt Image Sci Vis 22: 2222–2229, 2005). However, objects in our natural visual environments are often signaled by complex cues, such as motion and binocular disparity. Thus for border ownership systems to effectively support figure-ground segmentation and object depth ordering, they must have access to information from multiple depth cues with strict depth order selectivity. Here we measured in humans (of both sexes) border ownership-dependent tilt aftereffects after adaptation to figures defined by either motion parallax or binocular disparity. We find that both depth cues produce a tilt aftereffect that is selective for figure-ground depth order. Furthermore, we find that the effects of adaptation are transferable between cues, suggesting that these systems may combine depth cues to reduce uncertainty (Bülthoff HH, Mallot HA. J Opt Soc Am A 5: 1749–1758, 1988). These results suggest that border ownership mechanisms have strict depth order selectivity and access to multiple depth cues that are jointly encoded, providing compelling psychophysical support for their role in figure-ground segmentation in natural visual environments.

NEW & NOTEWORTHY
Figure-ground segmentation is a critical function that may be supported by “border ownership” neural systems that conditionally respond to object borders. We measured border ownership-dependent tilt aftereffects to figures defined by motion parallax or binocular disparity and found aftereffects for both cues. These effects were transferable between cues but selective for figure-ground depth order, suggesting that the neural systems supporting figure-ground segmentation have strict depth order selectivity and access to multiple depth cues that are jointly encoded.
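The cue combination invoked above (Bülthoff & Mallot, 1988) is commonly modelled as reliability-weighted averaging: each cue's depth estimate is weighted by its inverse variance, which minimises the variance of the combined estimate. A minimal sketch, with hypothetical numbers (the function name and example values are illustrative, not from the paper):

```python
import numpy as np

def combine_cues(estimates, variances):
    """Reliability-weighted (maximum-likelihood) combination of depth cues.

    Each cue is weighted in proportion to its inverse variance, so the
    combined estimate is at least as reliable as the best single cue.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    inv_var = 1.0 / variances
    weights = inv_var / inv_var.sum()          # weights sum to 1
    combined = float(np.sum(weights * estimates))
    combined_var = float(1.0 / inv_var.sum())  # never exceeds min(variances)
    return combined, combined_var

# Hypothetical: disparity says 50 cm (variance 4), parallax says 60 cm (variance 12).
depth, var = combine_cues([50.0, 60.0], [4.0, 12.0])  # → 52.5, 3.0
```

The combined variance (3.0) is lower than either single cue's (4 and 12), which is the sense in which jointly encoding the two cues "reduces uncertainty".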


Perception ◽  
1988 ◽  
Vol 17 (2) ◽  
pp. 255-266 ◽  
Author(s):  
Hiroshi Ono ◽  
Brian J Rogers ◽  
Masao Ohmi ◽  
Mika E Ono

Random-dot techniques were used to examine the interactions between the depth cues of dynamic occlusion and motion parallax in the perception of three-dimensional (3-D) structures, in two different situations: (a) when an observer moved laterally with respect to a rigid 3-D structure, and (b) when surfaces at different distances moved with respect to a stationary observer. In condition (a), the extent of accretion/deletion (dynamic occlusion) and the amount of relative motion (motion parallax) were both linked to the motion of the observer. When the two cues specified opposite, and therefore contradictory, depth orders, the perceived order in depth of the simulated surfaces was dependent on the magnitude of the depth separation. For small depth separations, motion parallax determined the perceived order, whereas for large separations it was determined by dynamic occlusion. In condition (b), where the motion parallax cues for depth order were inherently ambiguous, depth order was determined principally by the unambiguous occlusion information.


2012 ◽  
Vol 25 (0) ◽  
pp. 31
Author(s):  
Michiteru Kitazaki

Since the speed of sound is much slower than that of light, we sometimes hear a sound later than an accompanying light event (e.g., thunder and lightning at a far distance). However, Sugita and Suzuki (2003) reported that the brain coordinates a sound and its accompanying light so that they are perceived simultaneously at distances of up to 20 m. Thus, light accompanied by a physically delayed sound is perceived simultaneously with the sound in the near field. We aimed to test whether this sound–light coordination occurs in a virtual-reality environment and to investigate the effects of binocular disparity and motion parallax. Six naive participants observed visual stimuli on a 120-inch screen in a darkroom and heard auditory stimuli through headphones. A ball was presented in a textured corridor, and its distance from the participant was varied from 3 to 20 m. The ball turned red before or after a short (10 ms) burst of white noise (time difference: −120, −60, −30, 0, +30, +60, +120 ms), and participants judged the temporal order of the color change and the sound. We varied the visual depth cues (binocular disparity and motion parallax) in the virtual-reality environment and measured the physical delay at which the visual and auditory events were perceived as simultaneous. We did not find sound–light coordination without binocular disparity or motion parallax, but found it with both cues. These results suggest that binocular disparity and motion parallax are effective for sound–light coordination in a virtual-reality environment, and that the richness of depth cues is important for the coordination.
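The measured quantity in this kind of temporal-order-judgment experiment is the point of subjective simultaneity (PSS): the physical sound–light asynchrony at which "light first" and "sound first" responses are equally likely. A minimal sketch of extracting it from response proportions, using made-up response data and simple linear interpolation rather than the full psychometric-function fit a real analysis would use:

```python
import numpy as np

# Tested asynchronies (ms); negative = sound preceded the color change.
soas = np.array([-120.0, -60.0, -30.0, 0.0, 30.0, 60.0, 120.0])
# Hypothetical proportion of "light first" responses at each asynchrony.
p_light_first = np.array([0.05, 0.15, 0.30, 0.45, 0.65, 0.85, 0.95])

def pss(soas, proportions):
    """Point of subjective simultaneity: the asynchrony where responses cross 50%.

    Linear interpolation between the two tested asynchronies that bracket 0.5;
    proportions must be monotonically increasing for np.interp to be valid.
    """
    return float(np.interp(0.5, proportions, soas))

delay = pss(soas, p_light_first)  # → 7.5 ms for these illustrative data
```

A positive PSS here means the light event must physically lag the sound to be perceived as simultaneous; the study compares this delay across depth-cue conditions.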


2021 ◽  
Author(s):  
HyungGoo Kim ◽  
Dora Angelaki ◽  
Gregory DeAngelis

Detecting objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood. Human behavioral studies suggest that the visual system may identify local conflicts between motion parallax and binocular disparity cues to depth, and may use these signals to detect moving objects. We describe a novel mechanism for performing this computation based on neurons in macaque area MT with incongruent depth tuning for binocular disparity and motion parallax cues. Neurons with incongruent tuning respond selectively to scene-relative object motion, and their responses are predictive of perceptual decisions when animals are trained to detect a moving object during self-motion. This finding establishes a novel functional role for neurons with incongruent tuning for multiple depth cues.
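The cue-conflict logic described above can be caricatured in a few lines: for a rigid scene, the depth implied by motion parallax during self-motion should match the depth implied by binocular disparity, so a local mismatch flags an independently moving object. A hedged sketch (threshold, function name, and data are illustrative; this is not the authors' model of MT responses):

```python
import numpy as np

def detect_object_motion(depth_from_disparity, depth_from_parallax, tol=0.2):
    """Flag locations where parallax- and disparity-implied depths disagree.

    In a static scene the two depth maps agree; an independently moving
    object adds retinal motion that the parallax computation misattributes
    to depth, producing a local conflict between the cues.
    """
    conflict = np.abs(np.asarray(depth_from_disparity)
                      - np.asarray(depth_from_parallax))
    return conflict > tol

# Hypothetical 1-D depth profiles: the patch at index 2 is moving.
disp = np.array([1.0, 1.00, 1.0, 1.00])
para = np.array([1.0, 1.05, 2.0, 0.95])
moving = detect_object_motion(disp, para)  # → [False, False, True, False]
```

An MT neuron with incongruent tuning effectively implements one such comparison: it fires most when its preferred disparity-defined depth and its preferred parallax-defined depth disagree at its receptive-field location.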


Perception ◽  
10.1068/p3342 ◽  
2003 ◽  
Vol 32 (2) ◽  
pp. 131-153 ◽  
Author(s):  
Makoto Ichikawa ◽  
Takahiko Kimura ◽  
Hiroyuki Egusa ◽  
Makiko Nakatsuka ◽  
Jun Amano ◽  
...  

For 35 to 39 days, four observers continuously wore left–right reversing spectacles, which pseudoscopically reverse the order of binocular disparity and the direction of convergence. In three tests, we investigated how the visual system copes with the transformation of depth and distance information caused by the reversing spectacles. In stereogram observation, after a few days of wearing the spectacles, the observers sometimes perceived a depth order opposite to the one they had perceived in the pre-spectacle-wearing period. Monocular depth cues contributed more to depth perception in the spectacle-wearing period than they had in the pre-spectacle-wearing period. While perceived distance significantly decreased during the spectacle-wearing period, we found no evidence of adaptive change in distance perception. The results indicate that the visual system adapts to the transformed situation not only by changing the processing of disparity but also by changing the relative efficiency of each cue in determining apparent depth.


2020 ◽  
Vol 11 (1) ◽  
pp. 3
Author(s):  
Laura Gonçalves Ribeiro ◽  
Olli J. Suominen ◽  
Ahmed Durmush ◽  
Sari Peltonen ◽  
Emilio Ruiz Morales ◽  
...  

Visual technologies have an indispensable role in safety-critical applications, where tasks must often be performed through teleoperation. Due to the lack of stereoscopic and motion parallax depth cues in conventional images, alignment tasks pose a significant challenge to remote operation. In this context, machine vision can provide mission-critical information to augment the operator’s perception. In this paper, we propose a retro-reflective marker-based teleoperation aid to be used in hostile remote handling environments. The system computes the remote manipulator’s position with respect to the target using a set of one or two low-resolution cameras attached to its wrist. We develop an end-to-end pipeline of calibration, marker detection, and pose estimation, and extensively study the performance of the overall system. The results demonstrate that we have successfully engineered a retro-reflective marker from materials that can withstand the extreme temperature and radiation levels of the environment. Furthermore, we demonstrate that the proposed marker-based approach provides robust and reliable estimates and significantly outperforms a previous stereo-matching-based approach, even with a single camera.
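The abstract does not spell out the pose-estimation mathematics, but the reason a single camera suffices with a marker of known size is the pinhole camera model: similar triangles relate the marker's physical width, its imaged width, and the focal length. A minimal sketch of the range component only (the real pipeline recovers full 6-DoF pose; all names and numbers here are illustrative):

```python
def marker_distance(focal_px, marker_width_m, imaged_width_px):
    """Distance to a marker of known physical size under the pinhole model.

    Z = f * W / w : focal length f (pixels), real marker width W (metres),
    and imaged marker width w (pixels) form similar triangles, so a single
    calibrated camera constrains the marker's range.
    """
    return focal_px * marker_width_m / imaged_width_px

# Hypothetical: a 40 mm marker imaged 80 px wide, focal length 1600 px.
z = marker_distance(1600.0, 0.04, 80.0)  # → 0.8 m
```

Camera calibration supplies the focal length in pixels; combining several such constraints from the marker's corner points (e.g., a perspective-n-point solve) yields the full position and orientation the system reports.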


Author(s):  
Bin Wang ◽  
Tianyi Yan ◽  
Jinglong Wu

Face perception is considered the most developed visual perceptual skill in humans. Functional magnetic resonance imaging (fMRI) studies have demonstrated that multiple regions specialized for face processing exhibit a stronger neural response to faces than to other visual object categories. These regions are on the lateral side of the fusiform gyrus (the “fusiform face area,” or FFA), in the inferior occipital gyri (the “occipital face area,” or OFA), and in the posterior superior temporal sulcus (pSTS). These regions are thought to perform the visual analysis of faces and appear to participate differentially in different types of face perception. An important question is how faces are represented within these areas. In this chapter, the authors review the function, interaction, and topography of these regions as relevant to face perception. They also discuss the human neural systems that mediate face perception and attempt to outline some research directions for face perception and neural representations.


2006 ◽  
Vol 46 (17) ◽  
pp. 2636-2644 ◽  
Author(s):  
Mark F. Bradshaw ◽  
Paul B. Hibbard ◽  
Andrew D. Parton ◽  
David Rose ◽  
Keith Langley
