Joint Representation of Depth from Motion Parallax and Binocular Disparity Cues in Macaque Area MT

2013 ◽  
Vol 33 (35) ◽  
pp. 14061-14074 ◽  
Author(s):  
J. W. Nadler ◽  
D. Barbash ◽  
H. R. Kim ◽  
S. Shimpi ◽  
D. E. Angelaki ◽  
...  
2015 ◽  
Vol 113 (5) ◽  
pp. 1545-1555 ◽  
Author(s):  
Douglas A. Ruff ◽  
Richard T. Born

Attending to a stimulus modulates the responses of sensory neurons that represent features of that stimulus, a phenomenon named “feature attention.” For example, attending to a stimulus containing upward motion enhances the responses of upward-preferring direction-selective neurons in the middle temporal area (MT) and suppresses the responses of downward-preferring neurons, even when the attended stimulus is outside of the spatial receptive fields of the recorded neurons (Treue S, Martinez-Trujillo JC. Nature 399: 575–579, 1999). This modulation renders the representation of sensory information across a neuronal population more selective for the features present in the attended stimulus (Martinez-Trujillo JC, Treue S. Curr Biol 14: 744–751, 2004). We hypothesized that if feature attention modulates neurons according to their tuning preferences, it should also be sensitive to their tuning strength, which is the magnitude of the difference in responses to preferred and null stimuli. We measured how the effects of feature attention on MT neurons in rhesus monkeys (Macaca mulatta) depended on the relationship between features—in our case, direction of motion and binocular disparity—of the attended stimulus and a neuron's tuning for those features. We found that, as for direction, attention to stimuli containing binocular disparity cues modulated the responses of MT neurons and that the magnitude of the modulation depended on both a neuron's tuning preferences and its tuning strength. Our results suggest that modulation by feature attention may depend not just on which features a neuron represents but also on how well the neuron represents those features.


2006 ◽  
Vol 46 (17) ◽  
pp. 2636-2644 ◽  
Author(s):  
Mark F. Bradshaw ◽  
Paul B. Hibbard ◽  
Andrew D. Parton ◽  
David Rose ◽  
Keith Langley

2018 ◽  
Author(s):  
Reuben Rideaux ◽  
William J Harrison

ABSTRACT: Discerning objects from their surrounds (i.e., figure-ground segmentation) in a way that guides adaptive behaviours is a fundamental task of the brain. Neurophysiological work has revealed a class of cells in the macaque visual cortex that may be ideally suited to support this neural computation: border-ownership cells (Zhou, Friedman, & von der Heydt, 2000). These orientation-tuned cells appear to respond conditionally to the borders of objects. A behavioural correlate supporting the existence of these cells in humans was demonstrated using two-dimensional luminance-defined objects (von der Heydt, Macuda, & Qiu, 2005). However, objects in our natural visual environments are often signalled by complex cues, such as motion and depth order. Thus, for border-ownership systems to effectively support figure-ground segmentation and object depth ordering, they must have access to information from multiple depth cues with strict depth order selectivity. Here we measure in humans (of both sexes) border-ownership-dependent tilt aftereffects after adapting to figures defined by either motion parallax or binocular disparity. We find that both depth cues produce a tilt aftereffect that is selective for figure-ground depth order. Further, we find the effects of adaptation are transferable between cues, suggesting that these systems may combine depth cues to reduce uncertainty (Bülthoff & Mallot, 1988). These results suggest that border-ownership mechanisms have strict depth order selectivity and access to multiple depth cues that are jointly encoded, providing compelling psychophysical support for their role in figure-ground segmentation in natural visual environments.

SIGNIFICANCE STATEMENT: Segmenting a visual object from its surrounds is a critical function that may be supported by “border-ownership” neural systems that conditionally respond to object borders. Psychophysical work indicates these systems are sensitive to objects defined by luminance contrast.
To effectively support figure-ground segmentation, however, neural systems supporting border-ownership must have access to information from multiple depth cues and depth order selectivity. We measured border-ownership-dependent tilt aftereffects to figures defined by either motion parallax or binocular disparity and found aftereffects for both depth cues. These effects were transferable between cues, but selective for figure-ground depth order. Our results suggest that the neural systems supporting figure-ground segmentation have strict depth order selectivity and access to multiple depth cues that are jointly encoded.


2019 ◽  
Author(s):  
Paul Linton

Abstract: Since Kepler (1604) and Descartes (1638), ‘vergence’ (the angular rotation of the eyes) has been thought of as one of our most important absolute distance cues. But vergence has never been tested as an absolute distance cue divorced from obvious confounding cues such as binocular disparity. In this article we control for these confounding cues for the first time by gradually manipulating vergence, and find that observers fail to accurately judge distance from vergence. We consider a number of different interpretations of these results, and argue that the most principled response is to question the general effectiveness of vergence as an absolute distance cue. Given that other absolute distance cues (such as motion parallax and vertical disparities) are limited in application, this poses a real challenge to our contemporary understanding of visual scale.


Author(s):  
Yuichi Sakano ◽  
Yurina Kitaura ◽  
Kyoko Hasegawa ◽  
Roberto Lopez-Gulliver ◽  
Liang Li ◽  
...  

Transparent visualization is used in many fields because it can reveal not only the frontal object but also other important objects behind it. Although in many situations it is important that the 3D structure of visualized transparent images be perceived as simulated, little is known quantitatively about how such transparent 3D structures are perceived. To address this question, in the present study we conducted a psychophysical experiment in which observers reported the perceived depth magnitude of a transparent object in medical images presented on a multi-view 3D display. For the visualization, we employed a stochastic point-based rendering (SPBR) method, which was developed recently as a technique for efficient transparent rendering. Perceived depth of the transparent object was smaller than the simulated depth. We found, however, that this depth underestimation can be alleviated to some extent by (1) applying the luminance gradient inherent in the SPBR method, (2) employing high opacities, and (3) introducing binocular disparity and motion parallax produced by a multi-view 3D display.
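The abstract does not spell out how SPBR achieves transparency. As a rough, hypothetical illustration of the general stochastic-transparency principle such methods build on (each sample treats a layer as fully opaque with probability equal to its opacity, and averaging many such renderings converges to conventional back-to-front alpha blending), here is a minimal Python sketch with made-up layer opacities and luminances; it is not the authors' implementation.

```python
import random

def over_composite(layers, background):
    """Conventional back-to-front alpha ('over') compositing.
    layers: list of (opacity, luminance), ordered front to back."""
    c = background
    for alpha, color in reversed(layers):
        c = alpha * color + (1 - alpha) * c
    return c

def stochastic_composite(layers, background, n_samples, rng):
    """Monte Carlo transparency: per sample, each layer is fully opaque
    with probability equal to its opacity; the first opaque layer
    (front to back) is what the sample 'sees'. The average over many
    samples converges to the over-compositing result."""
    total = 0.0
    for _ in range(n_samples):
        seen = background
        for alpha, color in layers:
            if rng.random() < alpha:
                seen = color
                break
        total += seen
    return total / n_samples

# Hypothetical two-layer scene: semi-transparent front, denser back.
rng = random.Random(0)
layers = [(0.4, 1.0), (0.7, 0.2)]  # (opacity, luminance)
exact = over_composite(layers, 0.0)        # 0.484 analytically
approx = stochastic_composite(layers, 0.0, 200_000, rng)
```

Averaging over samples trades a single sorted blending pass for many order-independent opaque renderings, which is the efficiency argument usually made for stochastic approaches.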


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 99-99 ◽  
Author(s):  
M F Bradshaw ◽  
B De Bruyn ◽  
R A Eagle ◽  
A D Parton

The use of binocular disparity and motion parallax information was compared in three different psychophysical tasks for which natural viewing and physical stimuli were used. Natural viewing may be an important factor in interpreting experiments which have addressed the ability to use disparity and parallax both separately and in combination (see Frisby et al, 1996 Perception 25: 129–154). The stimuli consisted of configurations of three bright LEDs carefully aligned in the horizontal meridian and presented in darkness. The distance of the middle LED (flashing at 5 Hz) could be adjusted along the midline in accordance with the tasks, which included: (i) a depth nulling task, (ii) a depth matching task, and (iii) a shape task—matching the base/height of a triangle. Each task was performed at two viewing distances (1.5 and 3.0 m) and under four different viewing conditions: (i) monocular-static, (ii) monocular-moving, (iii) binocular-static, and (iv) binocular-moving. Note that the tasks differ in their dependence on viewing distance, and the available cues to viewing distance differ between viewing conditions. Four observers made ten settings in each condition at each distance. As expected, observers performed poorly (in both bias and accuracy) in all tasks in the monocular-static condition. Nulling was accurate in the other viewing conditions (no estimate of viewing distance required). Performance was best in the matching task (which requires only the ratio of viewing distances), and although binocular-static performance was significantly better than monocular-moving performance in this and in the shape task (which requires absolute distance), there was no additional improvement in the binocular-moving condition. The results show that observers can recover structure accurately from parallax or disparity information in real-world stimuli.

