Shared Computational Mechanism for Tilt Compensation Accounts for Biased Verticality Percepts in Motion and Pattern Vision

2008 · Vol 99 (2) · pp. 915-930
Author(s): M. De Vrijer, W. P. Medendorp, J.A.M. Van Gisbergen

To determine the direction of object motion in external space, the brain must combine retinal motion signals and information about the orientation of the eyes in space. We assessed the accuracy of this process in eight laterally tilted subjects who aligned the motion direction of a random-dot pattern (30% coherence, moving at 6°/s) with their perceived direction of gravity (motion vertical) in otherwise complete darkness. For comparison, we also tested the ability to align an adjustable visual line (12° diameter) to the direction of gravity (line vertical). For small head tilts (<40°), systematic errors in either task were almost negligible. In contrast, tilts >60° revealed a pattern of large systematic errors (often >30°) that was virtually identical in both tasks. Regression analysis confirmed that mean errors in the two tasks were closely related, with slopes close to 1.0 and correlations >0.89. Control experiments ruled out that motion settings were based on processing of individual single-dot paths. We conclude that the conversion of both motion direction and line orientation on the retina into a spatial frame of reference involves a shared computational strategy. Simulations with two spatial-orientation models suggest that the pattern of systematic errors may be the downside of an optimal strategy for dealing with imperfections in the tilt signal that is implemented before the reference-frame transformation.
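
The optimal-strategy account can be illustrated with a minimal Bayesian sketch (our illustration with made-up parameter values, not the authors' published model): fusing a noisy head-tilt signal with a prior for upright underestimates large tilts, and carrying that underestimate into the retina-to-space conversion yields verticality errors that grow with tilt, as observed in both tasks.

```python
def verticality_error(true_tilt_deg, sigma_tilt=10.0, sigma_prior=25.0):
    """Fuse a noisy head-tilt signal with a Gaussian prior centered on
    upright (0 deg). The posterior mean underestimates large tilts; the
    shortfall propagates through the retina-to-space conversion as an
    error in the perceived vertical. Parameter values are illustrative."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_tilt**2)  # weight on sensed tilt
    estimated_tilt = w * true_tilt_deg                     # posterior mean
    return true_tilt_deg - estimated_tilt                  # compensation shortfall

for tilt in (20, 40, 60, 90, 120):
    print(f"head tilt {tilt:3d} deg -> error in perceived vertical "
          f"{verticality_error(tilt):5.1f} deg")
```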

2007 · Vol 98 (2) · pp. 966-983
Author(s): Aaron P. Batista, Gopal Santhanam, Byron M. Yu, Stephen I. Ryu, Afsheen Afshar, ...

When a human or animal reaches out to grasp an object, the brain rapidly computes a pattern of muscular contractions that can acquire the target. This computation involves a reference frame transformation, because the target's position is initially available only in a visual reference frame, yet the required control signal is a set of commands to the musculature. One of the core brain areas involved in visually guided reaching is the dorsal aspect of the premotor cortex (PMd). Using chronically implanted electrode arrays in two rhesus monkeys, we studied the contributions of PMd to the reference frame transformation for reaching. PMd neurons are influenced by the locations of reach targets relative to both the arm and the eyes. Some neurons encode reach goals using limb-centered reference frames, whereas others employ eye-centered reference frames. Some cells encode reach goals in a reference frame best described by the combined position of the eyes and hand. In addition to these neurons, for which a reference frame could be identified, PMd also contains cells that are influenced by both the eye- and limb-centered locations of reach goals but for which a distinct reference frame could not be determined. We propose two interpretations for these neurons. First, they may encode reach goals using a reference frame we did not investigate, such as an intrinsic reference frame. Second, they may not be adequately characterized by any reference frame.
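
As a toy illustration of the candidate reference frames being compared (illustrative coordinates, not the authors' analysis code), the same reach goal can be re-expressed relative to the eye, the hand, or a combination of the two:

```python
import numpy as np

# Hypothetical 2D positions in a common screen frame (illustrative values).
target = np.array([12.0, 3.0])
eye    = np.array([ 4.0, 0.0])   # gaze fixation point
hand   = np.array([-2.0, 1.0])   # initial hand position

target_eye_centered  = target - eye                # eye-centered reach goal
target_limb_centered = target - hand               # limb-centered reach goal
# A frame tied to the combined eye and hand position, e.g. their midpoint:
target_combined      = target - (eye + hand) / 2.0

print("eye-centered: ", target_eye_centered)
print("limb-centered:", target_limb_centered)
print("combined:     ", target_combined)
```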


2020 · Vol 117 (50) · pp. 32165-32168
Author(s): Arvid Guterstam, Michael S. A. Graziano

Recent evidence suggests a link between visual motion processing and social cognition. When person A watches person B, the brain of A apparently generates a fictitious, subthreshold motion signal streaming from B to the object of B's attention. These previous studies, being correlative, were unable to establish any functional role for the false motion signals. Here, we directly tested whether subthreshold motion processing plays a role in judging the attention of others. We asked: if we contaminate people's visual input with a subthreshold motion signal streaming from an agent to an object, can we manipulate people's judgments about that agent's attention? Participants viewed a display containing faces, objects, and a subthreshold motion signal hidden in the background. Participants' judgments of the attentional state of the faces were significantly altered by the hidden motion signal: faces from which subthreshold motion streamed toward an object were judged as paying more attention to that object. Control experiments showed that the effect was specific to the agent-to-object motion direction and to judging attention, not action or spatial orientation. These results suggest that when the brain models other minds, it uses a subthreshold motion signal, streaming from an individual to an object, to help represent attentional state. This type of social-cognitive model, tapping perceptual mechanisms that evolved to process physical events in the real world, may help explain the extraordinary cultural persistence of beliefs in mind processes having physical manifestation. These findings, therefore, may have larger implications for human psychology and cultural belief.


Author(s): Stephen Grossberg

This chapter shows how visual illusions arise from neural processes that play an adaptive role in achieving the remarkable perceptual capabilities of advanced brains. It clarifies that many visual percepts are visual illusions, in the sense that they arise from active processes that reorganize and complete perceptual representations from the noisy data received by the retinas. Some of these representations look illusory, whereas others look real. The chapter heuristically summarizes explanations of illusions that arise from completion of perceptual groupings, filling-in of surface lightnesses and colors, transformation of ambiguous motion signals into coherent percepts of object motion direction and speed, and interactions between the form and motion cortical processing streams. A central theme is that the brain is organized into parallel processing streams with computationally complementary properties, that interstream interactions overcome these complementary deficiencies to compute effective representations of the world, and that these representations give rise to visual illusions.


2006 · Vol 96 (6) · pp. 3545-3550
Author(s): Anna Montagnini, Miriam Spering, Guillaume S. Masson

Smooth pursuit eye movements reflect the temporal dynamics of bidimensional (2D) visual motion integration. When tracking a single tilted line, initial pursuit direction is biased toward the unidimensional (1D) edge motion signal, which is orthogonal to the line's orientation. Over the next 200 ms, tracking direction is slowly corrected until it finally matches the 2D object motion during steady-state pursuit. We now show that repetition of line orientation and/or motion direction neither eliminates the transient tracking direction error nor changes the time course of pursuit correction. Nonetheless, multiple successive presentations of a single orientation/direction condition elicit robust anticipatory pursuit eye movements that always go in the 2D object motion direction, not the 1D edge motion direction. These results demonstrate that predictive signals about target motion cannot be used for efficient integration of ambiguous velocity signals at pursuit initiation.
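
The 1D/2D distinction here is the classic aperture problem: a local edge detector recovers only the velocity component orthogonal to the line. A minimal sketch of that projection (our illustration, with hypothetical stimulus values):

```python
import numpy as np

def aperture_component(v, line_angle_deg):
    """1D edge motion seen through an aperture: the projection of the true
    2D velocity v onto the line's normal. Angles in degrees from horizontal,
    velocity in deg/s. Illustrative sketch, not the authors' stimulus code."""
    theta = np.radians(line_angle_deg)
    n = np.array([-np.sin(theta), np.cos(theta)])  # unit normal to the line
    return (v @ n) * n

v_true = np.array([6.0, 0.0])          # rightward 2D object motion, 6 deg/s
v_edge = aperture_component(v_true, line_angle_deg=45.0)
print("1D edge motion:  ", v_edge,  "-> biases initial pursuit direction")
print("2D object motion:", v_true, "-> matched during steady-state pursuit")
```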


2015
Author(s): Manivannan Subramaniyan, Alexander S. Ecker, Saumil S. Patel, R. James Cotton, Matthias Bethge, ...

When the brain has determined the position of a moving object, anatomical and processing delays mean the object will already have moved to a new location. Given the statistical regularities present in natural motion, the brain may have acquired compensatory mechanisms to minimize the mismatch between the perceived and the real position of a moving object. A well-known visual illusion, the flash lag effect, points toward such a possibility. Although many psychophysical models have been suggested to explain this illusion, their predictions have not been tested at the neural level, particularly in a species known to perceive the illusion. To address this, we recorded neural responses to flashed and moving bars from primary visual cortex (V1) of awake, fixating macaque monkeys. We found that the response latency to moving bars of varying speed, motion direction, and luminance was shorter than that to flashes, in a manner consistent with psychophysical results. At the level of V1, our results support the differential latency model, which posits that flashed and moving bars have different latencies. As we found a neural correlate of the illusion in passively fixating monkeys, our results also suggest that judging the instantaneous position of the moving bar at the time of flash, as required by the postdiction/motion-biasing model, may not be necessary for observing a neural correlate of the illusion. Our results further suggest that the brain may have evolved mechanisms to process moving stimuli faster and closer to real time than briefly appearing stationary stimuli.

New and Noteworthy: We report several observations in awake macaque V1 that provide support for the differential latency model of the flash lag illusion. We find that the equal latency of flash and moving stimuli assumed by motion integration/postdiction models does not hold in V1. We show that in macaque V1, motion processing latency depends on stimulus luminance, speed, and motion direction in a manner consistent with several psychophysical properties of the flash lag illusion.
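
Under the differential latency account, the perceived offset falls directly out of the latency difference. A schematic sketch with hypothetical latency values (not the paper's measured numbers):

```python
def flash_lag_offset(speed_deg_per_s, latency_flash_ms, latency_motion_ms):
    """Differential latency model sketch: if V1 registers a moving bar
    earlier than a flash, the bar's represented position leads the flash
    by speed * (latency_flash - latency_motion)."""
    dt = (latency_flash_ms - latency_motion_ms) / 1000.0
    return speed_deg_per_s * dt

# Hypothetical numbers: flash latency 80 ms, moving-bar latency 60 ms.
print(flash_lag_offset(10.0, 80.0, 60.0), "deg of perceived flash lag")
```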


1998 · Vol 80 (5) · pp. 2274-2294
Author(s): Eliana M. Klier, J. Douglas Crawford

Klier, Eliana M. and J. Douglas Crawford. Human oculomotor system accounts for 3-D eye orientation in the visual-motor transformation for saccades. J. Neurophysiol. 80: 2274–2294, 1998. A recent theoretical investigation has demonstrated that three-dimensional (3-D) eye position dependencies in the geometry of retinal stimulation must be accounted for neurally (i.e., in a visuomotor reference frame transformation) if saccades are to be both accurate and obey Listing's law from all initial eye positions. Our goal was to determine whether the human saccade generator correctly implements this eye-to-head reference frame transformation (RFT), or if it approximates this function with a visuomotor look-up table (LT). Six head-fixed subjects participated in three experiments in complete darkness. We recorded 60° horizontal saccades between five parallel pairs of lights, over a vertical range of ±40° (experiment 1), and 30° radial saccades from a central target, with the head upright or tilted 45° clockwise/counterclockwise to induce torsional ocular counterroll, under both binocular and monocular viewing conditions (experiments 2 and 3). 3-D eye orientation and oculocentric target direction (i.e., retinal error) were computed from search coil signals in the right eye. Experiment 1: as predicted, retinal error was a nontrivial function of both target displacement in space and 3-D eye orientation (e.g., horizontally displaced targets could induce horizontal or oblique retinal errors, depending on eye position). These data were input to a 3-D visuomotor LT model, which implemented Listing's law, but predicted position-dependent errors in final gaze direction of up to 19.8°. Actual saccades obeyed Listing's law but did not show the predicted pattern of inaccuracies in final gaze direction, i.e., the slope of actual error, as a function of predicted error, was only −0.01 ± 0.14 (compared with 0 for the RFT model and 1.0 for the LT model), suggesting near-perfect compensation for eye position. Experiments 2 and 3: actual directional errors from initial torsional eye positions were only a fraction of those predicted by the LT model (e.g., 32% for clockwise and 33% for counterclockwise counterroll during binocular viewing). Furthermore, any residual errors were immediately reduced when visual feedback was provided during saccades. Thus, other than sporadic miscalibrations for torsion, saccades were accurate from all 3-D eye positions. We conclude that 1) the hypothesis of a visuomotor look-up table for saccades fails to account even for saccades made directly toward visual targets, but rather, 2) the oculomotor system takes 3-D eye orientation into account in a visuomotor reference frame transformation. This transformation is probably implemented physiologically between retinotopically organized saccade centers (in cortex and superior colliculus) and the brain stem burst generator.
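
Because ocular counterroll is a torsional rotation about the line of sight, the reference frame transformation at issue can be sketched in simplified 2D form as rotating the retinal error by the eye's torsion angle before forming the saccade command; a look-up table would skip the rotation and inherit a directional error. This is our illustration, not the authors' full 3-D model:

```python
import numpy as np

def saccade_goal_rft(retinal_error, torsion_deg):
    """Reference-frame-transformation sketch: rotate the retinal error
    vector by the eye's torsional orientation (ocular counterroll) to
    recover the target direction in a head-fixed frame."""
    t = np.radians(torsion_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return R @ retinal_error

retinal_error = np.array([30.0, 0.0])  # target 30 deg rightward on the retina
torsion = 10.0                         # hypothetical counterroll from head tilt

print("RFT saccade goal:          ", saccade_goal_rft(retinal_error, torsion))
print("Look-up-table saccade goal:", retinal_error, "(ignores torsion -> error)")
```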


2019 · Vol 116 (18) · pp. 9060-9065
Author(s): Kalpana Dokka, Hyeshin Park, Michael Jansen, Gregory C. DeAngelis, Dora E. Angelaki

The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
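
A schematic version of the causal inference computation (illustrative likelihoods and priors, not the paper's fitted model) compares two hypotheses for the object's image motion: fully explained by self-motion (object stationary in the world) versus requiring independent object motion.

```python
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def p_object_stationary(object_image_speed, flow_from_heading,
                        sigma=1.0, prior_stationary=0.6):
    """Causal-inference sketch: posterior probability that the object is
    stationary in the world, i.e., its image motion is fully explained by
    self-motion. All parameters are illustrative."""
    like_stationary = gauss(object_image_speed, flow_from_heading, sigma)
    # If the object moves independently, its image speed is only loosely
    # constrained: model this with a much broader likelihood.
    like_moving = gauss(object_image_speed, flow_from_heading, 5.0 * sigma)
    post = like_stationary * prior_stationary
    return post / (post + like_moving * (1 - prior_stationary))

for extra_speed in (0.0, 1.0, 3.0):  # object motion added to self-motion flow
    p = p_object_stationary(2.0 + extra_speed, flow_from_heading=2.0)
    print(f"object speed +{extra_speed} deg/s -> P(stationary) = {p:.2f}")
```

Consistent with the abstract, stationarity reports in this sketch decline as the speed of independent object motion increases.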


2000 · Vol 17 (2) · pp. 263-271
Author(s): Hiroyuki Uchiyama, Takahide Kanaya, Shoichi Sonohata

One type of retinal ganglion cell prefers object motion in a particular direction. The neuronal mechanisms that compute motion direction are still unknown. We quantitatively mapped the excitatory and inhibitory regions of receptive fields of directionally selective retinal ganglion cells in the Japanese quail, and found that the inhibitory regions are displaced about 1–3 deg toward the side where the null sweep starts, relative to the excitatory regions. Directional selectivity thus results from delayed transient suppression exerted by the nonconcentrically arranged inhibitory regions, not from local directional inhibition as hypothesized by Barlow and Levick (1965).
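
A schematic sketch of the proposed mechanism (our illustration, with arbitrary parameters rather than values fitted to the quail data): an inhibitory subfield displaced toward the null-start side delivers delayed, transient suppression that coincides with excitation only for null-direction sweeps.

```python
def ganglion_response(direction, speed=10.0, x_exc=0.0, x_inh=2.0,
                      inh_delay=0.15, inh_duration=0.10):
    """Direction-selectivity sketch: a bar sweeps across an excitatory
    region at x_exc and an inhibitory region displaced toward the
    null-start side at x_inh (deg). Inhibition is delayed and transient;
    it vetoes excitation only if the two coincide in time. All numbers
    are illustrative (speed in deg/s, times in s)."""
    start = -5.0 if direction > 0 else 5.0          # null sweep starts at +5
    t_exc = (x_exc - start) / (speed * direction)   # bar reaches exc region
    t_inh = (x_inh - start) / (speed * direction) + inh_delay
    vetoed = t_inh <= t_exc <= t_inh + inh_duration
    return 0.0 if vetoed else 1.0

print("preferred-direction sweep:", ganglion_response(+1.0))  # 1.0 (fires)
print("null-direction sweep:     ", ganglion_response(-1.0))  # 0.0 (vetoed)
```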


2007 · Vol 24 (3) · pp. 399-407
Author(s): Martin Gehres, Christa Neumeyer

Large-field motion detection in goldfish, measured with the optomotor response, is based on the L-cone type and is therefore color-blind (Schaerer & Neumeyer, 1996). In experiments using a two-choice training procedure, we investigated whether the same holds for the detection of a small moving object (size: 8 mm diameter; velocity: 7 cm/s). In initial experiments, we found that goldfish did not discriminate between a moving and a stationary stimulus, apparently not attending to the cue "moving." Therefore, random dot patterns were used in which the stimulus was visible only when moving. Using black and white random dot patterns with contrast varied between 0.2 and 1, we found that the fish could detect motion only at high contrast (0.8). In the decisive experiment, a red-green random dot pattern was used. By keeping the intensity of the red dots constant and reducing the intensity of the green dots, we found a narrow intensity range in which goldfish could no longer discriminate between the moving random dot stimulus in a random dot surround and the stationary random dot pattern. The same was the case when a red moving disk was presented in a green surround. This is evidence that object motion detection is red-green color-blind, i.e., color information cannot be used to detect the moving object. Calculations of the cone excitation values revealed that the M-cone type is decisive, as this cone type (and not the L-cone type) is not modulated by the particular red-green pattern in which the moving stimulus was invisible.
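
The cone-isolation logic can be sketched as follows, using hypothetical spectral weights rather than measured goldfish cone fundamentals: the green-dot intensity is lowered until both dot types excite the M-cone equally, at which point the pattern no longer modulates the M-cone signal and, per the abstract's conclusion, the moving stimulus becomes invisible.

```python
# Hypothetical relative cone sensitivities to the red and green primaries
# (illustrative numbers, not measured goldfish cone fundamentals).
M_SENS = {"red": 0.2, "green": 0.9}   # M-cone weights
L_SENS = {"red": 0.8, "green": 0.6}   # L-cone weights

red_I = 1.0
# Lower the green intensity until both dot types excite the M-cone equally:
green_I = M_SENS["red"] * red_I / M_SENS["green"]

m_red, m_green = M_SENS["red"] * red_I, M_SENS["green"] * green_I
l_red, l_green = L_SENS["red"] * red_I, L_SENS["green"] * green_I
print(f"M-cone: red dot {m_red:.2f} vs green dot {m_green:.2f}"
      " (matched -> motion invisible)")
print(f"L-cone: red dot {l_red:.2f} vs green dot {l_green:.2f}"
      " (still modulated, yet unused)")
```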

