Visual motion integration of bidirectional transparent motion in mouse opto-locomotor reflexes

2021, Vol 11 (1)
Author(s): L. A. M. H. Kirkels, W. Zhang, Z. Rezvani, R. J. A. van Wezel, M. M. van Wanrooij

Visual motion perception depends on readout of direction selective sensors. We investigated in mice whether the response to bidirectional transparent motion, activating oppositely tuned sensors, reflects integration (averaging) or winner-take-all (mutual inhibition) mechanisms. We measured whole body opto-locomotor reflexes (OLRs) to bidirectional oppositely moving random dot patterns (leftward and rightward) and compared the response to predictions based on responses to unidirectional motion (leftward or rightward). In addition, responses were compared to stimulation with stationary patterns. When comparing OLRs to bidirectional and unidirectional conditions, we found that the OLR to bidirectional motion best fits an averaging model. These results reflect integration mechanisms in neural responses to contradicting sensory evidence, as has been documented for other sensory and motor domains.
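The two candidate readouts can be contrasted in a few lines. A minimal sketch, assuming illustrative signed OLR amplitudes (negative = leftward drift); none of the numbers are from the study:

```python
# Hypothetical whole-body OLR amplitudes (arbitrary units, signed:
# negative = leftward drift). Illustrative values, not the study's data.
olr_left = -1.0    # response to unidirectional leftward motion
olr_right = 0.8    # response to unidirectional rightward motion

# Averaging (integration) model: the bidirectional response is the mean
# of the two unidirectional responses.
pred_average = (olr_left + olr_right) / 2.0

# Winner-take-all (mutual inhibition) model: the stronger unidirectional
# response dictates the bidirectional response.
pred_wta = olr_left if abs(olr_left) > abs(olr_right) else olr_right

def best_model(measured, predictions):
    """Return the name of the model whose prediction is closest to the
    measured bidirectional OLR."""
    return min(predictions, key=lambda name: abs(predictions[name] - measured))

measured_bidirectional = -0.15  # illustrative measurement
models = {"averaging": pred_average, "winner-take-all": pred_wta}
print(best_model(measured_bidirectional, models))  # -> averaging
```

With these made-up values the averaging prediction (-0.1) sits far closer to the measured response than the winner-take-all prediction (-1.0), which is the shape of the comparison the study makes.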

2008, Vol 20 (6), pp. 1094-1106
Author(s): Maria Concetta Morrone, Andrea Guzzetta, Francesca Tinelli, Michela Tosetti, Michela Del Viva, ...

We report two cases of young diplegic patients with cystic periventricular leukomalacia who systematically, and with high sensitivity, perceived translational motion of a random-dot display in the opposite direction. The apparent inversion was specific to translational motion: rotation and expansion were perceived correctly, with normal sensitivity. It was also specific to random-dot patterns, not occurring with gratings. For the one patient whom we were able to test extensively, contrast sensitivity for static stimuli was normal, but was very low for direction discrimination at high spatial frequencies and all temporal frequencies. His optokinetic nystagmus movements were normal, but he was unable to track a single translating target, indicating a perceptual origin of the tracking deficit. The severe deficit in motion perception was also evident in the seminatural situation of a driving-simulation video game. The perceptual deficit for translational motion was reinforced by functional magnetic resonance imaging studies: translational motion elicited no response in the MT complex, although it did produce a strong response in many visual areas when contrasted with blank stimuli. However, radial and rotational motion produced a normal pattern of activation in a subregion of the MT complex. These data reinforce the existing evidence for independent cortical processing of translational and circular or radial flow motion, and further suggest that the two systems have different vulnerability and plasticity to prenatal damage. They also highlight the complexity of visual motion perception, and how the delicate balance of neural activity can lead to paradoxical effects such as consistent misperception of the direction of motion. We advance a possible explanation based on reduced spatial sampling of the motion stimuli and report a simple model that simulates the experimental results well.


2017
Author(s): Tristan A. Chaplin, Benjamin J. Allitt, Maureen A. Hagan, Marcello G.P. Rosa, Ramesh Rajan, ...

The integration of multiple sensory modalities is a key aspect of brain function, allowing animals to take advantage of concurrent sources of information to make more accurate perceptual judgments. For many years, multisensory integration in the cerebral cortex was deemed to occur only in high-level "polysensory" association areas. However, more recent studies have suggested that cross-modal stimulation can also influence neural activity in areas traditionally considered to be unimodal. In particular, several human neuroimaging studies have reported that extrastriate areas involved in visual motion perception are also activated by auditory motion, and may integrate audio-visual motion cues. However, the exact nature and extent of the effects of auditory motion on the visual cortex have not been studied at the single neuron level. We recorded the spiking activity of neurons in the middle temporal (MT) and medial superior temporal (MST) areas of anesthetized marmoset monkeys upon presentation of unimodal stimuli (moving auditory or visual patterns), as well as bimodal stimuli (concurrent audio-visual motion). Despite robust, direction selective responses to visual motion, none of the sampled neurons responded to auditory motion stimuli. Moreover, concurrent moving auditory stimuli had no significant effect on the ability of single MT and MST neurons, or populations of simultaneously recorded neurons, to discriminate the direction of motion of visual stimuli (moving random dot patterns with varying levels of motion noise). Our findings do not support the hypothesis that direct interactions between MT, MST, and early auditory areas underlie audiovisual motion integration.


1996, Vol 13 (4), pp. 797-804
Author(s): Stefan Treue, Richard A. Andersen

Visual motion, i.e., the pattern of changes on the retinae caused by the motion of objects or of the observer through the environment, contains important cues for the accurate perception of the three-dimensional layout of the visual scene. In this study, we investigate whether neurons in the visual system, specifically in area MT of the macaque monkey, are able to differentiate between various velocity gradients. Our stimuli were random dot patterns designed to eliminate stimulus variables other than the orientation of a velocity gradient. We develop a stimulus space ("deformation space") that allows us to easily parameterize our stimuli. We demonstrate that a substantial proportion of MT cells show tuned responses to our various velocity gradients, often exceeding the response evoked by an optimized flat velocity profile. This suggests that MT cells are able to represent complex aspects of the visual environment and that their properties make them well suited as building blocks for the complex receptive-field properties encountered in higher areas, such as area MST, to which many cells in area MT project.


2013, Vol 110 (9), pp. 2007-2018
Author(s): Bart Krekelberg, Richard J. A. van Wezel

Visual motion on the macaque retina is processed by direction- and speed-selective neurons in extrastriate middle temporal cortex (MT). There is strong evidence for a link between the activity of these neurons and direction perception. However, there is conflicting evidence for a link between speed selectivity of MT neurons and speed perception. Here we study this relationship by using a strong perceptual illusion in speed perception: when two transparently superimposed dot patterns move in opposite directions, their apparent speed is much larger than the perceived speed of a single pattern moving at that physical speed. Moreover, the sensitivity for speed discrimination is reduced for such bidirectional patterns. We first confirmed these behavioral findings in human subjects and extended them to a monkey subject. Second, we determined speed tuning curves of MT neurons to bidirectional motion and compared these to speed tuning curves for unidirectional motion. Consistent with previous reports, the response to bidirectional motion was often reduced compared with unidirectional motion at the preferred speed. In addition, we found that tuning curves for bidirectional motion were shifted to lower preferred speeds. As a consequence, bidirectional motion of some speeds typically evoked larger responses than unidirectional motion. Third, we showed that these changes in neural responses could explain changes in speed perception with a simple labeled line decoder. These data provide new insight into the encoding of transparent motion patterns and provide support for the hypothesis that MT activity can be decoded for speed perception with a labeled line model.
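A labeled line readout of the kind invoked here can be sketched in a few lines. The log-Gaussian tuning curves and the preferred speeds below are illustrative assumptions, not the recorded data:

```python
import numpy as np

# Hypothetical preferred speeds (deg/s) for a small MT population;
# illustrative values, not the study's recorded tuning curves.
preferred_speeds = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])

def tuning(stimulus_speed, preferred, sigma=1.0):
    # Log-Gaussian speed tuning, a common descriptive model for MT cells.
    return np.exp(-0.5 * ((np.log2(stimulus_speed) - np.log2(preferred)) / sigma) ** 2)

def labeled_line_decode(responses):
    # Labeled line readout: each neuron votes for its preferred speed
    # (its "label"), weighted by its response; the decoded speed is the
    # response-weighted average on a logarithmic speed axis.
    weights = responses / responses.sum()
    return 2.0 ** np.sum(weights * np.log2(preferred_speeds))

rates = tuning(4.0, preferred_speeds)
decoded = labeled_line_decode(rates)
print(round(decoded, 2))  # close to the true 4.0 deg/s
```

Under this readout, shifting the population's tuning curves toward lower preferred speeds while keeping the labels fixed biases the decoded speed upward, which is the direction of the perceptual effect described above.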


2021, Vol 12 (1)
Author(s): Genís Prat-Ortega, Klaus Wimmer, Alex Roxin, Jaime de la Rocha

Perceptual decisions rely on accumulating sensory evidence. This computation has been studied using either drift diffusion models or neurobiological network models exhibiting winner-take-all attractor dynamics. Although both models can account for a large amount of data, it remains unclear whether their dynamics are qualitatively equivalent. Here we show that in the attractor model, but not in the drift diffusion model, an increase in the stimulus fluctuations or the stimulus duration promotes transitions between decision states. The increase in the number of transitions leads to a crossover between weighting mostly early evidence (primacy) to weighting late evidence (recency), a prediction we validate with psychophysical data. Between these two limiting cases, we found a novel flexible categorization regime, in which fluctuations can reverse initially incorrect categorizations. This reversal asymmetry results in a non-monotonic psychometric curve, a distinctive feature of the attractor model. Our findings point to correcting decision reversals as an important feature of perceptual decision making.
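For reference, the drift diffusion side of the comparison reduces to a one-dimensional random walk with absorbing bounds; the parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def drift_diffusion_trial(drift, bound=1.0, sigma=1.0, dt=1e-3, max_t=5.0):
    """Simulate one drift diffusion trial.

    Evidence x accumulates drift*dt plus Gaussian noise until it hits
    +bound (choice 1) or -bound (choice 0). All parameters illustrative.
    """
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else 0), t

# With positive drift, most trials end at the upper (correct) bound.
choices = [drift_diffusion_trial(drift=1.0)[0] for _ in range(200)]
print(np.mean(choices))  # comfortably above chance (0.5)
```

In the attractor network, by contrast, the decision variable settles into one of two stable states, and sufficiently strong stimulus fluctuations can push it across into the other state; those transitions are what the drift diffusion model, with its absorbing bounds, cannot produce.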


2019, Vol 5 (1), pp. 247-268
Author(s): Peter Thier, Akshay Markanday

The cerebellar cortex is a crystal-like structure consisting of an almost endless repetition of a canonical microcircuit that applies the same computational principle to different inputs. The output of this transformation is broadcast to extracerebellar structures by way of the deep cerebellar nuclei. Visually guided eye movements are accommodated by different parts of the cerebellum. This review primarily discusses the role of the oculomotor part of the vermal cerebellum [the oculomotor vermis (OMV)] in the control of visually guided saccades and smooth-pursuit eye movements. Both types of eye movements require the mapping of retinal information onto motor vectors, a transformation that is optimized by the OMV, considering information on past performance. Unlike the role of the OMV in the guidance of eye movements, the contribution of the adjoining vermal cortex to visual motion perception is nonmotor and involves a cerebellar influence on information processing in the cerebral cortex.


2021, Vol 11 (1)
Author(s): Sichao Yang, Johannes Bill, Jan Drugowitsch, Samuel J. Gershman

Motion relations in visual scenes carry an abundance of behaviorally relevant information, but little is known about how humans identify the structure underlying a scene's motion in the first place. We studied the computations governing human motion structure identification in two psychophysics experiments and found that perception of motion relations showed hallmarks of Bayesian structural inference. At the heart of our research lies a tractable task design that enabled us to reveal the signatures of probabilistic reasoning about latent structure. We found that a choice model based on the task's Bayesian ideal observer accurately matched many facets of human structural inference, including task performance, perceptual error patterns, single-trial responses, participant-specific differences, and subjective decision confidence, especially when motion scenes were ambiguous and when object motion was hierarchically nested within other moving reference frames. Our work can guide future neuroscience experiments to reveal the neural mechanisms underlying higher-level visual motion perception.
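The core of such an ideal observer is Bayes' rule over a discrete set of candidate structures. A toy sketch: the structure names are borrowed from the general motion-structure literature, and the log-likelihoods are made up, standing in for the task's actual generative model:

```python
import numpy as np

# Toy Bayesian structure identification. The log-likelihoods of the
# observed motion under each candidate structure are invented here; in a
# real task they come from the generative model of the stimulus.
structures = ["independent", "global", "clustered", "hierarchical"]
log_lik = np.array([-12.3, -9.1, -10.4, -8.7])
log_prior = np.log(np.full(4, 0.25))  # uniform prior over structures

# Bayes' rule in log space, shifted by the max for numerical stability.
log_post = log_lik + log_prior
log_post -= log_post.max()
posterior = np.exp(log_post) / np.exp(log_post).sum()

print(structures[posterior.argmax()])  # -> hierarchical
```

A choice model of the kind described above would report the maximum-posterior structure on each trial, with the posterior probability itself serving as a natural candidate for the observer's decision confidence.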


2004, Vol 14 (5), pp. 375-385
Author(s): E.L. Groen, W. Bles

We examined to what extent body tilt may augment the perception of visually simulated linear self-acceleration. Fourteen subjects judged visual motion profiles of fore-aft motion at four different frequencies between 0.04 and 0.33 Hz, and at three different acceleration amplitudes (0.44, 0.88 and 1.76 m/s²). Simultaneously, subjects were tilted backward and forward about their pitch axis. The amplitude of pitch tilt was systematically varied. Using a two-alternative forced-choice paradigm, psychometric curves were calculated in order to determine: 1) the minimum tilt amplitude required to generate a linear self-motion percept in more than 50% of the cases, and 2) the maximum tilt amplitude at which rotation remains sub-threshold in more than 50% of the cases. The results showed that the simulation of linear self-motion became more realistic with the application of whole body tilt, as long as the tilt rate remained under the detection threshold of about 3 deg/s. This value is in close agreement with the empirical rate limit commonly used in flight simulation. The minimum required motion cue was inversely proportional to stimulus frequency, and increased with the amplitude of the visual displacement (rather than acceleration). As a consequence, the range of useful tilt stimuli became more critical with increasing stimulus frequency. We conclude that this psychophysical approach reveals valid parameters for motion drive algorithms used in motion-base simulators.
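Thresholds of the kind estimated in criterion 1 are typically obtained by fitting a cumulative Gaussian to the choice proportions. A sketch with invented data points (the tilt amplitudes and proportions are not from the study):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Invented 2AFC data: pitch-tilt amplitude (deg) vs. proportion of trials
# on which a linear self-motion percept was reported.
tilt_amp = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
p_motion = np.array([0.10, 0.25, 0.55, 0.85, 0.95])

def psychometric(x, mu, sigma):
    # Cumulative Gaussian: mu is the 50% point (the threshold sought in
    # criterion 1), sigma sets the slope of the curve.
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, tilt_amp, p_motion, p0=[2.0, 1.0])
print(f"minimum tilt amplitude for a >50% motion percept: {mu:.2f} deg")
```

The maximum tilt amplitude of criterion 2 follows the same recipe, with the fitted curve describing the proportion of trials on which the rotation itself was detected.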

