retinal motion
Recently Published Documents


TOTAL DOCUMENTS: 88 (FIVE YEARS: 17)

H-INDEX: 19 (FIVE YEARS: 3)

2021 ◽  
Author(s):  
Fatemeh Molaei Vaneghi ◽  
Natalia Zaretskaya ◽  
Tim van Mourik ◽  
Jonas Bause ◽  
Klaus Scheffler ◽  
...  

Neural mechanisms underlying a stable perception of the world during pursuit eye movements are not fully understood. Both perceptual stability and the perception of real (i.e., objective) motion are the product of integrating motion signals on the retina with efference copies of eye movements. Human areas V3A and V6 have previously been shown to have strong objective ('real') motion responses. Here we used high-resolution laminar fMRI at ultra-high magnetic field (9.4T) in human subjects to examine motion integration across cortical depths in these areas. We found an increased preference for objective motion in areas V3A and V6+ (i.e., V6 and possibly V6A) towards the upper layers. When laminar responses were detrended to remove the upper-layer bias present in all responses, we found a unique, condition-specific laminar profile in V6+, showing reduced mid-layer responses for retinal motion only. The results provide evidence for differential, motion-type-dependent laminar processing in area V6+. Mechanistically, the mid-layer dip suggests a special contribution of retinal motion to integration, either in the form of a subtractive (inhibitory) mid-layer input or in the form of feedback into extragranular or infragranular layers. The results show that differential laminar signals can be measured in high-level motion areas of human occipitoparietal cortex, opening the prospect of new mechanistic insights from non-invasive brain imaging.
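The detrending step can be illustrated with a minimal sketch. This is not the authors' pipeline: the depth profiles below are made-up numbers, and a simple linear fit across depths stands in for whatever trend-removal procedure the study actually used.

```python
import numpy as np

def detrend_laminar_profile(profile):
    """Subtract the best-fitting linear trend across cortical depths,
    leaving condition-specific deviations from the shared upper-layer bias."""
    depths = np.arange(len(profile))
    slope, intercept = np.polyfit(depths, profile, 1)
    return profile - (slope * depths + intercept)

# Hypothetical mean responses at six depths (deep -> superficial):
# the retinal-motion profile has a mid-layer dip; objective motion does not.
retinal = np.array([1.0, 1.1, 0.9, 1.0, 1.4, 1.6])
objective = np.array([1.0, 1.2, 1.3, 1.4, 1.5, 1.7])

for name, prof in (("retinal", retinal), ("objective", objective)):
    print(name, np.round(detrend_laminar_profile(prof), 3))
```

After the shared upward trend is removed, the residual mid-layer dip in the retinal-motion profile is what would remain as the condition-specific signature.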


2021 ◽  
Author(s):  
David Souto ◽  
Kyle Nacilla ◽  
Mateusz Bocian

Smooth pursuit eye movements can anticipate predictable movement patterns, thus achieving their goal of reducing retinal motion blur. Oculomotor predictions have been thought to rely on an internal model of the target kinematics. Since biological motion is one of the most important visual stimuli regulating human interaction, we asked whether an internal model of biological motion makes a specific contribution to driving pursuit eye movements. Unlike previous studies, we exploited the cyclical nature of walking to measure the ability of eye movements to track the velocity oscillations of the hip of point-light walkers. We quantified tracking quality by cross-correlating pursuit and hip velocity oscillations. We found a robust correlation between the signals, even along the horizontal dimension, where changes in velocity during the stepping cycle are very subtle. Inverting the walker and presenting the hip-dot without context incurred the same additional phase lag along the horizontal dimension, whereas a scrambled walker incurred no phase lag relative to the upright walker. These findings support the view that local information beyond the hip-dot, but not necessarily configural information, contributes to predicting the hip kinematics that control pursuit. We also found a smaller phase lag for inverted walkers than for upright and scrambled walkers for pursuit along the vertical dimension, indicating that inversion does not simply reduce prediction. We show that pursuit eye movements provide an implicit and robust measure of the processing of biological motion signals.
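The tracking measure lends itself to a short sketch: with eye and stimulus velocity traces sampled at a common rate, the phase lag can be read off the peak of their cross-correlation. All names, rates, and numbers below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def phase_lag_s(eye_vel, stim_vel, fs):
    """Lag (in seconds) at which the cross-correlation of eye and stimulus
    velocity peaks; positive values mean the eye lags the stimulus."""
    e = eye_vel - eye_vel.mean()
    s = stim_vel - stim_vel.mean()
    xcorr = np.correlate(e, s, mode="full")
    lags = np.arange(-len(s) + 1, len(e))
    return lags[np.argmax(xcorr)] / fs

# Hypothetical 1 Hz hip-velocity oscillation sampled at 100 Hz,
# tracked by the eye with a 50 ms delay plus noise.
fs = 100
t = np.arange(0.0, 10.0, 1.0 / fs)
hip = np.sin(2 * np.pi * 1.0 * t)
eye = np.sin(2 * np.pi * 1.0 * (t - 0.05))
eye += 0.1 * np.random.default_rng(0).standard_normal(len(t))

print(phase_lag_s(eye, hip, fs))  # ~0.05 s
```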


2021 ◽  
Author(s):  
Zhe-Xin Xu ◽  
Gregory C DeAngelis

There are two distinct sources of retinal image motion: motion of objects in the world and movement of the observer. When an object moves in a scene while the eyes also move, estimating object motion in world coordinates requires a coordinate transformation that combines smooth eye movement and retinal motion signals. Interactions between retinal and eye velocity signals have also been suggested to generate depth selectivity from motion parallax (MP) in the macaque middle temporal (MT) area. We explored whether the nature of the interaction between eye and retinal velocities in MT neurons favors one of these two possibilities, or a mixture of both. We analyzed responses of MT neurons to retinal and eye velocities in a viewing context in which the observer translates laterally while maintaining visual fixation on a world-fixed target. In this scenario, the depth of an object can be inferred from the ratio between retinal velocity and eye velocity, according to the motion-pursuit law. Previous studies have shown that MT responses to retinal motion are gain-modulated by the direction of eye movement, suggesting a potential mechanism for depth tuning from MP. However, our analysis of the joint tuning profile for retinal and eye velocities reveals that some MT neurons show a partial coordinate transformation toward head coordinates. We formalized a series of computational models to predict neural spike trains as well as selectivity for depth, and we used factorial model comparisons to quantify the relative importance of each model component. For many MT neurons, the data are equally well explained by gain modulation or a partial coordinate transformation toward head coordinates, although some responses can only be well fit by the coordinate-transform model. Our results highlight the potential role of MT neurons in representing multiple higher-level sensory variables, including depth from MP and object motion in the world.
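The two candidate mechanisms can be contrasted with toy response models. In the sketch below, a Gaussian tuning curve stands in for an MT neuron's retinal velocity tuning; the gain parameter g and the transformation weight k are hypothetical choices, and the depth proxy simply restates the motion-pursuit law from the abstract (relative depth scales with the retinal-to-eye velocity ratio).

```python
import numpy as np

def tuning(v, pref, sigma=10.0):
    """Gaussian tuning to velocity v (deg/s) with preferred velocity pref."""
    return np.exp(-0.5 * ((v - pref) / sigma) ** 2)

def gain_model(retinal_v, eye_v, pref, g=0.05):
    """Eye velocity multiplicatively modulates the retinal response;
    the tuning itself stays in retinal coordinates."""
    return tuning(retinal_v, pref) * (1.0 + g * eye_v)

def coord_transform_model(retinal_v, eye_v, pref, k=0.5):
    """Partial shift toward head coordinates: tuning is evaluated on
    retinal + k * eye velocity (k=0 purely retinal, k=1 fully head-centred)."""
    return tuning(retinal_v + k * eye_v, pref)

retinal_v, eye_v = 8.0, 4.0      # deg/s, hypothetical trial condition
depth_proxy = retinal_v / eye_v  # motion-pursuit law: depth ~ this ratio

print(gain_model(retinal_v, eye_v, pref=10.0))
print(coord_transform_model(retinal_v, eye_v, pref=10.0))
print(depth_proxy)
```

A factorial comparison of the kind the abstract describes would fit variants with and without each component (gain term, transformation weight) and ask which are needed to explain the joint tuning profile.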


2021 ◽  
Author(s):  
Rune Nguyen Rasmussen ◽  
Akihiro Matsumoto ◽  
Simon Arvin ◽  
Keisuke Yonehara

2020 ◽  
Vol 20 (11) ◽  
pp. 799
Author(s):  
Karl Muller ◽  
Kate Bonnen ◽  
Jonathan Matthis ◽  
Alexander Huk ◽  
Mary Hayhoe

2020 ◽  
Author(s):  
Rune N. Rasmussen ◽  
Akihiro Matsumoto ◽  
Simon Arvin ◽  
Keisuke Yonehara

Locomotion creates various patterns of optic flow on the retina, which provide the observer with information about their movement relative to the environment. However, it is unclear how these optic flow patterns are encoded by the cortex. Here we use two-photon calcium imaging in awake mice to systematically map monocular and binocular responses to horizontal motion in four areas of the visual cortex. We find that neurons selective to translational or rotational optic flow are abundant in higher visual areas, whereas neurons suppressed by binocular motion are more common in the primary visual cortex. Disruption of retinal direction selectivity in Frmd7 mutant mice reduces the number of translation-selective neurons in the primary visual cortex, and translation- and rotation-selective neurons as well as binocular direction-selective neurons in the rostrolateral and anterior visual cortex, blurring the functional distinction between primary and higher visual areas. Thus, optic flow representations in specific areas of the visual cortex rely on binocular integration of motion information from the retina.
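The binocular logic can be illustrated with a toy selectivity index. For horizontal motion, the same external direction in both hemifields is consistent with self-rotation (yaw), while opposite external directions are consistent with self-translation; the responses and the index below are invented for illustration and are not the paper's analysis.

```python
# Hypothetical mean calcium responses of one neuron to the four
# binocular combinations of horizontal motion (directions given in
# external, screen coordinates for the left and right visual field).
responses = {
    ("left", "left"): 0.2,    # same direction: rotation-like flow
    ("right", "right"): 0.3,
    ("left", "right"): 1.4,   # opposite directions: translation-like flow
    ("right", "left"): 0.9,
}

rotation = max(responses["left", "left"], responses["right", "right"])
translation = max(responses["left", "right"], responses["right", "left"])

# >0 favours translation selectivity, <0 favours rotation selectivity.
index = (translation - rotation) / (translation + rotation)
print(f"translation-rotation index: {index:.2f}")
```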


2019 ◽  
Vol 82 (2) ◽  
pp. 533-549 ◽  
Author(s):  
Josephine Reuther ◽  
Ramakrishna Chakravarthi ◽  
Amelia R. Hunt

Feature integration theory proposes that visual features, such as shape and color, can only be combined into a unified object when spatial attention is directed to their location in retinotopic maps. Eye movements cause dramatic changes on our retinae, and are associated with obligatory shifts in spatial attention. In two experiments, we measured the prevalence of conjunction errors (that is, reporting an object as having an attribute that belonged to another object) for brief stimulus presentations before, during, and after a saccade. Planning and executing a saccade did not itself disrupt feature integration. Motion did disrupt feature integration, leading to an increase in conjunction errors. However, retinal motion of an equal extent but caused by saccadic eye movements was spared this disruption, showing similar rates of conjunction errors to a condition with static stimuli presented to a static eye. The results suggest that extra-retinal signals are able to compensate for the motion caused by saccadic eye movements, thereby preserving the integrity of objects across saccades and preventing their features from mixing or mis-binding.


2019 ◽  
Vol 122 (4) ◽  
pp. 1555-1565 ◽  
Author(s):  
Alessandro Moscatelli ◽  
Cecile R. Scotto ◽  
Marc O. Ernst

In vision, the perceived velocity of a moving stimulus differs depending on whether we pursue it with the eyes or not: a stimulus moving across the retina with the eyes stationary is perceived as faster than a stimulus of the same physical speed that the observer pursues with the eyes, even though its retinal motion is then zero. This effect is known as the Aubert–Fleischl phenomenon. Here, we describe an analogous phenomenon in touch. We asked participants to estimate the speed of a moving stimulus either from tactile motion only (i.e., motion across the skin), while keeping the hand stationary in the world, or from kinesthesia only, by tracking the stimulus with a guided arm movement such that the tactile motion on the finger was zero (i.e., only finger motion but no movement across the skin). Participants overestimated the velocity of the stimulus determined from tactile motion compared with kinesthesia, in analogy with the visual Aubert–Fleischl phenomenon. In two follow-up experiments, we manipulated the stimulus noise by changing the texture of the touched surface. As in the visual phenomenon, this significantly affected the strength of the illusion. This study supports the hypothesis of shared computations for motion processing between vision and touch.

NEW & NOTEWORTHY In vision, the perceived velocity of a moving stimulus differs depending on whether we pursue it with the eyes or not, an effect known as the Aubert–Fleischl phenomenon. We describe an analogous phenomenon in touch. We asked participants to estimate the speed of a moving stimulus either from tactile motion or by pursuing it with the hand. Participants overestimated the stimulus velocity measured from tactile motion compared with kinesthesia, in analogy with the visual Aubert–Fleischl phenomenon.


2019 ◽  
Vol 29 (19) ◽  
pp. 3277-3288.e5 ◽  
Author(s):  
Akihiro Matsumoto ◽  
Kevin L. Briggman ◽  
Keisuke Yonehara
