Motion regions are modulated by scene content

2018 ◽  
Author(s):  
Didem Korkmaz Hacialihafiz ◽  
Andreas Bartels

Abstract
Creating a stable perception of the world during pursuit eye movements is one of the everyday roles of the visual system. Some motion regions have been shown to differentiate motion in the external world from that generated by eye movements. In most circumstances, however, perceptual stability is consistently related to content: the surrounding scene is typically stable. Yet no prior study has examined to what extent motion-responsive regions are modulated by scene content, and whether there is an interaction between content and motion response. In the present study we used a factorial design that has previously been shown to reveal regional involvement in integrating efference copies of eye movements with retinal motion to mediate perceptual stability and encode real-world motion. We then added scene content as a third factor, which allowed us to examine to what extent real-motion, retinal-motion, and static responses were modulated by meaningful scenes versus their Fourier-scrambled counterparts. We found that motion responses in the human motion-responsive regions V3A, V6, V5+/MT+ and the cingulate sulcus visual area (CSv) were all modulated by scene content. Depending on the region, these motion-content interactions differed according to whether motion was self-induced or not. V3A was the only motion-responsive region that also responded to still scenes. Our results suggest that, contrary to the two-pathway hypothesis, scene responses are not confined to ventral regions but can also be found in dorsal areas.


2021 ◽  
Author(s):  
Fatemeh Molaei Vaneghi ◽  
Natalia Zaretskaya ◽  
Tim van Mourik ◽  
Jonas Bause ◽  
Klaus Scheffler ◽  
...  

Neural mechanisms underlying a stable perception of the world during pursuit eye movements are not fully understood. Both perceptual stability and the perception of real (i.e., objective) motion are products of integration between motion signals on the retina and efference copies of eye movements. Human areas V3A and V6 have previously been shown to have strong objective ('real') motion responses. Here we used high-resolution laminar fMRI at ultra-high magnetic field (9.4T) in human subjects to examine motion integration across cortical depths in these areas. We found an increased preference for objective motion in areas V3A and V6+ (i.e., V6 and possibly V6A) towards the upper layers. When laminar responses were detrended to remove the upper-layer bias present in all responses, we found a unique, condition-specific laminar profile in V6+, showing reduced mid-layer responses for retinal motion only. The results provide evidence for differential, motion-type-dependent laminar processing in area V6+. Mechanistically, the mid-layer dip suggests a special contribution of retinal motion to integration, either in the form of a subtractive (inhibitory) mid-layer input, or in the form of feedback into extragranular or infragranular layers. The results show that differential laminar signals can be measured in high-level motion areas in human occipitoparietal cortex, opening the prospect of new mechanistic insights using non-invasive brain imaging.



1999 ◽  
Vol 88 (3) ◽  
pp. 209-219 ◽  
Author(s):  
Gunvant K. Thaker ◽  
David E. Ross ◽  
Robert W. Buchanan ◽  
Helene M. Adami ◽  
Deborah R. Medoff


2008 ◽  
Vol 29 (3) ◽  
pp. 300-311 ◽  
Author(s):  
Maja U. Trenner ◽  
Manfred Fahle ◽  
Oliver Fasold ◽  
Hauke R. Heekeren ◽  
Arno Villringer ◽  
...  


2011 ◽  
Vol 106 (2) ◽  
pp. 741-753 ◽  
Author(s):  
Yu-Qiong Niu ◽  
Stephen G. Lisberger

We have investigated how visual motion signals are integrated for smooth pursuit eye movements by measuring the initiation of pursuit in monkeys for pairs of moving stimuli of the same or differing luminance. The initiation of pursuit for pairs of stimuli of the same luminance could be accounted for as a vector average of the responses to the two stimuli singly. When stimuli comprised two superimposed patches of moving dot textures, the brighter stimulus suppressed the inputs from the dimmer stimulus, so that the initiation of pursuit became winner-take-all when the luminance ratio of the two stimuli was 8 or greater. The dominance of the brighter stimulus could not be attributed to either the latency difference or the ratio of the eye accelerations for the bright and dim stimuli presented singly. When stimuli comprised either spot targets or two patches of dots moving across separate locations in the visual field, the brighter stimulus had a much weaker suppressive influence; the initiation of pursuit could be accounted for by nearly equal vector averaging of the responses to the two stimuli singly. The suppressive effects of the brighter stimulus also appeared in human perceptual judgments, but again only for superimposed stimuli. We conclude that one locus of the interaction of two moving visual stimuli is shared by perception and action and resides in local inhibitory connections in the visual cortex. A second locus resides deeper in sensory-motor processing and may be more closely related to action selection than to stimulus selection.
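The transition from vector averaging to winner-take-all described above can be sketched numerically. This is a minimal toy model, not the authors' fitted model: the suppression rule and the constant `k` are illustrative assumptions, chosen only so that equal luminances give averaging and high luminance ratios give near winner-take-all.

```python
import numpy as np

def pursuit_initiation(v_bright, v_dim, lum_ratio, superimposed=True, k=1.0):
    """Toy model of pursuit initiation for two moving stimuli.

    v_bright, v_dim: 2D velocity vectors of the brighter and dimmer stimulus.
    lum_ratio: luminance of the brighter stimulus over the dimmer one (>= 1).
    For superimposed stimuli the brighter stimulus suppresses the dimmer
    one, shifting the weighted average toward winner-take-all as the
    luminance ratio grows; for separate locations the weights stay equal.
    """
    v_bright = np.asarray(v_bright, float)
    v_dim = np.asarray(v_dim, float)
    if superimposed:
        # Weight on the dimmer stimulus falls off with the luminance ratio
        # (illustrative assumption, not the fitted suppression rule).
        w_dim = 1.0 / (1.0 + k * (lum_ratio - 1.0))
    else:
        w_dim = 1.0  # separate locations: near-equal vector averaging
    return (v_bright + w_dim * v_dim) / (1.0 + w_dim)

# Equal luminance: pure vector average of the two motions.
equal = pursuit_initiation([10, 0], [0, 10], lum_ratio=1.0)
# Luminance ratio of 8: response dominated by the brighter stimulus.
bright = pursuit_initiation([10, 0], [0, 10], lum_ratio=8.0)
```

With `lum_ratio=1` the command is the midpoint of the two velocities; with `lum_ratio=8` it lies close to the brighter stimulus's velocity, mimicking the winner-take-all regime reported above.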



2015 ◽  
Vol 35 (22) ◽  
pp. 8515-8530 ◽  
Author(s):  
Trishna Mukherjee ◽  
Matthew Battifarano ◽  
Claudio Simoncini ◽  
Leslie C. Osborne


2015 ◽  
Vol 113 (5) ◽  
pp. 1377-1399 ◽  
Author(s):  
T. Scott Murdison ◽  
Guillaume Leclercq ◽  
Philippe Lefèvre ◽  
Gunnar Blohm

Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli that nonetheless give rise to identical smooth pursuit trajectories. However, because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103–2115, 2010), the brain must somehow take these signals into account. To learn about the neural mechanisms potentially underlying this visual-to-motor transformation, we trained a physiologically inspired neural network model to combine two-dimensional (2D) retinal motion signals with three-dimensional (3D) eye and head orientation and velocity signals to generate a spatially correct 3D pursuit command. We then simulated conditions of 1) head roll-induced ocular counterroll, 2) oblique gaze-induced retinal rotations, 3) eccentric gazes (invoking the half-angle rule), and 4) optokinetic nystagmus to investigate how units in the intermediate layers of the network accounted for different 3D constraints. Simultaneously, we simulated electrophysiological recordings (visual and motor tunings) and microstimulation experiments to quantify the reference frames of signals at each processing stage. We found a gradual retinal-to-intermediate-to-spatial feedforward transformation through the hidden layers. Our model is the first to describe the general 3D transformation for smooth pursuit mediated by eye- and head-dependent gain modulation. Based on several testable experimental predictions, our model provides a mechanism by which the brain could perform the 3D visuomotor transformation for smooth pursuit.
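The input-output mapping of the network described above can be sketched as a small feedforward net: 2D retinal motion concatenated with 3D eye and head signals, mapped through a hidden layer to a 3D pursuit command. This is only an illustration of the architecture's shape, not the trained, physiologically inspired model from the study; the layer size and random weights are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(n_in, n_hidden, n_out):
    """One tanh hidden layer: retinal + extraretinal signals in, 3D command out."""
    return {
        "W1": rng.standard_normal((n_hidden, n_in)) * 0.1,
        "b1": np.zeros(n_hidden),
        "W2": rng.standard_normal((n_out, n_hidden)) * 0.1,
        "b2": np.zeros(n_out),
    }

def forward(net, retinal_2d, eye_3d, head_3d):
    # Concatenate 2D retinal motion with 3D eye and 3D head signals;
    # hidden units acting on this joint input are where eye- and
    # head-dependent gain modulation could arise after training.
    x = np.concatenate([retinal_2d, eye_3d, head_3d])
    h = np.tanh(net["W1"] @ x + net["b1"])
    return net["W2"] @ h + net["b2"]

net = make_mlp(n_in=8, n_hidden=32, n_out=3)
cmd = forward(net, retinal_2d=[1.0, 0.5], eye_3d=[0.1, 0.0, 0.0],
              head_3d=[0.0, 0.2, 0.0])
```

Training such a network to produce spatially correct commands across eye and head orientations (as in the study) is what forces the hidden layers toward the gradual retinal-to-spatial transformation the abstract reports.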



2017 ◽  
Author(s):  
Didem Korkmaz Hacialihafiz ◽  
Andreas Bartels

Abstract
We perceive scenes as stable even when eye movements induce retinal motion, for example during pursuit of a moving object. Mechanisms mediating perceptual stability have primarily been examined in motion regions of the dorsal visual pathway. Here we examined whether motion responses in human scene regions are encoded in eye- or world-centered reference frames. We recorded brain responses in human participants using fMRI while they performed a well-controlled visual pursuit paradigm previously used to examine dorsal motion regions. In addition, we examined effects of content by using either natural scenes or their Fourier scrambles. We found that the parahippocampal place area (PPA) responded to motion only in world- but not in eye-centered coordinates, regardless of scene content. The occipital place area (OPA) responded equally to objective and retinal motion, and retrosplenial cortex (RSC) had no motion responses but responded to pursuit. Only PPA's objective motion responses were higher during scenes than during scrambled images, although there was a similar trend in OPA. These results indicate a special role of PPA in representing its content in real-world coordinates. Our results question a strict subdivision of ventral "what" and dorsal "where" streams, and suggest a role of PPA in contributing to perceptual stability.



2018 ◽  
Author(s):  
Didem Korkmaz Hacialihafiz ◽  
Andreas Bartels

Abstract
Motion signals can arise in the retina for two reasons: due to self-motion or due to real motion in the environment. Prior studies on speed tuning always measured joint responses to real and retinal motion, and for some of the more recently identified human motion processing regions, speed tuning has not been examined at all. We localized motion regions V3A, V6, V5/MT, MST and cingulate sulcus visual area (CSv) in 20 human participants, and then measured their responses to motion velocities from 1 to 24 degrees per second. Importantly, we used a pursuit paradigm that allowed us to quantify responses to objective and retinal motion separately. In order to provide optimal stimulation, we used stimuli with natural image statistics derived from Fourier scrambles of natural images. The results show that all regions increased responses with higher speeds for both retinal and objective motion. V3A stood out in that it was the only region whose slope of the speed-response function for objective motion was higher than that for retinal motion. V6, V5/MT, MST and CSv did not differ in objective and retinal speed slopes, even though V5/MT and MST tended to respond more to objective motion at all speeds. These results reveal highly similar speed tuning functions for early and high-level motion regions, and support the view that human V3A encodes primarily objective rather than retinal motion signals.
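The slope comparison described above (objective versus retinal speed-response functions) can be illustrated by fitting a line to responses across speeds. The response values below are hypothetical placeholders, not data from the study; only the sampled speed range follows the abstract.

```python
import numpy as np

speeds = np.array([1.0, 4.0, 8.0, 16.0, 24.0])  # deg/s, within the 1-24 range

# Hypothetical region responses (arbitrary units), invented for illustration:
# a steeper objective-motion slope mimics the V3A pattern reported above.
objective = np.array([0.8, 1.1, 1.5, 2.0, 2.4])
retinal = np.array([0.9, 1.0, 1.2, 1.5, 1.7])

# Slope of each speed-response function via a first-order polynomial fit.
slope_obj = np.polyfit(speeds, objective, 1)[0]
slope_ret = np.polyfit(speeds, retinal, 1)[0]
```

A region like V3A would show `slope_obj > slope_ret`, while regions with matched objective and retinal speed tuning would show comparable slopes.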



2000 ◽  
Vol 59 (2) ◽  
pp. 108-114 ◽  
Author(s):  
Kazuo Koga

Evidence is presented that eye movements strongly modulate the perceived motion of an object in an induced-motion situation. We investigated whether pursuit eye movements affect motion perception, particularly perception of target velocity, under the following stimulus conditions: (1) laterally moving objects on a computer display, (2) recurrent simple target motion, and (3) a unilaterally scrolling grid. The observers' eye movements were recorded and, at the same time, their velocity-perception responses were registered and analyzed in synchronization with the eye movement data. In most cases, when pursuit eye movements were synchronized with the movement of the target, the velocity of the target was judged to be slow or the target was judged motionless. An explanation of the results is presented which is based on two sources of motion information: (1) a displacement detector operating in retinal coordinates, and (2) a proprioceptive sensing unit associated with the eye movements. The veridicality of velocity judgments was determined by the complexity of the processes integrating the signals from the two channels.
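The two-channel account above can be sketched as a simple sum of the two signals: retinal image motion plus an extraretinal (proprioceptive/efference-copy) estimate of eye velocity. The additive form and the gain parameter are illustrative assumptions, not the authors' fitted model; an imperfect gain reproduces the slowed or motionless percepts during accurate pursuit.

```python
def perceived_velocity(retinal_velocity, eye_velocity, gain=1.0):
    """Toy two-channel estimate of target velocity (deg/s).

    retinal_velocity: image slip measured by the retinal displacement channel.
    eye_velocity: eye rotation reported by the proprioceptive channel,
                  scaled by a gain that captures its imperfect calibration.
    """
    return retinal_velocity + gain * eye_velocity

# Accurate pursuit of a 10 deg/s target: retinal slip is near zero,
# so the percept rests entirely on the extraretinal channel.
veridical = perceived_velocity(0.0, 10.0, gain=1.0)
# An undergained extraretinal signal makes the pursued target seem slow,
# as reported for pursuit synchronized with target motion.
slowed = perceived_velocity(0.0, 10.0, gain=0.6)
```

During fixation the same target yields `perceived_velocity(10.0, 0.0)`, which is veridical at any gain; the asymmetry between pursuit and fixation is what the eye-movement recordings above expose.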


