Look where you go: characterizing eye movements toward optic flow

2020 ◽  
Author(s):  
Hiu Mei Chow ◽  
Jonas Knöll ◽  
Matthew Madsen ◽  
Miriam Spering

When we move through our environment, objects in the visual scene create optic flow patterns on the retina. Even though optic flow is ubiquitous in everyday life, it is not well understood how our eyes naturally respond to it. In small groups of human and non-human primates, optic flow triggers intuitive, uninstructed eye movements to the pattern’s focus of expansion (Knöll, Pillow & Huk, 2018). Here we investigate whether such intuitive oculomotor responses to optic flow are generalizable to a larger group of human observers, and how eye movements are affected by motion signal strength and task instructions. Observers (n = 43) viewed expanding or contracting optic flow constructed by a cloud of moving dots radiating from or converging toward a focus of expansion that could randomly shift. Results show that 84% of observers tracked the focus of expansion with their eyes without being explicitly instructed to track. Intuitive tracking was tuned to motion signal strength: saccades landed closer to the focus of expansion and smooth tracking was more accurate when dot contrast, motion coherence, and translational speed were high. Under explicit tracking instruction, the eyes aligned with the focus of expansion more closely than without instruction. Our results highlight the sensitivity of intuitive eye movements as indicators of visual motion processing in dynamic contexts.
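A dot-cloud flow stimulus of the kind described above can be sketched as follows. This is a minimal illustration, not the authors' actual stimulus code: dot count, speed, field size, and the respawn rule are all assumptions, and contraction is obtained simply by negating the speed.

```python
import numpy as np

def radial_flow_frames(n_dots=200, n_frames=60, speed=0.02,
                       foe=(0.0, 0.0), field=1.0, seed=0):
    """Return per-frame (x, y) dot positions for a radial dot cloud.

    Dots move radially away from the focus of expansion `foe`
    (use a negative `speed` for contracting flow); dots leaving
    the square field are respawned near the focus. Parameters are
    illustrative, not taken from the study.
    """
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-field, field, size=(n_dots, 2))
    foe = np.asarray(foe)
    frames = []
    for _ in range(n_frames):
        vec = pos - foe                 # radial direction for each dot
        pos = pos + speed * vec         # dot speed scales with eccentricity
        out = np.abs(pos).max(axis=1) > field
        pos[out] = foe + rng.uniform(-0.1, 0.1, size=(out.sum(), 2))
        frames.append(pos.copy())
    return frames
```

Shifting `foe` mid-sequence would mimic the random focus-of-expansion shifts used in the experiment.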

1988 ◽  
Vol 60 (3) ◽  
pp. 940-965 ◽  
Author(s):  
M. R. Dursteler ◽  
R. H. Wurtz

1. Previous experiments have shown that punctate chemical lesions within the middle temporal area (MT) of the superior temporal sulcus (STS) produce deficits in the initiation and maintenance of pursuit eye movements (10, 34). The present experiments were designed to test the effect of such chemical lesions in an area within the STS to which MT projects, the medial superior temporal area (MST). 2. We injected ibotenic acid into localized regions of MST, and we observed two deficits in pursuit eye movements, a retinotopic deficit and a directional deficit. 3. The retinotopic deficit in pursuit initiation was characterized by the monkey's inability to match eye speed to target speed or to adjust the amplitude of the saccade made to acquire the target to compensate for target motion. This deficit was related to the initiation of pursuit to targets moving in any direction in the visual field contralateral to the side of the brain with the lesion. This deficit was similar to the deficit we found following damage to extrafoveal MT except that the affected area of the visual field frequently extended throughout the entire contralateral visual field tested. 4. The directional deficit in pursuit maintenance was characterized by a failure to match eye speed to target speed once the fovea had been brought near the moving target. This deficit occurred only when the target was moving toward the side of the lesion, regardless of whether the target began to move in the ipsilateral or contralateral visual field. There was no deficit in the amplitude of saccades made to acquire the target, or in the amplitude of the catch-up saccades made to compensate for the slowed pursuit. The directional deficit is similar to the one we described previously following chemical lesions of the foveal representation in the STS. 5. Retinotopic deficits resulted from any of our injections in MST. 
Directional deficits resulted from lesions limited to subregions within MST, particularly lesions that invaded the floor of the STS and the posterior bank of the STS just lateral to MT. Extensive damage to the densely myelinated area of the anterior bank or to the posterior parietal area on the dorsal lip of the anterior bank produced minimal directional deficits. 6. We conclude that damage to visual motion processing in MST underlies the retinotopic pursuit deficit just as it does in MT. MST appears to be a sequential step in visual motion processing that occurs before all of the visual motion information is transmitted to the brainstem areas related to pursuit.(ABSTRACT TRUNCATED AT 400 WORDS)


2008 ◽  
Vol 99 (5) ◽  
pp. 2329-2346 ◽  
Author(s):  
Ryusuke Hayashi ◽  
Kenichiro Miura ◽  
Hiromitsu Tabata ◽  
Kenji Kawano

Brief movements of a large-field visual stimulus elicit short-latency tracking eye movements termed “ocular following responses” (OFRs). To address the question of whether OFRs can be elicited by purely binocular motion signals in the absence of monocular motion cues, we measured OFRs from monkeys using dichoptic motion stimuli, the monocular inputs of which were flickering gratings in spatiotemporal quadrature, and compared them with OFRs to standard motion stimuli including monocular motion cues. Dichoptic motion did elicit OFRs, although with longer latencies and smaller amplitudes. In contrast to these findings, we observed that other types of motion stimuli categorized as non-first-order motion, which is undetectable by detectors for standard luminance-defined (first-order) motion, did not elicit OFRs, although they did evoke the sensation of motion. These results indicate that OFRs can be driven solely by cortical visual motion processing after binocular integration, which is distinct from the process incorporating non-first-order motion for elaborated motion perception. To explore the nature of dichoptic motion processing in terms of interaction with monocular motion processing, we further recorded OFRs from both humans and monkeys using our novel motion stimuli, the monocular and dichoptic motion signals of which move in opposite directions with a variable motion intensity ratio. We found that monocular and dichoptic motion signals are processed in parallel to elicit OFRs, rather than suppressing each other in a winner-take-all fashion, and the results were consistent across the species.


1997 ◽  
Vol 14 (2) ◽  
pp. 323-338 ◽  
Author(s):  
Vincent P. Ferrera ◽  
Stephen G. Lisberger

As a step toward understanding the mechanism by which targets are selected for smooth-pursuit eye movements, we examined the behavior of the pursuit system when monkeys were presented with two discrete moving visual targets. Two rhesus monkeys were trained to select a small moving target identified by its color in the presence of a moving distractor of another color. Smooth-pursuit eye movements were quantified in terms of the latency of the eye movement and the initial eye acceleration profile. We have previously shown that the latency of smooth pursuit, which is normally around 100 ms, can be extended to 150 ms or shortened to 85 ms depending on whether there is a distractor moving in the opposite or same direction, respectively, relative to the direction of the target. We have now measured this effect for a 360 deg range of distractor directions, and distractor speeds of 5–45 deg/s. We have also examined the effect of varying the spatial separation and temporal asynchrony between target and distractor. The results indicate that the effect of the distractor on the latency of pursuit depends on its direction of motion, and its spatial and temporal proximity to the target, but depends very little on the speed of the distractor. Furthermore, under the conditions of these experiments, the direction of the eye movement that is emitted in response to two competing moving stimuli is not a vectorial combination of the stimulus motions, but is solely determined by the direction of the target. The results are consistent with a competitive model for smooth-pursuit target selection and suggest that the competition takes place at a stage of the pursuit pathway that is between visual-motion processing and motor-response preparation.


2021 ◽  
Author(s):  
Scott T. Steinmetz ◽  
Oliver W. Layton ◽  
Nate V. Powell ◽  
Brett Fajen

This paper introduces a self-tuning mechanism for capturing rapid adaptation to changing visual stimuli by a population of neurons. Building upon the principles of efficient sensory encoding, we show how neural tuning curve parameters can be continually updated to optimally encode a time-varying distribution of recently detected stimulus values. We implemented this mechanism in a neural model that produces human-like estimates of self-motion direction (i.e., heading) based on optic flow. The parameters of speed-sensitive units were dynamically tuned in accordance with efficient sensory encoding such that the network remained sensitive as the distribution of optic flow speeds varied. In two simulation experiments, we found that model performance with dynamic tuning yielded more accurate, shorter latency heading estimates compared to the model with static tuning. We conclude that dynamic efficient sensory encoding offers a plausible approach for capturing adaptation to varying visual environments in biological visual systems and neural models alike.
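One simple reading of the dynamic tuning mechanism described above is to place the preferred speeds of the population at quantiles of recently observed stimulus speeds, so the units densely tile whatever distribution the environment currently produces. The sketch below illustrates that idea only; it is not the authors' model equations, and the unit count and speed distributions are assumptions.

```python
import numpy as np

def retune_speed_units(recent_speeds, n_units=8):
    """Place unit preferred speeds at evenly spaced quantiles of
    recently observed speeds, so the population allocates its
    tuning curves to match the current stimulus distribution
    (a minimal stand-in for efficient sensory encoding).
    """
    qs = (np.arange(n_units) + 0.5) / n_units
    return np.quantile(recent_speeds, qs)

# As the optic-flow speed distribution shifts (e.g., faster
# self-motion), the preferred speeds follow it.
slow = retune_speed_units(np.random.default_rng(0).gamma(2.0, 1.0, 500))
fast = retune_speed_units(np.random.default_rng(0).gamma(2.0, 4.0, 500))
```

Re-running the retuning on a sliding window of detected speeds would give the continual adaptation the abstract describes.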


2018 ◽  
Vol 4 (1) ◽  
pp. 501-523 ◽  
Author(s):  
Shin'ya Nishida ◽  
Takahiro Kawabe ◽  
Masataka Sawayama ◽  
Taiki Fukiage

Visual motion processing can be conceptually divided into two levels. In the lower level, local motion signals are detected by spatiotemporal-frequency-selective sensors and then integrated into a motion vector flow. Although the model based on V1-MT physiology provides a good computational framework for this level of processing, it needs to be updated to fully explain psychophysical findings about motion perception, including complex motion signal interactions in the spatiotemporal-frequency and space domains. In the higher level, the velocity map is interpreted. Although there are many motion interpretation processes, we highlight the recent progress in research on the perception of material (e.g., specular reflection, liquid viscosity) and on animacy perception. We then consider possible linking mechanisms of the two levels and propose intrinsic flow decomposition as the key problem. To provide insights into computational mechanisms of motion perception, in addition to psychophysics and neurosciences, we review machine vision studies seeking to solve similar problems.


2020 ◽  
Vol 117 (50) ◽  
pp. 32165-32168 ◽  
Author(s):  
Arvid Guterstam ◽  
Michael S. A. Graziano

Recent evidence suggests a link between visual motion processing and social cognition. When person A watches person B, the brain of A apparently generates a fictitious, subthreshold motion signal streaming from B to the object of B’s attention. These previous studies, being correlative, were unable to establish any functional role for the false motion signals. Here, we directly tested whether subthreshold motion processing plays a role in judging the attention of others. We asked, if we contaminate people’s visual input with a subthreshold motion signal streaming from an agent to an object, can we manipulate people’s judgments about that agent’s attention? Participants viewed a display including faces, objects, and a subthreshold motion hidden in the background. Participants’ judgments of the attentional state of the faces were significantly altered by the hidden motion signal. Faces from which subthreshold motion was streaming toward an object were judged as paying more attention to the object. Control experiments showed the effect was specific to the agent-to-object motion direction and to judging attention, not action or spatial orientation. These results suggest that when the brain models other minds, it uses a subthreshold motion signal, streaming from an individual to an object, to help represent attentional state. This type of social-cognitive model, tapping perceptual mechanisms that evolved to process physical events in the real world, may help to explain the extraordinary cultural persistence of beliefs in mind processes having physical manifestation. These findings, therefore, may have larger implications for human psychology and cultural belief.


1988 ◽  
Vol 60 (2) ◽  
pp. 580-603 ◽  
Author(s):  
H. Komatsu ◽  
R. H. Wurtz

1. Among the multiple extrastriate visual areas in monkey cerebral cortex, several areas within the superior temporal sulcus (STS) are selectively related to visual motion processing. In this series of experiments we have attempted to relate this visual motion processing at a neuronal level to a behavior that is dependent on such processing, the generation of smooth-pursuit eye movements. 2. We studied two visual areas within the STS, the middle temporal area (MT) and the medial superior temporal area (MST). For the purposes of this study, MT and MST were defined functionally as those areas within the STS having a high proportion of directionally selective neurons. MST was distinguished from MT by using the established relationship of receptive-field size to eccentricity, with MST having larger receptive fields than MT. 3. A subset of these visually responsive cells within the STS were identified as pursuit cells--those cells that discharge during smooth pursuit of a small target in an otherwise dark room. Pursuit cells were found only in localized regions--in the foveal region of MT (MTf), in a dorsal-medial area of MST on the anterior bank of the STS (MSTd), and in a lateral-anterior area of MST on the floor and the posterior bank of the STS (MSTl). 4. Pursuit cells showed two characteristics in common when their visual properties were studied while the monkey was fixating. Almost all cells showed direction selectivity for moving stimuli and included the fovea within their receptive fields. 5. The visual response of pursuit cells in the several areas differed in two ways. Cells in MTf preferred small moving spots of light, whereas cells in MSTd preferred large moving stimuli, such as a pattern of random dots. Cells in MTf had small receptive fields; those in MSTd usually had large receptive fields. Visual responses of pursuit neurons in MSTl were heterogeneous; some resembled those in MTf, whereas others were similar to those in MSTd. This suggests that the pursuit cells in MSTd and MSTl belong to different subregions of MST.


Neuron ◽  
2009 ◽  
Vol 62 (5) ◽  
pp. 717-732 ◽  
Author(s):  
Natsuko Shichinohe ◽  
Teppei Akao ◽  
Sergei Kurkin ◽  
Junko Fukushima ◽  
Chris R.S. Kaneko ◽  
...  
