Dynamics of gaze control during prey capture in freely moving mice

Author(s):  
Angie M. Michaiel ◽  
Elliott T.T. Abe ◽  
Cristopher M. Niell

Many studies of visual processing are conducted in unnatural conditions, such as head- and gaze-fixation. As this radically limits natural exploration of the visual environment, much less is known about how animals actively use their sensory systems to acquire visual information in natural, goal-directed contexts. Recently, prey capture has emerged as an ethologically relevant behavior that mice perform without training, and that engages vision for accurate orienting and pursuit. However, it is unclear how mice target their gaze during such natural behaviors, particularly since, in contrast to many predatory species, mice have a narrow binocular field and lack the foveate vision that would require fixing their gaze on a specific point in the visual field. Here we measured head and bilateral eye movements in freely moving mice performing prey capture. We find that the majority of eye movements are compensatory for head movements, thereby acting to stabilize the visual scene. During head turns, however, these periods of stabilization are interspersed with non-compensatory saccades that abruptly shift gaze position. Analysis of eye movements relative to the cricket position shows that the saccades do not preferentially select a specific point in the visual scene. Rather, orienting movements are driven by the head, with the eyes following in coordination to sequentially stabilize and recenter the gaze. These findings help relate eye movements in the mouse to other species, and provide a foundation for studying active vision during ethological behaviors in the mouse.
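
A minimal sketch of the kind of analysis this implies: classifying eye movements as compensatory or saccadic from synchronized head-yaw and eye-position traces. All function names, sampling rates, and thresholds below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def classify_eye_movements(head_yaw_deg, eye_pos_deg, fs=60.0,
                           saccade_thresh_dps=100.0):
    """Split eye movements into compensatory and saccadic epochs.

    head_yaw_deg : head yaw angle over time (degrees)
    eye_pos_deg  : horizontal eye-in-head position over time (degrees)
    fs           : sampling rate (Hz)
    """
    head_vel = np.gradient(head_yaw_deg) * fs   # deg/s
    eye_vel = np.gradient(eye_pos_deg) * fs     # deg/s
    gaze_vel = head_vel + eye_vel               # eye-in-world velocity

    # Non-compensatory saccades: fast eye movements that shift gaze.
    is_saccade = np.abs(eye_vel) > saccade_thresh_dps

    # Compensatory epochs: eye velocity counters head velocity,
    # so gaze velocity is much smaller than head velocity alone.
    is_compensatory = (~is_saccade) & (np.abs(gaze_vel) < 0.5 * np.abs(head_vel))
    return is_compensatory, is_saccade
```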

eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Angie M Michaiel ◽  
Elliott TT Abe ◽  
Cristopher M Niell

Many studies of visual processing are conducted in constrained conditions such as head- and gaze-fixation, and therefore less is known about how animals actively acquire visual information in natural contexts. To determine how mice target their gaze during natural behavior, we measured head and bilateral eye movements in mice performing prey capture, an ethological behavior that engages vision. We found that the majority of eye movements are compensatory for head movements, thereby serving to stabilize the visual scene. During movement, however, periods of stabilization are interspersed with non-compensatory saccades that abruptly shift gaze position. Notably, these saccades do not preferentially target the prey location. Rather, orienting movements are driven by the head, with the eyes following in coordination to sequentially stabilize and recenter the gaze. These findings relate eye movements in the mouse to other species, and provide a foundation for studying active vision during ethological behaviors in the mouse.


2020 ◽  
Author(s):  
Han Zhang ◽  
Nicola C Anderson ◽  
Kevin Miller

Recent studies have shown that mind-wandering (MW) is associated with changes in eye movement parameters, but have not explored how MW affects the sequential pattern of eye movements involved in making sense of complex visual information. Eye movements naturally unfold over time, and this process may reveal novel information about cognitive processing during MW. The current study used Recurrence Quantification Analysis (RQA; Anderson, Bischof, Laidlaw, Risko, & Kingstone, 2013) to describe the pattern of refixations (fixations directed to previously inspected regions) during MW. Participants completed a real-world scene encoding task and responded to thought probes assessing intentional and unintentional MW. Both types of MW were associated with worse memory of the scenes. Importantly, RQA showed that scanpaths during unintentional MW were more repetitive than during on-task episodes, as indicated by a higher recurrence rate and more stereotypical fixation sequences. This increased repetitiveness suggests an adaptive response to processing failures through re-examining previous locations. Moreover, it led fixations to concentrate on a smaller spatial scale of the stimuli. Finally, we were also able to validate several traditional measures: both intentional and unintentional MW were associated with fewer and longer fixations, and eye blinking increased numerically during both types of MW, although the difference was only significant for unintentional MW. Overall, the results advance our understanding of how visual processing is affected during MW by highlighting the sequential aspect of eye movements.
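
Recurrence rate, the core RQA measure used here, can be sketched directly: the percentage of fixation pairs that land within a given radius of each other. A minimal illustration, with the radius an assumed parameter:

```python
import numpy as np

def recurrence_rate(fixations_xy, radius_px=64.0):
    """Percentage of fixation pairs that revisit the same region.

    fixations_xy : (n, 2) array of fixation coordinates in scan order
    radius_px    : two fixations within this distance count as recurrent
    """
    n = len(fixations_xy)
    if n < 2:
        return 0.0
    d = np.linalg.norm(fixations_xy[:, None, :] - fixations_xy[None, :, :],
                       axis=-1)
    recurrent = d < radius_px
    # Count only pairs i < j (upper triangle, excluding the diagonal).
    r = np.triu(recurrent, k=1).sum()
    return 100.0 * 2.0 * r / (n * (n - 1))
```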


Author(s):  
Arne F. Meyer ◽  
John O’Keefe ◽  
Jasper Poort

Animals actively interact with their environment to gather sensory information. There is conflicting evidence about how mice use vision to sample their environment. During head restraint, mice make rapid eye movements strongly coupled between the eyes, similar to conjugate saccadic eye movements in humans. However, when mice are free to move their heads, eye movement patterns are more complex and often non-conjugate, with the eyes moving in opposite directions. Here, we combined eye tracking with head motion measurements in freely moving mice and found that both observations can be explained by the existence of two distinct types of coupling between eye and head movements. The first type comprised non-conjugate eye movements which systematically compensated for changes in head tilt to maintain approximately the same visual field relative to the horizontal ground plane. The second type of eye movements were conjugate and coupled to head yaw rotation to produce a “saccade and fixate” gaze pattern. During head-initiated saccades, the eyes moved together in the same direction as the head, but during subsequent fixation moved in the opposite direction to the head to compensate for head rotation. This “saccade and fixate” pattern is similar to that seen in humans, who use eye movements (with or without head movement) to rapidly shift gaze, but in mice relies on combined eye and head movements. Indeed, the two types of eye movements very rarely occurred in the absence of head movements. Even in head-restrained mice, eye movements were invariably associated with attempted head motion. Both types of eye-head coupling were seen in freely moving mice during social interactions and a visually-guided object tracking task. Our results reveal that mice use a combination of head and eye movements to sample their environment and highlight the similarities and differences between eye movements in mice and humans.

Highlights:
- Tracking of eyes and head in freely moving mice reveals two types of eye-head coupling
- Eye/head tilt coupling aligns gaze to the horizontal plane
- Rotational eye and head coupling produces a “saccade and fixate” gaze pattern with the head leading the eye
- Both types of eye-head coupling are maintained during visually-guided behaviors
- Eye movements in head-restrained mice are related to attempted head movements
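
The two couplings can be illustrated by decomposing bilateral eye traces into conjugate and non-conjugate components and correlating each with the corresponding head signal. A minimal sketch, with all names and inputs assumed for illustration:

```python
import numpy as np

def decompose_eye_movements(left_eye_deg, right_eye_deg):
    """Split bilateral horizontal eye positions into a conjugate
    (eyes together) and a non-conjugate (eyes apart) component."""
    conjugate = 0.5 * (left_eye_deg + right_eye_deg)
    non_conjugate = 0.5 * (left_eye_deg - right_eye_deg)
    return conjugate, non_conjugate

def coupling(eye_component, head_signal):
    """Correlation between an eye-movement component and a head signal,
    e.g., head tilt for the non-conjugate part, head yaw for the
    conjugate part."""
    return np.corrcoef(eye_component, head_signal)[0, 1]
```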


2021 ◽  
Vol 7 (30) ◽  
pp. eabf2218 ◽  
Author(s):  
Richard Schweitzer ◽  
Martin Rolfs

Rapid eye movements (saccades) incessantly shift objects across the retina. To establish object correspondence, the visual system is thought to match surface features of objects across saccades. Here, we show that an object’s intrasaccadic retinal trace (a signal previously considered unavailable to visual processing) facilitates this matchmaking. Human observers made saccades to a cued target in a circular stimulus array. Using high-speed visual projection, we swiftly rotated this array during the eyes’ flight, displaying continuous intrasaccadic target motion. Observers’ saccades landed between the target and a distractor, prompting secondary saccades. Independently of the availability of object features, which we controlled tightly, target motion increased the rate and reduced the latency of gaze-correcting saccades to the initial presaccadic target, in particular when the target’s stimulus features incidentally gave rise to efficient motion streaks. These results suggest that intrasaccadic visual information informs the establishment of object correspondence and jump-starts gaze correction.
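
One technical ingredient here is detecting saccade onset online so the display can be updated during the eyes' flight. A minimal sketch of such a velocity-threshold trigger on streaming gaze samples; the sampling rate and threshold are illustrative assumptions, not the authors' projection pipeline:

```python
import numpy as np

def detect_saccade_onset(gaze_x, gaze_y, fs=1000.0, vel_thresh_dps=30.0):
    """Return the first sample index at which gaze speed exceeds the
    threshold, i.e., the moment a display update could be triggered
    mid-saccade. Returns None if no saccade is found."""
    vx = np.gradient(gaze_x) * fs           # deg/s
    vy = np.gradient(gaze_y) * fs
    speed = np.hypot(vx, vy)
    onsets = np.flatnonzero(speed > vel_thresh_dps)
    return int(onsets[0]) if onsets.size else None
```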


2019 ◽  
Author(s):  
Jack Lindsey ◽  
Samuel A. Ocko ◽  
Surya Ganguli ◽  
Stephane Deny

The vertebrate visual system is hierarchically organized to process visual information in successive stages. Neural representations vary drastically across the first stages of visual processing: at the output of the retina, ganglion cell receptive fields (RFs) exhibit a clear antagonistic center-surround structure, whereas in the primary visual cortex (V1), typical RFs are sharply tuned to a precise orientation. There is currently no unified theory explaining these differences in representations across layers. Here, using a deep convolutional neural network trained on image recognition as a model of the visual system, we show that such differences in representation can emerge as a direct consequence of different neural resource constraints on the retinal and cortical networks, and for the first time we find a single model from which both geometries spontaneously emerge at the appropriate stages of visual processing. The key constraint is a reduced number of neurons at the retinal output, consistent with the anatomy of the optic nerve as a stringent bottleneck. Second, we find that for simple downstream cortical networks, visual representations at the retinal output emerge as nonlinear and lossy feature detectors, whereas they emerge as linear and faithful encoders of the visual scene for more complex cortical networks. This result predicts that the retinas of small vertebrates (e.g., salamander, frog) should perform sophisticated nonlinear computations, extracting features directly relevant to behavior, whereas retinas of large animals such as primates should mostly encode the visual scene linearly and respond to a much broader range of stimuli. These predictions could reconcile the two seemingly incompatible views of the retina as either performing feature extraction or efficient coding of natural scenes, by suggesting that all vertebrates lie on a spectrum between these two objectives, depending on the degree of neural resources allocated to their visual system.
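
A minimal sketch of the kind of bottlenecked architecture described, in PyTorch: a shallow "retinal" stage squeezed to a few channels (the optic-nerve bottleneck) feeding a deeper "cortical" stage. Channel counts, kernel sizes, and class count are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class RetinaCortexNet(nn.Module):
    """Image-recognition network with a narrow 'retinal' bottleneck,
    mimicking the optic nerve, followed by a deeper 'cortical' stack."""

    def __init__(self, bottleneck_channels=4, n_classes=10):
        super().__init__()
        self.retina = nn.Sequential(              # retina-like stage
            nn.Conv2d(1, 32, 9, padding=4), nn.ReLU(),
            nn.Conv2d(32, bottleneck_channels, 9, padding=4), nn.ReLU(),
        )
        self.cortex = nn.Sequential(              # cortex-like stage
            nn.Conv2d(bottleneck_channels, 32, 9, padding=4), nn.ReLU(),
            nn.Conv2d(32, 32, 9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        # Varying bottleneck_channels and cortex depth is what lets
        # center-surround vs. oriented RFs emerge at different stages.
        return self.cortex(self.retina(x))
```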


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Enny H. van Beest ◽  
Sreedeep Mukherjee ◽  
Lisa Kirchberger ◽  
Ulf H. Schnabel ◽  
Chris van der Togt ◽  
...  

The representation of space in mouse visual cortex was thought to be relatively uniform. Here we reveal, using population receptive-field (pRF) mapping techniques, that mouse visual cortex contains a region in which pRFs are considerably smaller. This region, the “focea,” represents a location in space in front of, and slightly above, the mouse. Using two-photon imaging we show that the smaller pRFs are due to lower scatter of receptive fields at the focea and an over-representation of binocular regions of space. We show that receptive fields of single neurons in areas LM and AL are smaller at the focea and that mice have improved visual resolution in this region of space. Furthermore, freely moving mice make compensatory eye movements to hold this region in front of them. Our results indicate that mice have spatial biases in their visual processing, a finding that has important implications for the use of the mouse as a model of vision.
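
Population receptive-field mapping of this kind can be illustrated with a brute-force Gaussian pRF fit: predict each candidate pRF's response to the stimulus apertures and keep the best-correlated one. A minimal sketch under assumed inputs, not the authors' pipeline:

```python
import numpy as np

def fit_prf(responses, stim_frames, xs, ys, sigmas):
    """Grid-search a 2D Gaussian pRF (x0, y0, sigma) that maximizes the
    correlation between predicted and observed responses.

    responses   : (t,) measured response per stimulus frame
    stim_frames : (t, h, w) binary stimulus aperture per frame
    xs, ys, sigmas : candidate pRF centers and sizes (pixels)
    """
    h, w = stim_frames.shape[1:]
    yy, xx = np.mgrid[0:h, 0:w]
    flat_stim = stim_frames.reshape(len(stim_frames), -1)
    best_r, best_params = -np.inf, None
    for x0 in xs:
        for y0 in ys:
            for s in sigmas:
                g = np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * s**2))
                pred = flat_stim @ g.ravel()   # pRF overlap per frame
                r = np.corrcoef(pred, responses)[0, 1]
                if r > best_r:
                    best_r, best_params = r, (x0, y0, s)
    return best_params, best_r
```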


2020 ◽  
Author(s):  
Nicholas Sattler ◽  
Michael Wehr

Advances in the ability to monitor freely-moving mice may prove valuable for the study of behavior and its neural correlates. Here we describe a head-mounted multi-camera system for mice, composed of inexpensive miniature analog camera modules. We illustrate the use of this system with several natural behaviors, including prey capture, courtship, jumping, and exploration. With a four-camera headset, monitoring the eyes, ears, whiskers, rhinarium, and binocular visual field can all be achieved simultaneously with high-density electrophysiology. With appropriate focus and positioning, all eye movements can be captured, including cyclotorsion. For studies of vision and eye movements, cyclotorsion provides the final degree of freedom required to reconstruct the visual scene in retinotopic coordinates or to investigate the vestibulo-ocular reflex in mice. Altogether, this system allows for comprehensive measurement of freely-moving mouse behavior, enabling a more holistic and multimodal approach to investigate ethological behaviors and other processes of active perception.
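
For a sense of the software side, here is a minimal sketch of roughly synchronized multi-camera capture with OpenCV. The described system uses analog camera modules through a frame grabber, so the device indices and capture API here are generic illustrative assumptions:

```python
import cv2

def record_headset(cam_indices=(0, 1, 2, 3), n_frames=1000):
    """Grab roughly synchronized frame sets from several camera devices.

    cam_indices : capture-device indices for the head-mounted cameras
    """
    caps = [cv2.VideoCapture(i) for i in cam_indices]
    frame_sets = []
    for _ in range(n_frames):
        # grab() on all devices first, then retrieve(), to minimize
        # inter-camera latency within each frame set.
        for c in caps:
            c.grab()
        frame_sets.append([c.retrieve()[1] for c in caps])
    for c in caps:
        c.release()
    return frame_sets
```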


2021 ◽  
Vol 125 (5) ◽  
pp. 1552-1576
Author(s):  
David Souto ◽  
Dirk Kerzel

People’s eyes are directed at objects of interest with the aim of acquiring visual information. However, processing this information is constrained in capacity, requiring task-driven and salience-driven attentional mechanisms to select a few among the many available objects. A wealth of behavioral and neurophysiological evidence has demonstrated that visual selection and the motor selection of saccade targets rely on shared mechanisms. This coupling supports the premotor theory of visual attention put forth more than 30 years ago, which postulates visual selection as a necessary stage in motor selection. In this review, we examine to what extent the coupling of visual and motor selection observed with saccades is replicated during ocular tracking. Ocular tracking combines catch-up saccades and smooth pursuit to foveate a moving object. We find evidence that ocular tracking requires visual selection of the speed and direction of the moving target, but the position of the motion signal may not coincide with the position of the pursuit target. Further, visual and motor selection can be spatially decoupled when pursuit is initiated (open-loop pursuit). We propose that a main function of coupled visual and motor selection is to serve the coordination of catch-up saccades and pursuit eye movements. A simple race-to-threshold model is proposed to explain the variable coupling of visual selection during pursuit, catch-up saccades, and regular saccades, while generating testable predictions. We discuss pending issues, such as disentangling visual selection from preattentive visual processing and response selection, and the pinpointing of visual selection mechanisms, which have begun to be addressed in the neurophysiological literature.
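
The proposed race-to-threshold account can be illustrated with a toy simulation of independent noisy accumulators: the first to reach a common bound determines which movement is selected and its latency. A minimal sketch, with all parameters assumed:

```python
import numpy as np

def race_to_threshold(drift_rates, threshold=1.0, noise_sd=0.1,
                      dt=0.001, max_t=1.0, rng=None):
    """Simulate independent noisy accumulators racing to a common bound.

    Returns (winner_index, latency_s): the first accumulator to reach
    the threshold determines the selected response and its latency.
    """
    rng = rng or np.random.default_rng()
    rates = np.asarray(drift_rates, dtype=float)
    x = np.zeros(len(rates))
    for step in range(int(max_t / dt)):
        # Euler step of a drift-diffusion process for each accumulator.
        x += rates * dt + rng.normal(0.0, noise_sd * np.sqrt(dt), len(rates))
        if (x >= threshold).any():
            return int(np.argmax(x)), (step + 1) * dt
    return None, max_t   # no accumulator reached the bound in time
```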


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Carl D Holmgren ◽  
Paul Stahr ◽  
Damian J Wallace ◽  
Kay-Michael Voit ◽  
Emily J Matheson ◽  
...  

Mice have a large visual field that is constantly stabilized by vestibulo-ocular reflex (VOR)-driven eye rotations that counter head rotations. While maintaining this extensive visual coverage is advantageous for predator detection, mice also track and capture prey using vision. However, quantifying object location in the field of view of a freely moving animal is challenging. Here, we developed a method to digitally reconstruct and quantify the visual scene of freely moving mice performing a visually based prey capture task. By isolating the visual sense and combining a model of the mouse eye's optics with the measured head and eye rotations, we projected a detailed reconstruction of the digital environment and retinal features onto the corneal surface and updated it throughout the behavior. By quantifying the spatial location of objects in the visual scene and their motion throughout the behavior, we show that the prey image consistently falls within a small area of the VOR-stabilized visual field. This functional focus coincides with the region of minimal optic flow within the visual field and consequently the area of minimal motion-induced image blur, since during pursuit mice run directly toward the prey. The functional focus lies in the upper-temporal part of the retina and coincides with the reported high-density region of Alpha-ON sustained retinal ganglion cells.
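
The core geometric step, locating an object in eye-centered coordinates from head and eye rotations, can be sketched as follows; the rotation matrices and frame conventions are assumptions for illustration, not the authors' optic model:

```python
import numpy as np

def prey_in_gaze_coords(prey_xyz, head_xyz, head_R, eye_R):
    """Angular position of the prey in eye-centered coordinates.

    prey_xyz, head_xyz : 3-vectors in the arena frame
    head_R, eye_R      : 3x3 rotation matrices (arena->head, head->eye)
    Returns (azimuth_deg, elevation_deg) of the prey in the visual field.
    """
    # Translate to the head, then rotate into head and eye frames.
    v = eye_R @ (head_R @ (np.asarray(prey_xyz) - np.asarray(head_xyz)))
    azimuth = np.degrees(np.arctan2(v[0], v[2]))    # + = rightward
    elevation = np.degrees(np.arctan2(v[1], np.hypot(v[0], v[2])))
    return azimuth, elevation
```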


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Tatiana Malevich ◽  
Antimo Buonocore ◽  
Ziad M Hafed

The eyes are never still during maintained gaze fixation. When microsaccades are not occurring, ocular position exhibits continuous slow changes, often referred to as drifts. Unlike microsaccades, drifts are still widely viewed as largely random eye movements. Here we found that ocular position drifts can, instead, be very systematically stimulus-driven, and with very short latencies. We used highly precise eye tracking in three well-trained macaque monkeys and found that even fleeting (~8 ms duration) stimulus presentations can robustly trigger transient and stimulus-specific modulations of ocular position drifts, with only approximately 60 ms latency. Such drift responses are binocular, and they are most effectively elicited with large stimuli of low spatial frequency. Intriguingly, the drift responses exhibit some image-pattern selectivity, and they are not explained by convergence responses, pupil constrictions, head movements, or starting eye positions. Ocular position drifts thus have very rapid access to exogenous visual information.
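
A minimal sketch of how such drift responses could be quantified: low-pass filter eye position to isolate slow drift, convert to velocity, and average stimulus-aligned segments while crudely screening out microsaccades. Cutoffs and thresholds are illustrative assumptions, not the authors' analysis:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def drift_response(eye_pos_deg, stim_onsets, fs=1000.0, win_s=0.3,
                   cutoff_hz=20.0, vel_limit_dps=30.0):
    """Average slow eye velocity aligned to stimulus onset.

    eye_pos_deg : eye position trace (degrees)
    stim_onsets : stimulus onset times as sample indices
    """
    # Low-pass filter to isolate slow drift from high-frequency noise.
    b, a = butter(2, cutoff_hz / (fs / 2))
    slow_pos = filtfilt(b, a, eye_pos_deg)
    vel = np.gradient(slow_pos) * fs          # deg/s
    n = int(win_s * fs)
    trials = []
    for t0 in stim_onsets:
        seg = vel[t0:t0 + n]
        # Crude microsaccade screen: drop segments with fast movements.
        if len(seg) == n and np.abs(seg).max() < vel_limit_dps:
            trials.append(seg)
    return np.mean(trials, axis=0)
```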

