Two distinct types of eye-head coupling in freely moving mice

Author(s):  
Arne F. Meyer ◽  
John O’Keefe ◽  
Jasper Poort

Summary
Animals actively interact with their environment to gather sensory information. There is conflicting evidence about how mice use vision to sample their environment. During head restraint, mice make rapid eye movements that are strongly coupled between the eyes, similar to conjugate saccadic eye movements in humans. However, when mice are free to move their heads, eye movement patterns are more complex and often non-conjugate, with the eyes moving in opposite directions. Here, we combined eye tracking with head motion measurements in freely moving mice and found that both observations can be explained by the existence of two distinct types of coupling between eye and head movements. The first type comprised non-conjugate eye movements that systematically compensated for changes in head tilt to maintain approximately the same visual field relative to the horizontal ground plane. The second type comprised conjugate eye movements coupled to head yaw rotation to produce a “saccade and fixate” gaze pattern. During head-initiated saccades, the eyes moved together in the same direction as the head, but during subsequent fixation they moved in the opposite direction to the head to compensate for head rotation. This “saccade and fixate” pattern is similar to that seen in humans, who use eye movements (with or without head movement) to rapidly shift gaze, but in mice it relies on combined eye and head movements. Indeed, the two types of eye movements very rarely occurred in the absence of head movements. Even in head-restrained mice, eye movements were invariably associated with attempted head motion. Both types of eye-head coupling were seen in freely moving mice during social interactions and a visually guided object-tracking task. Our results reveal that mice use a combination of head and eye movements to sample their environment and highlight the similarities and differences between eye movements in mice and humans.

Highlights
- Tracking of the eyes and head in freely moving mice reveals two types of eye-head coupling
- Eye/head tilt coupling aligns gaze to the horizontal plane
- Rotational eye and head coupling produces a “saccade and fixate” gaze pattern with the head leading the eye
- Both types of eye-head coupling are maintained during visually guided behaviors
- Eye movements in head-restrained mice are related to attempted head movements
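The two couplings described above lend themselves to a simple quantitative check. Below is a minimal sketch (not the authors' analysis code; the variable names, units, and the 200 deg/s saccade threshold are assumptions) of how one might estimate tilt coupling by regressing eye position on head pitch and roll, and split yaw-coupled eye movements into saccadic and compensatory samples.

```python
import numpy as np

def tilt_coupling_gains(head_pitch, head_roll, eye_vertical):
    """Least-squares gains of vertical eye position (deg) against head
    pitch and roll (deg); gain magnitudes near 1 would indicate eye
    movements that compensate for head tilt."""
    X = np.column_stack([head_pitch, head_roll, np.ones_like(head_pitch)])
    coef, *_ = np.linalg.lstsq(X, eye_vertical, rcond=None)
    return coef[:2]  # (pitch gain, roll gain)

def split_saccade_fixate(eye_yaw_vel, head_yaw_vel, saccade_thresh=200.0):
    """Label samples as saccadic (fast eye movement in the same direction
    as head yaw) or compensatory (the remainder, opposing the head)."""
    saccadic = (np.abs(eye_yaw_vel) > saccade_thresh) & \
               (np.sign(eye_yaw_vel) == np.sign(head_yaw_vel))
    return saccadic, ~saccadic
```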

2020 ◽  
Author(s):  
Walter F. Bischof ◽  
Nicola C Anderson ◽  
Michael T. Doswell ◽  
Alan Kingstone

How do we explore the visual environment around us, and how are head and eye movements coordinated during our exploration? To investigate this question, we had observers look at omni-directional panoramic scenes, composed of both landscape and fractal images, using a virtual-reality (VR) viewer while their eye and head movements were tracked. We analyzed the spatial distribution of eye fixations and the distribution of saccade directions; the spatial distribution of head positions and the distribution of head shifts; and the relation between eye and head movements. The results show that, for landscape scenes, eye and head behaviour best fit the allocentric frame defined by the scene horizon, especially when head tilt (i.e., head rotation around the view axis) is considered. For fractal scenes, which have an isotropic texture, eye and head movements were executed primarily along the cardinal directions in world coordinates. The results also show that eye and head movements are closely linked in space and time in a complementary way, with stimulus-driven eye movements predominantly leading the head movements. Our study is the first to systematically examine eye and head movements in a panoramic VR environment, and the results demonstrate that a VR environment constitutes a powerful and informative research alternative to traditional methods for investigating looking behaviour.


Author(s):  
Angie M. Michaiel ◽  
Elliott T.T. Abe ◽  
Cristopher M. Niell

Abstract
Many studies of visual processing are conducted in unnatural conditions, such as head- and gaze-fixation. As this radically limits natural exploration of the visual environment, much less is known about how animals actively use their sensory systems to acquire visual information in natural, goal-directed contexts. Recently, prey capture has emerged as an ethologically relevant behavior that mice perform without training and that engages vision for accurate orienting and pursuit. However, it is unclear how mice target their gaze during such natural behaviors, particularly since, in contrast to many predatory species, mice have a narrow binocular field and lack the foveate vision that would favor fixing their gaze on a specific point in the visual field. Here we measured head and bilateral eye movements in freely moving mice performing prey capture. We find that the majority of eye movements are compensatory for head movements, thereby acting to stabilize the visual scene. During head turns, however, these periods of stabilization are interspersed with non-compensatory saccades that abruptly shift gaze position. Analysis of eye movements relative to the cricket position shows that the saccades do not preferentially select a specific point in the visual scene. Rather, orienting movements are driven by the head, with the eyes following in coordination to sequentially stabilize and recenter the gaze. These findings help relate eye movements in the mouse to other species, and provide a foundation for studying active vision during ethological behaviors in the mouse.
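A gaze-based decomposition like the one described here can be sketched in a few lines. The following is an illustrative outline only (the 300 deg/s threshold and 60 Hz sample rate are assumptions, not the paper's parameters): compute horizontal gaze as head yaw plus eye-in-head position, flag abrupt gaze shifts as non-compensatory saccades, and treat the remaining samples as compensatory stabilization.

```python
import numpy as np

def decompose_gaze(head_yaw, eye_in_head, fs=60.0, vel_thresh=300.0):
    """Split samples into gaze-shifting saccades and stabilized periods."""
    gaze = head_yaw + eye_in_head              # horizontal gaze (deg)
    gaze_vel = np.gradient(gaze) * fs          # deg/s
    saccade = np.abs(gaze_vel) > vel_thresh    # abrupt gaze shifts
    return gaze, saccade, ~saccade             # ~saccade: stabilization
```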


2002 ◽  
Vol 87 (2) ◽  
pp. 912-924 ◽  
Author(s):  
H. Rambold ◽  
A. Churchland ◽  
Y. Selig ◽  
L. Jasmin ◽  
S. G. Lisberger

The vestibuloocular reflex (VOR) generates compensatory eye movements to stabilize visual images on the retina during head movements. The amplitude of the reflex is calibrated continuously throughout life and undergoes adaptation, also called motor learning, when head movements are persistently associated with image motion. Although the floccular complex of the cerebellum is necessary for VOR adaptation, it is not known whether this function is localized in its anterior or posterior portions, which comprise the ventral paraflocculus and flocculus, respectively. The present paper reports the effects of partial lesions of the floccular complex in five macaque monkeys, made either surgically or with stereotaxic injection of 3-nitropropionic acid (3-NP). Before and after the lesions, smooth pursuit eye movements were tested during sinusoidal and step-ramp target motion. Cancellation of the VOR was tested by moving a target exactly with the monkey during sinusoidal head rotation. The control VOR was tested during sinusoidal head rotation in the dark and during 30°/s pulses of head velocity. VOR adaptation was studied by having the monkeys wear ×2 or ×0.25 optics for 4–7 days. In two monkeys, bilateral lesions removed all of the flocculus except for parts of folia 1 and 2 but did not produce any deficits in smooth pursuit, VOR adaptation, or VOR cancellation. We conclude that the flocculus alone probably is not necessary for either pursuit or VOR learning. In two monkeys, unilateral lesions including a large fraction of the ventral paraflocculus produced small deficits in horizontal and vertical smooth pursuit and mild impairments of VOR adaptation and VOR cancellation. We conclude that the ventral paraflocculus contributes to both behaviors. In one monkey, a bilateral lesion of the flocculus and ventral paraflocculus produced severe deficits in smooth pursuit and VOR cancellation and a complete loss of VOR adaptation. Considering all five cases together, there was a strong correlation between the size of the deficits in VOR learning and pursuit. We found the strongest correlation between the behavioral deficits and the size of the lesion of the ventral paraflocculus, a weaker but significant correlation for the full floccular complex, and no correlation with the size of the lesion of the flocculus. We conclude that 1) lesions of the floccular complex cause linked deficits in smooth pursuit and VOR adaptation, and 2) the relevant portions of the structure are primarily in the ventral paraflocculus, although the flocculus may participate.


1994 ◽  
Vol 72 (2) ◽  
pp. 928-953 ◽  
Author(s):  
S. G. Lisberger ◽  
T. A. Pavelko ◽  
D. M. Broussard

1. We recorded from neurons in the brain stem of monkeys before and after they had worn magnifying or miniaturizing spectacles to cause changes in the gain of the vestibuloocular reflex (VOR). The gain of the VOR was estimated as eye speed divided by head speed during passive horizontal head rotation in darkness. Electrical stimulation in the cerebellum was used to identify neurons that receive inhibition at monosynaptic latencies from the flocculus and ventral paraflocculus (flocculus target neurons, or FTNs). Cells were studied during smooth pursuit eye movements with the head stationary, fixation of different positions, cancellation of the VOR, and the VOR evoked by rapid changes in head velocity.

2. FTNs were divided into two populations according to their responses during pursuit with the head stationary. The two groups showed increased firing during smooth eye motion toward the side of recording (eye-ipsiversive, or E-i) or away from the side of recording (eye-contraversive, or E-c). A higher percentage of FTNs showed increased firing rate for contraversive pursuit when the gain of the VOR was high (≥1.6) than when the gain of the VOR was low (≤0.4).

3. Changes in the gain of the VOR had a striking effect on the responses during the VOR for the FTNs that were E-c during pursuit with the head stationary. Firing rate increased during contraversive VOR eye movements when the gain of the VOR was high or normal and decreased during contraversive VOR eye movements when the gain of the VOR was low. Changes in the gain of the VOR caused smaller changes in the responses during the VOR of FTNs that were E-i during pursuit with the head stationary. We argue that motor learning in the VOR is the result of changes in the responses of individual FTNs.

4. The responses of E-i and E-c FTNs during cancellation of the VOR depended on the gain of the VOR. Responses tended to be in phase with contraversive head motion when the gain of the VOR was low and in phase with ipsiversive head motion when the gain of the VOR was high. Comparison of the effect of motor learning on the responses of FTNs during cancellation of the VOR with the results of similar experiments on horizontal gaze velocity Purkinje cells in the flocculus and ventral paraflocculus suggests that the brain stem vestibular inputs to FTNs are one site of motor learning in the VOR. (ABSTRACT TRUNCATED AT 400 WORDS)
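The gain definition used above (eye speed divided by head speed during passive rotation in darkness) can be written as a one-function sketch. This is an illustrative assumption about the computation, not the recording or analysis code, and it assumes synchronized eye and head velocity traces in deg/s.

```python
import numpy as np

def vor_gain(eye_vel, head_vel):
    """Unsigned VOR gain: |slope| of eye velocity regressed on head
    velocity (no intercept). A compensatory VOR drives the eyes opposite
    the head, so the raw slope is negative and its magnitude is the gain."""
    slope = np.dot(eye_vel, head_vel) / np.dot(head_vel, head_vel)
    return abs(slope)
```

With this measure, the high- and low-gain states referred to above correspond to values of roughly 1.6 or more after magnifying spectacles and 0.4 or less after miniaturizing spectacles.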


2003 ◽  
Vol 13 (2-3) ◽  
pp. 79-91 ◽
Author(s):  
Stefano Ramat ◽  
Roberto Schmid ◽  
Daniela Zambarbieri

Passive head rotation in darkness produces vestibular nystagmus, consisting of slow and quick phases. The vestibulo-ocular reflex produces the slow phases, in the compensatory direction, while the fast phases, in the same direction as head rotation, are of saccadic origin. We have investigated how the saccadic components of the ocular motor responses evoked by active head rotation in darkness are generated, assuming the only available sensory information is that provided by the vestibular system. We recorded the eye and head movements of nine normal subjects during active head rotation in darkness. Subjects were instructed to rotate their heads in a sinusoidal-like manner and to focus their attention on producing a smooth head rotation. We found that the desired eye position signal provided to the saccadic mechanism by the vestibular system may be modeled as a linear combination of head velocity and head displacement information. Here we present a mathematical model for the generation of both the slow and quick phases of vestibular nystagmus based on our findings. Simulations of this model accurately fit experimental data recorded from subjects.
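The linear-combination model mentioned above can be expressed compactly. A minimal fitting sketch follows, under assumed variable names: the desired eye position driving each quick phase is regressed on head velocity and head displacement at quick-phase onset.

```python
import numpy as np

def fit_quick_phase_model(head_vel, head_disp, desired_eye_pos):
    """Least-squares coefficients (a, b, c) in
    desired_eye_pos ~ a * head_vel + b * head_disp + c."""
    X = np.column_stack([head_vel, head_disp, np.ones_like(head_vel)])
    coef, *_ = np.linalg.lstsq(X, desired_eye_pos, rcond=None)
    return coef
```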


2019 ◽  
Vol 122 (5) ◽  
pp. 1946-1961 ◽  
Author(s):  
Harbandhan Kaur Arora ◽  
Vishal Bharmauria ◽  
Xiaogang Yan ◽  
Saihong Sun ◽  
Hongying Wang ◽  
...  

Nonhuman primates have been used extensively to study eye-head coordination and eye-hand coordination, but the combination—eye-head-hand coordination—has not been studied. Our goal was to determine whether reaching influences eye-head coordination (and vice versa) in rhesus macaques. Eye, head, and hand motion were recorded in two animals with search coil and touch screen technology, respectively. Animals were seated in a customized “chair” that allowed unencumbered head motion and reaching in depth. In the reach condition, animals were trained to touch a central LED at waist level while maintaining central gaze and were then rewarded if they touched a target appearing at 1 of 15 locations in a 40° × 20° (visual angle) array. In other variants, initial hand or gaze position was varied in the horizontal plane. In similar control tasks, animals were rewarded for gaze accuracy in the absence of reach. In the Reach task, animals made eye-head gaze shifts toward the target followed by reaches that were accompanied by prolonged head motion toward the target. This resulted in significantly higher head velocities and amplitudes (and lower eye-in-head ranges) compared with the gaze control condition. Gaze shifts had shorter latencies and higher velocities and were more precise, despite the lack of gaze reward. Initial hand position did not influence gaze, but initial gaze position influenced reach latency. These results suggest that eye-head coordination is optimized for visually guided reach, first by quickly and accurately placing gaze at the target to guide reach transport and then by centering the eyes in the head, likely to improve depth vision as the hand approaches the target. NEW & NOTEWORTHY Eye-head and eye-hand coordination have been studied in nonhuman primates but not the combination of all three effectors. Here we examined the timing and kinematics of eye-head-hand coordination in rhesus macaques during a simple reach-to-touch task. Our most novel finding was that (compared with hand-restrained gaze shifts) reaching produced prolonged, increased head rotation toward the target, tending to center the binocular field of view on the target/hand.


2020 ◽  
Vol 123 (1) ◽  
pp. 243-258 ◽  
Author(s):  
Kristin N. Hageman ◽  
Margaret R. Chow ◽  
Dale Roberts ◽  
Charles C. Della Santina

Head rotation, translation, and tilt with respect to a gravitational field elicit reflexive eye movements that partially stabilize images of Earth-fixed objects on the retinas of humans and other vertebrates. Compared with the angular vestibulo-ocular reflex, responses to translation and tilt, collectively called the otolith-ocular reflex (OOR), are less completely characterized, typically smaller, generally disconjugate (different for the 2 eyes) and more complicated in their relationship to the natural stimuli that elicit them. We measured binocular 3-dimensional OOR responses of 6 alert normal chinchillas in darkness during whole body tilts around 16 Earth-horizontal axes and translations along 21 axes in horizontal, coronal, and sagittal planes. Ocular countertilt responses to 40-s whole body tilts about Earth-horizontal axes grew linearly with head tilt amplitude, but responses were disconjugate, with each eye’s response greatest for whole body tilts about axes near the other eye’s resting line of sight. OOR response magnitude during 1-Hz sinusoidal whole body translations along Earth-horizontal axes also grew with stimulus amplitude. Translational OOR responses were similarly disconjugate, with each eye’s response greatest for whole body translations along its resting line of sight. Responses to Earth-horizontal translation were similar to those that would be expected for tilts that would cause a similar peak deviation of the gravitoinertial acceleration (GIA) vector with respect to the head, consistent with the “perceived tilt” model of the OOR. However, that model poorly fit responses to translations along non-Earth-horizontal axes and was insufficient to explain why responses are larger for the eye toward which the GIA vector deviates. NEW & NOTEWORTHY As the first in a pair of papers on Binocular 3D Otolith-Ocular Reflexes, this paper characterizes binocular 3D eye movements in normal chinchillas during tilts and translations. The eye movement responses were used to create a data set to fully define the normal otolith-ocular reflexes in chinchillas. This data set provides the foundation to use otolith-ocular reflexes to back-project direction and magnitude of eye movement to predict tilt axis as discussed in the companion paper.
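The “perceived tilt” idea referenced above rests on the fact that the otolith organs sense the gravitoinertial acceleration (GIA), which deviates from gravity during linear acceleration much as it would during a head tilt. A minimal sketch of that equivalence (an illustration, not the authors' model code) is:

```python
import numpy as np

def equivalent_tilt_deg(accel_xyz, g=9.81):
    """Angle (deg) between gravity and the GIA vector produced by a linear
    head acceleration accel_xyz in m/s^2 (z axis up)."""
    gravity = np.array([0.0, 0.0, -g])
    gia = gravity - np.asarray(accel_xyz, dtype=float)  # specific force
    cos_angle = np.dot(gia, gravity) / (np.linalg.norm(gia) * g)
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
```

For example, a 1.7 m/s^2 Earth-horizontal acceleration deviates the GIA by about 10 deg, so the model predicts an ocular response resembling a 10 deg tilt; the abstract reports that this account holds for Earth-horizontal translations but fits poorly for translations along non-Earth-horizontal axes.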


1991 ◽  
Vol 1 (3) ◽  
pp. 263-277 ◽  
Author(s):  
J.L. Demer ◽  
J. Goldberg ◽  
F.I. Porter ◽  
H.A. Jenkins ◽  
K. Schmidt

Vestibularly and visually driven eye movements interact to compensate for head movements and maintain the retinal image stability necessary for clear vision. Wearing highly magnifying telescopic spectacles requires that such compensatory visual-vestibular interaction operate in a quantitative regime much more demanding than that normally encountered. We employed electro-oculography to investigate the effect of wearing 2×, 4×, and 6× binocular telescopic spectacles on visual-vestibular interactions during sinusoidal head rotation in 43 normal subjects. All telescopic spectacle powers produced a large, immediate increase in the gain (eye velocity/head velocity) of compensatory eye movements, called the visual-vestibulo-ocular reflex (VVOR). However, the amount of VVOR gain augmentation became limited as spectacle magnification and the amplitude of head velocity increased. Optokinetic responses while wearing telescopic spectacles exhibited a similar nonlinearity with respect to stimulus amplitude and spectacle magnification. Computer simulation was used to demonstrate that the nonlinear response of the VVOR with telescopic spectacles is a result of nonlinearities in visually guided tracking movements. Immediate augmentation of VVOR gain by telescopic spectacles declined significantly with increasing age in the subject pool studied. Presentation of an unmagnified visual field peripheral to the telescopic spectacles reduced the immediate VVOR gain-enhancing effect of central magnified vision. These results imply that the VVOR may not be adequate to maintain retinal image stability during head movements when strongly magnifying telescopic spectacles are worn.
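The demand that magnification places on compensatory eye movements follows from simple geometry: to a first approximation, image motion on the retina scales with magnification M times head velocity, so retinal slip is nulled only when eye velocity reaches M times head velocity, i.e. the required VVOR gain equals M. A one-line sketch (illustrative, not from the paper):

```python
def retinal_slip(head_vel, eye_vel, magnification):
    """First-order residual image motion (deg/s) with magnifying spectacles;
    zero only when eye_vel equals magnification * head_vel."""
    return magnification * head_vel - eye_vel
```

At 6×, for instance, a 10 deg/s head rotation calls for 60 deg/s of compensatory eye velocity, which helps explain why gain augmentation saturates as magnification and head velocity increase.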


2019 ◽  
Vol 12 (7) ◽  
Author(s):  
Nicola C. Anderson ◽  
Walter F. Bischof

Video stream: https://vimeo.com/356859979 (production and publication of the video stream sponsored by SCIANS Ltd, http://www.scians.ch/)

We examined the extent to which image shape (square vs. circle), image rotation, and image content (landscapes vs. fractal images) influenced eye and head movements. Both the eyes and head were tracked while observers looked at natural scenes in a virtual reality (VR) environment. In line with previous work, we found a horizontal bias in saccade directions, but this was affected by both the image shape and its content. Interestingly, when viewing landscapes (but not fractals), observers rotated their head in line with the image rotation, presumably to make saccades in cardinal, rather than oblique, directions. We discuss our findings in relation to current theories on eye movement control, and how insights from VR might inform traditional eyetracking studies.

Part 2: Observers looked at panoramic, 360-degree scenes using VR goggles while eye and head movements were tracked. Fixations were determined using IDT (Salvucci & Goldberg, 2000) adapted to a spherical coordinate system. We then analyzed a) the spatial distribution of fixations and the distribution of saccade directions, b) the spatial distribution of head positions and the distribution of head movements, and c) the relation between gaze and head movements. We found that, for landscape scenes, gaze and head best fit the allocentric frame defined by the scene horizon, especially when taking head tilt (i.e., head rotation around the view axis) into account. For fractal scenes, which are isotropic on average, the bias toward a body-centric frame is weak for gaze and strong for the head. Furthermore, our data show that eye and head movements are closely linked in space and time in stereotypical ways, with volitional eye movements predominantly leading the head. We discuss our results in terms of models of visual exploratory behavior in panoramic scenes, both in virtual and real environments.
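The spherical adaptation of IDT mentioned above can be sketched as follows. The sampling rate, 100 ms minimum duration, and 1.5 deg dispersion threshold are illustrative assumptions rather than the authors' parameters, and dispersion is taken here as the largest pairwise great-circle distance within the window.

```python
import numpy as np

def angular_dispersion(azimuth, elevation):
    """Max pairwise great-circle distance (deg) among gaze samples given
    in spherical coordinates (deg)."""
    az, el = np.radians(azimuth), np.radians(elevation)
    xyz = np.column_stack([np.cos(el) * np.cos(az),
                           np.cos(el) * np.sin(az),
                           np.sin(el)])
    cosines = np.clip(xyz @ xyz.T, -1.0, 1.0)
    return np.degrees(np.arccos(cosines)).max()

def idt_fixations(azimuth, elevation, fs=90.0, min_dur=0.1, max_disp=1.5):
    """Return (start, end) sample indices of detected fixations."""
    window = int(min_dur * fs)
    fixations, i = [], 0
    while i + window <= len(azimuth):
        j = i + window
        if angular_dispersion(azimuth[i:j], elevation[i:j]) <= max_disp:
            while (j < len(azimuth)
                   and angular_dispersion(azimuth[i:j + 1],
                                          elevation[i:j + 1]) <= max_disp):
                j += 1
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations
```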


2003 ◽  
Vol 3 ◽  
pp. 122-137 ◽  
Author(s):  
George K. Hung

The objective of this article is to determine the effect of three different putting grips (conventional, cross-hand, and one-handed) on variations in eye and head movements during the putting stroke. Seven volunteer novice players, ranging in age from 21 to 22 years, participated in the study. During each experimental session, the subject stood on a specially designed platform covered with artificial turf and putted golf balls towards a standard golf hole. The three different types of grips were tested at two distances: 3 and 9 ft. For each condition, 20 putts were attempted. For each putt, data were recorded over a 3-s interval at a sampling rate of 100 Hz. Eye movements were recorded using a helmet-mounted eye movement monitor. Head rotation about an imaginary axis through the top of the head and its center-of-rotation was measured by means of a potentiometer mounted on a fixed frame and coupled to the helmet. Putter-head motion was measured using a linear array of infrared phototransistors embedded in the platform. The standard deviation (STD, relative to the initial level) was calculated for eye and head movements over the duration of the putt (i.e., from the beginning of the backstroke, through the forward stroke, to impact). The averaged STD for the attempted putts was calculated for each subject. Then, the averaged STDs and other data for the seven subjects were statistically compared across the three grip conditions. The STD of eye movements was greater (p < 0.1) for the conventional grip than for the cross-hand (9 ft) and one-handed (3 and 9 ft) grips. Also, the STD of head movements was greater (p < 0.1; 3 ft) for the conventional grip than for the cross-hand and one-handed grips. Vestibulo-ocular responses associated with head rotations could be observed in many 9 ft and some 3 ft putts. The duration of the putt was significantly longer (p < 0.05; 3 and 9 ft) for the one-handed grip than for the conventional and cross-hand grips. Finally, performance, or percentage of putts made, was significantly better (p < 0.05; 9 ft) for the cross-hand than for the conventional grip. The smaller variations, both in eye movements during longer putts and in head movements during shorter putts, using the cross-hand and one-handed grips may explain why some golfers, based on their playing experience, prefer these over the conventional grip. Also, the longer duration for the one-handed grip, which improves tempo, may explain why some senior players prefer the long-shaft (effectively one-handed grip) putter.
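The variability measure used in this study can be sketched directly from its description: the standard deviation of each trace relative to its initial level, computed from the start of the backstroke to ball impact, then averaged over a subject's putts. The snippet below is a minimal illustration with assumed variable names, not the study's analysis code.

```python
import numpy as np

def putt_std(position, onset, impact):
    """STD of an eye or head position trace (deg) relative to its level at
    backstroke onset, over the interval from onset to impact."""
    segment = np.asarray(position[onset:impact], dtype=float)
    return (segment - segment[0]).std()

def subject_mean_std(traces, onsets, impacts):
    """Average the per-putt STDs across a subject's attempted putts."""
    return np.mean([putt_std(p, s, i) for p, s, i in zip(traces, onsets, impacts)])
```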

