Relationship between eye movements and freezing of gait during turning in individuals with Parkinson disease

Author(s):  
Ying Wang ◽  
Marc M. van Wanrooij ◽  
Rowena Emaus ◽  
Jorik Nonnekes ◽  
Michael X Cohen ◽  
...  

Background Individuals with Parkinson disease can experience freezing of gait: a sudden, brief episode of an inability to move the feet despite the intention to walk. Since turning is the condition most likely to provoke freezing-of-gait episodes, and the eyes typically lead turning, we hypothesized that disturbances in saccadic eye movements are related to freezing-of-gait episodes. Objectives This study explores the relationship between freezing-of-gait episodes and saccadic eye movements for gaze shift and gaze stabilization during turning. Methods We analyzed 277 freezing-of-gait episodes provoked among 17 individuals with Parkinson disease during two conditions: 180-degree turns in alternating directions at self-selected speed and at rapid speed. Eye movements acquired from electrooculography signals were characterized by the average position of gaze, the amplitude of gaze shifts, and the speed of gaze stabilization. We analyzed these variables before and during freezing-of-gait episodes occurring at different phase angles of a turn. Results Significant off-track changes of gaze position were observed almost one 180-degree-turn duration before freezing-of-gait episodes. In addition, the speed of gaze stabilization decreased significantly during freezing-of-gait episodes. Conclusions We argue that off-track changes of gaze position could be a predictor of freezing-of-gait episodes, reflecting either a continued failure in movement-error correction or insufficient preparation for eye-to-foot coordination during turning. The large decline in the speed of gaze stabilization during freezing-of-gait episodes is expected, given that body turning slows or stops; we argue that this could be evidence for a healthy compensatory system in individuals with freezing of gait.
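The three eye-movement measures named in the Methods (average gaze position, gaze-shift amplitude, speed of gaze stabilization) can be sketched for a one-dimensional gaze trace as below. This is a hypothetical illustration only: the function name, the velocity threshold for labeling saccadic samples, and the summary statistics are assumptions, not the authors' analysis pipeline.

```python
import numpy as np

def gaze_measures(gaze_deg, fs_hz, saccade_vel_thresh=50.0):
    """Return mean gaze position (deg), total gaze-shift amplitude (deg),
    and median speed of the non-saccadic, stabilization samples (deg/s).
    All names and the 50 deg/s threshold are illustrative assumptions."""
    gaze_deg = np.asarray(gaze_deg, dtype=float)
    velocity = np.gradient(gaze_deg) * fs_hz           # deg/s, same length as trace
    saccadic = np.abs(velocity) > saccade_vel_thresh   # crude saccade mask
    mean_position = gaze_deg.mean()
    # Sum absolute position steps taken during saccadic samples
    shift_amplitude = np.abs(np.diff(gaze_deg)[saccadic[:-1]]).sum()
    # Speed of gaze during the remaining (stabilization) samples
    stabilization_speed = np.median(np.abs(velocity[~saccadic]))
    return mean_position, shift_amplitude, stabilization_speed
```

On a synthetic trace that holds at 0°, sweeps to 10°, and holds again, the sweep is flagged as saccadic and the plateaus contribute a near-zero stabilization speed.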

2018 ◽  
Vol 71 (9) ◽  
pp. 1860-1872 ◽  
Author(s):  
Stephen RH Langton ◽  
Alex H McIntyre ◽  
Peter JB Hancock ◽  
Helmut Leder

Research has established that a perceived eye gaze produces a concomitant shift in a viewer’s spatial attention in the direction of that gaze. The two experiments reported here investigate the extent to which the nature of the eye movement made by the gazer contributes to this orienting effect. On each trial in these experiments, participants were asked to make a speeded response to a target that could appear in a location toward which a centrally presented face had just gazed (a cued target) or in a location that was not the recipient of a gaze (an uncued target). The gaze cues consisted of either fast saccadic eye movements or slower smooth pursuit movements. Cued targets were responded to faster than uncued targets, and this gaze-cued orienting effect was found to be equivalent for each type of gaze shift both when the gazes were unpredictive of target location (Experiment 1) and counterpredictive of target location (Experiment 2). The results offer no support for the hypothesis that motion speed modulates gaze-cued orienting. However, they do suggest that motion of the eyes per se, regardless of the type of movement, may be sufficient to trigger an orienting effect.


2007 ◽  
Vol 98 (2) ◽  
pp. 696-709 ◽  
Author(s):  
A. G. Constantin ◽  
H. Wang ◽  
J. C. Martinez-Trujillo ◽  
J. D. Crawford

Previous studies suggest that stimulation of lateral intraparietal cortex (LIP) evokes saccadic eye movements toward eye- or head-fixed goals, whereas most single-unit studies suggest that LIP uses an eye-fixed frame with eye-position modulations. The goal of our study was to determine the reference frame for gaze shifts evoked during LIP stimulation in head-unrestrained monkeys. Two macaques ( M1 and M2) were implanted with recording chambers over the right intraparietal sulcus and with search coils for recording three-dimensional eye and head movements. The LIP region was microstimulated using pulse trains of 300 Hz, 100–150 μA, and 200 ms. Eighty-five putative LIP sites in M1 and 194 putative sites in M2 were used in our quantitative analysis throughout this study. The average amplitude of the stimulation-evoked gaze shifts was 8.67° for M1 and 7.97° for M2, with very small head movements. When these gaze-shift trajectories were rotated into three coordinate frames (eye, head, and body), the gaze endpoint distribution for all sites was most convergent to a common point when plotted in eye coordinates. Across all sites, the eye-centered model provided a significantly better fit compared with the head, body, or fixed-vector models (where the latter model signifies no modulation of the gaze trajectory as a function of initial gaze position). Moreover, the probability of evoking a gaze shift from any one particular position was modulated by the current gaze direction (independent of saccade direction). These results provide causal evidence that the motor commands from LIP encode gaze commands in eye-fixed coordinates but are also subtly modulated by initial gaze position.
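The key analytical step above, expressing the same gaze endpoint in eye-, head-, and body-fixed frames, can be illustrated with a small-angle sketch in which a frame change reduces to subtracting the relevant initial orientation. This is a hypothetical simplification: the study used full three-dimensional rotations of eye and head, which this two-component approximation deliberately omits, and all names here are assumptions.

```python
import numpy as np

def endpoint_in_frames(endpoint_space, eye_in_head, head_on_body):
    """Small-angle sketch: express a gaze endpoint, given in space/body
    coordinates, in head- and eye-fixed frames by subtracting the initial
    head-on-body and eye-in-head orientations (horizontal, vertical, deg)."""
    endpoint_space = np.asarray(endpoint_space, dtype=float)
    eye_in_head = np.asarray(eye_in_head, dtype=float)
    head_on_body = np.asarray(head_on_body, dtype=float)
    body_frame = endpoint_space                 # body and space coincide here
    head_frame = endpoint_space - head_on_body  # remove initial head orientation
    eye_frame = head_frame - eye_in_head        # remove initial eye-in-head
    return eye_frame, head_frame, body_frame
```

In an eye-centered code, endpoints from repeated stimulation of one site would cluster most tightly after the final subtraction, which is the convergence test the abstract describes.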


2008 ◽  
Vol 100 (4) ◽  
pp. 1848-1867 ◽  
Author(s):  
Sigrid M. C. I. van Wetter ◽  
A. John van Opstal

Brief visual stimuli flashed around the time of a saccade are systematically mislocalized. Such perisaccadic mislocalization is maximal in the direction of the saccade and varies systematically with the target-saccade onset delay. We have recently shown that under head-fixed conditions perisaccadic errors do not follow the quantitative predictions of current visuomotor models that explain these mislocalizations in terms of spatial updating. These models all assume sluggish eye-movement feedback and therefore predict that errors should vary systematically with the amplitude and kinematics of the intervening saccade. Instead, we reported that errors depend only weakly on the saccade amplitude. An alternative explanation for the data is that around the saccade the perceived target location undergoes a uniform transient shift in the saccade direction, but that the oculomotor feedback is, on average, accurate. This “visual shift” hypothesis predicts that errors will also remain insensitive to kinematic variability within much larger head-free gaze shifts. Here we test this prediction by presenting a brief visual probe near the onset of gaze saccades between 40 and 70° amplitude. According to models with inaccurate gaze-motor feedback, the expected perisaccadic errors for such gaze shifts should be as large as 30° and depend heavily on the kinematics of the gaze shift. In contrast, we found that the actual peak errors were similar to those reported for much smaller saccadic eye movements, i.e., on average about 10°, and that neither gaze-shift amplitude nor kinematics plays a systematic role. Our data further corroborate the visual origin of perisaccadic mislocalization under open-loop conditions and strengthen the idea that efferent feedback signals in the gaze-control system are fast and accurate.


2011 ◽  
Vol 106 (4) ◽  
pp. 2000-2011 ◽  
Author(s):  
Luis C. Populin ◽  
Abigail Z. Rajala

We have studied eye-head coordination in nonhuman primates with acoustic targets after finding that they are unable to make accurate saccadic eye movements to targets of this type with the head restrained. Three male macaque monkeys with experience in localizing sounds for rewards by pointing their gaze to the perceived location of sources served as subjects. Visual targets were used as controls. The experimental sessions were configured to minimize the chances that the subject would be able to predict the modality of the target as well as its location and time of presentation. The data show that eye and head movements are coordinated differently to generate gaze shifts to acoustic targets. Chiefly, the head invariably started to move before the eye and contributed more to the gaze shift. These differences were more striking for gaze shifts of <20–25° in amplitude, to which the head contributes very little or not at all when the target is visual. Thus acoustic and visual targets trigger gaze shifts with different eye-head coordination. This, coupled with anatomical evidence implicating the superior colliculus as the link between auditory spatial processing and the motor system, suggests that separate signals are likely generated within this midbrain structure.


1995 ◽  
Vol 3 (2) ◽  
pp. 199-223 ◽  
Author(s):  
Boris M. Velichkovsky

The results of two experiments, in which participants solved constructive tasks of the puzzle type, are reported. The tasks were solved by two partners who shared the same visual environment but whose knowledge of the situation and ability to change it to reach a solution were different. One of the partners — the "expert" — knew the solution in detail but had no means of acting on this information. The second partner — the "novice" — could act to achieve the goal, but knew very little about the solution. The partners were free to communicate verbally. In one third of the trials of the first experiment, in addition to verbal communication, the eye fixations of the expert were projected onto the working space of the novice. In another condition the expert could use a mouse to show the novice relevant parts of the task configuration. Both methods of facilitating the 'joint attention' state of the partners improved their performance. The nature of the dialogues as well as the parameters of the eye movements changed. In the second experiment the direction of the gaze-position data transfer was reversed, from the novice to the expert. This also led to a significant increase in the efficiency of the distributed problem solving.


2013 ◽  
Vol 37 (2) ◽  
pp. 131-136 ◽  
Author(s):  
Atsushi Senju ◽  
Angélina Vernetti ◽  
Yukiko Kikuchi ◽  
Hironori Akechi ◽  
Toshikazu Hasegawa ◽  
...  

The current study investigated the role of cultural norms in the development of face-scanning. British and Japanese adults’ eye movements were recorded while they observed avatar faces moving their mouth, and then their eyes toward or away from the participants. British participants fixated more on the mouth, whereas Japanese participants fixated mainly on the eyes. Moreover, the eye fixations of British participants were less affected by the gaze shift of the avatar than those of Japanese participants, who shifted their fixation in the direction of the avatar’s gaze. Results are consistent with the Western cultural norms that value the maintenance of eye contact, and the Eastern cultural norms that require flexible use of eye contact and gaze aversion.


Vision ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 39
Author(s):  
Julie Royo ◽  
Fabrice Arcizet ◽  
Patrick Cavanagh ◽  
Pierre Pouget

We introduce a blind spot method to create image changes contingent on eye movements. One challenge of eye movement research is triggering display changes contingent on gaze. The eye-tracking system must capture the image of the eye, detect and track the pupil and corneal reflections to estimate the gaze position, and then transfer these data to the computer that updates the display. All of these steps introduce delays that are often difficult to predict. To avoid these issues, we describe a simple blind spot method to generate gaze-contingent display manipulations without any eye-tracking system or display controls.


1990 ◽  
Vol 64 (2) ◽  
pp. 509-531 ◽  
Author(s):  
D. Guitton ◽  
D. P. Munoz ◽  
H. L. Galiana

1. Orienting movements, consisting of coordinated eye and head displacements, direct the visual axis to the source of a sensory stimulus. A recent hypothesis suggests that the CNS may control gaze position (gaze = eye-relative-to-space = eye-relative-to-head + head-relative-to-space) by the use of a feedback circuit wherein an internally derived representation of gaze motor error drives both eye and head premotor circuits. In this paper we examine the effect of behavioral task on the individual and summed trajectories of horizontal eye- and head-orienting movements to gain more insight into how the eyes and head are coupled and controlled in different behavioral situations. 2. Cats whose heads were either restrained (head-fixed) or unrestrained (head-free) were trained to make orienting movements of any desired amplitude in a simple cat-and-mouse game we call the barrier paradigm. A rectangular opaque barrier was placed in front of the hungry animal, who either oriented to a food target that was visible to one side of the barrier or oriented to a location on an edge of the barrier where it predicted the target would reappear from behind the barrier. 3. The dynamics (e.g., maximum velocity) and duration of eye- and head-orienting movements were affected by the task. Saccadic eye movements (head-fixed) elicited by the visible target attained greater velocity and had shorter durations than comparable-amplitude saccades directed toward the predicted target. A similar observation has been made in humans and monkeys. In addition, when the head was unrestrained, both the eye and head movements (and therefore gaze movements) were faster and shorter in the visible- compared with the predicted-target conditions. Nevertheless, the relative contributions of the eye and head to the overall gaze displacement remained task independent: i.e., the distance traveled by the eye and head movements was determined by the size of the gaze shift only. This relationship was maintained because the velocities of the eye and head movements covaried in the different behavioral situations. Gaze-velocity profiles also had characteristic shapes that were dependent on task. In the predicted-target condition these profiles tended to have flattened peaks, whereas when the target was visible the peaks were sharper. 4. Presentation of a visual cue (e.g., reappearance of food target) immediately before (less than 50 ms) the onset of a gaze shift to a predicted target triggered a midflight increase in first the eye- and, after approximately 20 ms, the head-movement velocity.


2009 ◽  
Vol 101 (3) ◽  
pp. 1258-1266 ◽  
Author(s):  
Daniel J. Tollin ◽  
Janet L. Ruhland ◽  
Tom C. T. Yin

The mammalian orienting response to sounds consists of a gaze shift that can be a combination of head and eye movements. In animals with mobile pinnae, the ears also move. During head movements, vision is stabilized by compensatory rotations of the eyeball within the head because of the vestibulo-ocular reflex (VOR). While studying the gaze shifts made by cats to sounds, a previously uncharacterized compensatory movement was discovered. The pinnae exhibited short-latency, goal-directed movements that reached their target while the head was still moving. The pinnae maintained a fixed position in space by counter-rotating on the head with an equal but opposite velocity to the head movement. We call these compensatory ear movements the vestibulo-auricular reflex (VAR) because they shared many kinematic characteristics with the VOR. Control experiments ruled out efference copy of head position signals and acoustic tracking (audiokinetic) of the source as the cause of the response. The VAR may serve to stabilize the auditory world during head movements.

