Postnatal Development of Three-Dimensional Head Movements in Spontaneously Behaving Unrestrained Rabbits

Author(s):  
Helmut Tegetmeyer


1997 ◽  
Vol 77 (2) ◽  
pp. 654-666 ◽  
Author(s):  
Douglas Tweed

Tweed, Douglas. Three-dimensional model of the human eye-head saccadic system. J. Neurophysiol. 77: 654–666, 1997. Current theories of eye-head gaze shifts deal only with one-dimensional motion, and do not touch on three-dimensional (3-D) issues such as curvature and Donders' laws. I show that recent 3-D data can be explained by a model based on ideas that are well established from one-dimensional studies, with just one new assumption: that the eye is driven toward a 3-D orientation in space that has been chosen so that Listing's law of the eye in head will hold when the eye-head movement is complete. As in previous, one-dimensional models, the eye and head are feedback-guided and the commands specifying desired eye position pass through a neural “saturation” so as to stay within the effective oculomotor range. The model correctly predicts the complex, 3-D trajectories of the head, eye in space, and eye in head in a variety of saccade tasks. And when it moves repeatedly to the same target, varying the contributions of eye and head, the model lands in different eye-in-space positions, but these positions differ only in their cyclotorsion about the line of sight, so they all point that line at the target—a behavior also seen in real eye-head saccades. Between movements the model obeys Listing's law of the eye in head and Donders' law of the head on torso, but during certain gaze shifts involving large torsional head movements, it shows marked 8° deviations from Listing's law. These deviations are the most important untested predictions of the theory. Their experimental refutation would sink the model, whereas confirmation would strongly support its central claim that the eye moves toward a 3-D position in space chosen to obey Listing's law and, therefore, that a Listing operator exists upstream from the eye pulse generator.
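Listing's law, central to the model above, constrains eye orientations so that every orientation can be reached from the primary position by a rotation whose axis lies in a single plane (Listing's plane), i.e. with zero torsional component. A minimal NumPy sketch of that constraint (the axis and quaternion conventions here are illustrative, not taken from the paper's implementation):

```python
import numpy as np

def listing_quaternion(gaze_dir, primary=np.array([1.0, 0.0, 0.0])):
    """Shortest-arc rotation taking the primary gaze direction to gaze_dir.
    Its rotation axis is perpendicular to the primary direction, so it lies
    in Listing's plane and the torsional component is zero by construction."""
    g = gaze_dir / np.linalg.norm(gaze_dir)
    axis = np.cross(primary, g)            # perpendicular to primary
    s = np.linalg.norm(axis)               # sin(theta)
    c = np.dot(primary, g)                 # cos(theta)
    theta = np.arctan2(s, c)
    axis = axis / s if s > 1e-12 else np.array([0.0, 0.0, 1.0])
    w = np.cos(theta / 2.0)
    xyz = np.sin(theta / 2.0) * axis
    return np.concatenate(([w], xyz))      # unit quaternion (w, x, y, z)

# Example: gaze 20 deg leftward and 10 deg upward of primary
g = np.array([np.cos(np.radians(20)), np.sin(np.radians(20)), np.sin(np.radians(10))])
q = listing_quaternion(g)
```

With the primary direction along x, the torsional (x) component of `q` is exactly zero for any gaze direction, which is the Listing constraint the model's upstream "Listing operator" is proposed to enforce.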


2017 ◽  
Vol 7 (1) ◽  
pp. 20160093 ◽  
Author(s):  
Ivo G. Ros ◽  
Partha S. Bhagavatula ◽  
Huai-Ti Lin ◽  
Andrew A. Biewener

Flying animals must successfully contend with obstacles in their natural environments. Inspired by the robust manoeuvring abilities of flying animals, unmanned aerial systems are being developed and tested to improve flight control through cluttered environments. We previously examined steering strategies that pigeons adopt to fly through an array of vertical obstacles (VOs). Modelling VO flight guidance revealed that pigeons steer towards larger visual gaps when making fast steering decisions. In the present experiments, we recorded three-dimensional flight kinematics of pigeons as they flew through randomized arrays of horizontal obstacles (HOs). We found that pigeons still decelerated upon approach but flew faster through a denser array of HOs compared with the VO array previously tested. Pigeons exhibited limited steering and chose gaps between obstacles most aligned to their immediate flight direction, in contrast to VO navigation, which favoured steering towards the widest gap. In addition, pigeons navigated past the HOs with more variable and decreased wing stroke span and adjusted their wing stroke plane to reduce contact with the obstacles. Variability in wing extension, stroke plane and wing stroke path was greater during HO flight. Pigeons also exhibited pronounced head movements when negotiating HOs, which potentially serve a visual function. These head-bobbing-like movements were most pronounced in the horizontal (flight direction) and vertical directions, consistent with engaging motion vision mechanisms for obstacle detection. These results show that pigeons exhibit a keen kinesthetic sense of their body and wings in relation to obstacles. Together with aerodynamic flapping flight mechanics that favour vertical manoeuvring, pigeons are able to navigate HOs using simple rules, with remarkable success.
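The two steering rules contrasted above (widest-gap steering for VOs versus flight-direction-aligned gap choice for HOs) can be sketched as simple selection functions. The gap representation and the numbers below are purely illustrative, not data from the study:

```python
# Each hypothetical gap is (bearing_deg, width_cm): bearing is the gap
# centre's angle relative to the bird's immediate flight direction.

def choose_aligned(gaps):
    """HO-style rule: pick the gap most aligned with the flight direction."""
    return min(gaps, key=lambda g: abs(g[0]))

def choose_widest(gaps):
    """VO-style rule: pick the widest visual gap, regardless of bearing."""
    return max(gaps, key=lambda g: g[1])

gaps = [(-25.0, 30.0), (5.0, 12.0), (40.0, 22.0)]
aligned = choose_aligned(gaps)   # the nearly straight-ahead gap
widest = choose_widest(gaps)     # the off-axis but widest gap
```

The point of the contrast is that the two rules can select different gaps from the same scene, which is exactly the behavioural difference the authors report between HO and VO arrays.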


1995 ◽  
Vol 73 (2) ◽  
pp. 766-779 ◽  
Author(s):  
D. Tweed ◽  
B. Glenn ◽  
T. Vilis

1. Three-dimensional (3D) eye and head rotations were measured with the use of the magnetic search coil technique in six healthy human subjects as they made large gaze shifts. The aims of this study were 1) to see whether the kinematic rules that constrain eye and head orientations to two degrees of freedom between saccades also hold during movements; 2) to chart the curvature and looping in eye and head trajectories; and 3) to assess whether the timing and paths of eye and head movements are more compatible with a single gaze error command driving both movements, or with two different feedback loops. 2. Static orientations of the eye and head relative to space are known to resemble the distribution that would be generated by a Fick gimbal (a horizontal axis moving on a fixed vertical axis). We show that gaze point trajectories during eye-head gaze shifts fit the Fick gimbal pattern, with horizontal movements following straight "line of latitude" paths and vertical movements curving like lines of longitude. However, horizontal (and to a lesser extent vertical) movements showed direction-dependent looping, with rightward and leftward (and up and down) saccades tracing slightly different paths. Plots of facing direction (the analogue of gaze direction for the head) also showed the latitude/longitude pattern, without looping. In radial saccades, the gaze point initially moved more vertically than the target direction and then curved; head trajectories were straight. 3. The eye and head components of randomly sequenced gaze shifts were not time locked to one another. The head could start moving at any time from slightly before the eye until 200 ms after, and the standard deviation of this interval could be as large as 80 ms. The head continued moving for a long (up to 400 ms) and highly variable time after the gaze error had fallen to zero. For repeated saccades between the same targets, peak eye and head velocities were directly, but very weakly, correlated; fast eye movements could accompany slow head movements and vice versa. Peak head acceleration and deceleration were also very weakly correlated with eye velocity. Further, the head rotated about an essentially fixed axis, with a smooth bell-shaped velocity profile, whereas the axis of eye rotation relative to the head varied throughout the movement and the velocity profiles were more ragged. 4. Plots of 3D eye orientation revealed strong and consistent looping in eye trajectories relative to space. (Abstract truncated at 400 words.)
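The Fick-gimbal pattern described in point 2 can be made concrete: composing a yaw about the fixed vertical axis with a pitch about the carried horizontal axis means that a purely horizontal sweep at constant pitch keeps the facing direction at constant elevation, i.e. on a "line of latitude". A minimal sketch, assuming z-up world coordinates and forward along x (conventions chosen here for illustration):

```python
import numpy as np

def fick_rotation(yaw, pitch):
    """Fick gimbal: pitch about a horizontal axis that is itself carried
    by a prior yaw about the fixed vertical (z) axis: R = Rz(yaw) @ Ry(pitch)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    return Rz @ Ry

# Sweep yaw at fixed pitch and record the elevation of the facing direction.
forward = np.array([1.0, 0.0, 0.0])
pitch = np.radians(15)
elevations = [np.degrees(np.arcsin(-(fick_rotation(y, pitch) @ forward)[2]))
              for y in np.linspace(-np.radians(40), np.radians(40), 9)]
```

Every entry of `elevations` is 15 degrees: the horizontal sweep traces a latitude line, matching the straight "line of latitude" gaze paths the study reports.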


2015 ◽  
Vol 74 (1) ◽  
pp. 20-28 ◽  
Author(s):  
Yoshiaki Itasaka ◽  
Kazuo Ishikawa ◽  
Eigo Omi ◽  
Kou Koizumi

2003 ◽  
Vol 14 (4) ◽  
pp. 340-346 ◽  
Author(s):  
Mark Wexler

Although visual input is egocentric, at least some visual perceptions and representations are allocentric, that is, independent of the observer's vantage point or motion. Three experiments investigated the visual perception of three-dimensional object motion during voluntary and involuntary motion in human subjects. The results show that the motor command contributes to the objective perception of space: Observers are more likely to apply, consciously and unconsciously, spatial criteria relative to an allocentric frame of reference when they are executing voluntary head movements than while they are undergoing similar involuntary displacements (which lead to a more egocentric bias). Furthermore, details of the motor command are crucial to spatial vision, as allocentric bias decreases or disappears when self-motion and motor command do not match.
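The egocentric/allocentric distinction above has a simple geometric core: an allocentric (world-frame) representation of a stationary object is invariant under observer motion, while its egocentric (head-fixed) coordinates change with every head displacement. A minimal sketch, assuming a head pose given by position plus yaw about the vertical axis (all names and numbers are illustrative):

```python
import numpy as np

def yaw_matrix(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def to_egocentric(p_world, head_pos, head_yaw):
    """Head-fixed coordinates of a world (allocentric) point."""
    return yaw_matrix(head_yaw).T @ (p_world - head_pos)

def to_allocentric(p_ego, head_pos, head_yaw):
    """World coordinates recovered from head-fixed coordinates and head pose."""
    return yaw_matrix(head_yaw) @ p_ego + head_pos

# A stationary object seen from two head poses: its egocentric coordinates
# differ, but mapping back through each pose recovers the same world point.
obj = np.array([2.0, 1.0, 0.0])
poses = [(np.array([0.0, 0.0, 0.0]), 0.0),
         (np.array([0.5, -0.3, 0.0]), np.radians(30))]
egos = [to_egocentric(obj, p, y) for p, y in poses]
recovered = [to_allocentric(e, p, y) for e, (p, y) in zip(egos, poses)]
```

Recovering the world point requires knowing the head pose, which is the sketch's analogue of the experiments' finding: the motor command (and its match to the actual self-motion) is what licenses the allocentric interpretation.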

