Properties of Cerebellar Fastigial Neurons During Translation, Rotation, and Eye Movements

2005 · Vol 93 (2) · pp. 853–863
Authors: Aasef G. Shaikh, Fatema F. Ghasia, J. David Dickman, Dora E. Angelaki

The most medial of the deep cerebellar nuclei, the fastigial nucleus (FN), receives sensory vestibular information and direct inhibition from the cerebellar vermis. We investigated signal processing in the primate FN by recording single-unit activities during translational motion, rotational motion, and eye movements. Firing rate modulation during horizontal plane translation in the absence of eye movements was observed in all non-eye-movement-sensitive cells and 26% of the pursuit eye-movement-sensitive neurons in the caudal FN. Many non-eye-movement-sensitive cells recorded in the rostral FN of three fascicularis monkeys exhibited convergence of signals from both the otolith organs and the semicircular canals. At low frequencies of translation, the majority of these rostral FN cells changed their firing rates in phase with head velocity rather than linear acceleration. As frequency increased, FN vestibular neurons exhibited a wide range of response dynamics, with most cells characterized by increasing phase leads as a function of frequency. Unlike cells in the vestibular nuclei, none of the rostral FN cells responded to rotational motion alone without simultaneously exhibiting sensitivity to translational motion. Modulation during earth-horizontal axis rotation was observed in most (77%) of the neurons, although with smaller gains than during translation. In contrast, only 47% of the cells changed their firing rates during earth-vertical axis rotations in the absence of a dynamic linear acceleration stimulus. These response properties suggest that the rostral FN represents a main processing center of otolith-driven information for inertial motion detection and spatial orientation.
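To make the gain/phase characterization concrete, below is a minimal sketch (synthetic data, not the authors' analysis code) of how a cell's response gain and phase relative to stimulus velocity are typically extracted by least-squares sinusoidal fitting. A phase near 0° indicates modulation in phase with velocity; a lead that grows with frequency would appear as increasing phase values across stimulus frequencies.

```python
import numpy as np

def fit_sinusoid(t, fr, freq_hz):
    """Least-squares fit of fr(t) = offset + a*sin(wt) + b*cos(wt)."""
    w = 2 * np.pi * freq_hz
    X = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
    offset, a, b = np.linalg.lstsq(X, fr, rcond=None)[0]
    gain = np.hypot(a, b)                     # modulation depth (spikes/s)
    phase_deg = np.degrees(np.arctan2(b, a))  # phase re: the stimulus sine
    return gain, phase_deg

# Synthetic firing rate: 0.5-Hz stimulus, ~34 deg phase lead re: velocity
t = np.arange(0.0, 4.0, 0.01)
fr = 60 + 20 * np.sin(2 * np.pi * 0.5 * t + 0.6) + np.random.randn(t.size)
print(fit_sinusoid(t, fr, 0.5))  # ~ (20.0, ~34.4)
```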

2008 · Vol 100 (3) · pp. 1488–1497
Authors: Kimberly L. McArthur, J. David Dickman

Gaze-stabilizing eye and head responses compensate more effectively for low-frequency rotational motion when such motion stimulates the otolith organs, as during earth-horizontal axis rotations. However, the nature of the otolith signal responsible for this improvement in performance has not been previously determined. In this study, we used combinations of earth-horizontal axis rotational and translational motion to manipulate the magnitude of net linear acceleration experienced by pigeons, under both head-fixed and head-free conditions. We show that phase enhancement of eye and head responses to low-frequency rotational motion was causally related to the magnitude of dynamic net linear acceleration and not the gravitational acceleration component. We also show that canal-driven and otolith-driven eye responses were both spatially and temporally appropriate to combine linearly, and that a simple linear model combining canal- and otolith-driven components predicted eye responses to complex motion that were consistent with our experimental observations. However, the same model did not predict the observed head responses, which were spatially but not temporally appropriate to combine according to the same linear scheme. These results suggest that distinct vestibular processing substrates exist for eye and head responses in pigeons and that these are likely different from the vestibular processing substrates observed in primates.
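The linear-combination test lends itself to a simple phasor formulation. The sketch below illustrates the idea with hypothetical gains and phases (the numbers are invented, not values from the paper): at a single stimulus frequency, each pathway's response is a complex gain, and the linear prediction for combined rotation-plus-translation motion is their sum.

```python
import numpy as np

def phasor(gain, phase_deg):
    """Represent a sinusoidal response as a complex number."""
    return gain * np.exp(1j * np.deg2rad(phase_deg))

canal = phasor(0.6, 170.0)    # hypothetical canal-driven eye response
otolith = phasor(0.3, 95.0)   # hypothetical otolith-driven eye response

combined = canal + otolith    # linear prediction for combined motion
print(abs(combined), np.degrees(np.angle(combined)))
```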


1999 · Vol 82 (5) · pp. 2612–2632
Authors: Pierre A. Sylvestre, Kathleen E. Cullen

The mechanics of the eyeball and its surrounding tissues, which together form the oculomotor plant, have been shown to be the same for smooth pursuit and saccadic eye movements. Hence it was postulated that similar signals would be carried by motoneurons during slow and rapid eye movements. In the present study, we directly addressed this proposal by determining which eye movement–based models best describe the discharge dynamics of primate abducens neurons during a variety of eye movement behaviors. We first characterized abducens neuron spike trains, as has been classically done, during fixation and sinusoidal smooth pursuit. We then systematically analyzed the discharge dynamics of abducens neurons during and following saccades, during step-ramp pursuit, and during high-velocity slow-phase vestibular nystagmus. We found that the commonly used first-order description of abducens neuron firing rates (FR = b + kE + rĖ, where FR is firing rate, E and Ė are eye position and velocity, respectively, and b, k, and r are constants) provided an adequate model of neuronal activity during saccades, smooth pursuit, and slow-phase vestibular nystagmus. However, the use of a second-order model, which included an exponentially decaying term or "slide" (FR = b + kE + rĖ + uË − cḞR, where Ë is eye acceleration and ḞR is the derivative of firing rate), notably improved our ability to describe neuronal activity when the eye was moving and also enabled us to model abducens neuron discharges during the postsaccadic interval. We also found that, for a given model, a single set of parameters could not be used to describe neuronal firing rates during both slow and rapid eye movements. Specifically, the eye velocity and position coefficients (r and k in the above models, respectively) consistently decreased as a function of the mean (and peak) eye velocity that was generated. In contrast, the bias (b, the firing rate when looking straight ahead) invariably increased with eye velocity. Although these trends are likely to reflect, in part, nonlinearities that are intrinsic to the extraocular muscles, we propose that these results can also be explained by considering the time-varying resistance to movement that is generated by the antagonist muscle. We conclude that to create realistic and meaningful models of the neural control of horizontal eye movements, it is essential to consider the activation of the antagonist as well as the agonist motoneuron pools.
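As an illustration of how such models are fit, the following sketch recovers the first-order coefficients b, k, and r by ordinary least squares from synthetic position, velocity, and firing-rate traces (the coefficient values are made up for the example; the paper fits real spike trains, and the second-order "slide" model adds the Ë and ḞR terms in the same regression framework).

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 2.0, dt)
E = 10 * np.sin(2 * np.pi * 0.5 * t)   # eye position (deg)
Edot = np.gradient(E, dt)              # eye velocity (deg/s)

# Synthetic unit: b=100 spikes/s, k=4 (spikes/s)/deg, r=0.5 (spikes/s)/(deg/s)
FR = 100 + 4.0 * E + 0.5 * Edot + np.random.randn(t.size)

X = np.column_stack([np.ones_like(t), E, Edot])
b, k, r = np.linalg.lstsq(X, FR, rcond=None)[0]
print(b, k, r)  # recovers ~100, ~4.0, ~0.5
```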


1992 · Vol 68 (1) · pp. 319–332
Authors: J. L. McFarland, A. F. Fuchs

1. Monkeys were trained to perform a variety of horizontal eye tracking tasks designed to reveal possible eye movement and vestibular sensitivities of neurons in the medulla. To test eye movement sensitivity, we required stationary monkeys to track a small spot that moved horizontally. To test vestibular sensitivity, we rotated the monkeys about a vertical axis and required them to fixate a target rotating with them to suppress the vestibuloocular reflex (VOR).

2. All of the 100 units described in our study were recorded from regions of the medulla that were prominently labeled after injections of horseradish peroxidase into the abducens nucleus. These regions include the nucleus prepositus hypoglossi (NPH), the medial vestibular nucleus (MVN), and their common border (the "marginal zone"). We report here the activities of three different types of neurons recorded in these regions.

3. Two types responded only during eye movements per se. Their firing rates increased with eye position; 86% had ipsilateral "on" directions. Almost three quarters (73%) of these medullary neurons exhibited a burst-tonic discharge pattern that is qualitatively similar to that of abducens motoneurons. There were, however, quantitative differences in that these medullary burst-position neurons were less sensitive to eye position than were abducens motoneurons and often did not pause completely for saccades in the off direction. The burst of medullary burst-position neurons preceded the saccade by an average of 7.6 ± 1.7 (SD) ms and, on average, lasted the duration of the saccade. The number of spikes in the burst was well correlated with saccade size. The second type of eye movement neuron displayed either no discernible burst or an inconsistent one for on-direction saccades and will be referred to as medullary position neurons. Neither the burst-position nor the position neurons responded when the animals suppressed the VOR; hence, they displayed no vestibular sensitivity.

4. The third type of neuron was sensitive to both eye movement and vestibular stimulation. These neurons increased their firing rates during horizontal head rotation and smooth pursuit eye movements in the same direction; most (76%) preferred ipsilateral head and eye movements. Their firing rates were approximately in phase with eye velocity during sinusoidal smooth pursuit and with head velocity during VOR suppression; on average, their eye velocity sensitivity was 50% greater than their vestibular sensitivity. Sixty percent of these eye/head velocity cells were also sensitive to eye position.

5. The NPH/MVN region contains many neurons that could provide an eye position signal to abducens neurons.


2018 · Vol 119 (1) · pp. 73–83
Authors: Shawn D. Newlands, Ben Abbatematteo, Min Wei, Laurel H. Carney, Hongge Luan

Roughly half of all vestibular nucleus neurons without eye movement sensitivity respond to both angular rotation and linear acceleration. Linear acceleration signals arise from the otolith organs, and rotation signals arise from the semicircular canals. In the vestibular nerve, these signals are carried by different afferents. Vestibular nucleus neurons represent the first point of convergence for these distinct sensory signals. This study systematically evaluated how rotational and translational signals interact in single neurons in the vestibular nuclei: multisensory integration at the first opportunity for convergence between these two independent vestibular sensory signals. Single-unit recordings were made from the vestibular nuclei of awake macaques during yaw rotation, translation in the horizontal plane, and combinations of rotation and translation at different frequencies. The overall response magnitude to combined translation and rotation was generally less than the sum of the magnitudes of the responses to each stimulus applied independently. However, we found that under conditions in which the peaks of the rotational and translational responses were coincident, these signals were approximately additive. With presentation of rotation and translation at different frequencies, rotation was attenuated more than translation, regardless of which was at a higher frequency. These data suggest a nonlinear interaction between these two sensory modalities in the vestibular nuclei, in which coincident peak responses are proportionally stronger than other, off-peak interactions. These results are similar to those reported for other forms of multisensory integration, such as audio-visual integration in the superior colliculus. NEW & NOTEWORTHY This is the first study to systematically explore the interaction of rotational and translational signals in the vestibular nuclei through independent manipulation. The results of this study demonstrate nonlinear integration leading to maximum response amplitude when the timing and direction of peak rotational and translational responses are coincident.
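The additivity comparison reduces to checking a measured combined response against a phasor sum. A toy version with invented numbers (not the paper's data):

```python
import numpy as np

# Hypothetical single-frequency responses (spikes/s), peaks aligned in time
rotation_only = 15 * np.exp(1j * np.deg2rad(20))
translation_only = 25 * np.exp(1j * np.deg2rad(20))
combined_measured = 36 * np.exp(1j * np.deg2rad(20))

linear_prediction = rotation_only + translation_only    # 40 spikes/s if additive
print(abs(combined_measured) / abs(linear_prediction))  # 0.9 -> sub-additive
```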


2000 · Vol 84 (4) · pp. 2001–2015
Authors: L. H. Zupan, R. J. Peterka, D. M. Merfeld

Sensory systems often provide ambiguous information. Integration of various sensory cues is required for the CNS to resolve sensory ambiguity and elicit appropriate responses. The vestibular system includes two types of sensors: the semicircular canals, which measure head rotation, and the otolith organs, which measure gravito-inertial force (GIF), the sum of gravitational force and inertial force due to linear acceleration. According to Einstein's equivalence principle, gravitational force is indistinguishable from inertial force due to linear acceleration. As a consequence, otolith measurements must be supplemented with other sensory information for the CNS to distinguish tilt from translation. The GIF resolution hypothesis states that the CNS estimates gravity and linear acceleration, so that the difference between estimates of gravity and linear acceleration matches the measured GIF. Both otolith and semicircular canal cues influence this estimation of gravity and linear acceleration. The GIF resolution hypothesis predicts that inaccurate estimates of both gravity and linear acceleration can occur due to central interactions of sensory cues. The existence of specific patterns of vestibuloocular reflexes (VOR) related to these inaccurate estimates can be used to test the GIF resolution hypothesis. To investigate this hypothesis, we measured eye movements during two different protocols. In one experiment, eight subjects were rotated at a constant velocity about an earth-vertical axis and then tilted 90° in darkness to one of eight different evenly spaced final orientations, a so-called “dumping” protocol. Three speeds (200, 100, and 50°/s) and two directions, clockwise (CW) and counterclockwise (CCW), of rotation were tested. In another experiment, four subjects were rotated at a constant velocity (200°/s, CW and CCW) about an earth-horizontal axis and stopped in two different final orientations (nose-up and nose-down), a so-called “barbecue” protocol. The GIF resolution hypothesis predicts that post-rotatory horizontal VOR eye movements for both protocols should include an “induced” VOR component, compensatory to an interaural estimate of linear acceleration, even though no true interaural linear acceleration is present. The GIF resolution hypothesis accurately predicted VOR and induced VOR dependence on rotation direction, rotation speed, and head orientation. Alternative hypotheses stating that frequency segregation may discriminate tilt from translation or that the post-rotatory VOR time constant is dependent on head orientation with respect to the GIF direction did not predict the observed VOR for either experimental protocol.
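The core constraint of the GIF resolution hypothesis can be written in one line: the estimates must satisfy ĝ − â = f. The sketch below illustrates that bookkeeping under an assumed sign convention (vectors in g units; all numbers hypothetical): any mismatch between the measured GIF and the internal gravity estimate is attributed to linear acceleration, which is what produces an "induced" VOR when no true translation is present.

```python
import numpy as np

f = np.array([0.17, 0.00, 0.98])   # measured GIF (g units), hypothetical
g_hat = np.array([0.0, 0.0, 1.0])  # internal gravity estimate (e.g., canal-updated)

a_hat = g_hat - f                  # residual attributed to linear acceleration
print(a_hat)                       # [-0.17  0.    0.02]: a nonzero interaural
                                   # component, driving an "induced" VOR
```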


2018 · Vol 5 (8) · p. 180502
Authors: Roy S. Hessels, Diederick C. Niehorster, Marcus Nyström, Richard Andersson, Ignace T. C. Hooge

Eye movements have been extensively studied in a wide range of research fields. While new methods such as mobile eye tracking and eye tracking in virtual/augmented realities are emerging quickly, the eye-movement terminology has scarcely been revised. We assert that this may cause confusion about two of the main concepts: fixations and saccades. In this study, we assessed the definitions of fixations and saccades held in the eye-movement field by surveying 124 eye-movement researchers. These eye-movement researchers held a variety of definitions of fixations and saccades, of which the breadth seems even wider than what is reported in the literature. Moreover, these definitions did not seem to be related to researcher background or experience. We urge researchers to make their definitions more explicit by specifying all the relevant components of the eye movement under investigation: (i) the oculomotor component: e.g., whether the eye moves slowly or fast; (ii) the functional component: what purpose the eye movement (or lack thereof) serves; (iii) the coordinate system used: relative to what the eye moves; (iv) the computational definition: how the event is represented in the eye-tracker signal. This should enable eye-movement researchers from different fields to have a discussion without misunderstandings.
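As an example of making component (iv) explicit, here is a generic velocity-threshold classifier (a common "computational definition"; the 30°/s threshold is an arbitrary choice for illustration, not a recommendation from the paper):

```python
import numpy as np

def ivt_classify(x_deg, y_deg, fs_hz, vel_thresh=30.0):
    """Label each sample 'saccade' if 2-D eye speed exceeds vel_thresh (deg/s)."""
    vx = np.gradient(x_deg) * fs_hz   # deg/s
    vy = np.gradient(y_deg) * fs_hz
    speed = np.hypot(vx, vy)
    return np.where(speed > vel_thresh, "saccade", "fixation")

# 500-Hz trace: fixation, a 10-deg horizontal saccade, fixation
x = np.concatenate([np.zeros(50), np.linspace(0, 10, 10), np.full(50, 10.0)])
print(ivt_classify(x, np.zeros_like(x), fs_hz=500)[45:65])
```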


1995 · Vol 115 (sup520) · pp. 372–376
Authors: M. Hashiba, S. Watanabe, H. Watabe, T. Matsuoka, S. Baba, ...

2019
Authors: Jean Laurens, Dora E. Angelaki

Theories of cerebellar functions posit that the cerebellum implements forward models for online correction of motor actions and sensory estimation. As an example of such computations, a forward model compensates for a sensory ambiguity where the peripheral otolith organs in the inner ear sense both head tilts and translations. Here we exploit the response dynamics of two functionally coupled Purkinje cell types in the caudal vermis to understand their role in this computation. We find that one population encodes tilt velocity, whereas the other, translation-selective, population encodes linear acceleration. Using a dynamical model, we further show that these signals likely represent sensory prediction error for the online updating of tilt and translation estimates. These properties also reveal the need for temporal integration between the tilt-selective velocity and translation-selective acceleration population signals. We show that a simple model incorporating a biologically plausible short time constant can mediate the required temporal integration.
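A minimal version of the temporal integration described here is a leaky integrator that converts a tilt-velocity signal into a tilt estimate. The sketch below uses an assumed time constant purely for illustration; it is not the fitted model from the paper.

```python
import numpy as np

def leaky_integrate(tilt_vel, dt, tau=0.5):
    """Euler integration of d(tilt)/dt = tilt_vel - tilt/tau."""
    tilt = np.zeros_like(tilt_vel)
    for i in range(1, tilt_vel.size):
        tilt[i] = tilt[i - 1] + dt * (tilt_vel[i - 1] - tilt[i - 1] / tau)
    return tilt

dt = 0.01
t = np.arange(0.0, 3.0, dt)
tilt_vel = np.where(t < 1.0, 20.0, 0.0)     # 20 deg/s tilt for 1 s, then hold
print(leaky_integrate(tilt_vel, dt)[::50])  # rises, then decays with tau
```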


2005 · Vol 93 (6) · pp. 3418–3433
Authors: Hui Meng, Andrea M. Green, J. David Dickman, Dora E. Angelaki

Under natural conditions, the vestibular and pursuit systems work synergistically to stabilize the visual scene during movement. How translational vestibular signals [translational vestibuloocular reflex (TVOR)] are processed in the premotor pathways for slow eye movements remains a challenging question. To further our understanding of how premotor neurons contribute to this processing, we recorded neural activities from the prepositus and rostral medial vestibular nuclei in macaque monkeys. Vestibular neurons were tested during 0.5-Hz rotation and lateral translation (both with gaze stable and during VOR cancellation tasks), as well as during smooth pursuit eye movements. Data were collected at two different viewing distances, 80 and 20 cm. Based on their responses to rotation and pursuit, eye-movement–sensitive neurons were classified into position–vestibular–pause (PVP) neurons, eye–head (EH) neurons, and burst–tonic (BT) cells. We found that approximately half of the type II PVP and EH neurons with ipsilateral eye movement preference were modulated during TVOR cancellation. In contrast, few of the EH and none of the type I PVP cells with contralateral eye movement preference modulated during translation in the absence of eye movements; nor did any of the BT neurons change their firing rates during TVOR cancellation. Of the type II PVP and EH neurons that modulated during TVOR cancellation, cell firing rates increased for either ipsilateral or contralateral displacement, a property that could not be predicted on the basis of their rotational or pursuit responses. In contrast, under stable gaze conditions, all neuron types, including EH cells, were modulated during translation according to their ipsilateral/contralateral preference for pursuit eye movements. Differences in translational response sensitivities for far versus near targets were seen only in type II PVP and EH cells. There was no effect of viewing distance on response phase for any cell type. When expressed relative to motor output, neural sensitivities during translation (although not during rotation) and pursuit were equivalent, particularly for the 20-cm viewing distance. These results suggest that neural activities during the TVOR were more motorlike compared with cell responses during the rotational vestibuloocular reflex (RVOR). We also found that neural responses under stable gaze conditions could not always be predicted by a linear vectorial addition of the cell activities during pursuit and VOR cancellation. The departure from linearity was more pronounced for the TVOR under near-viewing conditions. These results extend previous observations for the neural processing of otolith signals within the premotor circuitry that generates the RVOR and smooth pursuit eye movements.
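The role of viewing distance follows from simple kinematics: for lateral translation with a fixated target, the ideal compensatory eye velocity scales inversely with target distance, so near (20 cm) viewing demands roughly four times the response of far (80 cm) viewing. A back-of-envelope check (standard geometry, not the paper's analysis):

```python
import numpy as np

def ideal_eye_velocity_deg_s(head_vel_cm_s, target_distance_cm):
    """Small-angle approximation for lateral translation with a fixated target."""
    return np.degrees(head_vel_cm_s / target_distance_cm)

for d in (80, 20):  # the two viewing distances used in the study
    print(d, ideal_eye_velocity_deg_s(10.0, d))  # 80 cm: ~7.2; 20 cm: ~28.6
```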


2019 · Vol 121 (2) · pp. 646–661
Authors: Marie E. Bellet, Joachim Bellet, Hendrikje Nienborg, Ziad M. Hafed, Philipp Berens

Saccades are ballistic eye movements that rapidly shift gaze from one location of visual space to another. Detecting saccades in eye movement recordings is important not only for studying the neural mechanisms underlying sensory, motor, and cognitive processes, but also as a clinical and diagnostic tool. However, automatically detecting saccades can be difficult, particularly when such saccades are generated in coordination with other tracking eye movements, such as smooth pursuit, or when the saccade amplitude is close to eye tracker noise levels, as with microsaccades. In such cases, labeling by human experts is required, but this is a tedious task prone to variability and error. We developed a convolutional neural network to automatically detect saccades at human-level accuracy and with minimal training examples. Our algorithm surpasses the state of the art according to common performance metrics and could facilitate studies of neurophysiological processes underlying saccade generation and visual processing. NEW & NOTEWORTHY Detecting saccades in eye movement recordings can be a difficult task, but it is a necessary first step in many applications. We present a convolutional neural network that can automatically identify saccades with human-level accuracy and with minimal training examples. We show that our algorithm performs better than other available algorithms, by comparing performance on a wide range of data sets. We offer an open-source implementation of the algorithm as well as a web service.
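For orientation, a per-sample saccade labeler can be as simple as a small 1-D convolutional network over velocity traces. The PyTorch sketch below is illustrative only; the authors' published architecture differs in detail, and no trained weights are implied.

```python
import torch
import torch.nn as nn

class SaccadeNet(nn.Module):
    """Tiny 1-D CNN that emits per-sample fixation/saccade logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=5, padding=2),  # input: x/y velocity
            nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(16, 2, kernel_size=1),             # 2 classes per sample
        )

    def forward(self, xy_vel):          # xy_vel: (batch, 2, time)
        return self.net(xy_vel)         # logits: (batch, 2, time)

x = torch.randn(1, 2, 1000)             # one trace, 1000 samples
labels = SaccadeNet()(x).argmax(dim=1)  # 0 = fixation, 1 = saccade
print(labels.shape)                     # torch.Size([1, 1000])
```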

