Reference Frames for Reach Planning in Macaque Dorsal Premotor Cortex

2007 ◽  
Vol 98 (2) ◽  
pp. 966-983 ◽  
Author(s):  
Aaron P. Batista ◽  
Gopal Santhanam ◽  
Byron M. Yu ◽  
Stephen I. Ryu ◽  
Afsheen Afshar ◽  
...  

When a human or animal reaches out to grasp an object, the brain rapidly computes a pattern of muscular contractions that can acquire the target. This computation involves a reference frame transformation because the target's position is initially available only in a visual reference frame, yet the required control signal is a set of commands to the musculature. One of the core brain areas involved in visually guided reaching is the dorsal aspect of the premotor cortex (PMd). Using chronically implanted electrode arrays in two rhesus monkeys, we studied the contributions of PMd to the reference frame transformation for reaching. PMd neurons are influenced by the locations of reach targets relative to both the arm and the eyes. Some neurons encode reach goals using limb-centered reference frames, whereas others employ eye-centered reference frames. Some cells encode reach goals in a reference frame best described by the combined position of the eyes and hand. In addition to neurons like these, for which a reference frame could be identified, PMd also contains cells that are influenced by both the eye- and limb-centered locations of reach goals but for which a distinct reference frame could not be determined. We propose two interpretations for these neurons. First, they may encode reach goals using a reference frame we did not investigate, such as an intrinsic reference frame. Second, they may not be adequately characterized by any reference frame.
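The distinction between the candidate coding schemes can be pictured with a minimal sketch: the same target expressed in eye- and limb-centered coordinates by vector subtraction. The 2-D coordinates, function names, and subtraction scheme below are illustrative assumptions, not the study's analysis code.

```python
import numpy as np

def to_reference_frames(target, eye_pos, hand_pos):
    """Express one target location in eye- and limb-centered frames.

    Positions are 2-D workspace coordinates (an illustrative assumption);
    each frame is obtained by subtracting that frame's origin.
    """
    target = np.asarray(target, dtype=float)
    return {
        "eye_centered": target - np.asarray(eye_pos, dtype=float),
        "limb_centered": target - np.asarray(hand_pos, dtype=float),
    }

# Same physical target, two candidate codes:
frames = to_reference_frames(target=(10.0, 5.0),
                             eye_pos=(4.0, 1.0),
                             hand_pos=(-2.0, 0.0))
```

A purely eye-centered cell would keep the same tuning whenever the eye-centered vector is constant, regardless of hand position; a limb-centered cell, the reverse. Dissociating gaze and initial hand position, as in the study, separates these predictions.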

2015 ◽  
Vol 114 (6) ◽  
pp. 3211-3219 ◽  
Author(s):  
J. J. Tramper ◽  
W. P. Medendorp

It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability with which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than single-frame updating mechanisms would allow.
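The reliability-weighted combination described here corresponds to standard inverse-variance (maximum-likelihood) cue fusion. The sketch below illustrates that scheme under Gaussian assumptions; it is not the authors' implementation.

```python
def combine_estimates(x_eye, var_eye, x_body, var_body):
    """Fuse eye- and body-centered estimates of one target location,
    weighting each by its reliability (inverse variance)."""
    w_eye, w_body = 1.0 / var_eye, 1.0 / var_body
    x_combined = (w_eye * x_eye + w_body * x_body) / (w_eye + w_body)
    var_combined = 1.0 / (w_eye + w_body)  # below either input variance
    return x_combined, var_combined

# A noisier eye-centered estimate pulls the fused estimate toward
# the more reliable body-centered one:
x, var = combine_estimates(x_eye=1.0, var_eye=4.0, x_body=3.0, var_body=1.0)
```

Because the fused variance is smaller than either single-frame variance, keeping both representations in register yields a more precise localization than updating in a single frame, which is the paper's central claim.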


1998 ◽  
Vol 80 (5) ◽  
pp. 2274-2294 ◽  
Author(s):  
Eliana M. Klier ◽  
J. Douglas Crawford

Klier, Eliana M. and J. Douglas Crawford. Human oculomotor system accounts for 3-D eye orientation in the visual-motor transformation for saccades. J. Neurophysiol. 80: 2274–2294, 1998. A recent theoretical investigation has demonstrated that three-dimensional (3-D) eye position dependencies in the geometry of retinal stimulation must be accounted for neurally (i.e., in a visuomotor reference frame transformation) if saccades are to be both accurate and obey Listing's law from all initial eye positions. Our goal was to determine whether the human saccade generator correctly implements this eye-to-head reference frame transformation (RFT), or if it approximates this function with a visuomotor look-up table (LT). Six head-fixed subjects participated in three experiments in complete darkness. We recorded 60° horizontal saccades between five parallel pairs of lights, over a vertical range of ±40° (experiment 1), and 30° radial saccades from a central target, with the head upright or tilted 45° clockwise/counterclockwise to induce torsional ocular counterroll, under both binocular and monocular viewing conditions (experiments 2 and 3). 3-D eye orientation and oculocentric target direction (i.e., retinal error) were computed from search coil signals in the right eye. Experiment 1: as predicted, retinal error was a nontrivial function of both target displacement in space and 3-D eye orientation (e.g., horizontally displaced targets could induce horizontal or oblique retinal errors, depending on eye position). These data were input to a 3-D visuomotor LT model, which implemented Listing's law, but predicted position-dependent errors in final gaze direction of up to 19.8°.
Actual saccades obeyed Listing's law but did not show the predicted pattern of inaccuracies in final gaze direction, i.e., the slope of actual error, as a function of predicted error, was only −0.01 ± 0.14 (compared with 0 for RFT model and 1.0 for LT model), suggesting near-perfect compensation for eye position. Experiments 2 and 3: actual directional errors from initial torsional eye positions were only a fraction of those predicted by the LT model (e.g., 32% for clockwise and 33% for counterclockwise counterroll during binocular viewing). Furthermore, any residual errors were immediately reduced when visual feedback was provided during saccades. Thus, other than sporadic miscalibrations for torsion, saccades were accurate from all 3-D eye positions. We conclude that 1) the hypothesis of a visuomotor look-up table for saccades fails to account even for saccades made directly toward visual targets, but rather, 2) the oculomotor system takes 3-D eye orientation into account in a visuomotor reference frame transformation. This transformation is probably implemented physiologically between retinotopically organized saccade centers (in cortex and superior colliculus) and the brain stem burst generator.
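Why retinal error depends on torsional eye orientation can be shown with a small sketch: counterroll rotates the retinal axes about the line of sight, so a purely horizontal target step in space lands obliquely on the retina. The sign conventions and small-angle treatment below are simplifying assumptions, not the paper's geometry in full.

```python
import numpy as np

def retinal_error(target_step_deg, torsion_deg):
    """Direction of a target step on the retina when the eye is rolled
    by torsion_deg about the line of sight.

    Small-angle sketch: torsion is treated as a pure rotation of the
    retinal axes; sign conventions are illustrative assumptions.
    """
    t = np.radians(torsion_deg)
    rotation = np.array([[np.cos(t), np.sin(t)],
                         [-np.sin(t), np.cos(t)]])
    return rotation @ np.asarray(target_step_deg, dtype=float)

# 30 deg horizontal target step viewed with 45 deg of torsional
# counterroll: the retinal error acquires a large vertical component.
err = retinal_error([30.0, 0.0], 45.0)
```

A look-up table that ignores torsion would read this oblique retinal vector as an oblique goal in space; compensating for it requires exactly the 3-D reference frame transformation the study argues for.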


2019 ◽  
Vol 121 (6) ◽  
pp. 2392-2400 ◽  
Author(s):  
Romy S. Bakker ◽  
Luc P. J. Selen ◽  
W. Pieter Medendorp

In daily life, we frequently reach toward objects while our body is in motion. We have recently shown that body accelerations influence the decision of which hand to use for the reach, possibly by modulating the body-centered computations of the expected reach costs. However, head orientation relative to the body was not manipulated, and hence it remains unclear whether vestibular signals contribute to these cost calculations in their head-based sensory frame or in a transformed, body-centered reference frame. To test this, subjects performed a preferential reaching task to targets at various directions while they were sinusoidally translated along the lateral body axis, with their head either aligned with the body (straight ahead) or rotated 18° to the left. As a measure of hand preference, we determined the target direction that resulted in equiprobable right/left-hand choices. Results show that head orientation affects this balanced target angle when the body is stationary but does not further modulate hand preference when the body is in motion. Furthermore, reaction and movement times were longer for reaches to the balanced target angle, resembling a competitive selection process, and were modulated by head orientation when the body was stationary. During body translation, reaction and movement times depended on the phase of the motion, but this phase-dependent modulation did not interact with head orientation. We conclude that the brain transforms vestibular signals to body-centered coordinates at the early stage of reach planning, when the decision of hand choice is computed. NEW & NOTEWORTHY The brain takes inertial acceleration into account in computing the anticipated biomechanical costs that guide hand selection during whole body motion. Whereas these costs are defined in a body-centered, muscle-based reference frame, the otoliths detect the inertial acceleration in head-centered coordinates.
By systematically manipulating head position relative to the body, we show that the brain transforms otolith signals into body-centered coordinates at an early stage of reach planning, i.e., before the decision of hand choice is computed.


1998 ◽  
Vol 80 (3) ◽  
pp. 1132-1150 ◽  
Author(s):  
Driss Boussaoud ◽  
Christophe Jouffrais ◽  
Frank Bremmer

Boussaoud, Driss, Christophe Jouffrais, and Frank Bremmer. Eye position effects on the neuronal activity of dorsal premotor cortex in the macaque monkey. J. Neurophysiol. 80: 1132–1150, 1998. Visual inputs to the brain are mapped in a retinocentric reference frame, but the motor system plans movements in a body-centered frame. This basic observation implies that the brain must transform target coordinates from one reference frame to another. Physiological studies revealed that the posterior parietal cortex may contribute a large part of such a transformation, but the question remains as to whether the premotor areas receive visual information from the parietal cortex already coded in body-centered coordinates. To answer this question, we studied dorsal premotor cortex (PMd) neurons in two monkeys while they performed a conditional visuomotor task and maintained fixation at different gaze angles. Visual stimuli were presented on a video monitor, and the monkeys made limb movements on a panel of three touch pads located at the bottom of the monitor. A trial began when the monkey put its hand on the central pad. Later in the trial, a colored cue instructed a limb movement to the left touch pad if red or to the right one if green. The cues lasted for a variable delay, the instructed delay period, and their offset served as the go signal. The fixation spot was presented at the center of the screen or at one of four peripheral locations. Because the monkey's head was restrained, peripheral fixations caused a deviation of the eyes within the orbit, but for each fixation angle, the instructional cue was presented at nine locations with constant retinocentric coordinates.
After the presentation of the instructional cue, 133 PMd cells displayed a phasic discharge (signal-related activity), 157 were tonically active during the instructed delay period (set-related or preparatory activity), and 104 were active after the go signal in relation to movement (movement-related activity). A large proportion of cells showed variations of the discharge rate in relation to limb movement direction, but only modest proportions were sensitive to the cue's location (signal, 43%; set, 34%; movement, 29%). More importantly, the activity of most neurons (signal, 74%; set, 79%; movement, 79%) varied significantly (analysis of variance, P < 0.05) with orbital eye position. A regression analysis showed that the neuronal activity varied linearly with eye position along the horizontal and vertical axes and can be approximated by a two-dimensional regression plane. These data provide evidence that eye position signals modulate the neuronal activity beyond sensory areas, including those involved in visually guided reaching limb movements. Further, they show that neuronal activity related to movement preparation and execution combines at least two directional parameters: arm movement direction and gaze direction in space. It is suggested that a substantial population of PMd cells codes limb movement direction in a head-centered reference frame.
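The two-dimensional regression plane reported here can be sketched as an ordinary least-squares fit of firing rate on horizontal and vertical eye position. The data, gains, and variable names below are synthetic and illustrative, not taken from the study.

```python
import numpy as np

def fit_eye_position_plane(eye_x, eye_y, rate):
    """Least-squares fit of rate ~ b0 + bx*eye_x + by*eye_y.

    Returns (b0, bx, by): baseline rate plus gains (spikes/s per
    degree of horizontal and vertical eye position).
    """
    X = np.column_stack([np.ones(len(eye_x)), eye_x, eye_y])
    coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
    return coef

# Synthetic cell gaining 0.5 sp/s per degree of horizontal eye
# position and losing 0.3 sp/s per degree of vertical eye position:
rng = np.random.default_rng(1)
ex = rng.uniform(-20.0, 20.0, 45)
ey = rng.uniform(-20.0, 20.0, 45)
coef = fit_eye_position_plane(ex, ey, 20.0 + 0.5 * ex - 0.3 * ey)
```

A significantly nonzero (bx, by) is the planar eye-position modulation the analysis of variance and regression identified in most PMd cells.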


1991 ◽  
Vol 65 (1-4) ◽  
pp. 1107-1111 ◽  
Author(s):  
Tanya M. Riseman ◽  
Jess H. Brewer

2015 ◽  
Vol 62 (3) ◽  
pp. 1912-1920 ◽  
Author(s):  
Fabio Immovilli ◽  
Claudio Bianchini ◽  
Emilio Lorenzani ◽  
Alberto Bellini ◽  
Emanuele Fornasiero

Author(s):  
Sadegh Vaez-Zadeh

This chapter presents dynamic and steady-state modeling of permanent magnet synchronous (PMS) machines with the help of reference frames. The modeling starts with a machine model in terms of phase variables. An equivalent two-axis model in a stationary reference frame is then obtained by a reference frame transformation. A further transformation to a two-axis rotor reference frame, with its direct axis aligned with the axis of a permanent magnet rotor pole, is derived. Another transformation to a two-axis stator flux linkage reference frame is also presented. Finally, a motor model in polar coordinates, based on space vector theory, is developed. In this chapter, PMS motor equivalent circuits are drawn, based on the mathematical models where appropriate. Iron losses and iron saturation are also incorporated into the models. The chapter ends with a brief presentation of the dynamic equation of the mechanical parts of PMS machines.
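The chain of transformations described in the chapter, from phase variables to a stationary two-axis frame and then to the rotor reference frame, can be sketched numerically with the standard Clarke and Park transforms. The amplitude-invariant scaling and variable names below are common illustrative choices, not necessarily those used in the text.

```python
import numpy as np

def abc_to_dq(i_a, i_b, i_c, theta):
    """Phase variables -> rotor (dq) reference frame.

    Step 1: Clarke transform to the stationary two-axis (alpha-beta)
    frame, with amplitude-invariant (2/3) scaling.
    Step 2: Park rotation by the rotor electrical angle theta, which
    aligns the direct axis with a permanent magnet rotor pole.
    """
    i_alpha = (2.0 / 3.0) * (i_a - 0.5 * i_b - 0.5 * i_c)
    i_beta = (1.0 / np.sqrt(3.0)) * (i_b - i_c)
    i_d = i_alpha * np.cos(theta) + i_beta * np.sin(theta)
    i_q = -i_alpha * np.sin(theta) + i_beta * np.cos(theta)
    return i_d, i_q

# Balanced sinusoidal phase currents become constants in the rotor
# frame, which is what makes dq models convenient for analysis:
theta = 0.7
i_d, i_q = abc_to_dq(np.cos(theta),
                     np.cos(theta - 2.0 * np.pi / 3.0),
                     np.cos(theta + 2.0 * np.pi / 3.0),
                     theta)
```

With this balanced set, the entire sinusoidal variation is absorbed by the rotating frame: i_d is constant and i_q is zero, mirroring how the chapter's two-axis rotor model turns AC quantities into DC ones.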

