Behavioral Reference Frames for Planning Human Reaching Movements

2006 ◽ Vol 96 (1) ◽ pp. 352-362 ◽ Author(s): Sabine M. Beurze, Stan Van Pelt, W. Pieter Medendorp

At some stage in the process of a sensorimotor transformation for a reaching movement, information about the current position of the hand and information about the location of the target must be encoded in the same frame of reference to compute the hand-to-target difference vector. Two main hypotheses have been proposed regarding this reference frame: an eye-centered and a body-centered frame. Here we evaluated these hypotheses using the pointing errors that subjects made when planning and executing arm movements to memorized targets starting from various initial hand positions while keeping gaze fixed in various directions. One group of subjects (n = 10) was tested without visual information about hand position during movement planning (unseen-hand condition); another group (n = 8) was tested with hand and target position simultaneously visible before movement onset (seen-hand condition). We found that both initial hand position and gaze fixation direction had a significant effect on the magnitude and direction of the pointing error. Errors were significantly smaller in the seen-hand condition. For both conditions, though, a reference frame analysis showed that the errors arose at an eye- or hand-centered stage or both, but not at a body-centered stage. Because a common reference frame is required to specify a movement vector, these results suggest that an eye-centered mechanism is involved in integrating target and hand position when programming reaching movements. We discuss how simple gain elements modulating the eye-centered target and hand-position signals can account for these results.
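
To make the proposed computation concrete, here is a minimal numerical sketch (our illustration, not the authors' model) of forming a hand-to-target difference vector in an eye-centered frame, with hypothetical gain elements scaling the eye-centered target and hand signals. All positions, gain values, and names are assumptions.

```python
import numpy as np

def difference_vector_eye_centered(target_body, hand_body, gaze_body,
                                   g_target=1.0, g_hand=1.0):
    """Sketch: movement vector computed in an eye-centered frame.

    Inputs are 2-D positions in a body-centered frame (arbitrary units).
    Re-expressing them in an eye-centered frame is reduced here to
    subtracting the gaze fixation point; g_target and g_hand are
    illustrative gain elements modulating each eye-centered signal.
    """
    target_eye = g_target * (np.asarray(target_body) - np.asarray(gaze_body))
    hand_eye = g_hand * (np.asarray(hand_body) - np.asarray(gaze_body))
    return target_eye - hand_eye  # hand-to-target vector, eye-centered

# With unequal gains, the planned vector (and hence the pointing error)
# depends on both gaze direction and initial hand position, qualitatively
# matching the error pattern reported above.
print(difference_vector_eye_centered((20.0, 30.0), (0.0, 10.0),
                                     (-10.0, 0.0), g_target=1.05, g_hand=0.95))
```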

2013 ◽ Vol 26 (5) ◽ pp. 465-482 ◽ Author(s): Michelle L. Cadieux, David I. Shore

Performance on tactile temporal order judgments (TOJs) is impaired when the hands are crossed over the midline. The cause of this effect appears to be tied to the use of an external reference frame, most likely based on visual information. Across three experiments, we measured how degrading the external reference frame, through restriction of visual information, affects the crossed-hands deficit. Experiments 1 and 2 examined three visual conditions (eyes open–lights on, eyes open–lights off, and eyes closed–lights off) while manipulating response demands; visual condition had no effect. In Experiment 3, response demands were altered to be maximally tied to the internal reference frame, and only two visual conditions were tested: eyes open–lights on and eyes closed–lights off. Here, blindfolded participants showed a reduced crossed-hands deficit. Results are discussed in terms of the time needed to recode stimuli from an internal to an external reference frame and the role of conflict between these two reference frames in causing the deficit.
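
For readers unfamiliar with how such a deficit is quantified, the sketch below fits a cumulative Gaussian psychometric function to hypothetical TOJ data (a standard psychophysical analysis, not the authors' code) and compares the just-noticeable difference (JND) between postures; all SOAs and response proportions are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(soa, mu, sigma):
    """Probability of responding 'right hand first' at a given SOA (ms)."""
    return norm.cdf(soa, loc=mu, scale=sigma)

soas = np.array([-200, -90, -55, -30, 30, 55, 90, 200])  # hypothetical SOAs
p_uncrossed = np.array([0.02, 0.10, 0.25, 0.40, 0.62, 0.78, 0.92, 0.99])
p_crossed = np.array([0.15, 0.28, 0.38, 0.45, 0.55, 0.63, 0.74, 0.86])

(_, jnd_u), _ = curve_fit(cum_gauss, soas, p_uncrossed, p0=(0.0, 50.0))
(_, jnd_c), _ = curve_fit(cum_gauss, soas, p_crossed, p0=(0.0, 50.0))
# A larger crossed JND relative to the uncrossed one is the deficit.
print(f"JND uncrossed ~{jnd_u:.0f} ms, crossed ~{jnd_c:.0f} ms, "
      f"ratio ~{jnd_c / jnd_u:.1f}")
```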


2021 ◽ Vol 14 ◽ Author(s): Charlotte Doussot, Olivier J. N. Bertrand, Martin Egelhaaf

Bumblebees perform complex flight maneuvers around the barely visible entrance of their nest upon their first departures. During these flights, bees learn visual information about the surroundings, possibly including its spatial layout, and they rely on this information to return home. Depth information can be derived from the apparent motion of the scenery on the bees' retina. This motion is shaped by the animal's flight and orientation: bees employ a saccadic flight and gaze strategy, in which rapid turns of the head (saccades) alternate with flight segments of apparently constant gaze direction (intersaccades). When the gaze direction is kept relatively constant during an intersaccade, the apparent motion contains information about the animal's distance to environmental objects, i.e., depth in an egocentric reference frame. Alternatively, when the gaze direction rotates around a fixed point in space, the animal perceives the depth structure relative to this pivot point, i.e., in an allocentric reference frame; if the pivot point is at the nest hole, the information is nest-centric. Here, we investigate in which reference frames bumblebees perceive depth information during their learning flights. By precisely tracking head orientation, we found that the head appears to pivot actively about half of the time; however, only a few of the corresponding pivot points lie close to the nest entrance. Our results indicate that bumblebees perceive visual information in several reference frames while they learn about the surroundings of a behaviorally relevant location.
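
The pivot-point analysis can be sketched as a least-squares line-intersection problem (our simplified 2-D illustration, not the authors' tracking pipeline): if the head rotates about a fixed point, gaze lines from successive frames should nearly intersect at it. All positions and directions below are toy values.

```python
import numpy as np

def estimate_pivot(points, directions):
    """Least-squares intersection of 2-D gaze lines.

    Each line passes through a head position p_i with unit gaze direction
    d_i; the pivot minimizes the summed squared distances to all lines,
    i.e., it solves  sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) p_i.
    """
    points = np.asarray(points, dtype=float)
    dirs = np.asarray(directions, dtype=float)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    dim = points.shape[1]
    A = np.zeros((dim, dim))
    b = np.zeros(dim)
    for p, d in zip(points, dirs):
        proj = np.eye(dim) - np.outer(d, d)  # projector orthogonal to the line
        A += proj
        b += proj @ p
    return np.linalg.solve(A, b)

# Toy example: three head positions, all gazing at a nest hole at (0, 0);
# the recovered pivot is (approximately) nest-centric.
print(estimate_pivot([(3, 1), (-2, 2), (1, -3)],
                     [(-3, -1), (2, -2), (-1, 3)]))
```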


1987 ◽ Vol 39 (3) ◽ pp. 541-559 ◽ Author(s): Digby Elliott, John Madalena

Three experiments were conducted to determine whether a visual representation of the movement environment, useful for movement control, persists after visual occlusion. In Experiment 1, subjects moved a stylus to small targets in five different visual conditions. As in other studies (e.g. Elliott and Allard, 1985), subjects moved to the targets in a condition involving full visual information (lights on) and a condition in which the lights were extinguished upon movement initiation (lights off). Subjects also pointed to the targets under conditions in which the lights went off 2, 5 and 10 sec prior to movement initiation. While the typical lights-on/lights-off differences in accuracy were obtained in this experiment (Keele and Posner, 1968), the more striking finding was the influence of the pointing delay on movement accuracy: subjects exhibited a twofold increase in pointing error after only 2 sec of visual occlusion prior to movement initiation. In Experiment 2, we replicated this 2-sec pointing delay effect with a between-subjects design, providing evidence that the results of Experiment 1 were not due to asymmetrical transfer effects. In a third experiment, the delay effect was reduced by making the target position visible in all lights-off situations. Together, the findings provide evidence for a brief (<2 sec) visual representation of the environment that is useful in the control of aiming movements.


1995 ◽ Vol 73 (1) ◽ pp. 361-372 ◽ Author(s): C. Ghez, J. Gordon, M. F. Ghilardi

1. The aim of this study was to determine how vision of a cursor indicating hand position on a computer screen, or vision of the limb itself, improves the accuracy of reaching movements in patients deprived of limb proprioception by large-fiber sensory neuropathy. In particular, we wished to ascertain the contribution of such information to improved planning rather than to feedback corrections. We analyzed spatial errors and hand trajectories of reaching movements made by subjects moving a hand-held cursor on a digitizing tablet while viewing targets displayed on a computer screen. The errors made when movements were performed without vision of the arm or of a screen cursor were compared with errors made when this information was available concurrently or prior to movement. 2. Both monitoring the screen cursor and seeing the limb in peripheral vision during movement improved the accuracy of the patients' movements. Improvements produced by seeing the cursor during movement are attributable simply to feedback corrections. However, because the target was not present in the actual workspace, improvements associated with vision of the limb must involve more complex corrective mechanisms. 3. Significant improvements in performance also occurred in trials without vision that were performed after viewing the limb at rest or in motion. In particular, prior vision of the limb in motion improved the patients' ability to vary the duration of movements in different directions so as to compensate for the inertial anisotropy of the limb. In addition, there were significant reductions in directional errors, path curvature, and late secondary movements. Comparable improvements in extent, direction, and curvature were produced when subjects could see the screen cursor during alternate movements to targets in different directions. 4. The effects of viewing the limb were transient and decayed within minutes once vision of the limb was no longer available. 5. We propose that the improvements in performance produced after vision of the limb were mediated by visual updating of internal models of the limb. Vision of the limb at rest may provide configuration information, while vision of the limb in motion provides additional dynamic information. Vision of the cursor, and the resulting ability to correct ongoing movements, is considered primarily to provide information about the dynamic properties of the limb and its response to neural commands.


2012 ◽ Vol 107 (12) ◽ pp. 3433-3445 ◽ Author(s): Alessandra Sciutti, Laurent Demougeot, Bastien Berret, Simone Toma, Giulio Sandini, ...

When submitted to a visuomotor rotation, subjects show rapid adaptation of visually guided arm reaching movements, indicated by a progressive reduction in reaching errors. In this study, we took a step forward by investigating to what extent this adaptation also implies changes in the motor plan. Until now, classical visuomotor rotation paradigms have been performed in the horizontal plane, where the reaching motor plan generally requires the same kinematics (i.e., a straight path and a symmetric velocity profile). To overcome this limitation, we considered vertical and horizontal movement directions requiring direction-specific velocity profiles. This way, a change in the motor plan due to the visuomotor conflict would be measurable as a modification of the velocity profile of the reaching movement. Ten subjects performed horizontal and vertical reaching movements while observing a rotated visual feedback of their motion. We found that adaptation to a visuomotor rotation produces a significant change in the motor plan, i.e., a change in the symmetry of the velocity profile. This suggests that the central nervous system takes visual information into account when planning a future motion, even if this causes the adoption of motor plans that are nonoptimal in terms of energy consumption. However, the influence of vision on arm movement planning is not fixed, but changes as a function of the visual orientation of the movement: a clear influence on motion planning was observed only when the movement was visually presented as oriented along the vertical direction. Thus vision contributes differently to the planning of arm pointing movements depending on motion orientation in space.
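
A minimal sketch of the manipulation and readout (our illustration under stated assumptions, not the authors' apparatus code): the cursor shows the hand rotated about the start position, and a change in the motor plan is read out as a shift in the velocity profile's symmetry, i.e., the relative time of peak speed.

```python
import numpy as np

def rotate_feedback(hand_xy, start_xy, theta_deg):
    """Cursor position: hand position rotated about the start point."""
    th = np.deg2rad(theta_deg)
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return np.asarray(start_xy) + R @ (np.asarray(hand_xy) - np.asarray(start_xy))

def symmetry_ratio(speed, t):
    """Relative time of peak speed: 0.5 = symmetric, <0.5 = early peak."""
    return (t[np.argmax(speed)] - t[0]) / (t[-1] - t[0])

t = np.linspace(0.0, 1.0, 101)          # normalized movement time
speed = np.sin(np.pi * t**0.8)          # hypothetical early-peaked profile
print(rotate_feedback((0.0, 10.0), (0.0, 0.0), 90.0))  # upward reach shown rotated
print(f"symmetry ratio = {symmetry_ratio(speed, t):.2f}")
```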


2005 ◽ Vol 94 (4) ◽ pp. 2331-2352 ◽ Author(s): O'Dhaniel A. Mullette-Gillman, Yale E. Cohen, Jennifer M. Groh

The integration of visual and auditory events is thought to require a joint representation of visual and auditory space in a common reference frame. We investigated the coding of visual and auditory space in the lateral and medial intraparietal areas (LIP, MIP) as a candidate for such a representation. We recorded the activity of 275 neurons in LIP and MIP of two monkeys while they performed saccades to a row of visual and auditory targets from three different eye positions. We found 45% of these neurons to be modulated by the locations of visual targets, 19% by auditory targets, and 9% by both visual and auditory targets. The reference frames of both visual and auditory receptive fields ranged along a continuum between eye- and head-centered: ∼10% of auditory and 33% of visual neurons had receptive fields more consistent with an eye- than a head-centered frame of reference, and 23% and 18%, respectively, had receptive fields more consistent with a head- than an eye-centered frame, leaving a large fraction of both visual and auditory response patterns inconsistent with either reference frame. These results resemble the reference frame we have previously found for auditory stimuli in the inferior colliculus and core auditory cortex. The correspondence between the visual and auditory receptive fields of individual neurons was weak. Nevertheless, the visual and auditory responses were sufficiently well correlated that a simple one-layer network constructed to calculate target location from the activity of the neurons in our sample performed successfully for auditory targets even though its weights were fit based only on the visual responses. We interpret these results as suggesting that although the representations of space in areas LIP and MIP are not easily described within the conventional conceptual framework of reference frames, they nevertheless process visual and auditory spatial information in a similar fashion.
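
The cross-modal readout test can be illustrated with synthetic data (our sketch; the population size, tuning, and noise values below are invented): fit a one-layer linear readout of target location on visual trials only, then apply the same weights to auditory trials.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 275, 400
locations = rng.uniform(-24, 24, n_trials)   # target azimuths (deg), invented
tuning = rng.normal(size=n_neurons)          # shared spatial tuning, invented

# Hypothetical population responses: auditory responses share the visual
# spatial tuning but more weakly and with more noise.
visual = np.outer(locations, tuning) + rng.normal(0, 5, (n_trials, n_neurons))
auditory = 0.8 * np.outer(locations, tuning) + rng.normal(0, 8, (n_trials, n_neurons))

w, *_ = np.linalg.lstsq(visual, locations, rcond=None)  # weights fit on visual only
pred = auditory @ w                                     # applied to auditory trials
r = np.corrcoef(pred, locations)[0, 1]
print(f"auditory readout vs. true location: r = {r:.2f}")
```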


2021 ◽ Vol 11 (1) ◽ Author(s): Kaian Unwalla, Michelle L. Cadieux, David I. Shore

Accurate localization of touch requires the integration of two reference frames: an internal (e.g., anatomical) and an external (e.g., spatial) one. Using a tactile temporal order judgement task with the hands crossed over the midline, we investigated the integration of these two reference frames. We manipulated the reliability of the visual and vestibular information, both of which contribute to the external reference frame. Visual information was manipulated between experiments (Experiment 1 was run with full vision, Experiment 2 with participants wearing a blindfold). Vestibular information was manipulated within both experiments by having the two groups of participants complete the task both in an upright posture and while lying on their side. Using a Bayesian hierarchical model, we estimated the perceptual weight applied to each reference frame. Having participants lie on their side reduced the weight applied to the external reference frame and produced a smaller crossed-hands deficit; blindfolding resulted in similar reductions. These findings reinforce the importance of the visual system when weighting tactile reference frames and highlight the role of the vestibular system in this integration.
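
The logic of the weighting analysis can be sketched with a standard reliability-weighted integration rule (our simplification, not the paper's Bayesian hierarchical model; all reliability values are hypothetical): degrading visual or vestibular input lowers the external frame's reliability and hence its weight.

```python
def external_weight(rel_internal: float, rel_external: float) -> float:
    """Normalized-reliability weight on the external reference frame.

    Reliability is inverse variance; with Gaussian cues, the optimal
    weight on a cue is its reliability divided by the summed reliability.
    """
    return rel_external / (rel_internal + rel_external)

# Hypothetical reliabilities: blindfolding and lying on the side each
# degrade the external estimate, shrinking its weight and thus the
# predicted crossed-hands deficit.
for label, rel_ext in [("upright, full vision", 4.0),
                       ("blindfolded", 2.0),
                       ("lying on side", 1.5)]:
    print(f"{label}: external weight = {external_weight(3.0, rel_ext):.2f}")
```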


2018 ◽ Vol 15 (3) ◽ pp. 229-236 ◽ Author(s): Gennaro Ruggiero, Alessandro Iavarone, Tina Iachini

Objective: Deficits in egocentric (subject-to-object) and allocentric (object-to-object) spatial representations, with a mainly allocentric impairment, characterize the first stages of Alzheimer's disease (AD). Methods: To identify early cognitive signs of conversion to AD, some studies have focused on amnestic Mild Cognitive Impairment (aMCI), reporting alterations in both reference frames, especially the allocentric one. However, moving through spatial environments requires the cooperation of both reference frames: we constantly switch from allocentric to egocentric frames and vice versa. This raises the question of whether alterations of switching abilities might also represent an early cognitive marker of AD, potentially suitable for detecting the conversion from aMCI to dementia. Here, we compared AD and aMCI patients with Normal Controls (NC) on the Ego-Allo-Switching spatial memory task. The task assessed the capacity to use switching (Ego-Allo, Allo-Ego) and non-switching (Ego-Ego, Allo-Allo) verbal judgments about relative distances between memorized stimuli. Results: The novel finding of this study is the clear impairment shown by aMCI and AD patients in switching from allocentric to egocentric reference frames. Interestingly, in aMCI the allocentric deficit appeared attenuated when the first reference frame was egocentric. Conclusion: This led us to conclude that allocentric deficits are not always clinically detectable in aMCI, since the impairment can be masked when the first reference frame is body-centred. In addition, AD and aMCI patients also showed allocentric deficits in the non-switching condition. These findings suggest that switching alterations may emerge from impairments in hippocampal and posteromedial areas and from concurrent dysregulation of the locus coeruleus-noradrenaline system or prefrontal cortex.


Author(s): Steven M. Weisberg, Anjan Chatterjee

Background: Reference frames ground spatial communication by mapping ambiguous language (for example, in navigation: “to the left”) to properties of the speaker (using a Relative reference frame: “to my left”) or of the world (using an Absolute reference frame: “to the north”). People's preferences for reference frames vary with factors such as culture, the specific task at hand, and individual differences. Although most people are proficient with both reference frames, it is unknown whether reference frame preference is stable within a person or varies with the spatial domain. These alternatives are difficult to adjudicate because navigation is one of the few spatial domains that can naturally be solved using multiple reference frames: while navigation directions can be specified in Absolute or Relative terms (“go north” vs “go left”), other spatial domains predominantly use Relative reference frames. Here, we used two domains to test the stability of reference frame preference: one based on navigating a four-way intersection and the other based on the sport of ultimate frisbee. We recruited 58 ultimate frisbee players to complete an online experiment and measured reaction time and accuracy while participants solved spatial problems in each domain, using verbal prompts containing either Relative or Absolute reference frames. Details of the task in the two domains were kept as similar as possible while remaining ecologically plausible, so that reference frame preference could emerge. Results: We pre-registered the prediction that participants would be faster using their preferred reference frame type and that this advantage would correlate across domains; we did not find such a correlation. Instead, the data reveal that people use distinct reference frames in each domain. Conclusion: This experiment shows that spatial reference frame preferences are not stable and may be differentially suited to specific domains. The finding has broad implications for communicating spatial information, offering an important consideration for how spatial reference frames are used in communication: task constraints may affect reference frame choice as much as individual factors or culture do.
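
The core asymmetry between the frames can be made concrete with a toy mapping (our example, not the study's task code): interpreting a Relative instruction requires the agent's heading, whereas an Absolute instruction does not.

```python
CARDINALS = ["north", "east", "south", "west"]
TURNS = {"straight": 0, "right": 1, "back": 2, "left": 3}

def relative_to_absolute(heading: str, instruction: str) -> str:
    """Resolve a Relative instruction into an Absolute direction.

    'left' while facing east is north; the Absolute frame ("go north")
    needs no such anchoring to the speaker's heading.
    """
    return CARDINALS[(CARDINALS.index(heading) + TURNS[instruction]) % 4]

print(relative_to_absolute("east", "left"))    # north
print(relative_to_absolute("south", "right"))  # west
```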


2021 ◽ pp. 003151252199304 ◽ Author(s): David Phillips, Albena Zahariev, Andrew Karduna

Joint position sense (JPS) is commonly evaluated using an angle replication protocol with vision occluded. However, multiple sources of sensory information, not just proprioception, are integrated when moving limbs accurately. The purpose of this study was to examine how the availability of vision affects an active JPS protocol at the shoulder. Specifically, the effects of four vision conditions were examined at three target shoulder elevation angles (50°, 70°, and 90°): vision occluded continuously (P-P); vision available continuously (VP-VP); vision occluded only during target memorization (P-VP); and vision occluded only during target position replication (VP-P). There were 18 participants (M age = 21 years, SD = 1). We used separate repeated-measures ANOVAs to examine the effects of condition and target angle on participants' absolute error (AE, a measure of accuracy) and constant error (CE, a measure of directional bias). We found a significant main effect of condition and of angle for both dependent variables (p < 0.01), and follow-up analysis indicated that participants were most accurate in the VP-VP condition and least accurate in the P-VP condition. Further follow-up analysis showed that accuracy improved at higher target elevation angles, consistent with previous research. Constant error results were similar, with a prominent tendency to overshoot the target. Unsurprisingly, participants performed best at the angle replication protocol with their eyes open. However, while accuracy suffered when vision was occluded only during target memorization (P-VP), it was preserved when vision was occluded only during target replication (VP-P). This finding may indicate an accuracy cost due to noise introduced when transforming sensory information from a proprioceptive reference frame into a visual reference frame.
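
For concreteness, the two dependent measures can be computed as follows (standard definitions applied to invented replication angles, not the study's data):

```python
import numpy as np

def joint_position_errors(replications_deg, target_deg):
    """Absolute error (accuracy) and constant error (directional bias)."""
    errors = np.asarray(replications_deg, dtype=float) - target_deg
    ae = np.abs(errors).mean()  # mean unsigned deviation from the target
    ce = errors.mean()          # signed mean; positive values = overshoot
    return ae, ce

ae, ce = joint_position_errors([52.0, 54.5, 51.0, 55.5], target_deg=50.0)
print(f"AE = {ae:.1f} deg, CE = {ce:+.1f} deg")  # both positive: overshoot
```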

