allocentric frame
Recently Published Documents

TOTAL DOCUMENTS: 8 (FIVE YEARS: 1)
H-INDEX: 3 (FIVE YEARS: 0)

Author(s): Alice Bollini, Davide Esposito, Claudio Campus, Monica Gori

Abstract: The human brain creates a representation of the external world based on magnitude judgments, estimating distance, numerosity, or size. Magnitude and spatial representation are hypothesized to rely on common mechanisms shared across sensory modalities. We explored the relationship between magnitude and spatial representation using two different sensory systems, hypothesizing that space and magnitude interact differently depending on the sensory modality. We also aimed to understand the role of the spatial reference frame in magnitude representation. We investigated these processes using stimulus–response compatibility (SRC), which assumes that performance improves when stimulus and response share common features, and designed auditory and tactile SRC tasks with conflicting spatial and magnitude mappings. Our results showed that sensory modality modulates the relationship between space and magnitude: in the tactile task, magnitude congruency had a larger effect than spatial congruency, whereas in the auditory task magnitude and space carried similar weight, with neither spatial congruency nor magnitude congruency having a significant effect. Moreover, the spatial reference frame activated during the tasks depended on the sensory input. In the tactile task, participants' performance reversed between uncrossed and crossed hand postures, suggesting an internal coordinate system; in the auditory task, crossing the hands did not alter performance, consistent with an allocentric frame of reference. Overall, these results suggest that the interaction between space and magnitude differs between the auditory and tactile modalities, supporting the idea that these sensory modalities use different magnitude and spatial representation mechanisms.
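For readers unfamiliar with SRC designs, the congruency effect is usually quantified as the reaction-time cost of incongruent relative to congruent trials. A minimal sketch with invented reaction times (the numbers and design details below are hypothetical, not the paper's data):

```python
# Sketch of how an SRC (congruency) effect is typically quantified:
# mean RT on incongruent trials minus mean RT on congruent trials.
# All values are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reaction times in ms: congruent trials tend to be faster.
rt_congruent = rng.normal(loc=450, scale=40, size=200)
rt_incongruent = rng.normal(loc=490, scale=40, size=200)

src_effect = rt_incongruent.mean() - rt_congruent.mean()
print(f"SRC (congruency) effect: {src_effect:.1f} ms")
```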



2019
Author(s): James Negen, Laura Bird, Marko Nardini

Abstract: After becoming disoriented, an organism must use the local environment to reorient and recover vectors to important locations. How this happens has been extensively debated. A recent theory, Adaptive Combination, suggests that information from different spatial cues is combined with Bayesian efficiency. To test this further, we modified the standard reorientation paradigm to be more amenable to Bayesian cue-combination analyses while still requiring reorientation, still requiring participants to recall goal locations from memory, and focusing on situations that require the allocentric (world-based, not egocentric) frame. Twelve adults and 20 children aged 5-7 years were asked to recall locations in a virtual environment after a disorientation. They could use a pair of landmarks at the North and South, a pair at the East and West, or both. Results were not consistent with Adaptive Combination; instead, they are consistent with use of the single most useful (nearest) landmark in isolation. We term this Adaptive Selection. Experiment 2 suggests that adults also use the Adaptive Selection method when they are not disoriented but are still required to use a local allocentric frame. This suggests that recalling a location in the allocentric frame is typically guided by the single most useful landmark rather than by a Bayesian combination of landmarks, regardless of whether use of the allocentric frame is forced by disorientation or by another method. These failures to benefit from a Bayesian strategy accord with the broad idea that there are important limits to Bayesian theories of cognition, particularly for complex tasks such as allocentric recall.
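The contrast between the two models can be made concrete with the standard maximum-likelihood cue-combination result: combining two independent cues with variances s1² and s2² yields a combined variance of s1²s2²/(s1² + s2²), always below the better single cue, whereas selection predicts performance equal to the best single cue. A minimal numerical sketch, with made-up cue reliabilities standing in for the two landmark pairs:

```python
# Contrast the two models' predictions for response variance.
# Bayesian (inverse-variance-weighted, maximum-likelihood) combination of two
# independent cues beats the better single cue; Adaptive Selection instead
# predicts variance equal to the best single cue. Numbers are illustrative.
def combined_variance(var1: float, var2: float) -> float:
    """Variance of the inverse-variance-weighted (ML) combined estimate."""
    return (var1 * var2) / (var1 + var2)

var_north_south = 4.0  # hypothetical error variance using one landmark pair
var_east_west = 9.0    # hypothetical error variance using the other pair

print("Adaptive Combination:", combined_variance(var_north_south, var_east_west))  # ~2.77
print("Adaptive Selection:  ", min(var_north_south, var_east_west))                # 4.0
```

Because the combined variance is strictly smaller than either single-cue variance, finding no precision benefit from two landmarks over the better one is evidence against combination and for selection.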



2018
Author(s): Andrew D Wilson, Shaochen Huang, Qin Zhu, Geoffrey P Bingham

The stability of coordinated rhythmic movements is primarily affected by the target relative phase. Relative phase can be defined in each of two frames of reference (an external, allocentric frame and a body-centred, egocentric frame), and both constrain stability. In the allocentric frame, coordinations that involve isodirectional movement (0° mean relative phase) are more stable than those that do not. In the egocentric frame, coordinations that involve simultaneous use of homologous muscles (in-phase) are more stable than those that do not. The origin of the allocentric constraint is the visual perception of relative phase. The origin of the egocentric frame of reference is still unclear, although it is typically discussed in terms of neural crosstalk. Pickavance, Azmoodeh & Wilson (2018) proposed that the egocentric constraint is also perceptual, based on the haptic perception of relative phase. As an initial step in pursuing this hypothesis, this exploratory report examines data from two recent studies on the effects of ageing on performing and learning coordinated rhythmic movements. We show that participants in their 20s show a strong egocentric effect in their coordination production, that this effect disappears in participants in their 60s, and that participants in their 50s show an intermediate effect. We propose that a perceptual hypothesis best explains this age-related change, and we lay out how to pursue hypothesis-driven tests in the future.
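To make the central measure concrete: one common way to estimate relative phase between two rhythmic movements is to extract each signal's instantaneous phase via the Hilbert transform and take the difference. A minimal sketch using synthetic sine waves standing in for limb positions (the studies above may use a different estimator):

```python
# Estimate relative phase and its stability from two oscillatory signals.
# Signals are synthetic; real movement data would be filtered first.
import numpy as np
from scipy.signal import hilbert

fs = 100.0                                    # sampling rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)            # 10 s of data
limb_a = np.sin(2 * np.pi * 1.0 * t)          # 1 Hz oscillation
limb_b = np.sin(2 * np.pi * 1.0 * t + np.pi)  # anti-phase (180 deg) partner

# Instantaneous phase of each signal from its analytic (Hilbert) signal.
phase_a = np.angle(hilbert(limb_a))
phase_b = np.angle(hilbert(limb_b))

# Circular statistics of the relative phase: the mean resultant vector gives
# the mean relative phase (its angle) and a 0-1 stability index (its length);
# values near 1 indicate stable coordination.
rel = phase_a - phase_b
mean_vec = np.mean(np.exp(1j * rel))
print(f"mean relative phase: {abs(np.degrees(np.angle(mean_vec))):.0f} deg")
print(f"stability (mean vector length): {np.abs(mean_vec):.3f}")
```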



2018
Author(s): John Patrick Pickavance, Arianne Azmoodeh, Andrew D Wilson

The stability of coordinated rhythmic movement is primarily affected by the required mean relative phase. In general, symmetrical coordination is more stable than asymmetrical coordination; however, there are two ways to define relative phase and the associated symmetries. The first is in an egocentric frame of reference, with symmetry defined relative to the sagittal plane down the midline of the body. The second is in an allocentric frame of reference, with symmetry defined in terms of the relative direction of motion. Experiments designed to separate these constraints have shown that both egocentric and allocentric constraints contribute to overall coordination stability, with the former typically showing larger effects. However, separating these constraints has meant comparing movements made either in different planes of motion or by limbs in different postures. In addition, allocentric information about the coordination has been presented either as the actual limb motion or as a transformed Lissajous feedback display. These factors limit both the comparisons that can be made and the interpretation of those comparisons. The current study examined the effects of egocentric relative phase, allocentric relative phase, and allocentric feedback format on coordination stability within a single task. We found that while all three independently contributed to stability, the egocentric constraint dominated, supporting previous work. We examine the evidence underpinning theoretical explanations of the egocentric constraint and describe how it may reflect the haptic perception of relative phase.
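The two definitions can be illustrated with a toy calculation. Under the simplifying assumptions below (horizontal movement along a world x-axis, unit amplitude and frequency, and mirroring one hand's axis about the body midline to convert direction-based phase into muscle-based phase), the same bimanual pattern is 0° in one frame and 180° in the other; this is a hypothetical sketch, not the study's analysis pipeline:

```python
# Same movement, two relative-phase definitions. Both hands move in the same
# spatial direction (allocentric 0 deg), which for horizontal movement means
# non-homologous muscles (egocentric 180 deg). All values are illustrative.
import numpy as np

t = np.linspace(0.0, 4.0 * np.pi, 2000)
left = np.cos(t)   # left-hand position on the world x-axis
right = np.cos(t)  # right hand moving in the same spatial direction

def phase_deg(x: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Instantaneous phase (degrees) from position and velocity; exact here
    because amplitude and frequency are both 1."""
    v = np.gradient(x, t)
    return np.degrees(np.arctan2(-v, x))

allo = phase_deg(left, t) - phase_deg(right, t)   # same direction -> ~0 deg
ego = phase_deg(left, t) - phase_deg(-right, t)   # mirrored axis -> ~180 deg
print(f"allocentric: {np.mean(np.abs(allo)):.0f} deg; "
      f"egocentric: {np.mean(np.abs(ego)):.0f} deg")
```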





2003
Vol 14 (4), pp. 340-346
Author(s): Mark Wexler

Although visual input is egocentric, at least some visual perceptions and representations are allocentric, that is, independent of the observer's vantage point or motion. Three experiments investigated the visual perception of three-dimensional object motion during voluntary and involuntary self-motion in human subjects. The results show that the motor command contributes to the objective perception of space: observers are more likely to apply, consciously and unconsciously, spatial criteria relative to an allocentric frame of reference when they are executing voluntary head movements than when they are undergoing similar involuntary displacements (which lead to a more egocentric bias). Furthermore, the details of the motor command are crucial to spatial vision: the allocentric bias decreases or disappears when self-motion and motor command do not match.



2003
Vol 29 (3), pp. 319-333
Author(s): Martin Lemay, Luc Proteau



