spatial reference frames
Recently Published Documents

TOTAL DOCUMENTS: 104 (five years: 17)
H-INDEX: 21 (five years: 1)
2021
Author(s): Jie Huang, Xiaoyu Tang, Aijun Wang, Ming Zhang

Abstract: Neuropsychological studies have demonstrated that the preferential processing of near-space and egocentric representations is associated with the self-prioritization effect (SPE). However, relatively little is known about whether the SPE favors egocentric representation or near-space processing in the interaction between spatial reference frames and spatial domains. The present study adopted a variant of the shape-label matching task (i.e., color-label) to establish an SPE, combined with a spatial reference frame judgment task, to examine how the SPE leads to preferential processing of near-space or egocentric representations. Surface-based morphometry analysis was also adopted to extract the cortical thickness of the ventral medial prefrontal cortex (vmPFC) to examine whether it could predict differences in the SPE at the behavioral level. The results showed a significant SPE: responses to the self-associated color were faster than responses to the stranger-associated color. Additionally, the SPE showed a preference for near-space processing, followed by egocentric representation. More importantly, the thickness of the vmPFC predicted the difference in the SPE across reference frames, particularly in the left frontal pole cortex and bilateral rostral anterior cingulate cortex. These findings indicate that the SPE shows a prior-entry effect for information at the spatial level relative to the reference frame level, providing evidence for the structural significance of the self-processing region. The present study further clarifies the processing priority of the SPE and its role within the real spatial domain.


2021
Author(s): Benjamin Pitt, Alexandra Carstensen, Isabelle Boni, Steven T. Piantadosi, Edward Gibson

The physical properties of space may be universal, but the way people conceptualize space is variable. In some groups, people tend to use egocentric space (e.g., left, right) to encode the locations of objects, while in other groups, people encode the same spatial scene using allocentric space (e.g., upriver, downriver). These different spatial frames of reference (FoRs) characterize the way people talk about spatial relations and the way they think about them, even when they are not using language. Although spatial language and spatial reasoning tend to covary, the root causes of this variation are unclear. Here we propose that variation in FoR use partly reflects the discriminability of the relevant spatial continua. In an initial test of this proposal in a group of indigenous Bolivians, we compared FoR use across spatial axes that are known to differ in discriminability. In both verbal and nonverbal tests, participants spontaneously used different FoRs on different spatial axes: on the lateral axis, where egocentric (left-right) discrimination is difficult, their spatial behavior and language were predominantly allocentric; on the sagittal axis, where egocentric (front-back) discrimination is relatively easy, they were predominantly egocentric. These findings challenge the claim that each language group can be characterized by a predominant spatial frame of reference. Rather, both spatial memory and language can differ categorically across axes, even within the same individuals. We suggest that differences in spatial discrimination can explain differences in both spatial memory and language within and across human groups.


2021, Vol 11 (1)
Author(s): Klaus Gramann, Friederike U. Hohlefeld, Lukas Gehrke, Marius Klug

Abstract: The retrosplenial complex (RSC) plays a crucial role in spatial orientation by computing heading direction and translating between distinct spatial reference frames based on multi-sensory information. While invasive studies allow investigating heading computation in moving animals, established non-invasive analyses of human brain dynamics are restricted to stationary setups. To investigate the role of the RSC in heading computation of actively moving humans, we used a Mobile Brain/Body Imaging approach synchronizing electroencephalography with motion capture and virtual reality. Data from physically rotating participants were contrasted with rotations based only on visual flow. During physical rotation, varying rotation velocities were accompanied by pronounced wide-frequency-band synchronization in the RSC and the parietal and occipital cortices. In contrast, the visual-flow rotation condition was associated with pronounced alpha-band desynchronization, replicating previous findings from desktop navigation studies; this desynchronization was notably absent during physical rotation. These results suggest an involvement of the human RSC in heading computation based on visual, vestibular, and proprioceptive input, and suggest revisiting traditional findings of alpha desynchronization in areas of the navigation network during spatial orientation in movement-restricted participants.


2021, pp. 1-32
Author(s): Kaian Unwalla, Daniel Goldreich, David I. Shore

Abstract: Exploring the world through touch requires the integration of internal (e.g., anatomical) and external (e.g., spatial) reference frames: you only know what you touch when you know where your hands are in space. The deficit observed in tactile temporal-order judgements when the hands are crossed over the midline provides one tool to explore this integration. We used foot pedals and required participants to focus on either the hand that was stimulated first (an anatomical bias condition) or the location of the hand that was stimulated first (a spatiotopic bias condition). Spatiotopic-based responses produce a larger crossed-hands deficit, presumably by focusing observers on the external reference frame. In contrast, anatomical-based responses focus the observer on the internal reference frame and produce a smaller deficit. This manipulation thus provides evidence that observers can change the relative weight given to each reference frame. We quantify this effect using a probabilistic model that produces a population estimate of the relative weight given to each reference frame. We show that a spatiotopic bias can result in either a larger external weight (Experiment 1) or a smaller internal weight (Experiment 2) and provide an explanation of when each one would occur.
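The probabilistic model itself is not reproduced in this listing. As a purely illustrative sketch of the core idea of weighting internal and external reference frames (the function name, weights, and evidence values below are hypothetical, not the authors' model), a crossed-hands conflict can be expressed as a weighted combination of two evidence sources:

```python
# Illustrative sketch only: internal (anatomical) and external (spatiotopic)
# evidence about which hand was stimulated first, combined in proportion to
# hypothetical weights. Positive evidence favours "left hand first".

def combine_frames(internal_evidence, external_evidence, w_internal, w_external):
    """Return combined evidence in [-1, 1] as a weighted average of frames."""
    total = w_internal + w_external
    return (w_internal * internal_evidence + w_external * external_evidence) / total

# Uncrossed hands: the two frames agree, so the evidence stays strong.
uncrossed = combine_frames(1.0, 1.0, w_internal=0.6, w_external=0.4)   # 1.0

# Crossed hands: the frames conflict, weakening the evidence; this mimics
# the crossed-hands deficit, and a larger external weight worsens it.
crossed = combine_frames(1.0, -1.0, w_internal=0.6, w_external=0.4)    # approx. 0.2
```

In this toy formulation, shifting weight toward the external frame (as a spatiotopic bias would) shrinks the combined evidence under conflict, which is the qualitative pattern the abstract describes.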


PLoS ONE, 2021, Vol 16 (5), pp. e0251827
Author(s): David Mark Watson, Michael A. Akeroyd, Neil W. Roach, Ben S. Webb

In dynamic multisensory environments, the perceptual system corrects for discrepancies arising between modalities. For instance, in the ventriloquism aftereffect (VAE), spatial disparities introduced between visual and auditory stimuli lead to a perceptual recalibration of auditory space. Previous research has shown that the VAE is underpinned by multiple recalibration mechanisms tuned to different timescales; however, it remains unclear whether these mechanisms use common or distinct spatial reference frames. Here we asked whether the VAE operates in eye- or head-centred reference frames across a range of adaptation timescales, from a few seconds to a few minutes. We developed a novel paradigm for selectively manipulating the contribution of eye- versus head-centred visual signals to the VAE by manipulating auditory locations relative to either the head orientation or the point of fixation. Consistent with previous research, we found that both eye- and head-centred frames contributed to the VAE across all timescales. However, we found no evidence for an interaction between spatial reference frames and adaptation duration. Our results indicate that the VAE is underpinned by multiple spatial reference frames that are similarly leveraged by the underlying time-sensitive mechanisms.


2021, pp. 1-21
Author(s): Tsukasa Kimura

Abstract: Interaction with other sensory information is important for the prediction of tactile events. Recent studies have reported that the approach of visual information toward the body facilitates prediction of subsequent tactile events. However, the processing of tactile events is influenced by multiple spatial coordinates, and it remains unclear how this approach effect influences tactile events in different spatial coordinates, i.e., spatial reference frames. We investigated the relationship between the prediction of a tactile stimulus via this approach effect and spatial coordinates by comparing ERPs. Participants placed their arms on a desk and were required to respond to tactile stimuli presented to the left (or right) index finger with a high probability (80%) or to the opposite index finger with a low probability (20%). Before the presentation of each tactile stimulus, visual stimuli approached sequentially toward the hand to which the high-probability tactile stimulus was presented. In the uncrossed condition, each hand was placed on the corresponding side. In the crossed condition, the hands were crossed and placed on the opposite sides, i.e., the left (right) hand was placed on the right (left) side. Thus, the spatial locations of the tactile stimulus and the hand were consistent in the uncrossed condition and inconsistent in the crossed condition. The results showed that N1 amplitudes elicited by high-probability tactile stimuli decreased only in the uncrossed condition. These results suggest that the prediction of a tactile stimulus facilitated by approaching visual information is influenced by multiple spatial coordinates.


2021
Author(s): Che-Sheng Yang, Jia Liu, Avinash Singh, Kuan-Chih Huang, Chin-Teng Lin

Recent research into the navigation strategies of people with different spatial reference frame proclivities (RFPs) has revealed that the parietal cortex plays an important role in processing allocentric information to provide a translation function between egocentric and allocentric spatial reference frames. However, most studies have focused on passive experimental environments, which are not truly representative of our daily spatial learning and navigation tasks. To bridge this gap, this study investigated the brain dynamics associated with people switching their preferred spatial strategy across environments in an active, virtual reality (VR)-based navigation task. High-resolution electroencephalography (EEG) signals were recorded to monitor spectral perturbations on transitions between egocentric and allocentric frames during a path integration task. Our brain dynamics results showed that navigation involved areas including the parietal cortex, with modulation in the alpha band; the occipital cortex, with beta- and low-gamma-band perturbations; and the frontal cortex, with theta perturbation. Differences were found between the two turning-angle paths in the alpha band in parietal-cluster event-related spectral perturbations (ERSPs). On small turning-angle paths, allocentric participants showed stronger alpha desynchronization than egocentric participants; on large turning-angle paths, the difference in the alpha band between the two reference frame groups was smaller. Behavioral results on homing errors corresponded to the brain dynamics, indicating that larger turning angles made allocentric participants more likely to become egocentric navigators in the active navigation environment.


Author(s): Steven M. Weisberg, Anjan Chatterjee

Abstract: Background: Reference frames ground spatial communication by mapping ambiguous language (for example, navigation: "to the left") to properties of the speaker (using a Relative reference frame: "to my left") or of the world (using an Absolute reference frame: "to the north"). People's preferences for reference frames vary depending on factors like their culture, the specific task in which they are engaged, and differences among individuals. Although most people are proficient with both reference frames, it is unknown whether preference for reference frames is stable within people or varies based on the specific spatial domain. These alternatives are difficult to adjudicate because navigation is one of the few spatial domains that can be naturally solved using multiple reference frames. That is, while spatial navigation directions can be specified using Absolute or Relative reference frames ("go north" vs. "go left"), other spatial domains predominantly use Relative reference frames. Here, we used two domains to test the stability of reference frame preference: one based on navigating a four-way intersection and the other based on the sport of ultimate frisbee. We recruited 58 ultimate frisbee players to complete an online experiment. We measured reaction time and accuracy while participants solved spatial problems in each domain using verbal prompts containing either Relative or Absolute reference frames. Details of the task in both domains were kept as similar as possible while remaining ecologically plausible so that reference frame preference could emerge.
Results: We pre-registered a prediction that participants would be faster using their preferred reference frame type and that this advantage would correlate across domains; we did not find such a correlation. Instead, the data reveal that people use distinct reference frames in each domain.
Conclusion: This experiment reveals that spatial reference frame preferences are not stable and may be differentially suited to specific domains. This finding has broad implications for communicating spatial information: task constraints may affect reference frame choice as much as individual factors or culture do.


Cognition, 2020, Vol 204, pp. 104349
Author(s): Matthew R. Longo, Sampath S. Rajapakse, Adrian J.T. Alsmith, Elisa R. Ferrè

2020, pp. 787-801
Author(s): S. Moraresku, K. Vlcek

The dissociation between egocentric and allocentric reference frames is well established. Spatial coding relative to oneself has been associated with a brain network distinct from that for spatial coding using a cognitive map independent of one's actual position. These differences, however, were revealed by a variety of tasks, both in static conditions using series of images and in dynamic conditions using movement through space. We aimed to clarify how these paradigms correspond to each other with respect to the neural correlates of the use of egocentric and allocentric reference frames. We review here studies of allocentric and egocentric judgments used in static two- and three-dimensional tasks and compare their results with the findings from spatial navigation studies. We argue that the neural correlates of allocentric coding in static conditions that use complex three-dimensional scenes and involve participants' spatial memory resemble those in spatial navigation studies, while allocentric representations in two-dimensional tasks are connected with other perceptual and attentional processes. In contrast, the brain networks associated with the egocentric reference frame in static two-dimensional and three-dimensional tasks and in spatial navigation tasks are, with some limitations, more similar. Our review demonstrates the heterogeneity of experimental designs focused on spatial reference frames. At the same time, it indicates similarities in brain activation during reference frame use despite this heterogeneity.

