The influence of visual experience and cognitive goals on spatial representations of nociceptive stimuli

2019
Author(s): Camille Vanderclausen, Louise Manfron, Anne De Volder, Valéry Legrain

Abstract
Localizing pain is an important process, as it allows us to detect which part of the body is hurt and to identify, in its surroundings, the stimulus producing the damage. Nociceptive inputs should therefore be mapped according to both somatotopic ("which limb is stimulated?") and spatiotopic representations ("where is the stimulated limb?"). Since the limbs constantly move in space, the brain has to realign these spatial representations, for instance when the hands are crossed and the left/right hand lies in the right/left part of space, in order to guide actions adequately toward the threatening object. This ability is thought to depend on past sensory experience and contextual factors. We tested this by comparing the performance of early blind and normally sighted participants during nociceptive temporal order judgment tasks, with instructions that prioritized either anatomy (left/right hands) or external space (left/right hemispaces). Compared with an uncrossed-hands posture, sighted participants' performance decreased when the hands were crossed, whatever the instructions. Early blind participants' performance was affected by crossing the hands only under the spatial instruction, not under the anatomical instruction. These results indicate that nociceptive stimuli are automatically coded according to both somatotopic and spatiotopic representations, but that the integration of the different spatial reference frames depends on early visual experience and ongoing cognitive goals, illustrating the plasticity and flexibility of the nociceptive system.
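
The logic behind this crossed-hands deficit can be made concrete with a toy simulation. In the sketch below (mine, not the authors'; the weights, noise level, and decision rule are all hypothetical), each stimulus carries a somatotopic and a spatiotopic left/right code whose weighted sum drives the judgment. Crossing the hands puts the two codes in conflict, and removing the external weight, roughly what the anatomical instruction appears to allow early blind participants to do, abolishes the deficit.

```python
import random

def judge_first(first_hand, crossed, w_anat=0.5, w_ext=0.5, noise=0.3):
    """Toy temporal order judgment: which hand was stimulated first?

    Each stimulus is coded somatotopically (left/right hand) and
    spatiotopically (left/right side of external space). With crossed
    hands the two codes disagree, so their weighted sum shrinks and is
    more easily swamped by noise.
    """
    anat = 1 if first_hand == "right" else -1   # somatotopic code
    ext = -anat if crossed else anat            # spatiotopic code
    evidence = w_anat * anat + w_ext * ext + random.gauss(0, noise)
    return "right" if evidence > 0 else "left"

def accuracy(crossed, n=10_000, **weights):
    hits = sum(judge_first(h, crossed, **weights) == h
               for _ in range(n) for h in ("left", "right"))
    return hits / (2 * n)

print(f"uncrossed:             {accuracy(False):.2f}")  # near ceiling
print(f"crossed, equal codes:  {accuracy(True):.2f}")   # near chance
print(f"crossed, anatomy only:"
      f" {accuracy(True, w_anat=1.0, w_ext=0.0):.2f}")  # deficit gone
```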


2017
Author(s): Miriam L. R. Meister, Elizabeth A. Buffalo

Abstract
Primates predominantly rely on vision to gather information from the environment, and neurons representing visual space and gaze position are found in many brain areas. Within the medial temporal lobe, a brain region critical for memory, neurons in the entorhinal cortex of macaque monkeys exhibit spatial selectivity for gaze position: the firing rate of single neurons reflects fixation location within a visual image (Killian et al., 2012). In rodents, entorhinal cells such as grid cells, border cells, and head direction cells show spatial representations aligned to visual environmental features rather than to the body (Hafting et al., 2005; Sargolini et al., 2006; Solstad et al., 2008; Diehl et al., 2017). However, it is not known whether similar allocentric representations exist in primate entorhinal cortex. Here, we recorded neural activity in the entorhinal cortex of two male rhesus monkeys during a naturalistic, free-viewing task. Our data reveal that a majority of entorhinal neurons represent gaze position and that simultaneously recorded neurons exhibit distinct spatial reference frames, with some neurons aligning to the visual image and others to the monkey's head position. Our results also show that entorhinal neural activity can be used to predict gaze position with a high degree of accuracy. These findings demonstrate that visuospatial representation is a fundamental property of entorhinal neurons in primates and suggest that entorhinal cortex may support relational memory and motor planning by coding attentional locus in distinct, behaviorally relevant frames of reference.
Significance Statement
The entorhinal cortex, a brain area important for memory, shows striking spatial activity in rodents through grid cells, border cells, head direction cells, and nongrid spatial cells. The majority of entorhinal neurons signal the location of a rodent relative to visual environmental cues, representing the animal's location relative to the world rather than to the body. Recently, our laboratory found that entorhinal neurons can signal the location of gaze while a monkey visually explores images. Here, we report that spatial entorhinal neurons are widespread in the monkey, and that these neurons can exhibit a world-based spatial reference frame locked to the bounds of explored images. These results help connect the extensive findings in rodents to the primate.
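
As a hedged illustration of the decoding claim ("entorhinal neural activity can be used to predict gaze position"), the sketch below trains a cross-validated linear readout on synthetic, spatially tuned firing rates. The tuning model and ridge decoder are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Synthetic session: 2000 fixations, 40 simultaneously recorded neurons.
n_fix, n_neurons = 2000, 40
gaze = rng.uniform(-1, 1, size=(n_fix, 2))   # (x, y) in image coordinates

# Each neuron fires most near a preferred gaze location (spatial tuning).
centers = rng.uniform(-1, 1, size=(n_neurons, 2))
dist_sq = ((gaze[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
rates = np.exp(-dist_sq / 0.2) + rng.normal(0, 0.1, size=(n_fix, n_neurons))

# Cross-validated linear readout of gaze position from population activity.
pred = cross_val_predict(Ridge(alpha=1.0), rates, gaze, cv=5)
mean_err = np.linalg.norm(pred - gaze, axis=1).mean()
print(f"mean decoding error: {mean_err:.3f} (image half-width = 1.0)")
```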



2021, pp. 1-32
Author(s): Kaian Unwalla, Daniel Goldreich, David I. Shore

Abstract Exploring the world through touch requires the integration of internal (e.g., anatomical) and external (e.g., spatial) reference frames — you only know what you touch when you know where your hands are in space. The deficit observed in tactile temporal-order judgements when the hands are crossed over the midline provides one tool to explore this integration. We used foot pedals and required participants to focus on either the hand that was stimulated first (an anatomical bias condition) or the location of the hand that was stimulated first (a spatiotopic bias condition). Spatiotopic-based responses produce a larger crossed-hands deficit, presumably by focusing observers on the external reference frame. In contrast, anatomical-based responses focus the observer on the internal reference frame and produce a smaller deficit. This manipulation thus provides evidence that observers can change the relative weight given to each reference frame. We quantify this effect using a probabilistic model that produces a population estimate of the relative weight given to each reference frame. We show that a spatiotopic bias can result in either a larger external weight (Experiment 1) or a smaller internal weight (Experiment 2) and provide an explanation of when each one would occur.
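
To make the weighting idea concrete, here is a toy version of such a model (not the authors'; the Gaussian decision rule and the fixed noise level are assumptions). Given a group's crossed-hands accuracy, it inverts a two-code observer to recover the internal (anatomical) weight; a large deficit implies a small internal weight, i.e., a strong external one.

```python
import numpy as np
from scipy.stats import norm

def predicted_accuracy(w_int, crossed, sigma=0.5):
    """Toy observer: anatomical and external codes get weights w_int
    and 1 - w_int. Uncrossed, the codes agree (signal = 1); crossed,
    they conflict (signal = w_int - (1 - w_int) = 2 * w_int - 1)."""
    signal = (2 * w_int - 1) if crossed else 1.0
    return norm.cdf(signal / sigma)

def fit_internal_weight(acc_crossed, sigma=0.5):
    """Grid-search the internal weight that reproduces the observed
    crossed-hands accuracy (uncrossed accuracy would constrain sigma
    in a fuller fit; here sigma is simply assumed)."""
    grid = np.linspace(0.0, 1.0, 1001)
    loss = (predicted_accuracy(grid, crossed=True, sigma=sigma)
            - acc_crossed) ** 2
    return grid[int(np.argmin(loss))]

# Hypothetical group mean: a sizable crossed-hands deficit.
print(f"internal weight ~ {fit_internal_weight(acc_crossed=0.62):.2f}")
```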



2017
Author(s): Virginie Crollen, Latifa Lazzouni, Mohamed Rezk, Antoine Bellemare, Franco Lepore, ...

Abstract
Localizing touch relies on the activation of both skin-based and externally defined spatial frames of reference. Psychophysical studies have demonstrated that early visual deprivation prevents the automatic remapping of touch into external space. We used fMRI to characterize how visual experience impacts the brain circuits dedicated to the spatial processing of touch. Sighted and congenitally blind humans (male and female) performed a tactile temporal order judgment (TOJ) task, either with the hands uncrossed or crossed over the body midline. Behavioral data confirmed that crossing the hands has a detrimental effect on TOJ in sighted but not in blind participants. Crucially, the crossed-hands posture elicited more activity in a fronto-parietal network in the sighted group only. Psychophysiological interaction analysis revealed that the congenitally blind showed enhanced functional connectivity between parietal and frontal regions in the crossed versus uncrossed hand posture. Our results demonstrate that visual experience scaffolds the neural implementation of touch perception.
Significance Statement
Although we seamlessly localize tactile events in daily life, this is not a trivial operation, because the hands move constantly within peripersonal space. To process touch correctly, the brain therefore has to take the current position of the limbs into account and remap touch to its location in the external world. In sighted individuals, parietal and premotor areas support this process. However, while visual experience has been suggested to support the implementation of this automatic external remapping of touch, no study so far has investigated how early visual deprivation alters the brain network supporting touch localization. Examining this question is therefore crucial to determine conclusively the intrinsic role vision plays in scaffolding the neural implementation of touch perception.
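
For readers unfamiliar with psychophysiological interaction (PPI) analysis, the sketch below shows its core regressor construction on synthetic data: the interaction term is the elementwise product of the task (psychological) regressor and the seed timecourse, entered into a GLM alongside both main effects. Real fMRI pipelines also deconvolve the hemodynamic response before forming the product, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vols = 200

# Psychological regressor: +1 for crossed-hands blocks, -1 for uncrossed.
psych = np.where((np.arange(n_vols) // 20) % 2 == 0, 1.0, -1.0)

# Seed timecourse (e.g., a parietal region) and a target region whose
# coupling with the seed is stronger during crossed-hands blocks.
seed = rng.normal(size=n_vols)
target = 0.2 * seed + 0.5 * (psych > 0) * seed + rng.normal(0, 1.0, n_vols)

# PPI regressor: product of the psychological and seed signals.
ppi = psych * seed

# GLM with intercept, seed, psychological, and interaction regressors.
X = np.column_stack([np.ones(n_vols), seed, psych, ppi])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
print(f"PPI beta (context-dependent coupling): {beta[3]:.2f}")
```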



Perception, 2021, Vol 50 (4), pp. 294-307
Author(s): Louise Manfron, Camille Vanderclausen, Valéry Legrain

Localizing somatosensory stimuli is an important process, as it allows us to spatially guide our actions toward the object coming into contact with the body. Accordingly, the positions of tactile inputs are coded according to both somatotopic and spatiotopic representations, the latter taking into account the position of the stimulated limbs in external space. The spatiotopic representation has often been evidenced by means of temporal order judgment (TOJ) tasks: participants' judgments about the order of appearance of two successive somatosensory stimuli are less accurate when the hands are crossed over the body midline than when uncrossed, but also when the hands are placed close together as compared with farther apart. Moreover, these postural effects might depend on vision of the stimulated limbs. The aim of this study was to test the influence of seeing the hands on the modulation of tactile TOJ by the spatial distance between the stimulated limbs. The results showed no influence of the distance between the stimulated hands on TOJ performance, preventing us from concluding whether vision of the hands affects TOJ performance or whether these variables interact. The reliability of this distance effect as a means of investigating the spatial representations of tactile inputs is therefore questioned.



2008, Vol 70 (6), pp. 1068-1080
Author(s): S. E. Avons, K. Oswald


2008, Vol 20 (1), pp. 1-19
Author(s): Jeffrey M. Zacks

Mental rotation is a hypothesized imagery process that has inspired controversy regarding the substrate of human spatial reasoning. Two central questions about mental rotation remain: does mental rotation depend on analog spatial representations, and does it depend on motor simulation? A review and meta-analysis of neuroimaging studies helps answer these questions. Mental rotation is accompanied by increased activity in the intraparietal sulcus and adjacent regions. These areas contain spatially mapped representations, and activity in them is modulated by parametric manipulations of mental rotation tasks, supporting the view that mental rotation depends on analog representations. Mental rotation is also accompanied by activity in the medial superior precentral cortex, particularly under conditions that favor motor simulation, supporting the view that mental rotation depends on motor simulation in some situations. The relationship between mental rotation and motor simulation can be understood in terms of how these two processes update spatial reference frames.
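
The "parametric manipulations" at issue typically involve angular disparity: since Shepard and Metzler's classic chronometric experiments (not cited in this abstract), response time has been found to grow roughly linearly with rotation angle, as expected of an analog process. A quick synthetic illustration of estimating that rotation rate:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic mental-rotation data: RT rises linearly with angle.
angles = np.repeat(np.arange(0, 181, 30), 20)               # degrees
rt = 500 + 20.0 * angles + rng.normal(0, 150, angles.size)  # milliseconds

# Least-squares fit: the slope (ms/deg) is the inverse rotation rate.
slope, intercept = np.polyfit(angles, rt, 1)
print(f"rotation cost: {slope:.1f} ms/deg "
      f"(~{1000 / slope:.0f} deg/s); base RT: {intercept:.0f} ms")
```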





Author(s): Steven M. Weisberg, Anjan Chatterjee

Abstract
Background: Reference frames ground spatial communication by mapping ambiguous language (for example, in navigation: "to the left") to properties of the speaker (a Relative reference frame: "to my left") or of the world (an Absolute reference frame: "to the north"). People's preferences for reference frames vary with factors like their culture, the specific task in which they are engaged, and differences among individuals. Although most people are proficient with both reference frames, it is unknown whether preference for a reference frame is stable within a person or varies with the spatial domain. These alternatives are difficult to adjudicate because navigation is one of the few spatial domains that can be naturally solved using multiple reference frames: while navigation directions can be specified using Absolute or Relative reference frames ("go north" vs. "go left"), other spatial domains predominantly use Relative reference frames. Here, we used two domains to test the stability of reference frame preference: one based on navigating a four-way intersection, and the other based on the sport of ultimate frisbee. We recruited 58 ultimate frisbee players to complete an online experiment, measuring reaction time and accuracy while participants solved spatial problems in each domain from verbal prompts containing either Relative or Absolute reference frames. Details of the task in the two domains were kept as similar as possible while remaining ecologically plausible, so that reference frame preference could emerge.
Results: We pre-registered a prediction that participants would be faster using their preferred reference frame type and that this advantage would correlate across domains; we did not find such a correlation. Instead, the data reveal that people use distinct reference frames in each domain.
Conclusion: This experiment reveals that spatial reference frame preferences are not stable and may be differentially suited to specific domains. This finding has broad implications for communicating spatial information, offering an important consideration for how spatial reference frames are used in communication: task constraints may affect reference frame choice as much as individual factors or culture.
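
The Absolute/Relative mapping the authors describe is mechanical once the speaker's heading is known. A small sketch of that conversion (the direction vocabularies and encoding are mine, not the study's):

```python
# Compass and egocentric direction names, both in clockwise order.
COMPASS = ["north", "east", "south", "west"]
RELATIVE = ["straight", "right", "back", "left"]

def absolute_to_relative(direction: str, heading: str) -> str:
    """Translate an Absolute instruction ("go north") into a Relative
    one ("go left"), given the compass direction the agent faces."""
    offset = (COMPASS.index(direction) - COMPASS.index(heading)) % 4
    return RELATIVE[offset]

def relative_to_absolute(direction: str, heading: str) -> str:
    """Inverse mapping: going left while facing south means going east."""
    offset = (RELATIVE.index(direction) + COMPASS.index(heading)) % 4
    return COMPASS[offset]

assert absolute_to_relative("north", "east") == "left"
assert relative_to_absolute("left", "south") == "east"
print(absolute_to_relative("west", "north"))  # -> "left"
```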


