Movement Induces the Use of External Spatial Coordinates for Tactile Localization in Congenitally Blind Humans

2015 ◽  
Vol 28 (1-2) ◽  
pp. 173-194 ◽  
Author(s):  
Tobias Heed ◽  
Johanna Möller ◽  
Brigitte Röder

To localize touch, the brain integrates spatial information coded in anatomically based and external spatial reference frames. Sighted humans, by default, use both reference frames in tactile localization. In contrast, congenitally blind individuals have been reported to rely exclusively on anatomical coordinates, suggesting a crucial role of the visual system for tactile spatial processing. We tested whether the use of external spatial information in touch can, alternatively, be induced by a movement context. Sighted and congenitally blind humans performed a tactile temporal order judgment task that indexes the use of external coordinates for tactile localization while they executed bimanual arm movements with uncrossed and crossed start and end postures. In the sighted, the start posture and the planned end posture of the arm movement modulated tactile localization for stimuli presented before and during the movement, indicating automatic external recoding of touch. Contrary to previous findings, tactile localization of congenitally blind participants, too, was affected by external coordinates, though only for stimuli presented before movement start. Furthermore, only the movement’s start posture, but not its planned end posture, affected blind individuals’ tactile performance. Thus, the integration of external coordinates in touch is established without vision, though more selectively than when vision has developed normally, and possibly restricted to movement contexts. The lack of modulation by the planned posture in congenitally blind participants suggests that external coordinates in this group are not mediated by motor efference copy. Instead, the frequent task-related posture changes, that is, movement consequences rather than movement planning, appear to have induced their use of external coordinates.
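The crossed-hands manipulation in such temporal order judgment (TOJ) tasks is commonly quantified by fitting a psychometric function to order judgments and comparing the just noticeable difference (JND) across postures. Below is a minimal Python sketch of that logic; the SOA levels, response proportions, and fitting choices are illustrative assumptions, not data or code from the study.

```python
# Minimal sketch: quantifying a crossed-hands deficit in a tactile TOJ task.
# All numbers below are fabricated placeholders for illustration.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(soa, pss, jnd):
    """Cumulative Gaussian psychometric function.
    soa: stimulus onset asynchrony (ms), positive = right hand first;
    pss: point of subjective simultaneity; jnd: just noticeable difference."""
    return norm.cdf(soa, loc=pss, scale=jnd)

soas = np.array([-200, -90, -55, -30, 30, 55, 90, 200])  # ms, hypothetical levels

def fit_jnd(p_right_first):
    (pss, jnd), _ = curve_fit(cum_gauss, soas, p_right_first, p0=[0.0, 50.0])
    return pss, jnd

# Hypothetical proportions of "right first" responses per posture
p_uncrossed = np.array([0.02, 0.10, 0.25, 0.40, 0.60, 0.75, 0.90, 0.98])
p_crossed   = np.array([0.10, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.90])

_, jnd_unc = fit_jnd(p_uncrossed)
_, jnd_cro = fit_jnd(p_crossed)
# A larger JND with crossed hands indexes the use of external coordinates:
print(f"crossed-hands deficit: {jnd_cro - jnd_unc:.1f} ms")
```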

2018 ◽  
Author(s):  
Jonathan T.W. Schubert ◽  
Verena N. Buchholz ◽  
Julia Föcker ◽  
Andreas K. Engel ◽  
Brigitte Röder ◽  
...  

Abstract
We investigated the function of oscillatory alpha-band activity in the neural coding of spatial information during tactile processing. Sighted humans concurrently encode tactile location in skin-based and, after integration with posture, external spatial reference frames, whereas congenitally blind humans preferentially use skin-based coding. Accordingly, lateralization of alpha-band activity in parietal regions during attentional orienting in expectation of tactile stimulation reflected external spatial coding in sighted humans but skin-based coding in blind humans. Here, we asked whether alpha-band activity plays a similar role in spatial coding for tactile processing, that is, after the stimulus has been received. Sighted and congenitally blind participants were cued to attend to one hand in order to detect rare tactile deviant stimuli at that hand while ignoring tactile deviants at the other hand and tactile standard stimuli at both hands. The reference frames encoded by oscillatory activity during tactile processing were probed by adopting either an uncrossed or a crossed hand posture. In sighted participants, attended relative to unattended standard stimuli suppressed alpha-band power over ipsilateral centro-parietal and occipital cortex. Hand crossing attenuated this attentional modulation predominantly over ipsilateral posterior-parietal cortex. In contrast, although contralateral alpha-band activity was enhanced for attended versus unattended stimuli in blind participants, no crossing effects were evident in the oscillatory activity of this group. These findings suggest that oscillatory alpha-band activity plays a pivotal role in the neural coding of external spatial information for touch.
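Attentional alpha-band effects of the kind described here are often expressed as a lateralization index over band-limited power. The following is a minimal sketch under assumed parameters (8-13 Hz band, Hilbert envelope, simulated one-second traces); it is not the authors' MEG pipeline.

```python
# Minimal sketch: alpha-band (8-13 Hz) power lateralization index.
# Signals are simulated noise; a real analysis would use MEG/EEG epochs.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000  # assumed sampling rate (Hz)

def alpha_power(signal):
    """Band-pass at 8-13 Hz and return the mean Hilbert envelope power."""
    b, a = butter(4, [8 / (fs / 2), 13 / (fs / 2)], btype="band")
    envelope = np.abs(hilbert(filtfilt(b, a, signal)))
    return np.mean(envelope ** 2)

# Hypothetical traces from sensors ipsi- and contralateral to the attended hand
rng = np.random.default_rng(0)
ipsi = rng.standard_normal(fs)
contra = rng.standard_normal(fs)

p_ipsi, p_contra = alpha_power(ipsi), alpha_power(contra)
# Positive values indicate relatively suppressed alpha contralateral to the
# attended side, i.e., attention-related lateralization:
ali = (p_ipsi - p_contra) / (p_ipsi + p_contra)
print(f"alpha lateralization index: {ali:.3f}")
```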


Author(s):  
Steven M. Weisberg ◽  
Anjan Chatterjee

Abstract
Background
Reference frames ground spatial communication by mapping ambiguous language (for example, in navigation: “to the left”) to properties of the speaker (using a Relative reference frame: “to my left”) or the world (an Absolute reference frame: “to the north”). People’s preferences for reference frames vary depending on factors such as their culture, the specific task in which they are engaged, and differences among individuals. Although most people are proficient with both reference frames, it is unknown whether the preference for a reference frame is stable within a person or varies with the spatial domain. These alternatives are difficult to adjudicate because navigation is one of the few spatial domains that can naturally be solved using multiple reference frames. That is, while navigation directions can be specified using Absolute or Relative reference frames (“go north” vs. “go left”), other spatial domains predominantly use Relative reference frames. Here, we used two domains to test the stability of reference frame preference: one based on navigating a four-way intersection, the other based on the sport of ultimate frisbee. We recruited 58 ultimate frisbee players to complete an online experiment. We measured reaction time and accuracy while participants solved spatial problems in each domain using verbal prompts containing either Relative or Absolute reference frames. Details of the tasks in the two domains were kept as similar as possible while remaining ecologically plausible, so that reference frame preference could emerge.
Results
We pre-registered the prediction that participants would be faster using their preferred reference frame type and that this advantage would correlate across domains; we did not find such a correlation. Instead, the data reveal that people use distinct reference frames in each domain.
Conclusion
This experiment shows that spatial reference frame preferences are not stable and may be differentially suited to specific domains. This finding has broad implications for communicating spatial information by offering an important consideration for how spatial reference frames are used in communication: task constraints may affect reference frame choice as much as individual factors or culture.
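The pre-registered logic, a per-domain preference score from reaction times that is then correlated across domains, can be sketched as follows. The participant count matches the abstract, but all RT values, condition labels, and names are fabricated placeholders.

```python
# Minimal sketch: per-domain reference frame preference and its
# cross-domain correlation. Fabricated data, not the study's results.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 58  # participants (ultimate frisbee players), as in the abstract

# Hypothetical mean RTs (ms) per participant, domain, and prompt type
rt = {
    ("navigation", "relative"): rng.normal(900, 120, n),
    ("navigation", "absolute"): rng.normal(950, 120, n),
    ("frisbee", "relative"):    rng.normal(980, 120, n),
    ("frisbee", "absolute"):    rng.normal(940, 120, n),
}

def preference(domain):
    """Positive values = faster with Relative prompts in this domain."""
    return rt[(domain, "absolute")] - rt[(domain, "relative")]

r, p = pearsonr(preference("navigation"), preference("frisbee"))
# A near-zero r is consistent with domain-specific reference frame use:
print(f"cross-domain correlation: r = {r:.2f}, p = {p:.3f}")
```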


2012 ◽  
Vol 25 (0) ◽  
pp. 190 ◽
Author(s):  
Pia Ley ◽  
Davide Bottari ◽  
Bhamy Hariprasad Shenoy ◽  
Ramesh Kekunnaya ◽  
Brigitte Roeder

People with surgically removed congenital dense bilateral cataracts offer a natural model of visual deprivation and reafferentation in humans for investigating sensitive periods of multisensory development, for example regarding the recruitment of external or anatomical frames of reference for spatial representation. Here we present a single case (HS; male; 33 years; right-handed), born with congenital dense bilateral cataracts. His lenses were removed at the age of two years, but he received optical aids only at age six. At the time of testing, his visual acuity was 30% in the better eye. We administered two tasks. The first was a tactile temporal order judgment (TOJ) task in which two tactile stimuli were presented in succession to the index fingers, one located in each hemifield, with the hands in a crossed or uncrossed posture; the participant judged as precisely as possible which side was stimulated first. The second was a crossmodal congruency task in which a tactile stimulus and an irrelevant visual distractor were presented simultaneously but independently at one of four positions; the participant judged the location (index finger or thumb) of the tactile stimulus with hands crossed or uncrossed, with speed emphasized. In contrast to sighted controls, HS did not show a decrement of TOJ performance with hands crossed. Moreover, while his congruency gain was equivalent to that of sighted controls with uncrossed hands, this effect was significantly reduced with hands crossed. Thus, an external remapping of tactile stimuli still develops after a long phase of visual deprivation. However, this remapping seems to be less efficient and to take place only in the context of visual stimuli.
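The congruency gain referred to here is conventionally computed as the reaction time cost of incongruent relative to congruent visual distractors, separately per posture. A minimal sketch with entirely hypothetical trial data, not HS's results:

```python
# Minimal sketch: crossmodal congruency effect (CCE) per hand posture.
# RTs below are fabricated placeholders for illustration.
import numpy as np

rt = {  # hypothetical per-trial reaction times (ms)
    ("uncrossed", "congruent"):   np.array([520, 540, 510, 530]),
    ("uncrossed", "incongruent"): np.array([600, 620, 590, 610]),
    ("crossed", "congruent"):     np.array([560, 580, 555, 570]),
    ("crossed", "incongruent"):   np.array([600, 615, 595, 605]),
}

def cce(posture):
    """Congruency gain: incongruent minus congruent mean RT."""
    return (rt[(posture, "incongruent")].mean()
            - rt[(posture, "congruent")].mean())

# A reduced CCE with crossed hands suggests weaker external remapping:
print(f"CCE uncrossed: {cce('uncrossed'):.0f} ms, crossed: {cce('crossed'):.0f} ms")
```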


2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Franziska Müller ◽  
Guiomar Niso ◽  
Soheila Samiee ◽  
Maurice Ptito ◽  
Sylvain Baillet ◽  
...  

Abstract
In congenitally blind individuals, the occipital cortex responds to various nonvisual inputs. Some animal studies raise the possibility that a subcortical pathway allows fast re-routing of tactile information to the occipital cortex, but this has not been shown in humans. Here we show using magnetoencephalography (MEG) that tactile stimulation produces occipital cortex activations starting as early as 35 ms in congenitally blind individuals, but not in blindfolded sighted controls. Given our measured thalamic response latencies of 20 ms and a mean estimated lateral geniculate nucleus to primary visual cortex transfer time of 15 ms, we argue that this early occipital response is mediated by a direct thalamo-cortical pathway. We also observed stronger directed connectivity in the alpha-band range from posterior thalamus to occipital cortex in congenitally blind participants. Our results strongly suggest the contribution of a fast thalamo-cortical pathway to the cross-modal activation of the occipital cortex in congenitally blind humans.
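An onset latency such as the 35 ms occipital response can be estimated from an evoked time course as the first post-stimulus sample exceeding a baseline-derived threshold. The sketch below illustrates that generic approach on simulated data; it is not the authors' source-level MEG analysis.

```python
# Minimal sketch: onset latency as the first post-stimulus threshold crossing.
# The evoked trace is simulated; a real analysis would use source-level MEG.
import numpy as np

fs = 1000  # assumed sampling rate (samples per second)
times = np.arange(-100, 300) / fs * 1000  # -100 to 300 ms around stimulus onset

rng = np.random.default_rng(2)
evoked = rng.normal(0, 1, times.size)
evoked[times >= 35] += 6  # simulated occipital response starting at 35 ms

def onset_latency(signal, times, threshold_sd=3):
    """First post-stimulus time point exceeding baseline mean + k * SD."""
    baseline = signal[times < 0]
    thresh = baseline.mean() + threshold_sd * baseline.std()
    above = np.flatnonzero((signal > thresh) & (times >= 0))
    return times[above[0]] if above.size else None

print(f"estimated onset: {onset_latency(evoked, times):.0f} ms")
```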


2016 ◽  
Author(s):  
Jonathan T.W. Schubert ◽  
Stephanie Badde ◽  
Brigitte Röder ◽  
Tobias Heed

Abstract
Task demands modulate tactile localization in sighted humans, presumably through weight adjustments in the spatial integration of anatomical, skin-based information and external, posture-based information. In contrast, previous studies have suggested that congenitally blind humans, by default, refrain from automatic spatial integration and localize touch using only skin-based information. Here, sighted and congenitally blind participants localized tactile targets on the palm or back of one hand while ignoring simultaneous tactile distractors at congruent or incongruent locations on the other hand. We probed the interplay of anatomical and external location codes for spatial congruency effects by varying hand posture: the palms either both faced down, or one faced down and one up. In the latter posture, externally congruent target and distractor locations were anatomically incongruent, and vice versa. Target locations had to be reported either anatomically (“palm” or “back” of the hand) or externally (“up” or “down” in space). Under anatomical instructions, performance was better for anatomically congruent than incongruent target-distractor pairs. In contrast, under external instructions, performance was better for externally congruent than incongruent pairs. These modulations were evident in both sighted and blind individuals. Notably, distractor effects were overall far smaller in blind than in sighted participants, despite comparable target-distractor identification performance. Thus, the absence of developmental vision seems to be associated with an increased ability to focus tactile attention on a non-spatially defined target. Nevertheless, the fact that blind individuals exhibited effects of hand posture and task instructions in their congruency effects suggests that, like the sighted, they automatically integrate anatomical and external information during tactile localization. Moreover, spatial integration in tactile processing is thus flexibly adapted by top-down information (here, task instruction) even in the absence of developmental vision.
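The design's key manipulation, turning one palm up to dissociate anatomical from external congruency, can be made explicit in a few lines. The function below is a hypothetical illustration of the stimulus-coding logic, not code from the study.

```python
# Minimal sketch: how one flipped hand dissociates anatomical and external
# congruency of target-distractor pairs. Labels are illustrative only.
def congruency(target_site, distractor_site, flipped_hand):
    """Sites are 'palm' or 'back'. flipped_hand=True means the distractor
    hand is turned palm-up while the target hand faces palm-down."""
    anatomical = "congruent" if target_site == distractor_site else "incongruent"
    # Turning one hand over swaps which anatomical site faces up in space,
    # so external congruency inverts relative to anatomical congruency.
    if flipped_hand:
        external = "incongruent" if anatomical == "congruent" else "congruent"
    else:
        external = anatomical
    return anatomical, external

# Both palms down: the two codes agree
print(congruency("palm", "palm", flipped_hand=False))  # ('congruent', 'congruent')
# One palm up: anatomically congruent pairs become externally incongruent
print(congruency("palm", "palm", flipped_hand=True))   # ('congruent', 'incongruent')
```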


2019 ◽  
Author(s):  
Steven Marc Weisberg ◽  
Anjan Chatterjee

Background: Reference frames ground spatial communication by mapping ambiguous language (for example, in navigation: “to the left”) to properties of the speaker (using a body-based reference frame: “to my left”) or the world (an environment-based reference frame: “to the north”). People’s preferences for reference frames vary depending on factors such as their culture, the specific task in which they are engaged, and differences among individuals. Although most people are proficient with both reference frames, it is unknown whether the preference for a reference frame is stable within a person or varies with the spatial domain. These alternatives are difficult to adjudicate because navigation is one of the few spatial domains that can naturally be solved using multiple reference frames. That is, while navigation directions can be specified using environment-based or body-based reference frames (“go north” vs. “go left”), other spatial domains predominantly use body-based reference frames. Here, we used two domains to test the stability of reference frame preference: one based on navigating a four-way intersection, the other based on the sport of ultimate frisbee. We recruited 58 ultimate frisbee players to complete an online experiment. We measured reaction time and accuracy while participants solved spatial problems in each domain using verbal prompts containing either body- or environment-based reference frames. Details of the tasks in the two domains were kept as similar as possible while remaining ecologically plausible, so that reference frame preference could emerge.
Results: We pre-registered the prediction that participants would be faster using their preferred reference frame type, and that this advantage would correlate across domains; we did not find such a correlation. Instead, the data reveal that people use distinct reference frames in each domain.
Conclusion: This experiment shows that spatial reference frame preferences are not stable and may be differentially suited to specific domains. This finding has broad implications for communicating spatial information by offering an important consideration for how spatial reference frames are used in communication: task constraints may affect reference frame choice as much as individual factors or culture.


NeuroImage ◽  
2015 ◽  
Vol 117 ◽  
pp. 417-428 ◽  
Author(s):  
Jonathan T.W. Schubert ◽  
Verena N. Buchholz ◽  
Julia Föcker ◽  
Andreas K. Engel ◽  
Brigitte Röder ◽  
...  

2019 ◽  
Author(s):  
Camille Vanderclausen ◽  
Louise Manfron ◽  
Anne De Volder ◽  
Valéry Legrain

Abstract
Localizing pain is an important process, as it allows detecting which part of the body is hurt and identifying which stimulus in the body's surroundings is producing the damage. Nociceptive inputs should therefore be mapped according to both somatotopic (“which limb is stimulated?”) and spatiotopic representations (“where is the stimulated limb?”). Since the limbs constantly move in space, the brain has to realign these different spatial representations, for instance when the hands are crossed and the left/right hand lies in the right/left part of space, in order to guide action adequately towards the threatening object. This ability is thought to depend on past sensory experience and contextual factors. We tested this by comparing the performance of early blind and normally sighted participants in nociceptive temporal order judgment tasks. The instructions prioritized either anatomy (left/right hands) or external space (left/right hemispaces). Compared to an uncrossed hand posture, sighted participants’ performance decreased when the hands were crossed, regardless of the instructions. Early blind participants’ performance was affected by crossing the hands only under spatial instructions, not under anatomical instructions. These results indicate that nociceptive stimuli are automatically coded according to both somatotopic and spatiotopic representations, but that the integration of the different spatial reference frames depends on early visual experience and ongoing cognitive goals, illustrating the plasticity and flexibility of the nociceptive system.
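The pattern described, a crossing cost that depends on instructions in the early blind but not in the sighted, could be quantified as sketched below; group size, accuracies, and the statistical test are illustrative assumptions, not the study's data or analysis.

```python
# Minimal sketch: crossing cost (uncrossed minus crossed accuracy) per
# instruction condition, compared within a group. Fabricated numbers.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
n = 20  # hypothetical participants in the early blind group

def crossing_cost(acc_uncrossed, acc_crossed):
    """Accuracy lost by crossing the hands; larger = stronger external coding."""
    return acc_uncrossed - acc_crossed

# Hypothetical per-participant accuracies under each instruction
cost_anatomical = crossing_cost(rng.normal(0.92, 0.04, n), rng.normal(0.91, 0.04, n))
cost_spatial    = crossing_cost(rng.normal(0.92, 0.04, n), rng.normal(0.80, 0.06, n))

# A cost under spatial but not anatomical instructions would indicate
# instruction-dependent use of external coordinates:
t, p = ttest_rel(cost_spatial, cost_anatomical)
print(f"instruction effect on crossing cost: t = {t:.2f}, p = {p:.4f}")
```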

