Alpha-band oscillations reflect external spatial coding for tactile stimuli in sighted, but not in congenitally blind humans

2018
Author(s): Jonathan T.W. Schubert, Verena N. Buchholz, Julia Föcker, Andreas K. Engel, Brigitte Röder, ...

Abstract: We investigated the function of oscillatory alpha-band activity in the neural coding of spatial information during tactile processing. Sighted humans concurrently encode tactile location in skin-based and, after integration with posture, external spatial reference frames, whereas congenitally blind humans preferably use skin-based coding. Accordingly, lateralization of alpha-band activity in parietal regions during attentional orienting in expectation of tactile stimulation reflected external spatial coding in sighted, but skin-based coding in blind humans. Here, we asked whether alpha-band activity plays a similar role in spatial coding for tactile processing, that is, after the stimulus has been received. Sighted and congenitally blind participants were cued to attend to one hand in order to detect rare tactile deviant stimuli at this hand while ignoring tactile deviants at the other hand and tactile standard stimuli at both hands. The reference frames encoded by oscillatory activity during tactile processing were probed by adopting either an uncrossed or crossed hand posture. In sighted participants, attended relative to unattended standard stimuli suppressed alpha-band power over ipsilateral centro-parietal and occipital cortex. Hand crossing attenuated this attentional modulation predominantly over ipsilateral posterior-parietal cortex. In contrast, although contralateral alpha activity was enhanced for attended versus unattended stimuli in blind participants, no crossing effects were evident in the oscillatory activity of this group. These findings suggest that oscillatory alpha-band activity plays a pivotal role in the neural coding of external spatial information for touch.
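As a minimal sketch of the kind of contrast described above (alpha suppression for attended versus unattended stimuli), the following computes alpha-band power per trial via a Hilbert envelope and an attentional modulation index per channel. The array shapes, filter settings, and simulated data are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch: quantify attentional modulation of alpha-band (8-13 Hz) power
# from epoched EEG. Shapes, filter order, and data are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_power(epochs, sfreq, band=(8.0, 13.0)):
    """epochs: (n_trials, n_channels, n_samples) -> mean alpha power per trial/channel."""
    b, a = butter(4, [band[0] / (sfreq / 2), band[1] / (sfreq / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)          # band-pass in the alpha range
    envelope = np.abs(hilbert(filtered, axis=-1))       # instantaneous amplitude
    return (envelope ** 2).mean(axis=-1)                # mean power over time

# Simulated stand-in data: 100 trials, 32 channels, 1 s at 500 Hz
rng = np.random.default_rng(0)
sfreq = 500.0
attended = rng.standard_normal((100, 32, 500))
unattended = rng.standard_normal((100, 32, 500))

p_att = alpha_power(attended, sfreq).mean(axis=0)       # per-channel average, attended
p_unatt = alpha_power(unattended, sfreq).mean(axis=0)   # per-channel average, unattended

# Attentional modulation index per channel: negative values indicate
# alpha suppression for attended relative to unattended stimuli.
ami = (p_att - p_unatt) / (p_att + p_unatt)
print(ami.round(3))
```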

2015
Vol. 28 (1-2), pp. 173-194
Author(s): Tobias Heed, Johanna Möller, Brigitte Röder

To localize touch, the brain integrates spatial information coded in anatomically based and external spatial reference frames. Sighted humans, by default, use both reference frames in tactile localization. In contrast, congenitally blind individuals have been reported to rely exclusively on anatomical coordinates, suggesting a crucial role of the visual system for tactile spatial processing. We tested whether the use of external spatial information in touch can, alternatively, be induced by a movement context. Sighted and congenitally blind humans performed a tactile temporal order judgment task that indexes the use of external coordinates for tactile localization, while they executed bimanual arm movements with uncrossed and crossed start and end postures. In the sighted, start posture and planned end posture of the arm movement modulated tactile localization for stimuli presented before and during movement, indicating automatic, external recoding of touch. Contrary to previous findings, tactile localization of congenitally blind participants, too, was affected by external coordinates, though only for stimuli presented before movement start. Furthermore, only the movement’s start posture, but not the planned end posture, affected blind individuals’ tactile performance. Thus, integration of external coordinates in touch is established without vision, though more selectively than when vision has developed normally, and possibly restricted to movement contexts. The lack of modulation by the planned posture in congenitally blind participants suggests that external coordinates in this group are not mediated by motor efference copy. Instead, the frequent task-related posture changes, that is, movement consequences rather than planning, appear to have induced their use of external coordinates.
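To make the temporal order judgment (TOJ) measure concrete, here is a minimal sketch of fitting a cumulative-Gaussian psychometric function to hypothetical "right hand first" response proportions and reading off the point of subjective equality (PSE) and just-noticeable difference (JND) for uncrossed versus crossed postures. The SOAs, proportions, and model choice are illustrative assumptions, not the study's analysis.

```python
# Minimal sketch: fit a cumulative-Gaussian psychometric function to tactile
# temporal order judgments (TOJ) and compare uncrossed vs. crossed postures.
# SOAs, response proportions, and the norm-CDF model are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pse, jnd):
    """Probability of judging 'right hand first' as a function of SOA (ms)."""
    return norm.cdf(soa, loc=pse, scale=jnd)

soas = np.array([-200, -90, -55, -30, -15, 15, 30, 55, 90, 200], dtype=float)
# Hypothetical proportions of 'right first' responses per SOA
p_uncrossed = np.array([0.02, 0.08, 0.15, 0.30, 0.42, 0.58, 0.72, 0.85, 0.93, 0.98])
p_crossed   = np.array([0.10, 0.22, 0.30, 0.40, 0.46, 0.55, 0.62, 0.70, 0.80, 0.90])

(pse_u, jnd_u), _ = curve_fit(psychometric, soas, p_uncrossed, p0=[0.0, 50.0])
(pse_c, jnd_c), _ = curve_fit(psychometric, soas, p_crossed, p0=[0.0, 50.0])

# A flatter crossed curve (larger JND) is the classic signature of conflict
# between skin-based and external spatial codes when the hands are crossed.
print(f"uncrossed: PSE={pse_u:.1f} ms, JND={jnd_u:.1f} ms")
print(f"crossed:   PSE={pse_c:.1f} ms, JND={jnd_c:.1f} ms")
```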


2019
Vol. 9 (1)
Author(s): Jonathan T. W. Schubert, Verena N. Buchholz, Julia Föcker, Andreas K. Engel, Brigitte Röder, ...

2021
Vol. 11 (1)
Author(s): Blake W. Saurels, Wiremu Hohaia, Kielan Yarrow, Alan Johnston, Derek H. Arnold

Abstract: Prediction is a core function of the human visual system. Contemporary research suggests the brain builds predictive internal models of the world to facilitate interactions with our dynamic environment. Here, we wanted to examine the behavioural and neurological consequences of disrupting a core property of people’s internal models, using naturalistic stimuli. We had people view videos of basketball and asked them to track the moving ball and predict jump shot outcomes, all while we recorded eye movements and brain activity. To disrupt people’s predictive internal models, we inverted footage on half the trials, so dynamics were inconsistent with how movements should be shaped by gravity. When viewing upright videos, people were better at predicting shot outcomes and at tracking the ball position, and they had enhanced alpha-band oscillatory activity in occipital brain regions. The advantage for predicting upright shot outcomes scaled with improvements in ball tracking and occipital alpha-band activity. Occipital alpha-band activity has been linked to selective attention and spatially-mapped inhibitions of visual brain activity. We propose that when people have a more accurate predictive model of the environment, they can more easily parse what is relevant, allowing them to better target irrelevant positions for suppression, resulting both in better predictive performance and in neural markers of inhibited information processing.
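A minimal sketch of how the reported scaling could be tested: per-participant upright-minus-inverted difference scores for prediction accuracy, ball tracking, and occipital alpha power, correlated across participants. The arrays below are simulated placeholders and the correlation choice is an assumption, not the authors' analysis.

```python
# Minimal sketch: relate the per-participant upright-minus-inverted advantage in
# shot prediction to gains in ball tracking and occipital alpha power.
# All arrays are simulated placeholders, not the study's data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 30  # hypothetical number of participants

prediction_gain = rng.normal(0.08, 0.05, n)                       # accuracy (upright - inverted)
tracking_gain = 0.6 * prediction_gain + rng.normal(0, 0.03, n)    # tracking error reduction
alpha_gain = 0.5 * prediction_gain + rng.normal(0, 0.04, n)       # occipital alpha power change

r_track, p_track = pearsonr(prediction_gain, tracking_gain)
r_alpha, p_alpha = pearsonr(prediction_gain, alpha_gain)
print(f"prediction vs tracking: r={r_track:.2f}, p={p_track:.3f}")
print(f"prediction vs alpha:    r={r_alpha:.2f}, p={p_alpha:.3f}")
```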


Author(s): Steven M. Weisberg, Anjan Chatterjee

Abstract: Background: Reference frames ground spatial communication by mapping ambiguous language (for example, navigation: “to the left”) to properties of the speaker (using a Relative reference frame: “to my left”) or the world (Absolute reference frame: “to the north”). People’s preferences for reference frame vary depending on factors like their culture, the specific task in which they are engaged, and differences among individuals. Although most people are proficient with both reference frames, it is unknown whether preference for reference frames is stable within people or varies based on the specific spatial domain. These alternatives are difficult to adjudicate because navigation is one of the few spatial domains that can be naturally solved using multiple reference frames. That is, while spatial navigation directions can be specified using Absolute or Relative reference frames (“go north” vs “go left”), other spatial domains predominantly use Relative reference frames. Here, we used two domains to test the stability of reference frame preference: one based on navigating a four-way intersection, and the other based on the sport of ultimate frisbee. We recruited 58 ultimate frisbee players to complete an online experiment. We measured reaction time and accuracy while participants solved spatial problems in each domain using verbal prompts containing either Relative or Absolute reference frames. Details of the task in both domains were kept as similar as possible while remaining ecologically plausible so that reference frame preference could emerge. Results: We pre-registered a prediction that participants would be faster using their preferred reference frame type and that this advantage would correlate across domains; we did not find such a correlation. Instead, the data reveal that people use distinct reference frames in each domain. Conclusion: This experiment reveals that spatial reference frame types are not stable and may be differentially suited to specific domains. This finding has broad implications for communicating spatial information by offering an important consideration for how spatial reference frames are used in communication: task constraints may affect reference frame choice as much as individual factors or culture.
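A minimal sketch of how reference-frame preference could be indexed per participant as a reaction-time advantage in each domain and then correlated across domains, as in the test described above. The data-frame layout, column names, and use of a rank correlation are assumptions, not the study's analysis code.

```python
# Minimal sketch: index reference-frame preference as a per-participant reaction
# time advantage (Absolute minus Relative) in each domain, then correlate the
# advantage across domains. Data frame columns and values are hypothetical.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n = 58
trials = pd.DataFrame({
    "participant": np.repeat(np.arange(n), 4),
    "domain": np.tile(["navigation", "navigation", "frisbee", "frisbee"], n),
    "frame": np.tile(["Relative", "Absolute"], 2 * n),
    "rt": rng.normal(1.5, 0.3, 4 * n),  # mean RT in seconds per cell
})

# Advantage > 0 means faster with Relative than Absolute prompts in that domain.
wide = trials.pivot_table(index=["participant", "domain"], columns="frame", values="rt")
advantage = (wide["Absolute"] - wide["Relative"]).unstack("domain")

rho, p = spearmanr(advantage["navigation"], advantage["frisbee"])
print(f"cross-domain correlation of RT advantage: rho={rho:.2f}, p={p:.3f}")
```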


2020
pp. 787-801
Author(s): S. Moraresku, K. Vlcek

The dissociation between egocentric and allocentric reference frames is well established. Spatial coding relative to oneself has been associated with a brain network distinct from that for spatial coding based on a cognitive map, which is independent of one's actual position. These differences were, however, revealed by a variety of tasks, spanning both static conditions, using series of images, and dynamic conditions, using movements through space. We aimed to clarify how these paradigms correspond to each other with respect to the neural correlates of the use of egocentric and allocentric reference frames. We review here studies of allocentric and egocentric judgments used in static two- and three-dimensional tasks and compare their results with findings from spatial navigation studies. We argue that the neural correlates of allocentric coding in static conditions resemble those in spatial navigation studies when complex three-dimensional scenes are used and participants' spatial memory is engaged, whereas allocentric representations in two-dimensional tasks are connected with other perceptual and attentional processes. In contrast, the brain networks associated with the egocentric reference frame in static two-dimensional and three-dimensional tasks and in spatial navigation tasks are, with some limitations, more similar. Our review demonstrates the heterogeneity of experimental designs focused on spatial reference frames. At the same time, it indicates similarities in brain activation during reference frame use despite this heterogeneity.


2016
Vol. 28 (12), pp. 1964-1979
Author(s): Marlies E. Vissers, Joram van Driel, Heleen A. Slagter

Filter mechanisms that prevent irrelevant information from consuming the limited storage capacity of visual short-term memory (STM) are critical for goal-directed behavior. Alpha oscillatory activity has been related to proactive filtering of anticipated distraction. Yet, distraction in everyday life is not always anticipated, necessitating rapid, reactive filtering mechanisms. Currently, the oscillatory mechanisms underlying reactive distractor filtering remain unclear. In the current EEG study, we investigated whether reactive filtering of distractors also relies on alpha-band oscillatory mechanisms and explored possible contributions by oscillations in other frequency bands. To this end, participants performed a lateralized change detection task in which a varying and unpredictable number of distractors were presented both in the relevant hemifield, among targets, and in the irrelevant hemifield. Results showed that, whereas proactive distractor filtering was accompanied by lateralization of alpha-band activity over posterior scalp regions, reactive distractor filtering was not associated with modulations of oscillatory power in any frequency band. Yet, behavioral and post hoc ERP analyses clearly showed that participants selectively encoded relevant information. On the basis of these results, we conclude that reactive distractor filtering may not be realized through local modulation of alpha-band oscillatory activity.
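A minimal sketch of the kind of posterior alpha lateralization index used to track proactive distractor filtering, computed from Welch spectra for hypothetical contralateral and ipsilateral posterior channel groups. The channel indices, array shapes, and simulated data are assumptions, not the study's pipeline.

```python
# Minimal sketch: compute a posterior alpha-band lateralization index for a
# lateralized change detection task using Welch spectra. Channel groupings,
# array shapes, and the simulated data are illustrative assumptions.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(3)
sfreq = 500.0
epochs = rng.standard_normal((80, 64, 1000))   # trials x channels x samples (2 s)

# Hypothetical posterior channel indices for each hemisphere
left_post, right_post = [24, 25, 26], [58, 59, 60]

freqs, psd = welch(epochs, fs=sfreq, nperseg=500, axis=-1)   # (trials, channels, freqs)
alpha = (freqs >= 8) & (freqs <= 13)
alpha_power = psd[..., alpha].mean(axis=-1)                  # mean alpha power per trial/channel

# For trials where the memorized hemifield is on the right, the left hemisphere
# is contralateral; the index is (contra - ipsi) / (contra + ipsi).
contra = alpha_power[:, left_post].mean(axis=1)
ipsi = alpha_power[:, right_post].mean(axis=1)
lateralization = (contra - ipsi) / (contra + ipsi)
print(f"mean alpha lateralization index: {lateralization.mean():.3f}")
```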


2019
Vol. 10 (1)
Author(s): Franziska Müller, Guiomar Niso, Soheila Samiee, Maurice Ptito, Sylvain Baillet, ...

Abstract: In congenitally blind individuals, the occipital cortex responds to various nonvisual inputs. Some animal studies raise the possibility that a subcortical pathway allows fast re-routing of tactile information to the occipital cortex, but this has not been shown in humans. Here we show, using magnetoencephalography (MEG), that tactile stimulation produces occipital cortex activations starting as early as 35 ms in congenitally blind individuals, but not in blindfolded sighted controls. Given our measured thalamic response latencies of 20 ms and a mean estimated lateral geniculate nucleus to primary visual cortex transfer time of 15 ms, we argue that this early occipital response is mediated by a direct thalamo-cortical pathway. We also observed stronger directed connectivity in the alpha-band range from the posterior thalamus to the occipital cortex in congenitally blind participants. Our results strongly suggest the contribution of a fast thalamo-cortical pathway to the cross-modal activation of the occipital cortex in congenitally blind humans.
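A minimal sketch of one way an early response onset (on the order of the 35 ms reported above) could be estimated from an evoked time course: the first post-stimulus sample whose deviation from baseline exceeds a baseline-derived threshold for a sustained run. The threshold rule, run length, and simulated data are assumptions, not the authors' method.

```python
# Minimal sketch: estimate the onset latency of an evoked response as the first
# post-stimulus sample exceeding a baseline-derived threshold for a sustained
# run. The threshold rule, run length, and simulated data are assumptions.
import numpy as np

def onset_latency(evoked, times, sfreq, n_sd=3.0, min_run_ms=10.0):
    """evoked: 1-D time course; times: matching time axis in seconds."""
    baseline = evoked[times < 0]
    thresh = n_sd * baseline.std()
    above = np.abs(evoked - baseline.mean()) > thresh
    run = int(min_run_ms / 1000.0 * sfreq)
    for i in np.flatnonzero(above & (times >= 0)):
        if above[i:i + run].all():          # require a sustained supra-threshold run
            return times[i]
    return None

# Simulated occipital time course: noise plus a small deflection starting at 35 ms
sfreq = 1000.0
times = np.arange(-0.1, 0.3, 1.0 / sfreq)
rng = np.random.default_rng(4)
evoked = rng.normal(0.0, 0.1, times.size)
evoked[times >= 0.035] += 1.0

latency = onset_latency(evoked, times, sfreq)
if latency is not None:
    print(f"estimated onset latency: {latency * 1000:.0f} ms")
```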


2015
Vol. 113 (5), pp. 1574-1584
Author(s): T. P. Gutteling, L. P. J. Selen, W. P. Medendorp

Despite the constantly changing retinal image due to eye, head, and body movements, we are able to maintain a stable representation of the visual environment. Various studies on retinal image shifts caused by saccades have suggested that occipital and parietal areas correct for these perturbations by a gaze-centered remapping of the neural image. However, such a uniform, rotational remapping mechanism cannot work during translations, when objects shift on the retina in a more complex, depth-dependent fashion due to motion parallax. Here we tested whether the brain's activity patterns show parallax-sensitive remapping of remembered visual space during whole-body motion. While continuously recording electroencephalography (EEG), we passively translated human subjects while they had to remember the location of a world-fixed visual target, briefly presented in front of or behind the eyes' fixation point prior to the motion. Using a psychometric approach, we assessed the quality of the memory update, which had to be made based on vestibular feedback and other extraretinal motion cues. All subjects showed a variable amount of parallax-sensitive updating errors; that is, the direction of the errors depended on the depth of the target relative to fixation. The EEG recordings show a neural correlate of this parallax-sensitive remapping in alpha-band power at occipito-parietal electrodes. At parietal electrodes, the strength of these alpha-band modulations correlated significantly with updating performance. These results suggest that alpha-band oscillatory activity reflects the time-varying updating of gaze-centered spatial information during parallax-sensitive remapping in whole-body motion.
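As a small worked example of why the required update depends on target depth relative to fixation: under a small-angle approximation, a lateral translation T shifts a world-fixed point at depth d by roughly T * (1/d - 1/f) radians relative to a fixation point at depth f, so the shift reverses sign for targets in front of versus behind fixation. This is a simplified geometry sketch with made-up numbers, not the paper's model.

```python
# Small worked example (simplified, small-angle geometry; not the paper's model):
# for a lateral translation T, a world-fixed point at depth d shifts by roughly
# T * (1/d - 1/f) radians relative to a fixation point at depth f, so the
# required memory update reverses sign for targets in front of vs. behind fixation.
import numpy as np

def parallax_shift(translation_m, target_depth_m, fixation_depth_m):
    """Approximate angular shift (radians) of a target relative to fixation."""
    return translation_m * (1.0 / target_depth_m - 1.0 / fixation_depth_m)

T = 0.10           # 10 cm lateral body translation
fixation = 1.0     # fixation point at 1 m
near_target = 0.8  # target in front of fixation
far_target = 1.3   # target behind fixation

for depth in (near_target, far_target):
    shift_deg = np.degrees(parallax_shift(T, depth, fixation))
    print(f"target at {depth:.1f} m: relative shift {shift_deg:+.2f} deg")
```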


2019
Author(s): Steven Marc Weisberg, Anjan Chatterjee

Background: Reference frames ground spatial communication by mapping ambiguous language (for example, navigation: “to the left”) to properties of the speaker (using a body-based reference frame: “to my left”) or the world (environment-based reference frame: “to the north”). People’s preferences for reference frame vary depending on factors like their culture, the specific task in which they are engaged, and differences among individuals. Although most people are proficient with both reference frames, it is unknown whether preference for reference frames is stable within people or varies based on the specific spatial domain. These alternatives are difficult to adjudicate because navigation is one of few spatial domains that can be naturally solved using multiple reference frames. That is, while spatial navigation directions can be specified using environment-based or body-based reference frames (“go north” vs. “go left”), other spatial domains predominantly use body-based reference frames. Here, we used two domains to test the stability of reference frame preference – one based on navigating a four-way intersection, the other based on the sport of ultimate frisbee. We recruited 58 ultimate frisbee players to complete an online experiment. We measured reaction time and accuracy while participants solved spatial problems in each domain using verbal prompts containing either body- or environment-based reference frames. Details of the task in both domains were kept as similar as possible while remaining ecologically plausible so that reference frame preference could emerge. Results: We pre-registered a prediction that participants would be faster using their preferred reference frame type, and that this advantage would correlate across domains; we did not find such a correlation. Instead, the data reveal that people use distinct reference frames in each domain. Conclusion: This experiment reveals that spatial reference frame types are not stable and may be differentially suited to specific domains. This finding has broad implications for communicating spatial information by offering an important consideration for how spatial reference frames are used in communication: task constraints may affect reference frame choice as much as individual factors or culture.


2021
Author(s): Che-Sheng Yang, Jia Liu, Avinash Singh, Kuan-Chih Huang, Chin-Teng Lin

Recent research into navigation strategies of different spatial reference frame proclivities (RFPs) has revealed that the parietal cortex plays an important role in processing allocentric information, providing a translation function between egocentric and allocentric spatial reference frames. However, most studies have focused on passive experimental environments, which are not truly representative of our daily spatial learning and navigation tasks. To bridge this gap, this study investigated the brain dynamics associated with people switching their preferred spatial strategy across environments in a virtual reality (VR)-based active navigation task. High-resolution electroencephalography (EEG) signals were recorded to monitor spectral perturbations during transitions between egocentric and allocentric frames in a path integration task. Our brain dynamics results showed that navigation involved the parietal cortex with modulation in the alpha band, the occipital cortex with beta and low-gamma band perturbations, and the frontal cortex with theta perturbation. Differences between paths with small and large turning angles were found in the alpha band of parietal-cluster event-related spectral perturbations (ERSPs). In small turning-angle paths, allocentric participants showed stronger alpha desynchronization than egocentric participants; in large turning-angle paths, the difference between the two reference-frame groups in the alpha band was smaller. Behavioral results for homing errors corresponded to the brain dynamics, indicating that larger turning-angle paths led allocentric participants to tend toward egocentric navigation in the active navigation environment.
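A minimal sketch of computing an event-related spectral perturbation (ERSP) as baseline-normalized log power from a short-time Fourier spectrogram, the kind of measure referred to above. The window settings, baseline interval, and simulated single-channel data are assumptions, not the study's EEG pipeline.

```python
# Minimal sketch: compute an event-related spectral perturbation (ERSP) as
# baseline-normalized log power (dB) from a short-time Fourier spectrogram.
# Window settings, baseline interval, and simulated data are assumptions.
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(5)
sfreq = 250.0
epochs = rng.standard_normal((60, int(3 * sfreq)))   # trials x samples, -1 s to +2 s

freqs, times, power = spectrogram(
    epochs, fs=sfreq, nperseg=125, noverlap=100, axis=-1
)                                                     # power: (trials, freqs, times)
times -= 1.0                                          # re-reference time axis to event onset

baseline = times < 0
base_power = power[..., baseline].mean(axis=-1, keepdims=True)  # per trial and frequency
ersp = 10 * np.log10(power / base_power).mean(axis=0)           # trial-averaged dB change

alpha = (freqs >= 8) & (freqs <= 13)
print("post-onset alpha ERSP (dB):", ersp[alpha][:, times >= 0].mean().round(2))
```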

