Peripersonal space in the front, rear, left and right directions for audio-tactile multisensory integration

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yusuke Matsuda ◽  
Maki Sugimoto ◽  
Masahiko Inami ◽  
Michiteru Kitazaki

Abstract: Peripersonal space (PPS) is important for humans to perform body–environment interactions. However, most previous studies focused on only one direction of PPS, typically the front space, even though PPS has been suggested to exist in all directions. We aimed to measure and compare the peri-trunk PPS in four directions (front, rear, left, and right). To measure the PPS, we paired a tactile stimulus with an auditory stimulus, because auditory information is available at any time and from any direction. Task-irrelevant sounds either approached or receded from the observer. Observers were asked to respond as quickly as possible when a tactile stimulus was delivered by a vibrator on their chest. We found that peri-trunk PPS representations exist for an approaching sound, irrespective of its direction.
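A common way to quantify a PPS boundary in this kind of audio-tactile paradigm (not described in the abstract itself, so the analysis and all values below are assumptions for illustration) is to fit a sigmoid to tactile reaction times as a function of the task-irrelevant sound's distance at tactile onset and take the curve's central point as the boundary. A minimal sketch in Python:

```python
# Minimal sketch (assumed analysis, not the authors' code): estimate a PPS
# boundary by fitting a sigmoid to tactile reaction times as a function of the
# looming sound's distance at tactile onset. All data values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def rt_sigmoid(d, rt_near, rt_far, boundary, slope):
    """Reaction time (ms) as a function of sound distance d (cm)."""
    return rt_far + (rt_near - rt_far) / (1.0 + np.exp((d - boundary) / slope))

# Hypothetical mean tactile RTs for one sound direction
distances = np.array([15.0, 30.0, 45.0, 60.0, 75.0, 90.0])        # cm from the trunk
mean_rts  = np.array([310.0, 314.0, 330.0, 354.0, 366.0, 368.0])  # ms

params, _ = curve_fit(rt_sigmoid, distances, mean_rts,
                      p0=[310.0, 370.0, 50.0, 10.0])
print(f"estimated PPS boundary: {params[2]:.1f} cm")
```

The fitted central point marks where the approaching sound begins to speed tactile detection, which is how facilitation inside PPS is typically operationalized.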

2019 ◽  
Vol 6 (8) ◽  
pp. 181878 ◽  
Author(s):  
Lise Hobeika ◽  
Marine Taffou ◽  
Isabelle Viaud-Delmon

Multisensory integration of stimuli occurring in the area surrounding our bodies gives rise to the functional representation of peripersonal space (PPS). PPS extent is flexible according to the affective context and the target of an action, but little is known about how social context modulates it. We used an audiotactile interaction task to investigate the PPS of individuals during social interaction. Participants had to detect a tactile stimulus as fast as possible while task-irrelevant looming sounds were presented; they performed the task paired as collaborative dyads and as competitive dyads. We also measured PPS in participants seated near an inactive individual. PPS boundaries were modulated only when participants collaborated with a partner, in the form of an extension into the right hemispace, independently of the location of the partner. This suggests that space processing is modified during collaborative tasks. During collective actions, a supra-individual representation of the space of action may be involved, allowing us to adapt our individual motor control to interacting as a group with the external world. Reassessing multisensory integration in the light of its potential social sensitivity might reveal that low-level mechanisms are modified by the need to interact with others.


2011 ◽  
Vol 24 (4) ◽  
pp. 369-389 ◽  
Author(s):  
Lihan Chen ◽  
Xiaolin Zhou

Abstract: Apparent motion can occur within a particular modality or between modalities, in which a visual or tactile stimulus at one location is perceived as moving towards the location of a subsequent tactile or visual stimulus. Intramodal apparent motion has been shown to be affected, or 'captured', by information from another, task-irrelevant modality, as in spatial or temporal ventriloquism. Here we investigate whether and how intermodal apparent motion is affected by motion-direction cues or temporal-interval information from a third modality. We demonstrated that both moving and asynchronous static sounds can capture intermodal (visual–tactile and tactile–visual) apparent motion; moreover, while auditory direction cues have less impact on the perception of intramodal visual apparent motion than on the perception of intramodal tactile or intermodal visual–tactile apparent motion, auditory temporal information has an equivalent impact on both intramodal and intermodal apparent motion. These findings suggest that intermodal apparent motion is susceptible to the influence of dynamic or static auditory information in similar ways as intramodal visual or tactile apparent motion.


2021 ◽  
pp. 174702182199003
Author(s):  
Andy J Kim ◽  
David S Lee ◽  
Brian A Anderson

Previously reward-associated stimuli have consistently been shown to involuntarily capture attention in the visual domain. Although previously reward-associated but currently task-irrelevant sounds have also been shown to interfere with visual processing, it remains unclear whether such stimuli can interfere with the processing of task-relevant auditory information. To address this question, we modified a dichotic listening task to measure interference from task-irrelevant but previously reward-associated sounds. In a training phase, participants were simultaneously presented with a spoken letter and number in different auditory streams and learned to associate the correct identification of each of three letters with high, low, and no monetary reward, respectively. In a subsequent test phase, participants were again presented with the same auditory stimuli but were instead instructed to report the number while ignoring spoken letters. In both the training and test phases, response time measures demonstrated that attention was biased in favour of the auditory stimulus associated with high value. Our findings demonstrate that attention can be biased towards learned reward cues in the auditory domain, interfering with goal-directed auditory processing.


2018 ◽  
Vol 119 (5) ◽  
pp. 1981-1992 ◽  
Author(s):  
Laura Mikula ◽  
Valérie Gaveau ◽  
Laure Pisella ◽  
Aarlenne Z. Khan ◽  
Gunnar Blohm

When reaching to an object, information about the target location as well as the initial hand position is required to program the motor plan for the arm. The initial hand position can be determined by proprioceptive information as well as visual information, if available. Bayes-optimal integration posits that we utilize all information available, with greater weighting on the sense that is more reliable, thus generally weighting visual information more than the usually less reliable proprioceptive information. The criterion by which information is weighted has not been explicitly investigated; it has been assumed that the weights are based on task- and effector-dependent sensory reliability, requiring an explicit neuronal representation of variability. However, the weights could also be determined implicitly through learned modality-specific integration weights rather than effector-dependent reliability. While the former hypothesis predicts different proprioceptive weights for the left and right hands, e.g., due to different reliabilities of dominant vs. nondominant hand proprioception, the latter hypothesis predicts the same integration weights for both hands. We found that the proprioceptive weights for the left and right hands were extremely consistent, regardless of differences in sensory variability for the two hands as measured in two separate, complementary tasks. Thus we propose that proprioceptive weights during reaching are learned across both hands, with a high interindividual range but independent of each hand's specific proprioceptive variability. NEW & NOTEWORTHY How visual and proprioceptive information about the hand are integrated to plan a reaching movement is still debated. The goal of this study was to clarify how the weights assigned to vision and proprioception during multisensory integration are determined. We found evidence that the integration weights are modality specific rather than based on the sensory reliabilities of the effectors.
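The Bayes-optimal weighting referred to above is the standard inverse-variance rule for combining Gaussian cues. The sketch below illustrates that rule only; the variances and hand positions are invented, not the study's data:

```python
# Minimal sketch of reliability-based (inverse-variance) cue combination, the
# standard formalization of Bayes-optimal integration. Values are illustrative.
def visual_weight(sigma_vision, sigma_proprio):
    """Weight assigned to vision when combining two Gaussian position cues."""
    rel_v = 1.0 / sigma_vision ** 2     # reliability = inverse variance
    rel_p = 1.0 / sigma_proprio ** 2
    return rel_v / (rel_v + rel_p)

w_v = visual_weight(sigma_vision=0.5, sigma_proprio=1.0)   # vision twice as precise
x_vision, x_proprio = 10.0, 12.0                           # seen vs. felt hand position (cm)
x_hat = w_v * x_vision + (1.0 - w_v) * x_proprio           # combined estimate
print(f"visual weight = {w_v:.2f}, combined hand estimate = {x_hat:.2f} cm")
```

Under the reliability-based hypothesis, w_v would differ between dominant and nondominant hands; under the modality-specific hypothesis favored by the authors, it would not.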


2019 ◽  
Vol 122 (2) ◽  
pp. 552-562 ◽  
Author(s):  
Ayoub Daliri ◽  
Jonathan Dittman

When we produce speech movements, we also predict the auditory consequences of the movements. We use discrepancies between our predictions and incoming auditory information to modify our future movements (adapt). Although auditory errors are crucial for speech motor learning, not all perceived auditory errors are consequences of our own actions. Therefore, the brain needs to evaluate the relevance of perceived auditory errors. In this study, we examined error assessment processes involved in auditory motor adaptation by systematically manipulating the correspondence between speech motor outputs and their auditory consequences during speaking. Participants (n = 30) produced speech while they received perturbed auditory feedback (e.g., produced “head” but heard a word that sounded like “had”). In one condition, auditory errors were related to participants’ productions (task-relevant errors). In another condition, auditory errors were defined by the experimenter and had no correspondence with participants’ speech output (task-irrelevant errors). We found that the extent of adaptation and error sensitivity (derived from a state-space model) were greater in the condition with task-relevant auditory errors compared with those in the condition with task-irrelevant auditory errors. Additionally, participants with smaller perceptual targets (derived from a categorical perception task) adapted more to auditory perturbations, and participants with larger perceptual targets adapted less. Similarly, participants with smaller perceptual targets were more sensitive to errors in the condition with task-relevant auditory errors. Together, our results highlight the intricate mechanisms, involving both perception and production systems, that the brain uses to optimally integrate auditory errors for successful speech motor learning. NEW & NOTEWORTHY Feedback monitoring is essential for accurate speech production. By providing empirical results and a computational framework, we show that 1) the brain evaluates the relevance of auditory errors and responds more to relevant errors, and 2) smaller perceptual targets are associated with more sensitivity to errors and more auditory motor adaptation.
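Error sensitivity in this literature is typically estimated with a single-state state-space model of trial-to-trial adaptation. The following is a minimal sketch of that class of model; the form is assumed and the parameter values are illustrative, not the paper's fitted estimates:

```python
# Minimal sketch of a single-state state-space model of auditory-motor
# adaptation (assumed form; parameter values are illustrative, not fitted).
# x[n] is the compensatory change in production on trial n, `a` the retention
# factor, and `b` the error sensitivity such models estimate.
import numpy as np

def simulate_adaptation(n_trials=150, a=0.95, b=0.15, perturbation=1.0):
    x = np.zeros(n_trials)
    for n in range(n_trials - 1):
        error = x[n] + perturbation      # perceived auditory error on trial n
        x[n + 1] = a * x[n] - b * error  # retain previous state, correct against the error
    return x

x = simulate_adaptation()
# Negative values indicate production shifting opposite to the perturbation.
print(f"asymptotic adaptation: {x[-1]:.2f} (in units of the perturbation)")
```

Fitting `a` and `b` to trial-by-trial production data is what allows error sensitivity to be compared between the task-relevant and task-irrelevant error conditions.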


1998 ◽  
Vol 10 (5) ◽  
pp. 581-589 ◽  
Author(s):  
Elisabetta Làdavas ◽  
Giuseppe di Pellegrino ◽  
Alessandro Farnè ◽  
Gabriele Zeloni

Current interpretations of extinction suggest that the disorder is due to an unbalanced competition between ipsilesional and contralesional representations of space. The question addressed in this study is whether the competition between left and right representations of space in one sensory modality (i.e., touch) can be reduced or exacerbated by the activation of an intact spatial representation in a different modality that is functionally linked to the damaged representation (i.e., vision). This hypothesis was tested in 10 right-hemisphere-lesioned patients who suffered from reliable tactile extinction. We found that a visual stimulus presented near the patient's ipsilesional hand (i.e., in visual peripersonal space) inhibited the processing of a tactile stimulus delivered to the contralesional hand (cross-modal visuotactile extinction) to the same extent as did an ipsilesional tactile stimulation (unimodal tactile extinction). It was also found that a visual stimulus presented near the contralesional hand improved the detection of a tactile stimulus applied to the same hand. In striking contrast, weaker modulatory effects of vision on touch perception were observed when the visual stimulus was presented far from the space immediately around the patient's hand (i.e., in extrapersonal space). This study clearly demonstrates the existence of a visual peripersonal space centered on the hand in humans and its modulatory effects on tactile perception. These findings are explained by referring to the activity of bimodal neurons in the premotor and parietal cortex of the macaque, which have tactile receptive fields on the hand and corresponding visual receptive fields in the space immediately adjacent to the tactile fields.


2005 ◽  
Vol 16 (3) ◽  
pp. 228-235 ◽  
Author(s):  
Sharon E. Guttman ◽  
Lee A. Gilroy ◽  
Randolph Blake

When the senses deliver conflicting information, vision dominates spatial processing, and audition dominates temporal processing. We asked whether this sensory specialization results in cross-modal encoding of unisensory input into the task-appropriate modality. Specifically, we investigated whether visually portrayed temporal structure receives automatic, obligatory encoding in the auditory domain. In three experiments, observers judged whether the changes in two successive visual sequences followed the same or different rhythms. We assessed temporal representations by measuring the extent to which both task-irrelevant auditory information and task-irrelevant visual information interfered with rhythm discrimination. Incongruent auditory information significantly disrupted task performance, particularly when presented during encoding; by contrast, varying the nature of the rhythm-depicting visual changes had minimal impact on performance. Evidently, the perceptual system automatically and obligatorily abstracts temporal structure from its visual form and represents this structure using an auditory code, resulting in the experience of “hearing visual rhythms.”


Author(s):  
Elke B. Lange ◽  
Jens Fünderich ◽  
Hartmut Grimm

Abstract: We investigated how visual and auditory information contributes to emotion communication during singing. Classically trained singers applied two different facial expressions (expressive/suppressed) to pieces from their song and opera repertoire. Recordings of the singers were evaluated by laypersons or experts, presented to them in three different modes: auditory, visual, and audio–visual. A manipulation check confirmed that the singers succeeded in manipulating the face while keeping the sound highly expressive. Analyses focused on whether the visual difference or the auditory concordance between the two versions determined perception of the audio–visual stimuli. When evaluating expressive intensity or emotional content, a clear effect of visual dominance emerged. Experts made more use of the visual cues than laypersons. Consistency measures between uni-modal and multimodal presentations did not explain the visual dominance. The evaluation of seriousness served as a control: the uni-modal stimuli were rated as expected, but multisensory evaluations converged without visual dominance. Our study demonstrates that long-term knowledge and task context affect multisensory integration. Even though singers' orofacial movements are dominated by sound production, their facial expressions can communicate the emotions composed into the music, and observers do not fall back on the audio information instead. Studies such as ours are important for understanding multisensory integration in applied settings.


2019 ◽  
Vol 31 (8) ◽  
pp. 1155-1172 ◽  
Author(s):  
Jean-Paul Noel ◽  
Andrea Serino ◽  
Mark T. Wallace

The actionable space surrounding the body, referred to as peripersonal space (PPS), has been the subject of significant interest of late within the broader framework of embodied cognition. Neurophysiological and neuroimaging studies have shown the representation of PPS to be built from visuotactile and audiotactile neurons within a frontoparietal network whose activity is modulated by the presence of stimuli in proximity to the body. In contrast to single-unit and fMRI studies, an area of inquiry that has received little attention is the EEG characterization associated with PPS processing. Furthermore, although PPS is encoded by multisensory neurons, to date there has been no EEG study systematically examining neural responses to unisensory and multisensory stimuli as these are presented outside, near, and within the boundary of PPS. Similarly, it remains poorly understood whether multisensory integration is generally more likely at certain spatial locations (e.g., near the body) or whether the cross-modal tactile facilitation that occurs within PPS is simply due to a reduction in the distance between sensory stimuli when close to the body, in line with the spatial principle of multisensory integration. In the current study, to examine the neural dynamics of multisensory processing within and beyond the PPS boundary, we present auditory, visual, and audiovisual stimuli at various distances relative to participants' reaching limit—an approximation of PPS—while recording continuous high-density EEG. We ask whether multisensory (vs. unisensory) processing varies as a function of stimulus–observer distance. Results demonstrate a significant increase of global field power (i.e., the overall strength of response across the entire electrode montage) for stimuli presented at the PPS boundary—an increase that is largest under multisensory (i.e., audiovisual) conditions. Source localization of the major contributors to this global field power difference suggests neural generators in the intraparietal sulcus and insular cortex, hubs for visuotactile and audiotactile PPS processing. Furthermore, when the neural dynamics are examined in more detail, changes in the reliability of evoked potentials at centroparietal electrodes are predictive, on a subject-by-subject basis, of later changes in estimated current strength at the intraparietal sulcus linked to stimulus proximity to the PPS boundary. Together, these results provide a previously unrealized view into the neural dynamics and temporal code associated with the encoding of nontactile multisensory information around the PPS boundary.
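Global field power, the dependent measure highlighted above, is simply the spatial standard deviation of the average-referenced scalp potentials at each time point. A minimal sketch (placeholder data, not the study's EEG):

```python
# Minimal sketch of the global field power (GFP) computation: the spatial
# standard deviation across all electrodes at each time point. The array here
# is random placeholder data standing in for an average-referenced recording.
import numpy as np

def global_field_power(eeg):
    """eeg: array of shape (n_electrodes, n_timepoints), average-referenced."""
    return eeg.std(axis=0)              # spatial SD at each time point

rng = np.random.default_rng(0)
fake_eeg = rng.normal(size=(128, 500))  # 128 channels, 500 samples (placeholder)
gfp = global_field_power(fake_eeg)
print(gfp.shape)                        # (500,)
```

Comparing GFP time courses across stimulus distances is what reveals the response enhancement at the PPS boundary reported in the abstract.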

