multisensory interactions
Recently Published Documents

TOTAL DOCUMENTS: 139 (five years: 35)
H-INDEX: 33 (five years: 3)

2021 ◽ Vol 11 (1)
Author(s): John Plass, David Brang

Abstract: Multisensory stimuli speed behavioral responses, but the mechanisms subserving these effects remain disputed. Historically, the observation that multisensory reaction times (RTs) outpace models assuming independent sensory channels has been taken as evidence for multisensory integration (the “redundant target effect”; RTE). However, this interpretation has been challenged by alternative explanations based on stimulus sequence effects, RT variability, and/or negative correlations in unisensory processing. To clarify the mechanisms subserving the RTE, we collected RTs from 78 undergraduates in a multisensory simple RT task. Based on previous neurophysiological findings, we hypothesized that the RTE was unlikely to reflect these alternative mechanisms, and more likely reflected pre-potentiation of sensory responses through crossmodal phase-resetting. Contrary to accounts based on stimulus sequence effects, we found that preceding stimuli explained only 3–9% of the variance in apparent RTEs. Comparing three plausible evidence accumulator models, we found that multisensory RT distributions were best explained by increased sensory evidence at stimulus onset. Because crossmodal phase-resetting increases cortical excitability before sensory input arrives, these results are consistent with a mechanism based on pre-potentiation through phase-resetting. Mathematically, this model entails increasing the prior log-odds of stimulus presence, providing a potential link between neurophysiological, behavioral, and computational accounts of multisensory interactions.
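
The pre-potentiation account is easy to picture as an evidence accumulator whose starting point is shifted upward on multisensory trials, mimicking an increased prior log-odds of stimulus presence. The following is a minimal simulation sketch of that idea; the function name simulate_rts and all parameter values are illustrative assumptions, not the models or fits reported in the study.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_rts(n_trials, drift, threshold, start=0.0, dt=0.001, noise=1.0):
        # First-passage times of a noisy evidence accumulator (Euler-Maruyama).
        # start > 0 mimics pre-potentiation: evidence begins closer to threshold,
        # i.e. a higher prior log-odds that a stimulus is present.
        rts = np.empty(n_trials)
        for i in range(n_trials):
            x, t = start, 0.0
            while x < threshold:
                x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                t += dt
            rts[i] = t
        return rts

    # Illustrative parameters only (assumed, not fitted to the reported data).
    unisensory = simulate_rts(2000, drift=8.0, threshold=1.0, start=0.0)
    multisensory = simulate_rts(2000, drift=8.0, threshold=1.0, start=0.3)
    print(f"mean unisensory RT:   {unisensory.mean():.3f} s")
    print(f"mean multisensory RT: {multisensory.mean():.3f} s")

With identical drift and threshold, the shifted starting point alone speeds the simulated multisensory RTs, which is the signature the abstract attributes to increased sensory evidence at stimulus onset.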


2021 ◽ Vol 15
Author(s): Isma Zulfiqar, Michelle Moerel, Agustin Lage-Castellanos, Elia Formisano, Peter De Weerd

Recent studies have highlighted the possible contributions of direct connectivity between early sensory cortices to audiovisual integration. Anatomical connections between the early auditory and visual cortices are concentrated in visual sites representing the peripheral field of view. Here, we aimed to engage early sensory interactive pathways with simple, far-peripheral audiovisual stimuli (auditory noise and visual gratings). Using a modulation detection task in one modality performed at an 84% correct threshold level, we investigated multisensory interactions by simultaneously presenting weak stimuli from the other modality in which the temporal modulation was barely detectable (at 55 and 65% correct detection performance). Furthermore, we manipulated the temporal congruence between the cross-sensory streams. We found evidence for an influence of barely detectable visual stimuli on the response times for auditory stimuli, but not for the reverse effect. These visual-to-auditory influences only occurred for specific phase differences (at onset) between the modulated audiovisual stimuli. We discuss our findings in light of a possible role of direct interactions between early visual and auditory areas, along with contributions from higher-order association cortex. In sum, our results extend the behavioral evidence of audiovisual processing to the far periphery and suggest, within this specific experimental setting, an asymmetry between the auditory influence on visual processing and the visual influence on auditory processing.
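
The onset-phase manipulation can be made concrete with a short sketch. The snippet below builds sinusoidal temporal-modulation envelopes for the auditory noise and the visual grating contrast with a controllable phase difference between the two streams; the modulation rate, sampling rates, and modulation depth are assumed values for illustration, not the published stimulus parameters.

    import numpy as np

    def modulation_envelope(duration_s, rate_hz, phase_deg, fs, depth=1.0):
        # Sinusoidal temporal-modulation envelope with a controllable onset phase.
        t = np.arange(0.0, duration_s, 1.0 / fs)
        return 0.5 * (1.0 + depth * np.sin(2.0 * np.pi * rate_hz * t
                                           + np.deg2rad(phase_deg)))

    fs_audio = 44100
    rng = np.random.default_rng(1)
    carrier = rng.standard_normal(fs_audio)  # 1 s of auditory noise carrier
    auditory = carrier * modulation_envelope(1.0, 4.0, 0.0, fs_audio)
    # Visual grating contrast modulated at the same rate, 90 degrees out of phase
    # (sampled at an assumed 60 Hz display frame rate):
    visual_contrast = modulation_envelope(1.0, 4.0, 90.0, fs=60)

Setting phase_deg equal across the two envelopes yields the temporally congruent condition; nonzero offsets produce the onset phase differences that, per the abstract, gated the visual-to-auditory influence.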


2021
Author(s): Céline Jost, Brigitte Le Pévédic, Gérard Uzan

This paper discusses the value of using multisensory technologies for human cognition training. First, it introduces multisensory interactions, focusing on advances in two fields: Human-Computer Interaction and mulsemedia. Second, it presents two different multisensory systems, resulting from the Robadom and StimSense projects, that could be adapted for the community. Finally, it defines the concept of the scenagram and describes its application scope, boundaries, and use cases, offering a first classification of this new concept.


2021 ◽ Vol 11 (1)
Author(s): Patrycja Delong, Uta Noppeney

Abstract: Information integration is considered a hallmark of human consciousness. Recent research has challenged this tenet by showing multisensory interactions in the absence of awareness. This psychophysics study assessed the impact of spatial and semantic correspondences on audiovisual binding in the presence and absence of visual awareness by combining forward-backward masking with spatial ventriloquism. Observers were presented with object pictures and synchronous sounds that were spatially and/or semantically congruent or incongruent. On each trial, observers located the sound, identified the picture, and rated the picture’s visibility. We observed a robust ventriloquist effect for subjectively visible and invisible pictures, indicating that pictures that evade our perceptual awareness influence where we perceive sounds. Critically, semantic congruency enhanced these visual biases on perceived sound location only when the picture entered observers’ awareness. Our results demonstrate that crossmodal influences operating from vision to audition and vice versa are interactively controlled by spatial and semantic congruency in the presence of awareness. However, when visual processing is disrupted by masking procedures, audiovisual interactions no longer depend on semantic correspondences.
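
A standard way to quantify a ventriloquist bias of this kind is reliability-weighted cue combination; this is a textbook model given here for orientation, not necessarily the authors’ exact analysis. The perceived sound location \hat{S} is pulled toward the visual location in proportion to the relative reliabilities of the two cues:

    \hat{S} = w_V \, x_V + w_A \, x_A,
    \qquad
    w_V = \frac{\sigma_V^{-2}}{\sigma_V^{-2} + \sigma_A^{-2}},
    \quad
    w_A = 1 - w_V

Here x_A and x_V are the auditory and visual location estimates and \sigma_A^2, \sigma_V^2 their variances. Note that semantic congruency gating the bias, as reported above, is not captured by this weighting alone; it would require an extension such as inference over whether the two signals share a common source.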


2021
Author(s): Dawsyn Borland

This project presents the idea that historic house museums (HHMs) can use Augmented Reality (AR) and physical interactive space to bring stories and characters of the past back to life. Designed to foster self-directed discovery and informal learning of the space and story, this project uses a historically factual AR character to reanimate the sense of human presence within the space. Rather than disrupting the traditional narratives of HHMs, this mixed-media storytelling experience extends historical stories by making them more personal and relatable. Using tangible stories, multisensory interactions, and an AR experience to extend the historical narrative, this form of museological work creates more opportunities for empathic, character-driven storytelling. Lastly, I note that this proof of concept could be used in multiple applications, as both a storytelling medium and a communication tool.


Author(s): Collins Opoku-Baah, Adriana M. Schoenhaut, Sarah G. Vassall, David A. Tovar, Ramnarayan Ramachandran, ...

Abstract: In a naturalistic environment, auditory cues are often accompanied by information from other senses, which can be redundant with or complementary to the auditory information. Although the multisensory interactions that arise from this combination of information, and that shape auditory function, are seen across all sensory modalities, our greatest body of knowledge to date centers on how vision influences audition. In this review, we attempt to capture the current state of our understanding of this topic. Following a general introduction, the review is divided into five sections. In the first section, we review the psychophysical evidence in humans regarding vision’s influence on audition, making the distinction between vision’s ability to enhance versus alter auditory performance and perception. Three examples are then described that serve to highlight vision’s ability to modulate auditory processes: spatial ventriloquism, cross-modal dynamic capture, and the McGurk effect. The final part of this section discusses models that have been built from the available psychophysical data and that seek to provide greater mechanistic insight into how vision can impact audition. The second section reviews the extant neuroimaging and far-field imaging work on this topic, with a strong emphasis on the roles of feedforward and feedback processes, on imaging insights into the causal nature of audiovisual interactions, and on the limitations of current imaging-based approaches. These limitations point to a greater need for machine-learning-based decoding approaches to understanding how auditory representations are shaped by vision. The third section reviews the wealth of neuroanatomical and neurophysiological data from animal models that highlights audiovisual interactions at the neuronal and circuit level in both subcortical and cortical structures. It also speaks to the functional significance of audiovisual interactions for two critically important facets of auditory perception: scene analysis and communication. The fourth section presents current evidence for alterations in audiovisual processes in three clinical conditions: autism, schizophrenia, and sensorineural hearing loss. These changes in audiovisual interactions are postulated to have cascading effects on higher-order domains of dysfunction in these conditions. The final section highlights ongoing work seeking to leverage our knowledge of audiovisual interactions to develop better remediation approaches to these sensory-based disorders, founded in concepts of perceptual plasticity, in which vision has been shown to have the capacity to facilitate auditory learning.
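
As one representative formulation of the model class surveyed in the first section (an example for orientation, not a result of this review), Bayesian causal inference weighs a common-source hypothesis C = 1 against independent sources C = 2 given the auditory and visual measurements x_A, x_V:

    p(C = 1 \mid x_A, x_V)
    = \frac{p(x_A, x_V \mid C = 1)\, p_{\mathrm{common}}}
           {p(x_A, x_V \mid C = 1)\, p_{\mathrm{common}}
            + p(x_A, x_V \mid C = 2)\,(1 - p_{\mathrm{common}})}

The posterior probability of a common cause then determines how strongly the auditory estimate is pulled toward vision, accommodating both ventriloquist-style binding and its breakdown at large spatial or temporal discrepancies.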


2021 ◽ Vol 11 (1)
Author(s): Paul VanGilder, Ying Shi, Gregory Apker, Christopher A. Buneo

Abstract: Although multisensory integration is crucial for sensorimotor function, it is unclear how visual and proprioceptive sensory cues are combined in the brain during motor behaviors. Here we characterized the effects of multisensory interactions on local field potential (LFP) activity obtained from the superior parietal lobule (SPL) as non-human primates performed a reaching task with either unimodal (proprioceptive) or bimodal (visual-proprioceptive) sensory feedback. Based on previous analyses of spiking activity, we hypothesized that evoked LFP responses would be tuned to arm location but would be suppressed on bimodal trials, relative to unimodal trials. We also expected to see a substantial number of recording sites with enhanced beta band spectral power for only one set of feedback conditions (e.g. unimodal or bimodal), as was previously observed for spiking activity. We found that evoked activity and beta band power were tuned to arm location at many individual sites, though this tuning often differed between unimodal and bimodal trials. Across the population, both evoked and beta activity were consistent with feedback-dependent tuning to arm location, while beta band activity also showed evidence of response suppression on bimodal trials. The results suggest that multisensory interactions can alter the tuning and gain of arm position-related LFP activity in the SPL.
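
For readers unfamiliar with the measure, beta-band spectral power of an LFP trace can be estimated as follows. This is a generic sketch using common band limits and window lengths, not the authors’ exact analysis pipeline, and the synthetic trials stand in for recorded data.

    import numpy as np
    from scipy.signal import welch
    from scipy.integrate import trapezoid

    def beta_power(lfp, fs=1000.0, band=(13.0, 30.0)):
        # Welch power spectral density, then integrate over the beta band.
        freqs, psd = welch(lfp, fs=fs, nperseg=int(fs))  # 1-s windows
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return trapezoid(psd[mask], freqs[mask])

    # Tuning to arm location: mean beta power per location condition
    # (random noise here as a stand-in for recorded LFP trials).
    rng = np.random.default_rng(2)
    trials_by_location = {loc: rng.standard_normal((20, 2000)) for loc in range(8)}
    tuning = {loc: np.mean([beta_power(trial) for trial in trials])
              for loc, trials in trials_by_location.items()}

Comparing such per-location means between unimodal and bimodal feedback conditions is the kind of contrast the study uses to assess feedback-dependent tuning and bimodal response suppression.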


2021 ◽ Vol 11 (5) ◽ pp. 57
Author(s): Valentina Cesari, Benedetta Galgani, Angelo Gemignani, Danilo Menicucci

Online learning is a feasible alternative to in-person attendance during the COVID-19 pandemic. In this period, information technologies have allowed experiences to be shared, but have also highlighted some limitations compared to traditional learning. Learning is strongly supported by certain qualities of consciousness, such as flow (the optimal state of absorption and engagement in an activity) and sense of presence (the feeling of exerting control over, interacting with, and becoming immersed in real or virtual environments), by behavioral, emotional, and cognitive engagement, and by the need for social interaction. During online learning, feelings of disconnection, social isolation, distraction, boredom, and lack of control exert a detrimental effect on the ability to reach the state of flow, the feeling of presence, and the feeling of social involvement. Since online environments can prevent these learning-supporting variables from arising, this article describes the role of flow, presence, engagement, and social interaction during online sessions and characterizes multisensory stimulation as a way to cope with these issues. We argue that the use of augmented, mixed, or virtual reality can support the above-mentioned domains and thus counteract the detrimental effects of physical distance. Such support could be further increased by enhancing multisensory stimulation modalities within augmented and virtual environments.

