multisensory input
Recently Published Documents

Total documents: 27 (five years: 7)
H-index: 9 (five years: 1)

2021 · Vol 12
Author(s): Nasim Boustani, Reza Pishghadam, Shaghayegh Shayesteh

Multisensory input is an aid to language comprehension; however, it remains to be seen to what extent various combinations of senses may affect the P200 component and attention-related cognitive processing associated with L2 sentence comprehension, along with the N400 as a later component. To this aim, we provided multisensory input, enriched with data from either three senses (i.e., exvolvement) or five senses (i.e., involvement), for a list of unfamiliar words presented to 18 subjects. Subsequently, the words were embedded in an acceptability judgment task with 360 pragmatically correct and incorrect sentences. The task, along with the ERP recording, was conducted after a 1-week consolidation period to track any possible behavioral and electrophysiological distinctions in the retrieval of information with various sense combinations. Behaviorally, we found that the combination of five senses led to more accurate and quicker responses. Electrophysiologically, the five-sense combination induced a larger P200 amplitude than the three-sense combination. The implication is that as the sensory weight of the input increases, vocabulary retrieval is facilitated and more attention is directed to the overall comprehension of L2 sentences, which leads to more accurate and quicker responses. This finding was not, however, reflected in the neural activity of the N400 component.


Author(s): Rena Bayramova, Irene Valori, Phoebe E. McKenna-Plumley, Claudio Zandonella Callegher, Teresa Farroni

Past research on the advantages of multisensory input for remembering spatial information has mainly focused on memory for objects or surrounding environments. Less is known about the role of cue combination in memory for the location of one’s own body in space. In a previous study, we investigated participants’ accuracy in reproducing a rotation angle in a self-rotation task. Here, we focus on the memory aspect of the task. Participants had to rotate themselves back to a specified starting position in three different sensory conditions: a blind condition, a condition with disrupted proprioception, and a condition where both vision and proprioception were reliably available. To investigate the difference between the encoding and storage phases of remembering proprioceptive information, rotation amplitude and recall delay were manipulated. The task was completed in a real testing room and in immersive virtual reality (IVR) simulations of the same environment. We found that proprioceptive accuracy is lower when vision is not available and that performance is generally less accurate in IVR. In reality conditions, the degree of rotation affected accuracy only in the blind condition, whereas in IVR it caused more errors in both the blind condition and, to a lesser degree, when proprioception was disrupted. These results indicate an improvement in encoding one’s own body location when vision and proprioception are optimally integrated. No reliable effect of delay was found.


2021
Author(s): Sevan K Harootonian, Arne D Ekstrom, Robert C Wilson

Successful navigation requires the ability to compute one’s location and heading from incoming multisensory information. Previous work has shown that this multisensory input comes in two forms: body-based idiothetic cues, from one’s own rotations and translations, and visual allothetic cues, from the environment (usually visual landmarks). However, exactly how these two streams of information are integrated is unclear, with some models suggesting that body-based idiothetic and visual allothetic cues are combined, while others suggest they compete. In this paper we investigated the integration of body-based idiothetic and visual allothetic cues in the computation of heading using virtual reality. In our experiment, participants performed a series of body turns of up to 360 degrees in the dark with only a brief flash (300 ms) of visual feedback en route. Because the environment was virtual, we had full control over the visual feedback and were able to vary the offset between this feedback and the true heading angle. By measuring the effect of the feedback offset on the angle participants turned, we were able to determine the extent to which they incorporated visual feedback as a function of the offset error. By further modeling this behavior we were able to quantify the computations people used. While there were considerable individual differences in performance on our task, with some participants mostly ignoring the visual feedback and others relying on it almost entirely, our modeling results suggest that almost all participants used the same strategy, in which idiothetic and allothetic cues are combined when the mismatch between them is small but compete when the mismatch is large. These findings suggest that participants update their estimate of heading using a hybrid strategy that mixes the combination and competition of cues.
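The hybrid strategy this abstract describes can be sketched as a simple model: when the mismatch between the idiothetic and allothetic heading cues is small, the cues are combined with reliability (inverse-variance) weights; when it is large, the more reliable cue wins. This is a minimal illustration under assumed Gaussian cue noise; the function name, variances, and threshold are illustrative, not the authors' fitted model.

```python
def estimate_heading(idiothetic, allothetic, var_idio, var_allo, threshold):
    """Hybrid cue integration: combine cues when they agree, let them compete when they don't.

    All angles are in degrees; the variances describe each cue's (assumed Gaussian) noise.
    """
    mismatch = abs(allothetic - idiothetic)
    if mismatch < threshold:
        # Combination: reliability-weighted average (inverse-variance weighting)
        w_idio = (1 / var_idio) / (1 / var_idio + 1 / var_allo)
        return w_idio * idiothetic + (1 - w_idio) * allothetic
    # Competition: the more reliable (lower-variance) cue dominates
    return idiothetic if var_idio < var_allo else allothetic

# Small mismatch: equally reliable cues are averaged to the midpoint
print(estimate_heading(90.0, 100.0, var_idio=25.0, var_allo=25.0, threshold=30.0))  # 95.0
# Large mismatch: the noisier visual cue is discarded
print(estimate_heading(90.0, 200.0, var_idio=25.0, var_allo=100.0, threshold=30.0))  # 90.0
```

Individual differences in the study would correspond, in a sketch like this, to different per-participant thresholds and cue variances.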


Author(s): M. Ertl, P. Zu Eulenburg, M. Woller, M. Dieterich

The successful cortical processing of multisensory input typically requires the integration of data represented in different reference systems to perform many fundamental tasks, such as bipedal locomotion. Animal studies have provided insights into the integration processes performed by the neocortex and have identified region-specific tuning curves for different reference frames during ego-motion. Yet there remain almost no data on this topic in humans.

In this study, an experiment originally performed in animal research, with the aim of identifying brain regions modulated by the position of the head and eyes relative to a translational ego-motion, was adapted for humans. Subjects sitting on a motion platform were accelerated along a translational pathway with eyes and head either aligned or at a 20° yaw-plane offset relative to the motion direction while EEG was recorded.

Using a distributed source localization approach, it was found that activity in area PFm, a part of Brodmann area 40, was modulated by the congruency of translational motion direction, eye, and head position. In addition, an asymmetry between the hemispheres in the opercular-insular region was observed during the cortical processing of the vestibular input. A frequency-specific analysis revealed that low-frequency oscillations in the delta and theta bands are modulated by vestibular stimulation. Source localization estimated that the observed low-frequency oscillations are generated by vestibular core regions, such as the parieto-opercular region, and by frontal areas like the mid-orbital gyrus and the medial frontal gyrus.


2021 · pp. 016502542097936
Author(s): Michele T. Diaz, Ege Yalcinbas

Although hearing often declines with age, prior research has shown that older adults may benefit from multisensory input to a greater extent than younger adults, a concept known as inverse effectiveness. While there is behavioral evidence in support of this phenomenon, less is known about its neural basis. The present functional MRI (fMRI) study examined how older and younger adults processed multimodal auditory-visual (AV) phonemic stimuli that were either congruent or incongruent across modalities. Incongruent AV pairs were designed to elicit the McGurk effect. Behaviorally, reaction times were significantly faster during congruent trials than incongruent trials for both age groups, and older adults responded more slowly overall. The interaction was not significant, suggesting that older adults processed the AV stimuli similarly to younger adults. Although there were minimal behavioral differences, age-related differences in functional activation were identified: younger adults showed greater activation than older adults in primary sensory regions, including superior temporal gyrus, the calcarine fissure, and left postcentral gyrus. In contrast, older adults showed greater activation than younger adults in dorsal frontal regions, including middle and superior frontal gyri, as well as dorsal parietal regions. These data suggest that while there is age-related stability in behavioral sensitivity to multimodal stimuli, the neural bases for this effect differed between older and younger adults. Our results demonstrated that older adults underrecruited primary sensory cortices and showed increased recruitment of regions involved in executive function, attention, and monitoring processes, which may reflect an attempt to compensate.


Author(s): Roland Pfister, Annika L. Klaffehn, Andreas Kalckert, Wilfried Kunde, David Dignath

Body representations are readily expanded based on sensorimotor experience. A dynamic view of body representations, however, holds that these representations can not only be expanded but also narrowed down by disembodying elements of the body representation that are no longer warranted. Here we induced illusory ownership in terms of a moving rubber hand illusion and studied the maintenance of this illusion across different conditions. We observed ownership experience to decrease gradually unless participants continued to receive confirmatory multisensory input. Moreover, a single instance of multisensory mismatch (a hammer striking the rubber hand but not the real hand) triggered substantial and immediate disembodiment. Together, these findings support and extend previous theoretical efforts to model body representations through basic mechanisms of multisensory integration. They further support an updating model suggesting that embodied entities fade from the body representation if they are not refreshed continuously.


2019 · Vol 31 (4) · pp. 592-606
Author(s): Laura Crucianelli, Yannis Paloyelis, Lucia Ricciardi, Paul M. Jenkinson, Aikaterini Fotopoulou

Multisensory integration processes are fundamental to our sense of self as embodied beings. Bodily illusions, such as the rubber hand illusion (RHI) and the size–weight illusion (SWI), allow us to investigate how the brain resolves conflicting multisensory evidence during perceptual inference in relation to different facets of body representation. In the RHI, synchronous tactile stimulation of a participant's hidden hand and a visible rubber hand creates illusory body ownership; in the SWI, the perceived size of the body can modulate the estimated weight of external objects. According to Bayesian models, such illusions arise as an attempt to explain the causes of multisensory perception and may reflect the attenuation of somatosensory precision, which is required to resolve perceptual hypotheses about conflicting multisensory input. Recent hypotheses propose that the precision of sensorimotor representations is determined by modulators of synaptic gain, like dopamine, acetylcholine, and oxytocin. However, these neuromodulatory hypotheses have not been tested in the context of embodied multisensory integration. The present, double-blind, placebo-controlled, crossover study (n = 41 healthy volunteers) aimed to investigate the effect of intranasal oxytocin (IN-OT) on multisensory integration processes, tested by means of the RHI and the SWI. Results showed that IN-OT enhanced the subjective feeling of ownership in the RHI, only when synchronous tactile stimulation was involved. Furthermore, IN-OT increased an embodied version of the SWI (quantified as estimation error during a weight estimation task). These findings suggest that oxytocin might modulate processes of visuotactile multisensory integration by increasing the precision of top–down signals against bottom–up sensory input.
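The precision-weighting idea invoked in this abstract can be illustrated with a two-Gaussian fusion: a top-down prior and a bottom-up sensory likelihood are combined in proportion to their precisions (inverse variances), so raising the precision of the top-down signal, as IN-OT is hypothesized to do, pulls the percept toward the prior. This is a minimal sketch; the function name and the numbers are illustrative, not the study's model.

```python
def fuse_gaussians(prior_mean, prior_precision, sensory_mean, sensory_precision):
    """Precision-weighted fusion of a top-down prior with bottom-up sensory input.

    For Gaussians, the posterior precision is the sum of the two precisions,
    and the posterior mean is the precision-weighted average of the two means.
    """
    posterior_precision = prior_precision + sensory_precision
    posterior_mean = (
        prior_mean * prior_precision + sensory_mean * sensory_precision
    ) / posterior_precision
    return posterior_mean, posterior_precision

# Equal precisions: the percept sits midway between prior and sensory evidence
baseline, _ = fuse_gaussians(0.0, 1.0, 10.0, 1.0)   # 5.0
# Boosted top-down precision shifts the percept toward the prior
boosted, _ = fuse_gaussians(0.0, 3.0, 10.0, 1.0)    # 2.5
```

On this toy account, a stronger illusion corresponds to the percept being drawn further from the raw sensory value toward the prior.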


2018 · Vol 32 (10) · pp. 847-862
Author(s): Brian M. Sandroff, Robert W. Motl, William R. Reed, Aron K. Barbey, Ralph H. B. Benedict, ...

There is a proliferation of research examining the effects of exercise on mobility and cognition in the general population and in those with neurological disorders, as well as focal research examining possible neural mechanisms of such effects. However, there is seemingly a lack of focus on what it is about exercise, in particular, that drives adaptive central nervous system neuroplasticity. We propose a novel conceptual framework (i.e., PRIMERS) that describes such adaptations as occurring via activity-dependent neuroplasticity based on the integrative processing of multisensory input and the associated complex motor output required for the regulation of physiological systems during exercise behavior. This conceptual framework sets the stage for the systematic examination of the effects of exercise on brain connectivity, brain structure, and molecular/cellular mechanisms that explain improvements in mobility and cognition in the general population and in persons with multiple sclerosis (MS). We argue that exercise can be viewed as an integrative, systems-wide stimulus for neurorehabilitation because impaired mobility and cognition are common and co-occurring in MS.


2018 · pp. 32-37
Author(s): Petra Pohl

The Ronnie Gardiner Method (RGM) is an innovative, practitioner-led, music-based intervention using sensorimotor and cognitive integration. RGM was originally developed by the Swedish musician Ronnie Gardiner. Since 2010, RGM has been successfully implemented within neurorehabilitation in many countries. The purpose of this article is to outline some of the theoretical assumptions underpinning the potential benefits of this intervention, using Parkinson’s disease as an example. RGM is based on principles of neuroplasticity, motor learning, and postural control, and uses energizing, beat-based music to provide multisensory input (visual, auditory, kinetic, and tactile) in order to stimulate experience-dependent neuroplastic processes. It aims to stimulate cognitive and motor function (e.g., memory, concentration, executive function, multitasking, coordination, mobility, balance, and motor skills). In addition, it may aid body awareness, self-esteem, and social skills. RGM has been scientifically evaluated as a means of multimodal sensory stimulation after stroke and as a means of improving mobility and cognitive function in Parkinson’s disease. RGM is a complex multi-task intervention with the potential to be beneficial in different settings and in different neurological conditions. It can be performed either standing up or sitting down, and can be practiced as a group activity, with the benefits that brings, or individually, which makes it very flexible. It is currently being used as a rehabilitation activity for people with stroke, Parkinson’s disease, multiple sclerosis, dementia, and depression. Furthermore, RGM is used in programs targeting healthy aging, ADHD, autism, and dyslexia, and in ordinary school environments.


PeerJ · 2018 · Vol 6 · pp. e5206
Author(s): Tasha R. Stanton, Helen R. Gilpin, Louisa Edwards, G. Lorimer Moseley, Roger Newport

Background: Experimental and clinical evidence support a link between body representations and pain. This proof-of-concept study in people with painful knee osteoarthritis (OA) aimed to determine if: (i) visuotactile illusions that manipulate perceived knee size are analgesic; (ii) cumulative analgesic effects occur with sustained or repeated illusions.

Methods: Participants with knee OA underwent eight conditions (order randomised): stretch and shrink visuotactile (congruent) illusions and corresponding visual, tactile, and incongruent control conditions. Knee pain intensity (0–100 numerical rating scale; 0 = no pain at all and 100 = worst pain imaginable) was assessed pre- and post-condition. Condition (visuotactile illusion vs control) × Time (pre-/post-condition) repeated-measures ANOVAs evaluated the effect on pain. In each participant, the most beneficial illusion was sustained for 3 min and was repeated 10 times (each during two sessions); paired t-tests compared pain at time 0 and 180 s (sustained) and between illusion 1 and illusion 10 (repeated).

Results: Visuotactile illusions decreased pain by an average of 7.8 points (95% CI [2.0–13.5]), which corresponds to a 25% reduction in pain, but the tactile-only and visual-only control conditions did not (Condition × Time interaction: p = 0.028). Visuotactile illusions did not differ from incongruent control conditions where the same visual manipulation occurred, but did differ when only the same tactile input was applied. Sustained illusions prolonged analgesia but did not increase it. Repeated illusions increased the analgesic effect, with an average pain decrease of 20 points (95% CI [6.9–33.1]), corresponding to a 40% pain reduction.

Discussion: Visuotactile illusions are analgesic in people with knee OA. Our results suggest that visual input plays a critical role in pain relief, but that analgesia requires multisensory input. That both visual and tactile input are needed for analgesia supports multisensory modulation processes as a possible explanatory mechanism. Further research exploring the neural underpinnings of these visuotactile illusions is needed. For potential clinical applications, future research using a greater dosage in larger samples is warranted.

