The relationship between multisensory associative learning and multisensory integration

2020 ◽  
Author(s):  
Sébastien Á. Lauzon ◽  
Arin E. Abraham ◽  
Kristina Curcin ◽  
Ryan A. Stevenson

Abstract
Our perception of the world around us is inherently multisensory, and integrating sensory information from multiple modalities leads to more precise and efficient perception and behaviour. Determining which sensory information from different modalities should be perceptually bound is a key component of multisensory integration. To accomplish this feat, our sensory systems rely on low-level stimulus features as well as multisensory associations learned throughout development based on the statistics of our environment. The present study explored the relationship between multisensory associative learning and multisensory integration using electroencephalography (EEG) and behavioural measures. Sixty-one participants completed a three-phase study. First, participants were exposed to novel audiovisual shape–tone pairings, with frequent and infrequent stimulus pairings, and completed a target-detection task. EEG recordings of the mismatch negativity (MMN) and P3 were calculated as neural indices of multisensory associative learning. Next, the same learned stimulus pairs were presented in audiovisual as well as unisensory auditory and visual modalities while both early (<120 ms) and late neural indices of multisensory integration were recorded. Finally, participants completed an analogous behavioural speeded-response task, with behavioural indices of multisensory gain calculated using the race model. Significant relationships were found between neural measures of associative learning in fronto-central and occipital areas and early and late indices of multisensory integration in frontal and centro-parietal areas, respectively. Participants who showed stronger indices of associative learning also exhibited stronger indices of multisensory integration of the stimuli they learned to associate. Furthermore, a significant relationship was found between the neural index of early multisensory integration and behavioural indices of multisensory gain. These results provide insight into the neural underpinnings of how higher-order processes such as associative learning guide multisensory integration.
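The race-model analysis used above to index multisensory gain can be sketched numerically. The following is a minimal illustration of Miller's race-model inequality, not the authors' analysis code; the reaction-time data and function names are hypothetical. The idea: the cumulative distribution of audiovisual reaction times is compared against the bound given by the sum of the two unisensory distributions, and any positive violation of that bound is taken as multisensory gain.

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution of reaction times, evaluated at times t."""
    rts = np.asarray(rts)
    return np.array([(rts <= x).mean() for x in t])

def race_model_violation(rt_av, rt_a, rt_v, n_points=100):
    """Miller's race-model inequality: P(RT_av <= t) <= P(RT_a <= t) + P(RT_v <= t).
    Returns the summed positive violation, a simple index of multisensory gain."""
    t = np.linspace(min(map(min, (rt_av, rt_a, rt_v))),
                    max(map(max, (rt_av, rt_a, rt_v))), n_points)
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)  # race-model bound
    violation = ecdf(rt_av, t) - bound
    return violation[violation > 0].sum() / n_points

# Hypothetical reaction times (seconds): AV responses faster than either
# unisensory condition, producing a race-model violation
rng = np.random.default_rng(0)
rt_a = rng.normal(0.45, 0.05, 200)
rt_v = rng.normal(0.48, 0.05, 200)
rt_av = rng.normal(0.38, 0.04, 200)
gain = race_model_violation(rt_av, rt_a, rt_v)
```

A violation of the inequality (gain above zero) is commonly read as evidence that the two unisensory signals were integrated rather than racing independently.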

2018 ◽  
Author(s):  
Douglas A. Ruff ◽  
Marlene R. Cohen

Abstract
Visual attention dramatically improves subjects’ ability to see and also modulates the responses of neurons in every known visual and oculomotor area, but whether those modulations can account for perceptual improvements remains unclear. We measured the relationship between populations of visual neurons, oculomotor neurons, and behavior during detection and discrimination tasks. We found that neither of the two prominent hypothesized neuronal mechanisms underlying attention (which concern changes in information coding and the way sensory information is read out) provides a satisfying account of the observed behavioral improvements. Instead, our results are more consistent with the novel hypothesis that attention reshapes the representation of attended stimuli to more effectively influence behavior. Our results suggest a path toward understanding the neural underpinnings of perception and cognition in health and disease by analyzing neuronal responses in ways that are constrained by behavior and interactions between brain areas.


eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Hame Park ◽  
Christoph Kayser

Perception adapts to mismatching multisensory information, both when different cues appear simultaneously and when they appear sequentially. While both multisensory integration and adaptive trial-by-trial recalibration are central for behavior, it remains unknown whether they are mechanistically linked and arise from a common neural substrate. To relate the neural underpinnings of sensory integration and recalibration, we measured whole-brain magnetoencephalography while human participants performed an audio-visual ventriloquist task. Using single-trial multivariate analysis, we localized the perceptually relevant encoding of multisensory information within and between trials. While we found neural signatures of multisensory integration within temporal and parietal regions, only medial superior parietal activity encoded past and current sensory information and mediated the perceptual recalibration within and between trials. These results highlight a common neural substrate of sensory integration and perceptual recalibration, and reveal a role of medial parietal regions in linking present and previous multisensory evidence to guide adaptive behavior.


2019 ◽  
Author(s):  
Hame Park ◽  
Christoph Kayser

Abstract
Multisensory stimuli create behavioral flexibility, e.g. by allowing us to derive a weighted combination of the information received by different senses. They also allow perception to adapt to discrepancies in the sensory world, e.g. by biasing the judgement of unisensory cues based on preceding multisensory evidence. While both facets of multisensory perception are central for behavior, it remains unknown whether they arise from a common neural substrate. In fact, very little is known about the neural mechanisms underlying multisensory perceptual recalibration. To reveal these, we measured whole-brain activity using MEG while human participants performed an audio-visual ventriloquist paradigm designed to reveal multisensory integration within a trial, and the (trial-by-trial) recalibration of subsequent unisensory judgements. Using single-trial classification and behavioral modelling, we localized the encoding of sensory information within and between trials, and determined the behavioral relevance of candidate neural representations. While we found neural signatures of perceptual integration within temporal and parietal regions, only medial superior parietal activity retained multisensory information between trials and combined this with current evidence to mediate perceptual recalibration. These results suggest a common neural substrate of sensory integration and trial-by-trial perceptual recalibration, and expose the medial superior parietal cortex as a flexible hub that links present and previous evidence within and between senses to guide behavior.


2016 ◽  
Vol 27 (12) ◽  
pp. 1632-1643 ◽  
Author(s):  
Eran Eldar ◽  
Yael Niv ◽  
Jonathan D. Cohen

When perceiving rich sensory information, some people may integrate its various aspects, whereas other people may selectively focus on its most salient aspects. We propose that neural gain modulates the trade-off between breadth and selectivity, such that high gain focuses perception on those aspects of the information that have the strongest, most immediate influence, whereas low gain allows broader integration of different aspects. We illustrate our hypothesis using a neural-network model of ambiguous-letter perception. We then report an experiment demonstrating that, as predicted by the model, pupil-diameter indices of higher gain are associated with letter perception that is more selectively focused on the letter’s shape or, if primed, its semantic content. Finally, we report a recognition-memory experiment showing that the relationship between gain and selective processing also applies when the influence of different stimulus features is voluntarily modulated by task demands.
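The gain mechanism proposed above can be sketched in a few lines. This is a simplified stand-in for the authors' neural-network model, not a reproduction of it; the evidence values and function names are hypothetical. A gain parameter scales the slope of a softmax response function: high gain drives the network toward selecting the single strongest input (selectivity), while low gain yields a flatter profile that integrates across inputs (breadth).

```python
import numpy as np

def softmax_with_gain(inputs, gain):
    """Softmax activation whose slope is scaled by a gain parameter.
    High gain exaggerates differences between inputs (selective, near
    winner-take-all); low gain produces a flatter, more integrative profile."""
    z = gain * np.asarray(inputs, dtype=float)
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

# Hypothetical evidence for three interpretations of an ambiguous letter
evidence = [1.0, 0.8, 0.2]
broad = softmax_with_gain(evidence, gain=1.0)       # low gain: graded activation
selective = softmax_with_gain(evidence, gain=10.0)  # high gain: winner dominates
```

Under low gain all three interpretations retain appreciable activation, whereas under high gain nearly all of the activation concentrates on the strongest input, which is the breadth-versus-selectivity trade-off the hypothesis describes.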


2021 ◽  
pp. 108705472097279
Author(s):  
Alessio Bellato ◽  
Iti Arora ◽  
Puja Kochhar ◽  
Chris Hollis ◽  
Madeleine J. Groom

We investigated autonomic arousal, attention, and response conflict in ADHD and autism. Heart rate variability (HRV) and behavioral/electrophysiological indices of performance were recorded during a task with low and high levels of response conflict in 78 children/adolescents (7–15 years old) with ADHD, autism, comorbid ADHD+autism, or neurotypical development. ANOVA models were used to investigate the effects of ADHD and autism, while a mediation model was tested to clarify the relationship between ADHD and slower performance. Slower and less accurate performance characterized both ADHD and autism; however, atypical electrophysiological indices differentially characterized these conditions. The relationship between ADHD and slower task performance was mediated by reduced HRV in response to the cue stimulus. Autonomic hypo-arousal and difficulty mobilizing energetic resources in response to sensory information (associated with ADHD), together with atypical electrophysiological indices of information processing (associated with autism), might negatively affect cognitive performance in those with ADHD+autism.
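The HRV measure referred to above is typically derived from inter-beat (RR) intervals; the abstract does not specify which index was used, so the following shows one standard time-domain index, RMSSD, with hypothetical interval data.

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals (ms),
    a standard time-domain index of heart rate variability."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.diff(rr)                # successive inter-beat differences
    return np.sqrt(np.mean(diffs ** 2))

# Hypothetical inter-beat intervals (ms); a lower RMSSD reflects the kind of
# reduced beat-to-beat variability described for the ADHD group
rr_typical = [810, 790, 830, 805, 850, 795]
rr_reduced = [800, 802, 799, 801, 800, 798]
```

Because RMSSD is driven by beat-to-beat changes, it is usually interpreted as an index of parasympathetically mediated variability.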


1999 ◽  
Vol 85 (3) ◽  
pp. 1011-1024
Author(s):  
Amy M. Richards ◽  
E. Evan Krauter

Prospective memory refers to remembering to perform a previously planned activity. Two experiments were conducted to see whether effects of cue competition similar to blocking and overshadowing occur in prospective memory. Participants were led to believe that the experiments were about the relationship between memory and creativity. To test prospective memory, participants were instructed to mark cue words that would appear later in a task requiring the generation of sentences. In Exp. 1 (N = 119), one group was told to place an “x” over the cue word “rake”; a second group was told to mark two words of equal salience (“method” and “rake”); and a third group was told to mark two cue words of unequal salience (the highly salient word “monad” and “rake”). “Rake” was the only cue word that actually appeared in the sentence-generation task. Participants instructed to place an “x” over one cue marked the target cue “rake” more frequently than those told to mark two cues (an overshadowing-like effect). The frequency of marking “rake” was lowest on the first test trial if participants had been instructed to mark both “rake” and “monad.” In Exp. 2 (N = 43), a blocking group was trained to mark one cue word (“rake”) and a control group received no training. Two days later, all participants were instructed to mark two cues (“rake” and “method”) during a sentence-generation task. Prior training interfered with performance to a new cue (“method”) given in combination with the pretrained cue (“rake”; a blocking-like effect). These experiments demonstrate the existence of cue competition in prospective memory and suggest the possibility of applying theories of elementary associative learning to the study of prospective memory.


2017 ◽  
Vol 8 (5) ◽  
pp. 158
Author(s):  
Robert S.P. Jones

James Joyce’s Portrait of the Artist as a Young Man has fascinated readers for more than a century, and there are layers of psychological meaning to be found throughout the novel. The novel is a perfect vehicle for discussing the relationship between form, language, and emotion, as Joyce deliberately manipulated the emotional response of the reader through innovations in form and language, departing dramatically from previous literary traditions. This paper takes a fresh look at the novel from a psychological perspective and examines the underlying conditioning processes at work in the narrative, particularly the concept of associative learning. Understanding emotional responses to different stimuli is the bedrock of psychological investigation, and 100 years after its publication, Portrait of the Artist still offers remarkably fresh and contemporary psychological insights into the human experience of emotion.


2019 ◽  
Author(s):  
David A. Tovar ◽  
Micah M. Murray ◽  
Mark T. Wallace

Abstract
Objects are the fundamental building blocks of how we create a representation of the external world. One major distinction amongst objects is between those that are animate and those that are inanimate. Many objects are specified by more than a single sense, yet the manner in which multisensory objects are represented by the brain remains poorly understood. Using representational similarity analysis of human EEG signals, we show enhanced encoding of audiovisual objects when compared to their corresponding visual and auditory objects. Surprisingly, we discovered that the often-found processing advantage for animate objects was not evident in a multisensory context, due to greater neural enhancement of inanimate objects, the more weakly encoded objects under unisensory conditions. Further analysis showed that the selective enhancement of inanimate audiovisual objects corresponded with an increase in shared representations across brain areas, suggesting that the neural enhancement was mediated by multisensory integration. Moreover, a distance-to-bound analysis provided critical links between the neural findings and behavior. Improvements in neural decoding at the individual-exemplar level for audiovisual inanimate objects predicted reaction-time differences between multisensory and unisensory presentations during a go/no-go animate categorization task. Interestingly, links between neural activity and behavioral measures were most prominent 100 to 200 ms and 350 to 500 ms after stimulus presentation, corresponding to time periods associated with sensory evidence accumulation and decision-making, respectively. Collectively, these findings provide key insights into a fundamental process the brain uses to maximize the information it captures across sensory systems to perform object recognition.

Significance Statement
Our world is filled with an ever-changing milieu of sensory information that we are able to seamlessly transform into meaningful perceptual experience. We accomplish this feat by combining different features from our senses to construct objects. However, despite the fact that our senses do not work in isolation but rather in concert with each other, little is known about how the brain combines the senses together to form object representations. Here, we used EEG and machine learning to study how the brain processes auditory, visual, and audiovisual objects. Surprisingly, we found that non-living objects, the objects that were more difficult to process with one sense alone, benefited the most from engaging multiple senses.
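The core computation of representational similarity analysis used in this study can be sketched briefly. This is a generic illustration, not the authors' pipeline; the condition counts, channel counts, and array names are hypothetical. Each condition's response pattern yields a representational dissimilarity matrix (RDM), and two representations are compared by correlating their RDMs.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix (condensed upper triangle):
    1 - Pearson correlation between each pair of condition patterns."""
    c = np.corrcoef(patterns)            # condition-by-condition correlations
    iu = np.triu_indices_from(c, k=1)    # upper triangle, excluding the diagonal
    return 1.0 - c[iu]

def compare_rdms(patterns_a, patterns_b):
    """Pearson correlation between two RDMs, the core RSA comparison
    (rank correlation is the more common choice in practice)."""
    a, b = rdm(patterns_a), rdm(patterns_b)
    return np.corrcoef(a, b)[0, 1]

# Hypothetical response patterns: 4 object conditions x 10 EEG channels;
# the audiovisual patterns share the visual representational geometry
rng = np.random.default_rng(1)
visual = rng.normal(size=(4, 10))
audiovisual = visual + rng.normal(scale=0.1, size=(4, 10))
similarity = compare_rdms(visual, audiovisual)
```

Because the comparison operates on dissimilarity structure rather than raw activity, RDMs computed from different modalities, time windows, or brain areas can be compared directly, which is what makes the "shared representations across brain areas" analysis possible.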


2020 ◽  
Vol 123 (6) ◽  
pp. 2406-2425
Author(s):  
Tyler R. Sizemore ◽  
Laura M. Hurley ◽  
Andrew M. Dacks

The serotonergic system has been widely studied across animal taxa and different functional networks. This modulatory system is therefore well positioned to compare the consequences of neuromodulation for sensory processing across species and modalities at multiple levels of sensory organization. Serotonergic neurons that innervate sensory networks often bidirectionally exchange information with these networks but also receive input representative of motor events or motivational state. This convergence of information supports serotonin’s capacity for contextualizing sensory information according to the animal’s physiological state and external events. At the level of sensory circuitry, serotonin can have variable effects due to differential projections across specific sensory subregions, as well as differential serotonin receptor type expression within those subregions. Functionally, this infrastructure may gate or filter sensory inputs to emphasize specific stimulus features or select among different streams of information. The near-ubiquitous presence of serotonin and other neuromodulators within sensory regions, coupled with their strong effects on stimulus representation, suggests that these signaling pathways should be considered integral components of sensory systems.


2017 ◽  
Vol 117 (4) ◽  
pp. 1569-1580 ◽  
Author(s):  
Nienke B. Debats ◽  
Marc O. Ernst ◽  
Herbert Heuer

Humans are adept at operating tools whereby their hand movement is linked, via a kinematic transformation, to a spatially distant object moving in a separate plane of motion. An everyday example is controlling a cursor on a computer monitor. Despite these separate reference frames, the perceived positions of the hand and the object have been found to be biased toward each other. We propose that this perceptual attraction is based on the principles by which the brain integrates redundant sensory information about single objects or events, known as optimal multisensory integration. That is, 1) sensory information about the hand and the tool is weighted according to its relative reliability (i.e., inverse variance), and 2) the unisensory reliabilities sum in the integrated estimate. We assessed whether the perceptual attraction is consistent with the predictions of the optimal multisensory integration model. We used a cursor-control tool-use task in which we manipulated the relative reliability of the unisensory hand and cursor position estimates. The perceptual biases shifted according to these relative reliabilities, with an additional bias due to contextual factors that were present in experiment 1 but not in experiment 2. The variances of the biased position judgments were, however, systematically larger than the predicted optimal variances. Our findings suggest that the perceptual attraction in tool use results from a reliability-based weighting mechanism similar to optimal multisensory integration, but that certain boundary conditions for optimality might not be satisfied.

NEW & NOTEWORTHY Kinematic tool use is associated with a perceptual attraction between the spatially separated hand and the effective part of the tool. We provide a formal account of this phenomenon, showing that the underlying process is similar to optimal integration of sensory information relating to single objects.
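The two principles stated in the abstract (inverse-variance weighting, and reliabilities summing in the integrated estimate) correspond to the standard maximum-likelihood cue-combination equations. The sketch below illustrates them with hypothetical position estimates; it is not the authors' model code.

```python
def integrate(est_1, var_1, est_2, var_2):
    """Optimal (maximum-likelihood) integration of two unisensory estimates.
    Each cue's weight is proportional to its reliability (inverse variance),
    and the reliabilities of the integrated estimate sum, so the combined
    variance is always at most the smaller unisensory variance."""
    r1, r2 = 1.0 / var_1, 1.0 / var_2           # reliabilities
    w1, w2 = r1 / (r1 + r2), r2 / (r1 + r2)     # normalized weights
    combined_est = w1 * est_1 + w2 * est_2      # biased toward the reliable cue
    combined_var = 1.0 / (r1 + r2)
    return combined_est, combined_var

# Hypothetical position estimates (cm): a reliable cursor cue (variance 1)
# and a noisier hand cue (variance 4)
est, var = integrate(est_1=10.0, var_1=1.0, est_2=14.0, var_2=4.0)
```

With these numbers the integrated estimate lands at 10.8 cm, pulled strongly toward the more reliable cursor cue, with a combined variance of 0.8, below either unisensory variance. The study's finding is that the observed biases follow this weighting rule while the observed variances exceed the optimal prediction.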

