Dissecting neural circuits for multisensory integration and crossmodal processing

2015 · Vol 370 (1677) · pp. 20140203
Author(s): Jeffrey M. Yau, Gregory C. DeAngelis, Dora E. Angelaki

We rely on rich and complex sensory information to perceive and understand our environment. Our multisensory experience of the world depends on the brain's remarkable ability to combine signals across sensory systems. Behavioural, neurophysiological and neuroimaging experiments have established principles of multisensory integration and candidate neural mechanisms. Here we review how targeted manipulation of neural activity using invasive and non-invasive neuromodulation techniques has advanced our understanding of multisensory processing. Neuromodulation studies have provided detailed characterizations of brain networks causally involved in multisensory integration. Despite substantial progress, important questions regarding multisensory networks remain unanswered. Critically, experimental approaches will need to be combined with theory in order to understand how distributed activity across multisensory networks collectively supports perception.

2018
Author(s): Gareth Harris, Taihong Wu, Gaia Linfield, Myung-Kyu Choi, He Liu, et al.

Abstract
In the natural environment, animals often encounter multiple sensory cues that are simultaneously present. The nervous system integrates the relevant sensory information to generate behavioral responses of adaptive value. However, the signal transduction pathways and the molecules that regulate integrated behavioral responses to multiple sensory cues are not well defined. Here, we characterize a collective modulatory basis for a behavioral decision in C. elegans when the animal is presented with an attractive food source together with a repulsive odorant. We show that distributed neuronal components in the worm nervous system and several neuromodulators orchestrate the decision-making process, suggesting that various states and contexts may modulate multisensory integration. Among these modulators, we identify a new function of a conserved TGF-β pathway that regulates the integrated decision by inhibiting signaling from a set of central neurons. Interestingly, we find that a common set of modulators, including the TGF-β pathway, regulates the integrated response to the pairing of different foods and repellents. Together, our results provide insights into the modulatory signals regulating multisensory integration and reveal a potential mechanistic basis for the complex pathology underlying defects in multisensory processing shared by common neurological diseases.

Author Summary
The present study characterizes the modulation of a behavioral decision in C. elegans when the worm is presented with a food lawn paired with a repulsive smell. We show that multiple sensory neurons and interneurons play roles in making the decision. We also identify several modulatory molecules that are essential for the integrated decision when the animal faces a choice between cues of opposing valence. We further show that many of these factors, which often represent different states and contexts, are common to behavioral decisions that integrate sensory information from different types of foods and repellents. Overall, our results reveal a collective molecular and cellular basis for the integration of simultaneously present attractive and repulsive cues to fine-tune decision-making.


2018
Author(s): Wen-Hao Zhang, He Wang, Aihua Chen, Yong Gu, Tai Sing Lee, et al.

Abstract
Our brain perceives the world by exploiting multiple sensory modalities to extract information about various aspects of external stimuli. If these sensory cues come from the same stimulus of interest, they should be integrated to improve perception; otherwise, they should be segregated to distinguish different stimuli. In reality, however, the brain faces the challenge of recognizing stimuli without knowing in advance whether sensory cues come from the same or different stimuli. To address this challenge and to recognize stimuli rapidly, we argue that the brain should carry out multisensory integration and segregation concurrently with complementary neuron groups. Studying the example of inferring heading direction from visual and vestibular cues, we develop a concurrent multisensory processing neural model consisting of two reciprocally connected modules, the dorsal medial superior temporal area (MSTd) and the ventral intraparietal area (VIP), each containing two distinct groups of neurons: congruent and opposite neurons. Congruent neurons implement cue integration, while opposite neurons compute the cue disparity, both optimally as described by Bayesian inference. The two groups of neurons provide complementary information that enables the neural system to assess the validity of cue integration and, if necessary, to recover the information associated with individual cues without re-gathering new inputs. Through this process, the brain achieves rapid stimulus perception if the cues come from the same stimulus of interest, and differentiates and recognizes stimuli based on individual cues with little time delay if the cues come from different stimuli of interest. Our study unveils the indispensable role of opposite neurons in multisensory processing and sheds light on how the brain achieves multisensory processing efficiently and rapidly.

Significance Statement
Our brain perceives the world by exploiting multiple sensory cues. These cues need to be integrated to improve perception if they come from the same stimulus, and otherwise to be segregated. To address the challenge of recognizing whether sensory cues come from the same or different stimuli when this is unknown in advance, we propose that the brain should carry out multisensory integration and segregation concurrently with two different neuron groups. Specifically, congruent neurons implement cue integration, while opposite neurons compute the cue disparity, and the interplay between them achieves rapid stimulus recognition without information loss. We apply our model to the example of inferring heading direction from visual and vestibular cues and successfully reproduce the experimental data.
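The optimal integration ascribed to congruent neurons, and the disparity computation ascribed to opposite neurons, can be illustrated with a minimal sketch. This is not the authors' network model; it assumes Gaussian cue likelihoods, and the heading values and variances below are made-up numbers for illustration:

```python
def integrate_cues(mu_vis, var_vis, mu_vest, var_vest):
    """Bayes-optimal integration of two Gaussian cue estimates:
    the combined mean is the reliability-weighted average, and the
    combined variance is smaller than either single-cue variance."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vest)
    mu_int = w_vis * mu_vis + (1 - w_vis) * mu_vest
    var_int = 1 / (1 / var_vis + 1 / var_vest)
    return mu_int, var_int

def cue_disparity(mu_vis, mu_vest):
    """Disparity signal analogous to what opposite neurons compute:
    the difference between the single-cue estimates."""
    return mu_vis - mu_vest

# Hypothetical heading estimates (degrees): a reliable visual cue
# and a noisier vestibular cue.
mu, var = integrate_cues(10.0, 4.0, 20.0, 16.0)
# w_vis = 0.8, so mu = 0.8*10 + 0.2*20 = 12.0 and var = 3.2 < 4.0
disparity = cue_disparity(10.0, 20.0)  # -10.0: cues disagree strongly
```

A large disparity relative to the single-cue variances is the signal that integration is invalid and the cues should be segregated.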


2004 · Vol 27 (3) · pp. 377-396
Author(s): Rick Grush

The emulation theory of representation is developed and explored as a framework that can revealingly synthesize a wide variety of representational functions of the brain. The framework is based on constructs from control theory (forward models) and signal processing (Kalman filters). The idea is that in addition to simply engaging with the body and environment, the brain constructs neural circuits that act as models of the body and environment. During overt sensorimotor engagement, these models are driven by efference copies in parallel with the body and environment, in order to provide expectations of the sensory feedback, and to enhance and process sensory information. These models can also be run off-line in order to produce imagery, estimate outcomes of different actions, and evaluate and develop motor plans. The framework is initially developed within the context of motor control, where it has been shown that inner models running in parallel with the body can reduce the effects of feedback delay problems. The same mechanisms can account for motor imagery as the off-line driving of the emulator via efference copies. The framework is extended to account for visual imagery as the off-line driving of an emulator of the motor-visual loop. I also show how such systems can provide for amodal spatial imagery. Perception, including visual perception, results from such models being used to form expectations of, and to interpret, sensory input. I close by briefly outlining other cognitive functions that might also be synthesized within this framework, including reasoning, theory of mind phenomena, and language.
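The forward-model/Kalman-filter machinery at the core of the emulation framework can be sketched in a few lines. This is a generic one-dimensional Kalman filter, not Grush's specific formulation; the noise parameters, motor commands, and feedback values are illustrative assumptions:

```python
class Emulator:
    """Minimal 1-D Kalman-filter emulator: a forward model driven by
    efference copies, corrected by sensory feedback when available."""

    def __init__(self, q=0.01, r=0.1):
        self.x, self.p = 0.0, 1.0   # state estimate and its variance
        self.q, self.r = q, r       # process and measurement noise

    def predict(self, u):
        # Forward model: advance the estimate using the efference copy u.
        self.x += u
        self.p += self.q
        return self.x

    def correct(self, z):
        # Kalman update: fuse the prediction with sensory feedback z.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= 1 - k
        return self.x

em = Emulator()
# Overt engagement: predict from each motor command, correct with
# (noisy, possibly delayed) sensory feedback.
for u, z in [(1.0, 0.9), (1.0, 2.1), (1.0, 3.0)]:
    em.predict(u)
    em.correct(z)
# "Off-line" operation (motor imagery): drive the emulator with
# efference copies alone, producing expectations without feedback.
imagined = [em.predict(1.0) for _ in range(3)]
```

Running the `predict` step alone corresponds to the off-line imagery mode described above, while the predict/correct loop captures how the emulator can compensate for feedback delay during overt control.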


2002 · Vol 88 (1) · pp. 540-543
Author(s): John J. Foxe, Glenn R. Wylie, Antigona Martinez, Charles E. Schroeder, Daniel C. Javitt, et al.

Using high-field (3 Tesla) functional magnetic resonance imaging (fMRI), we demonstrate that auditory and somatosensory inputs converge in a subregion of human auditory cortex along the superior temporal gyrus. Further, simultaneous stimulation in both sensory modalities resulted in activity exceeding that predicted by summing the responses to the unisensory inputs, thereby showing multisensory integration in this convergence region. Recently, intracranial recordings in macaque monkeys have shown similar auditory-somatosensory convergence in a subregion of auditory cortex directly caudomedial to primary auditory cortex (area CM). The multisensory region identified in the present investigation may be the human homologue of CM. Our finding of auditory-somatosensory convergence in early auditory cortices contributes to mounting evidence for multisensory integration early in the cortical processing hierarchy, in brain regions that were previously assumed to be unisensory.
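The criterion used here to infer integration, a multisensory response exceeding the sum of the unisensory responses (superadditivity), reduces to a simple inequality. The BOLD percent-signal-change values below are hypothetical, not taken from the study:

```python
def is_superadditive(av, a, v, baseline=0.0):
    """Superadditivity test for multisensory integration: the
    multisensory response must exceed the sum of the unisensory
    responses, each measured relative to baseline."""
    return (av - baseline) > (a - baseline) + (v - baseline)

# Hypothetical responses in the candidate convergence region:
assert is_superadditive(av=1.2, a=0.5, v=0.6)      # 1.2 > 1.1: integration
assert not is_superadditive(av=1.0, a=0.5, v=0.6)  # 1.0 <= 1.1: mere summation
```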


Author(s): Samantha Hughes, Tansu Celikel

From single-cell organisms to complex neural networks, all organisms evolved control solutions for generating context- and goal-specific actions. Neural circuits performing sensorimotor computation to drive navigation employ inhibitory control as a gating mechanism as they hierarchically transform (multi)sensory information into motor actions. Here, we focus on this literature to critically discuss the proposition that prominent inhibitory projections form sensorimotor circuits. After reviewing the neural circuits of navigation across various invertebrate species, we argue that with increased neural circuit complexity and the emergence of parallel computations, inhibitory circuits acquire new functions. The contribution of inhibitory neurotransmission to navigation goes beyond shaping the communication that drives motor neurons; it also includes the encoding of emergent sensorimotor representations. A mechanistic understanding of the neural circuits performing sensorimotor computations in invertebrates will unravel the minimum circuit requirements driving adaptive navigation.


2019
Author(s): David A. Tovar, Micah M. Murray, Mark T. Wallace

Abstract
Objects are the fundamental building blocks of how we create a representation of the external world. One major distinction amongst objects is between those that are animate versus inanimate. Many objects are specified by more than a single sense, yet the nature by which multisensory objects are represented by the brain remains poorly understood. Using representational similarity analysis of human EEG signals, we show enhanced encoding of audiovisual objects when compared to their corresponding visual and auditory objects. Surprisingly, we discovered that the processing advantage often found for animate objects was not evident in a multisensory context, owing to greater neural enhancement of inanimate objects, the more weakly encoded objects under unisensory conditions. Further analysis showed that the selective enhancement of inanimate audiovisual objects corresponded with an increase in shared representations across brain areas, suggesting that the neural enhancement was mediated by multisensory integration. Moreover, a distance-to-bound analysis provided critical links between neural findings and behavior. Improvements in neural decoding at the individual exemplar level for audiovisual inanimate objects predicted reaction-time differences between multisensory and unisensory presentations during a go/no-go animate categorization task. Interestingly, links between neural activity and behavioral measures were most prominent 100 to 200 ms and 350 to 500 ms after stimulus presentation, corresponding to time periods associated with sensory evidence accumulation and decision-making, respectively. Collectively, these findings provide key insights into a fundamental process the brain uses to maximize the information it captures across sensory systems to perform object recognition.

Significance Statement
Our world is filled with an ever-changing milieu of sensory information that we are able to seamlessly transform into meaningful perceptual experience. We accomplish this feat by combining different features from our senses to construct objects. However, despite the fact that our senses do not work in isolation but rather in concert with each other, little is known about how the brain combines the senses to form object representations. Here, we used EEG and machine learning to study how the brain processes auditory, visual, and audiovisual objects. Surprisingly, we found that non-living objects, the objects that were more difficult to process with one sense alone, benefited the most from engaging multiple senses.
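Representational similarity analysis of the kind used here compares the geometry of neural response patterns across conditions via dissimilarity matrices. A minimal sketch follows; the "EEG" patterns are randomly generated stand-ins, and the stimulus and channel counts are arbitrary assumptions, not the study's data:

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 minus the Pearson
    correlation between the response patterns for each stimulus pair."""
    return 1.0 - np.corrcoef(patterns)

rng = np.random.default_rng(0)
# Synthetic patterns: 4 stimuli x 32 channels, for two conditions.
auditory = rng.normal(size=(4, 32))
audiovisual = auditory + rng.normal(scale=0.3, size=(4, 32))

d_a = rdm(auditory)
d_av = rdm(audiovisual)

# Compare the two representational geometries by correlating the
# off-diagonal (upper-triangle) entries of the RDMs.
iu = np.triu_indices(4, k=1)
shared = np.corrcoef(d_a[iu], d_av[iu])[0, 1]
```

In an analysis like the study's, a rise in such cross-condition (or cross-region) RDM correlations is what "an increase in shared representations" refers to.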


2006 · Vol os13 (1) · pp. 37-41
Author(s): Aylin Baysan, Edward Lynch

A previous paper, recently published in Primary Dental Care, gave an overview of the medical uses of ozone and outlined some of its uses in dentistry. The current paper focuses on the use of ozone in the management of root caries and considers recent studies in this area. There has been relatively limited research into the non-invasive (pharmaceutical) management of root caries, and the best management strategy remains to be developed. Initial studies have indicated that an application of ozone for a period of either 10 or 20 seconds is capable of clinically reversing leathery root carious lesions. It is suggested that, subject to confirmation from extensive trials, this simple and non-invasive technique may benefit many patients with root caries throughout the world, since this approach to treating root caries can easily be employed in primary care clinics and in the domiciliary treatment of home-bound elderly people and immobile patients in hospices and hospitals.


Author(s): Aglaia Tourimpampa, Athanasios Drigas, Alexandra Economou, Petros Roussos

This study is a comprehensive attempt to assess the impact of the cognitive skill of perception on the ability to comprehend a text. More specifically, it investigates the function of perception as a primary means by which the human brain makes contact with the world, and examines the particular cognitive processes of perception that affect text comprehension. It also presents the relation between cognitive perception and the linguistic field of pragmatics in the reader's comprehension of a text. Perception is the organization, identification and interpretation of sensory information in order to represent and understand the environment. Pragmatics is the linguistic field that studies how people comprehend and produce speech or a text as a communicative act. Furthermore, the study surveys current ICT processes and tools that support the assessment of perception in text comprehension.


Author(s): Paolo Solari, Giorgia Sollai, Francesco Palmas, Andrea Sabatini, Roberto Crnjar

The integration of sensory information with adequate motor outputs is critical for animal survival. Here, we present an innovative technique based on a non-invasive closed-circuit device consisting of a perfusion/stimulation chamber chronically applied to a single leg of the crayfish Procambarus clarkii. Using this technique, we focally stimulated the leg inside the chamber and studied the leg-dependent sensory-motor integration involving other sensory appendages, such as the antennules and maxillipeds, which remain unstimulated outside the chamber. Results show that the stimulation of a single leg with chemicals, such as disaccharides, is sufficient to trigger a complex search behaviour involving locomotion coupled with the reflex activation of antennules and maxillipeds. This technique can be easily adapted to other decapods and/or other sensory appendages. Thus, it opens possibilities for studying sensory-motor integration evoked by leg stimulation in whole aquatic animals under natural conditions, supplementing current ablation/silencing techniques with a direct approach.

