Integration of Sensory Information in the Brain

Perception
1997
Vol 26 (1_suppl)
pp. 369-369
Author(s):
B E Stein

That sensory cues in one modality affect perception in another has been known for some time, and there are many examples of ‘intersensory’ influences within the broad phenomenon of cross-modal integration. The ability of the CNS to integrate cues from different sensory channels is particularly evident in the facilitated detection and reaction to combinations of concordant cues from different modalities, and in the dramatic perceptual anomalies that can occur when these cues are discordant. A substrate for multisensory integration is provided by the many CNS neurons (eg, in the superior colliculus) which receive convergent input from multiple sensory modalities. Similarities in the principles by which these neurons integrate multisensory information in different species point to a remarkable conservation in the integrative features of the CNS during vertebrate evolution. In general, profound enhancement or depression in neural activity can be induced in the same neuron, depending on the spatial and temporal relationships among the stimuli presented to it. The specific response product obtained in any given multisensory neuron is predictable on the basis of the features of its various receptive fields. Perhaps most striking, however, is the parallel which has been demonstrated between the properties of multisensory integration at the level of the single neuron in the superior colliculus and at the level of overt attentive and orientation behaviour.
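The spatial and temporal rules summarized above can be sketched as a toy decision function. This is a hypothetical illustration only: the 500 ms window and the qualitative labels are invented for the sketch, not taken from any recorded neuron.

```python
def predicted_interaction(delta_t_ms, in_rf_a, in_rf_b, window_ms=500):
    """Qualitative multisensory product predicted from the spatial and
    temporal rules: concordant cues enhance, discordant cues depress.
    The window and the return labels are illustrative only."""
    if abs(delta_t_ms) > window_ms:
        return "no interaction"   # cues too far apart in time
    if in_rf_a and in_rf_b:
        return "enhancement"      # both cues inside their excitatory RFs
    return "depression"           # one cue falls outside its RF

print(predicted_interaction(100, True, True))    # enhancement
print(predicted_interaction(100, True, False))   # depression
print(predicted_interaction(800, True, True))    # no interaction
```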

Author(s):
Caroline A. Miller
Laura L. Bruce

The first visual cortical axons arrive in the cat superior colliculus by the time of birth. Adultlike receptive fields develop slowly over several weeks following birth. The developing cortical axons go through a sequence of changes before acquiring their adultlike morphology and function. To determine how these axons interact with neurons in the colliculus, cortico-collicular axons were labeled with biocytin (an anterograde neuronal tracer) and studied with electron microscopy. Deeply anesthetized animals received 200-500 nl injections of biocytin (Sigma; 5% in phosphate buffer) in the lateral suprasylvian visual cortical area. After a 24 hr survival time, the animals were deeply anesthetized and perfused with 0.9% phosphate-buffered saline followed by fixation with a solution of 1.25% glutaraldehyde and 1.0% paraformaldehyde in 0.1 M phosphate buffer. The brain was sectioned transversely on a vibratome at 50 μm. The tissue was processed immediately to visualize the biocytin.


2011
Vol 106 (4)
pp. 1862-1874
Author(s):
Jan Churan
Daniel Guitton
Christopher C. Pack

Our perception of the positions of objects in our surroundings is surprisingly unaffected by movements of the eyes, head, and body. This suggests that the brain has a mechanism for maintaining perceptual stability, based either on the spatial relationships among visible objects or internal copies of its own motor commands. Strong evidence for the latter mechanism comes from the remapping of visual receptive fields that occurs around the time of a saccade. Remapping occurs when a single neuron responds to visual stimuli placed presaccadically in the spatial location that will be occupied by its receptive field after the completion of a saccade. Although evidence for remapping has been found in many brain areas, relatively little is known about how it interacts with sensory context. This interaction is important for understanding perceptual stability more generally, as the brain may rely on extraretinal signals or visual signals to different degrees in different contexts. Here, we have studied the interaction between visual stimulation and remapping by recording from single neurons in the superior colliculus of the macaque monkey, using several different visual stimulus conditions. We find that remapping responses are highly sensitive to low-level visual signals, with the overall luminance of the visual background exerting a particularly powerful influence. Specifically, although remapping was fairly common in complete darkness, such responses were usually decreased or abolished in the presence of modest background illumination. Thus the brain might make use of a strategy that emphasizes visual landmarks over extraretinal signals whenever the former are available.


2019
Author(s):
David A. Tovar
Micah M. Murray
Mark T. Wallace

Objects are the fundamental building blocks of how we create a representation of the external world. One major distinction amongst objects is between those that are animate versus inanimate. Many objects are specified by more than a single sense, yet the nature by which multisensory objects are represented by the brain remains poorly understood. Using representational similarity analysis of human EEG signals, we show enhanced encoding of audiovisual objects when compared to their corresponding visual and auditory objects. Surprisingly, we discovered that the often-found processing advantage for animate objects was not evident in a multisensory context, owing to greater neural enhancement of inanimate objects, the more weakly encoded objects under unisensory conditions. Further analysis showed that the selective enhancement of inanimate audiovisual objects corresponded with an increase in shared representations across brain areas, suggesting that neural enhancement was mediated by multisensory integration. Moreover, a distance-to-bound analysis provided critical links between the neural findings and behavior. Improvements in neural decoding at the individual exemplar level for audiovisual inanimate objects predicted reaction-time differences between multisensory and unisensory presentations during a go/no-go animate-categorization task. Interestingly, links between neural activity and behavioral measures were most prominent 100 to 200 ms and 350 to 500 ms after stimulus presentation, corresponding to time periods associated with sensory evidence accumulation and decision-making, respectively. Collectively, these findings provide key insights into a fundamental process the brain uses to maximize the information it captures across sensory systems to perform object recognition.

Significance Statement: Our world is filled with an ever-changing milieu of sensory information that we are able to seamlessly transform into meaningful perceptual experience. We accomplish this feat by combining different features from our senses to construct objects. However, despite the fact that our senses do not work in isolation but rather in concert with each other, little is known about how the brain combines the senses to form object representations. Here, we used EEG and machine learning to study how the brain processes auditory, visual, and audiovisual objects. Surprisingly, we found that non-living objects, which were more difficult to process with one sense alone, benefited the most from engaging multiple senses.
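The core of representational similarity analysis, as used in this study, can be sketched in a few lines. Here random numbers stand in for real trial-averaged EEG patterns, and the channel and condition counts are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trial-averaged EEG patterns (channels x conditions) for the
# same eight objects presented audiovisually and visually; random numbers
# stand in for real recordings.
n_channels, n_conditions = 32, 8
patterns_av = rng.normal(size=(n_channels, n_conditions))
patterns_v = patterns_av + rng.normal(scale=2.0, size=(n_channels, n_conditions))

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r between the
    response patterns of every pair of conditions."""
    return 1.0 - np.corrcoef(patterns.T)

def rdm_similarity(rdm_a, rdm_b):
    """Correlate the upper triangles of two RDMs (the usual RSA comparison)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return float(np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1])

sim = rdm_similarity(rdm(patterns_av), rdm(patterns_v))
print(f"AV vs V representational similarity: {sim:.2f}")
```

Comparing RDMs rather than raw patterns is what lets the method relate representations across sensory conditions, brain areas, or time points that have no channel-to-channel correspondence.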


2007
Vol 97 (1)
pp. 921-926
Author(s):
Mark T. Wallace
Barry E. Stein

Multisensory integration refers to the process by which the brain synthesizes information from different senses to enhance sensitivity to external events. In the present experiments, animals were reared in an altered sensory environment in which visual and auditory stimuli were temporally coupled but originated from different locations. Neurons in the superior colliculus developed a seemingly anomalous form of multisensory integration in which spatially disparate visual-auditory stimuli were integrated in the same way that neurons in normally reared animals integrated visual-auditory stimuli from the same location. The data suggest that the principles governing multisensory integration are highly plastic and that there is no a priori spatial relationship between stimuli from different senses that is required for their integration. Rather, these principles appear to be established early in life based on the specific features of an animal's environment to best adapt it to deal with that environment later in life.


Perception
1997
Vol 26 (1_suppl)
pp. 35-35
Author(s):
M T Wallace

Multisensory integration in the superior colliculus (SC) of the cat requires a protracted postnatal developmental time course. Kittens 3 – 135 days postnatal (dpn) were examined and the first neuron capable of responding to two different sensory inputs (auditory and somatosensory) was not seen until 12 dpn. Visually responsive multisensory neurons were not encountered until 20 dpn. These early multisensory neurons responded weakly to sensory stimuli, had long response latencies, large receptive fields, and poorly developed response selectivities. Most striking, however, was their inability to integrate cross-modality cues in order to produce the significant response enhancement or depression characteristic of these neurons in adults. The incidence of multisensory neurons increased gradually over the next 10 – 12 weeks. During this period, sensory responses became more robust, latencies shortened, receptive fields decreased in size, and unimodal selectivities matured. The first neurons capable of cross-modality integration were seen at 28 dpn. For the following two months, the incidence of such integrative neurons rose gradually until adult-like values were achieved. Surprisingly, however, as soon as a multisensory neuron exhibited this capacity, most of its integrative features were indistinguishable from those in adults. Given what is known about the requirements for multisensory integration in adult animals, this observation suggests that the appearance of multisensory integration reflects the onset of functional corticotectal inputs.


1996
Vol 76 (2)
pp. 1246-1266
Author(s):
M. T. Wallace
L. K. Wilkinson
B. E. Stein

1. The properties of visual-, auditory-, and somatosensory-responsive neurons, as well as of neurons responsive to multiple sensory cues (i.e., multisensory), were examined in the superior colliculus of the rhesus monkey. Although superficial layer neurons responded exclusively to visual stimuli and visual inputs predominated in deeper layers, there was also a rich nonvisual and multisensory representation in the superior colliculus. More than a quarter (27.8%) of the deep layer population responded to stimuli from more than a single sensory modality. In contrast, 37% responded only to visual cues, 17.6% to auditory cues, and 17.6% to somatosensory cues. Unimodal- and multisensory-responsive neurons were clustered by modality. Each of these modalities was represented in map-like fashion, and the different representations were in alignment with one another. 2. Most deep layer visually responsive neurons were binocular and exhibited poor selectivity for such stimulus characteristics as orientation, velocity, and direction of movement. Similarly, most auditory-responsive neurons had contralateral receptive fields and were binaural, but had little frequency selectivity and preferred complex, broad-band sounds. Somatosensory-responsive neurons were overwhelmingly contralateral, high velocity, and rapidly adapting. Only rarely did somatosensory-responsive neurons require distortion of subcutaneous tissue for activation. 3. The spatial congruence among the different receptive fields of multisensory neurons was a critical feature underlying their ability to synthesize cross-modal information. 4. Combinations of stimuli could have very different consequences in the same neuron, depending on their temporal and spatial relationships. Generally, multisensory interactions were evident when pairs of stimuli were separated from one another by < 500 ms, and the products of these interactions far exceeded the sum of their unimodal components. 
Whether the combination of stimuli produced response enhancement, response depression, or no interaction depended on the location of the stimuli relative to one another and to their respective receptive fields. Maximal response enhancements were observed when stimuli originated from similar locations in space (as when derived from the same event) because they fell within the excitatory receptive fields of the same multisensory neurons. If, however, the stimuli were spatially disparate such that one fell beyond the excitatory borders of its receptive field, either no interaction was produced or this stimulus depressed the effectiveness of the other. Furthermore, maximal response interactions were seen with the pairing of weakly effective unimodal stimuli. As the individual unimodal stimuli became increasingly effective, the levels of response enhancement to stimulus combinations declined, a principle referred to as inverse effectiveness. Many of the integrative principles seen here in the primate superior colliculus are strikingly similar to those observed in the cat. These observations indicate that a set of common principles of multisensory integration is adaptable in widely divergent species living in very different ecological situations. 5. Surprisingly, a few multisensory neurons had individual receptive fields that were not in register with one another. This has not been noted in multisensory neurons of other species, and these "anomalous" receptive fields could present a daunting problem: stimuli originating from the same general location in space cannot simultaneously fall within their respective receptive fields, a stimulus pairing that may result in response depression. Conversely, stimuli that originate from separate events and disparate locations (and fall within their receptive fields) may result in response enhancement. However, the spatial principle of multisensory integration did not apply in these cases. (ABSTRACT TRUNCATED)
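The inverse-effectiveness principle described above can be illustrated with a small worked example. The impulse counts below are invented; the index is the usual percent enhancement of the combined response over the more effective unimodal response.

```python
def percent_enhancement(combined, best_unimodal):
    """Multisensory enhancement relative to the more effective unimodal response."""
    return 100.0 * (combined - best_unimodal) / best_unimodal

# (visual alone, auditory alone, combined) mean impulse counts, ordered from
# weakly to highly effective unimodal stimuli -- hypothetical values.
trials = [(1.0, 2.0, 10.0), (5.0, 6.0, 15.0), (20.0, 22.0, 26.0)]
for v, a, vm in trials:
    best = max(v, a)
    print(f"best unimodal {best:5.1f} -> enhancement {percent_enhancement(vm, best):6.1f}%")
```

As the unimodal responses grow, the proportional benefit of combining them shrinks, which is exactly the declining-enhancement pattern the abstract calls inverse effectiveness.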


2020
Author(s):
Tijl Grootswagers
Amanda K Robinson
Sophia M Shatek
Thomas A Carlson

The human brain prioritises relevant sensory information to perform different tasks. Enhancement of task-relevant information requires flexible allocation of attentional resources, but it is still a mystery how this is operationalised in the brain. We investigated how attentional mechanisms operate in situations where multiple stimuli are presented in the same location and at the same time. In two experiments, participants performed a challenging two-back task on different types of visual stimuli that were presented simultaneously and superimposed over each other. Using electroencephalography and multivariate decoding, we analysed the effect of attention on the neural responses to each individual stimulus. Whole brain neural responses contained considerable information about both the attended and unattended stimuli, even though they were presented simultaneously and represented in overlapping receptive fields. As expected, attention increased the decodability of stimulus-related information contained in the neural responses, but this effect was evident earlier for stimuli that were presented at smaller sizes. Our results show that early neural responses to stimuli in fast-changing displays contain remarkable information about the sensory environment but are also modulated by attention in a manner dependent on perceptual characteristics of the relevant stimuli. Stimuli, code, and data for this study can be found at https://osf.io/7zhwp/.
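A minimal stand-in for the decoding approach is leave-one-out nearest-centroid classification of simulated "EEG" patterns. Everything here is invented for illustration: the data, the channel count, and the use of a larger class separation to mimic an attention effect.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trials(n_trials, n_channels, separation):
    """Hypothetical single-trial 'EEG' patterns for a two-class stimulus; a
    larger class separation stands in for a stronger attended representation."""
    labels = rng.integers(0, 2, size=n_trials)
    means = np.where(labels[:, None] == 0, -separation, separation)
    return means + rng.normal(size=(n_trials, n_channels)), labels

def decode_accuracy(X, y):
    """Leave-one-out nearest-centroid decoding, a minimal stand-in for the
    multivariate classifiers used in EEG decoding studies."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        c0 = X[mask & (y == 0)].mean(axis=0)  # centroid of class 0, trial i held out
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = 0 if np.linalg.norm(X[i] - c0) < np.linalg.norm(X[i] - c1) else 1
        correct += int(pred == y[i])
    return correct / len(y)

X_att, y_att = simulate_trials(100, 16, separation=0.4)  # "attended" stimulus
X_un, y_un = simulate_trials(100, 16, separation=0.1)    # "unattended" stimulus
acc_att, acc_un = decode_accuracy(X_att, y_att), decode_accuracy(X_un, y_un)
print(f"attended: {acc_att:.2f}, unattended: {acc_un:.2f}")
```

Both decodabilities sit above chance, echoing the finding that neural responses carry information about attended and unattended stimuli alike, with attention boosting decodability.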


2005
Vol 93 (5)
pp. 2575-2586
Author(s):
Thomas J. Perrault
J. William Vaughan
Barry E. Stein
Mark T. Wallace

Many neurons in the superior colliculus (SC) integrate sensory information from multiple modalities, giving rise to significant response enhancements. Although enhanced multisensory responses have been shown to depend on the spatial and temporal relationships of the stimuli as well as on their relative effectiveness, these factors alone do not appear sufficient to account for the substantial heterogeneity in the magnitude of the multisensory products that have been observed. Toward this end, the present experiments have revealed that there are substantial differences in the operations used by different multisensory SC neurons to integrate their cross-modal inputs, suggesting that intrinsic differences in these neurons may also play an important deterministic role in multisensory integration. In addition, the integrative operation employed by a given neuron was found to be well correlated with the neuron's dynamic range. In total, four categories of SC neurons were identified based on how their multisensory responses changed relative to the predicted addition of the two unisensory inputs as stimulus effectiveness was altered. Despite the presence of these categories, a general rule was that the most robust multisensory enhancements were seen with combinations of the least effective unisensory stimuli. Together, these results provide a better quantitative picture of the integrative operations performed by multisensory SC neurons and suggest mechanistic differences in the way in which these neurons synthesize cross-modal information.
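The comparison at the heart of this categorization, an observed multisensory response against the predicted sum of the two unisensory responses, can be sketched as follows. The 10% tolerance band and the impulse counts are hypothetical, not the study's actual criteria.

```python
def classify_multisensory_operation(multisensory, unisensory_a, unisensory_b, tol=0.1):
    """Compare a multisensory response with the additive prediction (the sum
    of the two unisensory responses) within a tolerance band; responses are
    mean impulse counts. Illustrative thresholds only."""
    predicted = unisensory_a + unisensory_b
    if multisensory > predicted * (1 + tol):
        return "superadditive"
    if multisensory < predicted * (1 - tol):
        return "subadditive"
    return "additive"

print(classify_multisensory_operation(15.0, 3.0, 4.0))  # superadditive
print(classify_multisensory_operation(7.2, 3.0, 4.0))   # additive
print(classify_multisensory_operation(4.0, 3.0, 4.0))   # subadditive
```

Tracking how this classification changes as stimulus effectiveness is varied is what distinguishes the different integrative operations described in the abstract.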


1996
Vol 75 (6)
pp. 2467-2485
Author(s):
M. S. Livingstone

1. This work explores a mechanism that the brain may use for linking related percepts. It has been proposed that temporal relationships in the firing of neurons may be important in indicating how the stimuli that activate those neurons are related in the external world. Such temporal relationships cannot be seen with conventional receptive field mapping but require cross-correlation and auto-correlation analysis. 2. In the cat and the macaque monkey, cells with similar receptive field properties show correlated firing even when their receptive fields do not overlap. Here I report that in the squirrel monkey, as in the cat, pairs of cells ≤ 5 mm apart can show correlated firing, and these correlations between pairs of cells are often stronger when they are stimulated by a single contour. This suggests that the correlations reflect not only permanent connections between cells with similar receptive fields, but in addition may encode information that the activating stimuli are continuous or part of a single object. I also find that, as in the cat, and contrary to some other reports on experiments in monkeys, the correlated firing is often rhythmic. These recordings further indicate that periods of rhythmicity are associated with stronger interneuronal synchrony, which is consistent with the hypothesis that recurrent feedback loops are involved in generating both. 3. Pairs of cells in the same cortical column but at different depths also showed correlated firing, with several milliseconds difference in timing between layers. This was true for cells at different depths within layer 2/3 and for pairs of cells in different layers (2/3 vs. 4B or 4Cα), providing evidence for cross-talk between the magno- and parvocellular streams.
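The cross-correlation analysis described above can be sketched with simulated spike trains. The spike trains, rates, and 40 Hz drive below are invented; the point is only that a shared rhythmic drive produces a rhythmic correlogram with a central peak.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical spike trains (1 ms bins): two cells share a common ~40 Hz
# rhythmic drive, so their firing is correlated even without overlapping
# receptive fields, and the correlogram is itself rhythmic.
n_bins = 5000
t = np.arange(n_bins)
p_spike = 0.1 * (1 + np.sin(2 * np.pi * 40 * t / 1000.0))  # 40 Hz modulation
spikes_a = (rng.random(n_bins) < p_spike).astype(float)
spikes_b = (rng.random(n_bins) < p_spike).astype(float)

def cross_correlogram(a, b, max_lag=12):
    """Mean-subtracted spike-count cross-correlation at lags of
    -max_lag..max_lag ms (positive lag: b fires after a)."""
    a, b = a - a.mean(), b - b.mean()
    n = len(a)
    return {k: float(np.dot(a[max(0, -k):n - max(0, k)],
                            b[max(0, k):n - max(0, -k)]))
            for k in range(-max_lag, max_lag + 1)}

cc = cross_correlogram(spikes_a, spikes_b)
# Synchrony appears as a central peak; the 25 ms period of the shared drive
# makes lag 0 a peak and lag ~12 ms a trough.
print(cc[0], cc[12])
```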


2007
Vol 97 (5)
pp. 3193-3205
Author(s):
Juan Carlos Alvarado
J. William Vaughan
Terrence R. Stanford
Barry E. Stein

The present study suggests that the neural computations used to integrate information from different senses are distinct from those used to integrate information from within the same sense. Using superior colliculus neurons as a model, it was found that multisensory integration of cross-modal stimulus combinations yielded responses that were significantly greater than those evoked by the best component stimulus. In contrast, unisensory integration of within-modal stimulus pairs yielded responses that were similar to or less than those evoked by the best component stimulus. This difference is exemplified by the disproportionate representation of superadditive responses during multisensory integration and the predominance of subadditive responses during unisensory integration. These observations suggest that different rules have evolved for integrating sensory information, one (unisensory) reflecting the inherent characteristics of the individual sense, and the other (multisensory) reflecting unique supramodal characteristics designed to enhance the salience of the initiating event.

