Integration of Visual and Auditory Information by Superior Temporal Sulcus Neurons Responsive to the Sight of Actions

2005 ◽  
Vol 17 (3) ◽  
pp. 377-391 ◽  
Author(s):  
Nick E. Barraclough* ◽  
Dengke Xiao* ◽  
Chris I. Baker ◽  
Mike W. Oram ◽  
David I. Perrett

Processing of complex visual stimuli comprising facial movements, hand actions, and body movements is known to occur in the superior temporal sulcus (STS) of humans and nonhuman primates. The STS is also thought to play a role in the integration of multimodal sensory input. We investigated whether STS neurons coding the sight of actions also integrated the sound of those actions. For 23% of neurons responsive to the sight of an action, the sound of that action significantly modulated the visual response. The sound of the action increased or decreased the visually evoked response for an equal number of neurons. In the neurons whose visual response was increased by the addition of sound (but not those neurons whose responses were decreased), the audiovisual integration was dependent upon the sound of the action matching the sight of the action. These results suggest that neurons in the STS form multisensory representations of observed actions.
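
A minimal sketch of the kind of per-neuron test this implies (hypothetical spike counts, a nonparametric comparison of visual-only against audiovisual trials; not the authors' actual analysis pipeline) might look like this in Python:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical spike counts per trial for one neuron (not real data):
# visual-only trials vs. audiovisual (sight + matching sound) trials.
visual_only = rng.poisson(lam=12, size=40)
audiovisual = rng.poisson(lam=16, size=40)

# Nonparametric test for a sound-induced change in the visual response.
stat, p = mannwhitneyu(visual_only, audiovisual, alternative="two-sided")

# Sign of the modulation: positive if sound increases the visual response.
modulation = audiovisual.mean() - visual_only.mean()

print(f"U = {stat:.1f}, p = {p:.4f}, modulation = {modulation:+.2f} spikes/trial")
if p < 0.05:
    print("sound significantly", "increases" if modulation > 0 else "decreases",
          "the visually evoked response")
```

Applying such a test across a recorded population and counting cells with p < 0.05 is one way to arrive at a fraction of sound-modulated neurons like the 23% reported here.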

2019 ◽  
Author(s):  
Milton A. V. Ávila ◽  
Rafael N. Ruggiero ◽  
João P. Leite ◽  
Lezio S. Bueno-Junior ◽  
Cristina M. Del-Ben

Audiovisual integration may improve unisensory perceptual performance and learning. Interestingly, this integration may occur even when one of the sensory modalities is not consciously perceived by the subject; for example, semantic auditory information may influence nonconscious visual perception. Studies have shown that the flow of nonconscious visual information is mostly restricted to early cortical processing, without reaching higher-order areas such as the parieto-frontal network. Thus, because multisensory cortical interactions may already occur at early stages of processing, we hypothesized that nonconscious visual stimulation might facilitate auditory pitch learning. In this study we used a pitch-learning paradigm in which individuals had to identify six pitches in a scale with constant intervals of 50 cents. Subjects were assigned to one of three training groups: a test group (auditory + congruent nonconscious visual, AV) and two control groups (auditory only, A, and auditory + incongruent nonconscious visual, AVi). Auditory-only tests were done before and after training in all groups. Electroencephalography (EEG) was recorded throughout the experiment. Results show that the test group (AV, with congruent nonconscious visual stimuli) performed better during training and showed a greater improvement from pre- to post-test. The control groups did not differ from one another. Improvements in the AV group were mainly driven by performance on the first and last pitches of the scale. We also observed consistent EEG patterns associated with this performance improvement in the AV group, particularly maintenance of higher theta-band power after training in central and temporal areas and stronger theta-band synchrony between visual and auditory cortices. Therefore, we show that nonconscious multisensory interactions are powerful enough to boost auditory perceptual learning, and that increased functional connectivity between early visual and auditory cortices after training might play a role in this effect. Moreover, we provide a methodological contribution for future studies on auditory perceptual learning, particularly those applied to relative and absolute pitch training.
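
The EEG findings hinge on theta-band power and inter-areal theta synchrony. The sketch below illustrates how such measures are commonly computed (synthetic signals, a 4-8 Hz band-pass filter plus Hilbert transform, and a phase-locking value); the channel labels, sampling rate, and filter settings are assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                      # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)

# Hypothetical single-trial EEG from an occipital and a temporal electrode.
rng = np.random.default_rng(1)
occipital = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)
temporal = np.sin(2 * np.pi * 6 * t + 0.4) + 0.5 * rng.standard_normal(t.size)

# Theta-band (4-8 Hz) band-pass filter.
b, a = butter(4, [4, 8], btype="bandpass", fs=fs)

def theta_analytic(x):
    """Zero-phase band-pass filter followed by the analytic (Hilbert) signal."""
    return hilbert(filtfilt(b, a, x))

occ, tmp = theta_analytic(occipital), theta_analytic(temporal)

# Theta-band power (mean squared envelope) and phase-locking value (PLV).
power_occ = np.mean(np.abs(occ) ** 2)
plv = np.abs(np.mean(np.exp(1j * (np.angle(occ) - np.angle(tmp)))))

print(f"theta power (occipital): {power_occ:.3f}")
print(f"occipito-temporal theta PLV: {plv:.3f}")
```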


2021 ◽  
Author(s):  
Aaron M. Williams ◽  
Christopher F. Angeloni ◽  
Maria Neimark Geffen

In everyday life, we integrate visual and auditory information in routine tasks such as navigation and communication. While it is known that concurrent sound can improve visual perception, the neuronal correlates of this audiovisual integration are not fully understood. Specifically, it remains unknown whether the improvement in detection and discriminability of visual stimuli due to sound is reflected in the neuronal firing patterns in the primary visual cortex (V1). Furthermore, presentation of sound can induce movement in the subject, but little is understood about whether and how sound-induced movement contributes to V1 neuronal activity. Here, we investigated how sound and movement interact to modulate V1 visual responses in awake, head-fixed mice and whether this interaction improves neuronal encoding of the visual stimulus. We presented visual drifting gratings with and without simultaneous auditory white noise to awake mice while recording mouse movement and V1 neuronal activity. Sound modulated the light-evoked activity of 80% of light-responsive neurons, with 95% of neurons exhibiting increased activity when the auditory stimulus was present. Sound consistently induced movement. However, a generalized linear model revealed that sound and movement had distinct and complementary effects on the neuronal visual responses. Furthermore, decoding of the visual stimulus from the neuronal activity was improved with sound, an effect that persisted even when controlling for movement. These results demonstrate that sound and movement modulate visual responses in complementary ways, resulting in improved neuronal representation of the visual stimulus. This study clarifies the role of movement as a potential confound in neuronal audiovisual responses, and expands our knowledge of how multimodal processing is mediated at a neuronal level in the awake brain.
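
The abstract attributes distinct contributions of sound and movement to a generalized linear model. A minimal Poisson-GLM sketch of that idea (simulated trials, statsmodels for the fit; the regressor names and coefficients are illustrative, not the study's actual model) could be:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_trials = 400

# Hypothetical per-trial predictors: whether sound was presented, and how much
# the animal moved (e.g., mean running speed) on that trial.
sound = rng.integers(0, 2, size=n_trials)
movement = rng.gamma(shape=2.0, scale=1.0, size=n_trials) * (0.5 + sound)

# Simulated spike counts in which sound and movement each add to the
# visually evoked rate (illustration only, not the recorded data).
rate = np.exp(1.0 + 0.4 * sound + 0.15 * movement)
spikes = rng.poisson(rate)

# Poisson GLM separating the contributions of sound and movement.
X = sm.add_constant(np.column_stack([sound, movement]))
fit = sm.GLM(spikes, X, family=sm.families.Poisson()).fit()
print(fit.summary(xname=["const", "sound", "movement"]))
```

Fitting separate coefficients for the two regressors is what lets an analysis like this ask whether sound still modulates the visual response once movement is accounted for.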


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Stefano Rozzi ◽  
Marco Bimbi ◽  
Alfonso Gravante ◽  
Luciano Simone ◽  
Leonardo Fogassi

The ventral part of the lateral prefrontal cortex (VLPF) of the monkey receives strong visual input, mainly from inferotemporal cortex. It has been shown that VLPF neurons can show visual responses during paradigms requiring the association of arbitrary visual cues with behavioral reactions. Further studies showed that there are also VLPF neurons responding to the presentation of specific visual stimuli, such as objects and faces. However, it is largely unknown whether VLPF neurons respond to, and differentiate between, stimuli belonging to different categories even in the absence of a specific requirement to actively categorize them or to exploit them for choosing a given behavior. The first aim of the present study is to evaluate and map the responses of neurons in a large sector of VLPF to a wide set of visual stimuli when monkeys simply observe them. Recent studies showed that visual responses to objects are also present in VLPF neurons coding action execution, when those objects are the target of the action. Thus, the second aim of the present study is to compare the visual responses of VLPF neurons when the same objects are simply observed and when they become the target of a grasping action. Our results indicate that: (1) a subset of visually responsive VLPF neurons responds specifically to one stimulus or to a small set of stimuli, but there is no indication of a “passive” categorical coding; (2) VLPF visual responses to objects are often modulated by the task conditions in which the object is observed, with the strongest response when the object is the target of an action. These data indicate that VLPF performs an early, passive description of several types of visual stimuli that can then be used for organizing and planning behavior. This could explain the modulation of visual responses both in associative learning and in natural behavior.
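
One simple way to quantify the reported task modulation, comparing a neuron's response to the same object under passive observation versus when it is the target of a grasp, is a modulation index evaluated with a permutation test. The sketch below uses hypothetical firing rates and is not the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical per-trial firing rates (spikes/s) for one VLPF neuron while the
# same object is passively observed vs. when it is the target of a grasp.
observe = rng.normal(loc=14.0, scale=4.0, size=30)
grasp_target = rng.normal(loc=22.0, scale=4.0, size=30)

def modulation_index(a, b):
    """Index in [-1, 1]: positive if the response is stronger in condition b."""
    return (b.mean() - a.mean()) / (a.mean() + b.mean())

observed_mi = modulation_index(observe, grasp_target)

# Permutation test: shuffle condition labels to estimate the null distribution.
pooled = np.concatenate([observe, grasp_target])
null = []
for _ in range(5000):
    rng.shuffle(pooled)
    null.append(modulation_index(pooled[:30], pooled[30:]))
p = np.mean(np.abs(null) >= abs(observed_mi))

print(f"modulation index = {observed_mi:+.2f}, permutation p = {p:.4f}")
```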


1988 ◽  
Vol 60 (5) ◽  
pp. 1615-1637 ◽  
Author(s):  
K. Hikosaka ◽  
E. Iwai ◽  
H. Saito ◽  
K. Tanaka

1. We examined the responses of cells in the anterior bank of the caudal part of the superior temporal sulcus (caudal STS) to visual, auditory, and somesthetic stimuli in anesthetized, paralyzed monkeys. 2. In the anterior bank of the caudal STS, there were three regions distinguishable from each other and also from the middle temporal area (MT) in the floor of the STS and area Tpt in the superior temporal gyrus. The three regions were located approximately in the respective inner, middle, and outer thirds of the anterior bank of the caudal STS. These three regions are referred to, from the inner to the outer, as the medial superior temporal region (MST), the mostly unresponsive region, and the caudal STS polysensory region (cSTP). 3. The extent of MST and its response properties agreed with previous studies. Cells in MST responded exclusively to visual stimuli, had large visual receptive fields (RFs), and nearly all (91%) showed directional selectivity. 4. In the mostly unresponsive region, three quarters of the cells were unresponsive to any stimulus used in this study. A quarter of the cells responded only to visual stimuli, and most did not show directional selectivity for moving stimuli. Several directionally selective cells responded to movements of three-dimensional objects, but not of projected stimuli. 5. The response properties of cells in the superficial cortex of the caudal superior temporal gyrus (a part of area Tpt, external to cSTP) were different from those of cells in the three regions in the anterior bank of the STS. Cells in Tpt were exclusively auditory and had much larger auditory RFs (mean = 271 degrees) than those of acoustically driven cSTP cells (mean = 138 degrees). 6. The cSTP contained unimodal visual, auditory, and somesthetic cells as well as multimodal cells of two or all three modalities. The sensory properties of cSTP cells were as follows. 1) Of 200 cells recorded, 102 (51%) were unimodal (59 visual, 33 auditory, and 10 somesthetic), 36 (18%) were bimodal (21 visual + auditory, 7 visual + somesthetic, and 8 auditory + somesthetic), and four (2%) were trimodal. Visual and auditory responses were more frequent than somesthetic responses: the ratio of cells driven by visual, auditory, and somesthetic stimuli was 3:2:1. 2) Visual RFs were large (mean diameter, 59 degrees), but two-thirds were limited to the contralateral visual hemifield. About half the cells showed directional selectivity for moving visual stimuli. None showed selectivity for particular visual shapes. (ABSTRACT TRUNCATED AT 400 WORDS)
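
The modality breakdown reported for cSTP (unimodal, bimodal, and trimodal cells) amounts to simple bookkeeping over per-cell significance flags. A hypothetical illustration of that tabulation (invented flags, not the recorded data):

```python
from collections import Counter

# Hypothetical per-cell lists of modalities that drove a significant response
# (e.g., from response-vs-baseline tests); this shows the bookkeeping only.
cells = [
    ("visual",), ("visual",), ("auditory",), ("somesthetic",),
    ("visual", "auditory"), ("visual", "somesthetic"),
    ("auditory", "somesthetic"), ("visual", "auditory", "somesthetic"),
    (),  # unresponsive to every stimulus used
]

def classify(modalities):
    """Map the number of driving modalities to the class label used in the text."""
    return {0: "unresponsive", 1: "unimodal",
            2: "bimodal", 3: "trimodal"}[len(modalities)]

counts = Counter(classify(m) for m in cells)
total = len(cells)
for label, n in counts.items():
    print(f"{label:12s}: {n} ({100 * n / total:.0f}%)")
```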


1990 ◽  
Vol 63 (3) ◽  
pp. 502-522 ◽  
Author(s):  
R. Lal ◽  
M. J. Friedlander

1. Extracellular recordings were made from single neurons in layer A of the left dorsal lateral geniculate nucleus (LGNd) of anesthetized and paralyzed adult cats. Responses to retinotopically identical visual stimuli (presented through the right eye) were recorded at several positions of the left eye in its orbit. Visual stimuli consisted of drifting sinusoidal gratings of optimal temporal and spatial frequencies at twice threshold contrast. Visual stimulation of the left eye was blocked by a variety of methods, including intravitreal injection of tetrodotoxin (TTX). The change in position of the left eye was achieved by passive movements in a randomized and interleaved fashion. Of 237 neurons studied, responses were obtained from 143 neurons on 20-100 trials of identical visual stimulation at each of six eye positions. Neurons were classified as X- or Y-type on the basis of a standard battery of physiological tests (primarily linearity of spatial summation and response latency to electrical stimulation of the optic chiasm). 2. The effect of eye position on the visual response of the 143 neurons was analyzed with respect to the number of action potentials elicited and the peak firing rate. Fifty-seven (40%) neurons had a significant effect [one-factor repeated-measures analysis of variance (ANOVA), P < 0.05] of eye position on the visual response by either criterion (number of action potentials or peak firing rate). Of these 57 neurons, 47 had a significant effect (P < 0.05) with respect to the number of action potentials, and 23 had a significant effect (P < 0.05) by both criteria. Thus the permissive measure (either criterion) and the conservative measure (both criteria) resulted in 40% and 16%, respectively, of all neurons' visual responses being significantly affected by eye position. 3. For the 47 neurons with a significant effect of eye position (number-of-action-potentials criterion), a trend analysis of eye position versus visual response showed a linear trend (P < 0.05) for 9 neurons, a quadratic trend (P < 0.05) for 32 neurons, and no significant trend for the 6 remaining neurons. The trends were approximated with linear and nonlinear gain fields (the range of eye position change over which the visual response was modulated). The gain fields of individual neurons were compared by measuring the normalized gain (change in neuronal response per degree change of eye position). The mean normalized gain for the 47 neurons was 4.3. 4. The nonlinear gain fields were generally symmetric with respect to nasal versus temporal changes in eye position. (ABSTRACT TRUNCATED AT 400 WORDS)
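
The analysis pipeline described here (an ANOVA for an eye-position effect, a linear-versus-quadratic trend analysis, and a normalized gain) can be sketched as follows. The data are simulated, the ANOVA is a between-trial stand-in for the paper's repeated-measures design, and the gain formula is one plausible reading of "change in response per degree of eye position", not the authors' exact definition:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)

# Hypothetical experiment: identical visual stimulation delivered at six
# positions of the non-stimulated eye, many trials per position (spike counts).
positions_deg = np.array([-15, -9, -3, 3, 9, 15])
trials = [rng.poisson(lam=20 + 0.01 * p**2, size=30) for p in positions_deg]

# One-factor ANOVA across eye positions (between-trial version for brevity).
F, p_val = f_oneway(*trials)

# Linear vs. quadratic trend of mean response against eye position.
means = np.array([t.mean() for t in trials])
lin = np.polyfit(positions_deg, means, 1)
quad = np.polyfit(positions_deg, means, 2)

# One plausible normalized gain: percent change in response per degree.
gain = 100 * (means.max() - means.min()) / means.mean() / np.ptp(positions_deg)

print(f"ANOVA: F = {F:.2f}, p = {p_val:.4f}")
print(f"linear slope = {lin[0]:.3f}, quadratic coefficient = {quad[0]:.4f}")
print(f"normalized gain ~ {gain:.2f} %/deg")
```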


Author(s):  
Yoshinao Kajikawa ◽  
Arnaud Falchier ◽  
Gabriella Musacchia ◽  
Peter Lakatos ◽  
Charles Schroeder

2020 ◽  
pp. 095679762095485
Author(s):  
Mathieu Landry ◽  
Jason Da Silva Castanheira ◽  
Jérôme Sackur ◽  
Amir Raz

Suggestions can cause some individuals to miss or disregard existing visual stimuli, but can they infuse sensory input with nonexistent information? Although several prominent theories of hypnotic suggestion propose that mental imagery can change our perceptual experience, data to support this stance remain sparse. The present study addressed this lacuna, showing how suggesting the presence of physically absent, yet critical, visual information transforms an otherwise difficult task into an easy one. Here, we show how adult participants who are highly susceptible to hypnotic suggestion successfully hallucinated visual occluders on top of moving objects. Our findings support the idea that, at least in some people, suggestions can add perceptual information to sensory input. This observation adds meaningful weight to theoretical, clinical, and applied aspects of the brain and psychological sciences.


i-Perception ◽  
2018 ◽  
Vol 9 (6) ◽  
pp. 204166951881570
Author(s):  
Sachiyo Ueda ◽  
Ayane Mizuguchi ◽  
Reiko Yakushijin ◽  
Akira Ishiguchi

To overcome limitations in perceptual bandwidth, humans condense various features of the environment into summary statistics. Variance is one such index: it represents both the diversity within a category and the reliability of the information about that diversity. Studies have shown that humans can efficiently perceive variance in visual stimuli; however, to enhance perception of the environment, information about the external world can be obtained from multiple sensory modalities and integrated. Consequently, this study investigates, through two experiments, whether the precision of variance perception improves when visual information (size) and corresponding auditory information (pitch) are integrated. In Experiment 1, we measured the correspondence between visual size and auditory pitch for each participant using an adjustment procedure. The results showed a linear relationship between size and pitch: the higher the pitch, the smaller the corresponding circle. In Experiment 2, sequences of visual stimuli were presented both with and without linked auditory tones, and the precision of perceived variance in size was measured. We found that synchronized presentation of auditory and visual stimuli sharing the same variance improves the precision of perceived variance in size compared with visual-only presentation. This suggests that audiovisual information may be automatically integrated in variance perception.
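
The Experiment 1 adjustment data reduce to a per-participant linear fit of matched circle size against pitch, which can then be inverted to assign a tone to each circle so that the auditory and visual sequences share the same variance. A sketch with invented values (the units and numbers are assumptions, not the published data):

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical adjustment data for one participant: for each auditory pitch,
# the circle diameter (deg) the participant judged to correspond to it.
pitch_semitones = np.array([0, 4, 8, 12, 16, 20])
matched_size_deg = np.array([6.1, 5.4, 4.9, 4.2, 3.6, 3.0])

# Linear pitch-size correspondence: the higher the pitch, the smaller the circle.
fit = linregress(pitch_semitones, matched_size_deg)
print(f"slope = {fit.slope:.3f} deg/semitone, r = {fit.rvalue:.3f}")

def size_to_pitch(size_deg):
    """Invert the fitted mapping to pick the tone matching a given circle size."""
    return (size_deg - fit.intercept) / fit.slope

print([round(size_to_pitch(s), 1) for s in (3.5, 4.5, 5.5)])
```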


1990 ◽  
Vol 63 (3) ◽  
pp. 523-538 ◽  
Author(s):  
R. Lal ◽  
M. J. Friedlander

1. The nature and time window of interaction between passive phasic eye movement signals and visual stimuli were studied for dorsal lateral geniculate nucleus (LGNd) neurons in the cat. Extracellular recordings were made from single neurons in layer A of the left LGNd of anesthetized, paralyzed cats in response to a normalized visual stimulus presented to the right eye at each of several times of movement of the left eye. The left eye was moved passively at a fixed amplitude and velocity while the movement onset time with respect to the visual stimulus onset was varied in a randomized and interleaved fashion. Visual stimuli consisted of square-wave-modulated circular spots of appropriate contrast, sign, and size to elicit an optimal excitatory response when placed in each neuron's receptive-field (RF) center. 2. Interactions were analyzed for 78 neurons (33 X-neurons, 43 Y-neurons, and 2 physiologically unclassified neurons) on 25-65 trials of identical visual stimulation for each of eight eye-movement onset times. 3. Sixty percent (47/78) of the neurons tested had a significant eye movement effect (ANOVA, P < 0.05) on some aspect of their visual response. Of these 47 neurons, 42 (89%) had a significant (P < 0.05) effect of an appropriately timed eye movement on the number of action potentials, 36 (77%) had a significant effect on the mean peak firing rate, and 31 (66%) were significantly affected as evaluated by both criteria. 4. The eye movement effect on the neurons' visual responses was primarily facilitatory. Facilitation was observed for 37 (79%) of the affected neurons. For 25 of these 37 neurons (68%), the facilitation was significant (P < 0.05) as evaluated by both criteria (number of action potentials and mean peak firing rate). Ten (21%) of the affected neurons had their visual response significantly inhibited (P < 0.05). 5. Fifty-nine percent (46/78) of the neurons were tested for the effect of eye movement on both visually elicited activity (visual stimulus contrast = 2 times threshold) and spontaneous activity (contrast = 0). Eye movement significantly affected the visual response of 23 (50%) of these neurons. However, spontaneous activity was significantly affected for only nine (20%) of these neurons. The interaction of the eye movement and visual signals was nonlinear. 6. Nine of 12 neurons (75%) tested had a directionally selective effect of eye movement on the visual response, with most (8/9) preferring the temporalward direction. (ABSTRACT TRUNCATED AT 400 WORDS)
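
The timing-window question, i.e., at which eye-movement onset times the visual response is facilitated, can be illustrated with a per-lag comparison against a no-movement condition. The sketch below uses simulated spike counts and an arbitrary set of lags, not the study's stimuli or data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(4)

# Hypothetical spike counts for one LGNd neuron: a no-movement condition and
# eight onset times of the passive eye movement relative to the visual
# stimulus (ms). Illustration of the timing-window analysis only.
lags_ms = np.array([-200, -150, -100, -50, 0, 50, 100, 150])
no_movement = rng.poisson(lam=15, size=40)
by_lag = {lag: rng.poisson(lam=15 + 8 * np.exp(-((lag + 50) / 80.0) ** 2), size=40)
          for lag in lags_ms}

# For each onset time, test for a change in the visual response and compute a
# facilitation index (fractional change in spike count vs. no movement).
window = []
for lag, counts in by_lag.items():
    _, p = mannwhitneyu(no_movement, counts, alternative="two-sided")
    index = (counts.mean() - no_movement.mean()) / no_movement.mean()
    if p < 0.05 and index > 0:
        window.append(int(lag))
    print(f"lag {lag:+4d} ms: facilitation = {index:+.2f}, p = {p:.3f}")

print("facilitatory time window (ms):", window)
```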

