Better ears with eyes open: effects of multisensory stimulation with nonconscious visual stimuli on auditory learning

2019 ◽  
Author(s):  
Milton A. V. Ávila ◽  
Rafael N. Ruggiero ◽  
João P. Leite ◽  
Lezio S. Bueno-Junior ◽  
Cristina M. Del-Ben

Audiovisual integration may improve unisensory perceptual performance and learning. Interestingly, this integration may occur even when one of the sensory modalities is not conscious to the subject, e.g., semantic auditory information may impact nonconscious visual perception. Studies have shown that the flow of nonconscious visual information is mostly restricted to early cortical processing, without reaching higher-order areas such as the parieto-frontal network. Thus, because multisensory cortical interactions may already occur in early stages of processing, we hypothesized that nonconscious visual stimulation might facilitate auditory pitch learning. In this study we used a pitch learning paradigm, in which individuals had to identify six pitches in a scale with constant intervals of 50 cents. Subjects were assigned to one of three training groups: the test group (Auditory + congruent unconscious visual, AV), and two control groups (Auditory only, A, and Auditory + incongruent unconscious visual, AVi). Auditory-only tests were done before and after training in all groups. Electroencephalography (EEG) was recorded throughout the experiment. Results show that the test group (AV, with congruent nonconscious visual stimuli) performed better during the training, and showed a greater improvement from pre- to post-test. Control groups did not differ from one another. Changes in the AV group were mainly due to performances in the first and last pitches of the scale. We also observed consistent EEG patterns associated with this performance improvement in the AV group, especially maintenance of higher theta-band power after training in central and temporal areas, and stronger theta-band synchrony between visual and auditory cortices. Therefore, we show that nonconscious multisensory interactions are powerful enough to boost auditory perceptual learning, and that increased functional connectivity between early visual and auditory cortices after training might play a role in this effect. Moreover, we provide a methodological contribution for future studies on auditory perceptual learning, particularly those applied to relative and absolute pitch training.
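The two EEG measures highlighted in this abstract, theta-band power and theta-band synchrony between visual and auditory sites, can be illustrated with a minimal sketch. The channel labels, sampling rate, and 4-8 Hz band below are assumptions for illustration, not the authors' exact pipeline.

```python
# Sketch: theta-band power and phase-locking value (PLV) between an
# occipital ("visual") and a temporal ("auditory") EEG channel.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo=4.0, hi=8.0, order=4):
    """Zero-phase band-pass filter into an assumed 4-8 Hz theta band."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def theta_power(x, fs):
    """Mean theta-band power from the analytic-signal envelope."""
    env = np.abs(hilbert(bandpass(x, fs)))
    return np.mean(env ** 2)

def plv(x, y, fs):
    """Phase-locking value between two channels in the theta band."""
    px = np.angle(hilbert(bandpass(x, fs)))
    py = np.angle(hilbert(bandpass(y, fs)))
    return np.abs(np.mean(np.exp(1j * (px - py))))

# Synthetic data standing in for post-training recordings.
fs = 250                                   # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
occipital = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)
temporal = np.sin(2 * np.pi * 6 * t + 0.3) + 0.5 * np.random.randn(t.size)
print(theta_power(occipital, fs), plv(occipital, temporal, fs))
```

A PLV near 1 indicates a stable theta phase relationship between the two sites, which is one common way "stronger synchrony" of this kind is quantified.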


2021 ◽  
Vol 15 ◽  
Author(s):  
Thorben Hülsdünker ◽  
David Riedel ◽  
Hannes Käsbauer ◽  
Diemo Ruhnow ◽  
Andreas Mierau

Although vision is the dominant sensory system in sports, many situations require multisensory integration. Faster processing of auditory information in the brain may facilitate time-critical abilities such as reaction speed; however, previous research was limited by generic auditory and visual stimuli that did not consider audio-visual characteristics in ecologically valid environments. This study investigated reaction speed in response to sport-specific monosensory (visual and auditory) and multisensory (audio-visual) stimulation. Neurophysiological analyses identified the neural processes contributing to differences in reaction speed. Nineteen elite badminton players participated in this study. In a first recording phase, the sound profile and shuttle speed of smash and drop strokes were identified on a badminton court using high-speed video cameras and binaural recordings. The speed and sound characteristics were transferred into auditory and visual stimuli and presented in a lab-based experiment, where participants reacted in response to sport-specific monosensory or multisensory stimulation. Auditory signal presentation was delayed by 26 ms to account for realistic audio-visual signal interaction on the court. N1 and N2 event-related potentials, as indicators of auditory and visual information perception/processing, respectively, were identified using 64-channel EEG. Despite the 26 ms delay, auditory reactions were significantly faster than visual reactions (236.6 ms vs. 287.7 ms, p < 0.001) but still slower than reactions to multisensory stimulation (224.4 ms, p = 0.002). Across conditions, response times to smashes were faster than to drops (233.2 ms vs. 265.9 ms, p < 0.001). Faster reactions were paralleled by a lower latency and higher amplitude of the auditory N1 and visual N2 potentials. The results emphasize the potential of auditory information to accelerate reaction time in sport-specific multisensory situations. This highlights auditory processes as a promising target for training interventions in racquet sports.
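The N1/N2 measures reported here are peak latencies and amplitudes read from trial-averaged EEG. The sketch below shows one common way such a peak is extracted; the search window, sampling rate, and negative-peak convention are illustrative assumptions rather than this study's exact procedure.

```python
# Sketch: peak latency and amplitude of a negative-going ERP component
# (e.g., an auditory N1) from a trial-averaged waveform.
import numpy as np

def erp_peak(avg_waveform, times_ms, window_ms):
    """Return (latency_ms, amplitude) of the most negative deflection
    within a time window of a trial-averaged channel."""
    lo, hi = window_ms
    mask = (times_ms >= lo) & (times_ms <= hi)
    idx = np.argmin(avg_waveform[mask])          # negative-going component
    return times_ms[mask][idx], avg_waveform[mask][idx]

fs = 500.0                                       # Hz, assumed
times_ms = np.arange(-0.1, 0.5, 1 / fs) * 1000   # ms relative to stimulus onset
# Toy N1-like waveform: a negative Gaussian dip around 150 ms.
avg = -np.exp(-((times_ms - 150) ** 2) / (2 * 20 ** 2))
lat, amp = erp_peak(avg, times_ms, window_ms=(80, 200))
print(f"N1 latency {lat:.0f} ms, amplitude {amp:.2f} a.u.")
```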



i-Perception ◽  
2018 ◽  
Vol 9 (6) ◽  
pp. 204166951881570
Author(s):  
Sachiyo Ueda ◽  
Ayane Mizuguchi ◽  
Reiko Yakushijin ◽  
Akira Ishiguchi

To overcome limitations in perceptual bandwidth, humans condense various features of the environment into summary statistics. Variance is one such statistic: it represents the diversity within a category and also the reliability of the information about that diversity. Studies have shown that humans can efficiently perceive variance for visual stimuli; however, to enhance perception of the environment, information about the external world can be obtained from multiple sensory modalities and integrated. Consequently, this study investigated, through two experiments, whether the precision of variance perception improves when visual information (size) and corresponding auditory information (pitch) are integrated. In Experiment 1, we measured the correspondence between visual size and auditory pitch for each participant using an adjustment method. The results showed a linear relationship between size and pitch; that is, the higher the pitch, the smaller the corresponding circle. In Experiment 2, sequences of visual stimuli were presented both with and without linked auditory tones, and the precision of perceived variance in size was measured. We found that synchronized presentation of auditory and visual stimuli with the same variance improves the precision of perceived variance in size compared with visual-only presentation. This suggests that audiovisual information may be automatically integrated in variance perception.
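The Experiment 1 logic, fitting a per-participant linear mapping between pitch and the circle size adjusted to match it, can be sketched in a few lines. The pitch values and adjusted sizes below are invented for demonstration and are not the study's data.

```python
# Sketch: fit the assumed linear pitch-to-size correspondence from
# adjustment data, then read off its (negative) slope.
import numpy as np

pitch_hz = np.array([262, 330, 392, 523, 660, 784])        # presented tones (illustrative)
adjusted_size = np.array([9.1, 8.2, 7.4, 6.0, 5.1, 4.4])    # matched circle radii (illustrative)

slope, intercept = np.polyfit(pitch_hz, adjusted_size, deg=1)
print(f"size = {slope:.4f} * pitch + {intercept:.2f}")
# A negative slope corresponds to the reported pattern: the higher the
# pitch, the smaller the matched circle.
```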



2015 ◽  
Vol 3 (1-2) ◽  
pp. 88-101 ◽  
Author(s):  
Kathleen M. Einarson ◽  
Laurel J. Trainor

Recent work examined five-year-old children’s perceptual sensitivity to musical beat alignment. In this work, children watched pairs of videos of puppets drumming to music with simple or complex metre, where one puppet’s drumming sounds (and movements) were synchronized with the beat of the music and the other drummed with incorrect tempo or phase. The videos were used to maintain children’s interest in the task. Five-year-olds were better able to detect beat misalignments in simple than complex metre music. However, adults can perform poorly when attempting to detect misalignment of sound and movement in audiovisual tasks, so it is possible that the moving stimuli actually hindered children’s performance. Here we compared children’s sensitivity to beat misalignment in conditions with dynamic visual movement versus still (static) visual images. Eighty-four five-year-old children performed either the same task as described above or a task that employed identical auditory stimuli accompanied by a motionless picture of the puppet with the drum. There was a significant main effect of metre type, replicating the finding that five-year-olds are better able to detect beat misalignment in simple metre music. There was no main effect of visual condition. These results suggest that, given identical auditory information, children’s ability to judge beat misalignment in this task is not affected by the presence or absence of dynamic visual stimuli. We conclude that at five years of age, children can tell if drumming is aligned to the musical beat when the music has simple metric structure.



2007 ◽  
Vol 98 (4) ◽  
pp. 2399-2413 ◽  
Author(s):  
Vivian M. Ciaramitaro ◽  
Giedrius T. Buračas ◽  
Geoffrey M. Boynton

Attending to a visual or auditory stimulus often requires irrelevant information to be filtered out, both within the modality attended and in other modalities. For example, attentively listening to a phone conversation can diminish our ability to detect visual events. We used functional magnetic resonance imaging (fMRI) to examine brain responses to visual and auditory stimuli while subjects attended visual or auditory information. Although early cortical areas are traditionally considered unimodal, we found that brain responses to the same ignored information depended on the modality attended. In early visual area V1, responses to ignored visual stimuli were weaker when attending to another visual stimulus, compared with attending to an auditory stimulus. The opposite was true in more central visual area MT+, where responses to ignored visual stimuli were weaker when attending to an auditory stimulus. Furthermore, fMRI responses to the same ignored visual information depended on the location of the auditory stimulus, with stronger responses when the attended auditory stimulus shared the same side of space as the ignored visual stimulus. In early auditory cortex, responses to ignored auditory stimuli were weaker when attending a visual stimulus. A simple parameterization of our data can describe the effects of redirecting attention across space within the same modality (spatial attention) or across modalities (cross-modal attention), and the influence of spatial attention across modalities (cross-modal spatial attention). Our results suggest that the representation of unattended information depends on whether attention is directed to another stimulus in the same modality or the same region of space.
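The abstract mentions that a simple parameterization can describe spatial, cross-modal, and cross-modal spatial attention effects. As a purely illustrative toy version (the multiplicative form and the gain values below are my assumptions, not the authors' fitted model), one can scale a baseline response by separate attention-dependent gain factors:

```python
# Sketch: a toy gain parameterization of fMRI responses to an ignored
# stimulus, depending on which modality and which side of space is attended.
def predicted_response(baseline, same_modality_attended, same_side_attended,
                       g_within=0.7, g_cross=0.9, g_space=1.1):
    """Scale a baseline response by attention-dependent gain factors
    (hypothetical values for illustration only)."""
    gain = g_within if same_modality_attended else g_cross
    if same_side_attended:
        gain *= g_space          # spatial attention acts within and across modalities
    return baseline * gain

# Ignored visual stimulus while attending audition on the same side of space:
print(predicted_response(1.0, same_modality_attended=False, same_side_attended=True))
```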



2005 ◽  
Vol 17 (3) ◽  
pp. 377-391 ◽  
Author(s):  
Nick E. Barraclough* ◽  
Dengke Xiao* ◽  
Chris I. Baker ◽  
Mike W. Oram ◽  
David I. Perrett

Processing of complex visual stimuli comprising facial movements, hand actions, and body movements is known to occur in the superior temporal sulcus (STS) of humans and nonhuman primates. The STS is also thought to play a role in the integration of multimodal sensory input. We investigated whether STS neurons coding the sight of actions also integrated the sound of those actions. For 23% of neurons responsive to the sight of an action, the sound of that action significantly modulated the visual response. The sound of the action increased or decreased the visually evoked response for an equal number of neurons. In the neurons whose visual response was increased by the addition of sound (but not those neurons whose responses were decreased), the audiovisual integration was dependent upon the sound of the action matching the sight of the action. These results suggest that neurons in the STS form multisensory representations of observed actions.



2019 ◽  
Author(s):  
Martin A. Spacek ◽  
Gregory Born ◽  
Davide Crombie ◽  
Yannik Bauer ◽  
Xinyu Liu ◽  
...  

Neurons in the dorsolateral geniculate nucleus (dLGN) of the thalamus are contacted by a large number of feedback synapses from cortex, whose role in visual processing is poorly understood. Past studies investigating this role have mostly used simple visual stimuli and anesthetized animals, but corticothalamic (CT) feedback might be particularly relevant during processing of complex visual stimuli, and its effects might depend on behavioral state. Here, we find that CT feedback robustly modulates responses to naturalistic movie clips by increasing response gain and promoting tonic firing mode. Compared to these robust effects for naturalistic movies, CT feedback effects on firing rates were less consistent for simple grating stimuli, likely related to differences in spatial context. Finally, while CT feedback and locomotion affected dLGN responses in similar ways, we found their effects to be largely independent. We propose that CT feedback and behavioral state use separate circuits to modulate visual information on its way to cortex in a context-dependent manner.
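Two quantities central to this abstract, response gain with versus without CT feedback and the prevalence of burst versus tonic firing, are commonly summarized as a modulation index and a burst fraction. The sketch below uses a widely cited LGN burst criterion (>= 100 ms of preceding silence followed by inter-spike intervals < 4 ms); the spike times and parameters are assumptions for illustration, not this study's analysis code.

```python
# Sketch: firing-rate modulation index and burst-spike fraction for a
# dLGN neuron recorded with and without corticothalamic feedback.
import numpy as np

def gain_index(rate_feedback, rate_suppressed):
    """Modulation index in [-1, 1]; positive means feedback increases the rate."""
    return (rate_feedback - rate_suppressed) / (rate_feedback + rate_suppressed)

def burst_fraction(spike_times, quiescence=0.100, intra_burst=0.004):
    """Fraction of spikes in bursts: first burst spike preceded by >= 100 ms
    silence, subsequent burst spikes following at < 4 ms intervals."""
    isis = np.diff(spike_times)
    is_burst = np.zeros(spike_times.size, dtype=bool)
    i = 1
    while i < spike_times.size:
        if isis[i - 1] >= quiescence and i < spike_times.size - 1 and isis[i] < intra_burst:
            is_burst[i] = True                 # first spike of the burst
            j = i + 1
            while j < spike_times.size and isis[j - 1] < intra_burst:
                is_burst[j] = True             # later spikes of the same burst
                j += 1
            i = j
        else:
            i += 1
    return is_burst.mean()

spikes = np.array([0.000, 0.002, 0.003, 0.200, 0.350, 0.352])  # seconds, toy data
print(gain_index(12.0, 8.0), burst_fraction(spikes))
```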



2002 ◽  
Vol 14 (3) ◽  
pp. 420-429 ◽  
Author(s):  
Paul J. Laurienti ◽  
Jonathan H. Burdette ◽  
Mark T. Wallace ◽  
Yi-Fen Yen ◽  
Aaron S. Field ◽  
...  

Visual and auditory cortices traditionally have been considered to be “modality-specific.” Thus, their activity has been thought to be unchanged by information in other sensory modalities. However, using functional magnetic resonance imaging (fMRI), the present experiments revealed that ongoing activity in the visual cortex could be modulated by auditory information and ongoing activity in the auditory cortex could be modulated by visual information. In both cases, this cross-modal modulation of activity took the form of deactivation. Yet, the deactivation response was not evident in either cortical area during the paired presentation of visual and auditory stimuli. These data suggest that cross-modal inhibitory processes operate within traditional modality-specific cortices and that these processes can be switched on or off in different circumstances.



2016 ◽  
Vol 45 (2) ◽  
pp. 204-215 ◽  
Author(s):  
Janne Weijkamp ◽  
Makiko Sadakata

Individuals with more musical training repeatedly demonstrate enhanced auditory perception abilities. The current study examined how these enhanced auditory skills interact with attention to affective audio-visual stimuli. A total of 16 participants with more than 5 years of musical training (musician group) and 16 participants with less than 2 years of musical training (non-musician group) took part in a version of the audio-visual emotional Stroop test, using happy, neutral, and sad emotions. Participants were presented with congruent and incongruent combinations of face and voice stimuli while judging the emotion of either the face or the voice. As predicted, musicians were less susceptible than non-musicians to interference from visual information on auditory emotion judgments, as evidenced by their higher accuracy in judging auditory emotions in the presence of both congruent and incongruent visual information. Musicians were also more accurate than non-musicians at identifying visual emotions when presented with concurrent auditory information. Thus, musicians were less influenced than non-musicians by congruent/incongruent information in a non-target modality. The results suggest that musical training influences audio-visual information processing.



2019 ◽  
Author(s):  
M. J. Wolff ◽  
G. Kandemir ◽  
M. G. Stokes ◽  
E. G. Akyürek

It is unclear to what extent sensory processing areas are involved in the maintenance of sensory information in working memory (WM). Previous studies have thus far relied on finding neural activity in the corresponding sensory cortices, neglecting potential activity-silent mechanisms such as connectivity-dependent encoding. It has recently been found that visual stimulation during visual WM maintenance reveals WM-dependent changes through a bottom-up neural response. Here, we test whether this impulse response is uniquely visual and sensory-specific. Human participants (both sexes) completed visual and auditory WM tasks while electroencephalography was recorded. During the maintenance period, the WM network was perturbed serially with fixed and task-neutral auditory and visual stimuli. We show that a neutral auditory impulse stimulus presented during the maintenance of a pure tone resulted in a WM-dependent neural response, providing evidence for the auditory counterpart to the visual WM findings reported previously. Interestingly, visual stimulation also resulted in an auditory WM-dependent impulse response, implicating the visual cortex in the maintenance of auditory information, either directly, or indirectly as a pathway to the neural auditory WM representations elsewhere. In contrast, during visual WM maintenance only the impulse response to visual stimulation was content-specific, suggesting that visual information is maintained in a sensory-specific neural network, separated from auditory processing areas.

Significance Statement: Working memory is a crucial component of intelligent, adaptive behaviour. Our understanding of the neural mechanisms that support it has recently shifted: rather than being dependent on an unbroken chain of neural activity, working memory may rely on transient changes in neuronal connectivity, which can be maintained efficiently in activity-silent brain states. Previous work using a visual impulse stimulus to perturb the memory network has implicated such silent states in the retention of line orientations in visual working memory. Here, we show that auditory working memory similarly retains auditory information. We also observed a sensory-specific impulse response in visual working memory, while auditory memory responded bi-modally to both visual and auditory impulses, possibly reflecting visual dominance of working memory.
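The content-specificity claims in this abstract rest on decoding the memorized item from the impulse-evoked EEG response. A minimal sketch of that kind of analysis follows; the data shapes, classifier, and cross-validation scheme are assumptions, not the authors' pipeline.

```python
# Sketch: cross-validated decoding of working-memory content from the
# EEG response evoked by a neutral impulse stimulus.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 30
X = rng.standard_normal((n_trials, n_channels, n_times))  # impulse-evoked EEG (toy)
y = rng.integers(0, 4, n_trials)                           # memorized item, 4 classes (toy)

X_flat = X.reshape(n_trials, -1)                           # channels x time as features
scores = cross_val_score(LinearDiscriminantAnalysis(), X_flat, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.25)")
```

Above-chance accuracy for one impulse modality but not the other is the pattern that would support a sensory-specific (versus bi-modal) impulse response.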



2021 ◽  
Author(s):  
Aaron M. Williams ◽  
Christopher F. Angeloni ◽  
Maria Neimark Geffen

In everyday life, we integrate visual and auditory information in routine tasks such as navigation and communication. While it is known that concurrent sound can improve visual perception, the neuronal correlates of this audiovisual integration are not fully understood. Specifically, it remains unknown whether improvement of the detection and discriminability of visual stimuli due to sound is reflected in the neuronal firing patterns in the primary visual cortex (V1). Furthermore, presentation of the sound can induce movement in the subject, but little is understood about whether and how sound-induced movement contributes to V1 neuronal activity. Here, we investigated how sound and movement interact to modulate V1 visual responses in awake, head-fixed mice and whether this interaction improves neuronal encoding of the visual stimulus. We presented visual drifting gratings with and without simultaneous auditory white noise to awake mice while recording mouse movement and V1 neuronal activity. Sound modulated the light-evoked activity of 80% of light-responsive neurons, with 95% of neurons exhibiting increased activity when the auditory stimulus was present. Sound consistently induced movement. However, a generalized linear model revealed that sound and movement had distinct and complementary effects on the neuronal visual responses. Furthermore, decoding of the visual stimulus from the neuronal activity was improved with sound, an effect that persisted even when controlling for movement. These results demonstrate that sound and movement modulate visual responses in complementary ways, resulting in improved neuronal representation of the visual stimulus. This study clarifies the role of movement as a potential confound in neuronal audiovisual responses, and expands our knowledge of how multimodal processing is mediated at a neuronal level in the awake brain.
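The two analyses summarized here, a generalized linear model separating sound and movement contributions to firing and decoding of the visual stimulus from population activity, can be sketched as follows. The predictor definitions and data are invented for illustration and do not reproduce the study's model.

```python
# Sketch: (1) a Poisson GLM with separate sound and movement terms for a
# single V1 neuron; (2) decoding the grating from pseudo-population activity.
import numpy as np
from sklearn.linear_model import PoissonRegressor, LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials = 400
sound = rng.integers(0, 2, n_trials)             # auditory white noise on/off
movement = rng.normal(0.5, 0.2, n_trials)        # movement amplitude (a.u.)
rate = np.exp(0.5 + 0.4 * sound + 0.8 * movement)
spikes = rng.poisson(rate)                       # toy single-neuron spike counts

# (1) GLM: sound and movement enter as separate predictors, so their
# contributions to the firing rate can be estimated independently.
glm = PoissonRegressor(alpha=0.0).fit(np.column_stack([sound, movement]), spikes)
print("GLM weights (sound, movement):", glm.coef_)

# (2) Decoding the visual stimulus from toy population activity, where the
# stimulus signal is slightly stronger on sound trials.
stimulus = rng.integers(0, 2, n_trials)
population = rng.standard_normal((n_trials, 50)) \
    + stimulus[:, None] * (0.3 + 0.3 * sound[:, None])
acc = cross_val_score(LogisticRegression(max_iter=1000), population, stimulus, cv=5)
print(f"decoding accuracy: {acc.mean():.2f}")
```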


