Auditory Information Accelerates the Visuomotor Reaction Speed of Elite Badminton Players in Multisensory Environments

2021, Vol 15
Author(s): Thorben Hülsdünker, David Riedel, Hannes Käsbauer, Diemo Ruhnow, Andreas Mierau

Although vision is the dominant sensory system in sports, many situations require multisensory integration. Faster processing of auditory information in the brain may facilitate time-critical abilities such as reaction speed; however, previous research was limited by generic auditory and visual stimuli that did not consider audio-visual characteristics in ecologically valid environments. This study investigated reaction speed in response to sport-specific monosensory (visual and auditory) and multisensory (audio-visual) stimulation. Neurophysiological analyses identified the neural processes contributing to differences in reaction speed. Nineteen elite badminton players participated in this study. In an initial recording phase, the sound profile and shuttle speed of smash and drop strokes were captured on a badminton court using high-speed video cameras and binaural recordings. The speed and sound characteristics were transferred into auditory and visual stimuli and presented in a lab-based experiment in which participants reacted to sport-specific monosensory or multisensory stimulation. Auditory signal presentation was delayed by 26 ms to account for realistic audio-visual signal interaction on the court. N1 and N2 event-related potentials, as indicators of auditory and visual information perception/processing, respectively, were identified using a 64-channel EEG. Despite the 26 ms delay, auditory reactions were significantly faster than visual reactions (236.6 ms vs. 287.7 ms, p < 0.001) but still slower than reactions to multisensory stimulation (224.4 ms, p = 0.002). Across conditions, response times to smashes were faster than to drops (233.2 ms vs. 265.9 ms, p < 0.001). Faster reactions were paralleled by a lower latency and higher amplitude of the auditory N1 and visual N2 potentials. The results emphasize the potential of auditory information to accelerate reaction times in sport-specific multisensory situations and highlight auditory processes as a promising target for training interventions in racquet sports.
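
A note on the 26 ms offset: it corresponds to the extra time the stroke sound needs to travel from the opponent's racquet to the receiving player, while light arrives effectively instantaneously. A minimal Python sketch of this calculation, assuming a racquet-to-receiver distance of about 8.9 m and a speed of sound of 343 m/s (both values are assumptions; the study reports only the resulting 26 ms):

```python
# Acoustic travel time motivating the audio delay; the distance and
# speed of sound below are assumptions, not values from the study.
SPEED_OF_SOUND_M_S = 343.0   # dry air at ~20 degrees C
STRIKE_DISTANCE_M = 8.9      # assumed racquet-to-receiver distance

def audio_delay_ms(distance_m: float, v_sound: float = SPEED_OF_SOUND_M_S) -> float:
    """Arrival delay of the stroke sound relative to light, whose travel
    time over court distances (~30 ns) is negligible."""
    return 1000.0 * distance_m / v_sound

print(f"{audio_delay_ms(STRIKE_DISTANCE_M):.1f} ms")  # ~25.9 ms
```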

2019
Author(s): Sebastian Schindler, Maximilian Bruchmann, Bettina Gathmann, Robert Moeck, Thomas Straube

Emotional facial expressions lead to modulations of early event-related potentials (ERPs). However, it has so far remained unclear to what extent these modulations represent face-specific effects rather than differences in low-level visual features, and to what extent they depend on available processing resources. To examine these questions, we conducted two preregistered independent experiments (N = 40 in each) using different variants of a novel task that manipulates peripheral perceptual load across levels while keeping overall visual stimulation constant. Centrally, task-irrelevant angry, neutral, and happy faces and their Fourier phase-scrambled versions, which preserved low-level visual features, were presented. Both studies showed load-independent P1 and N170 emotion effects. Importantly, Bayesian analyses confirmed that these emotion effects were face-independent for the P1 but not for the N170 component. We conclude, first, that ERP modulations during the P1 interval strongly depend on low-level visual information, whereas the emotional N170 modulation requires the processing of figural facial features; second, both P1 and N170 modulations appear to be immune to a large range of variations in perceptual load.
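
The phase-scrambled control stimuli mentioned above follow a standard recipe: keep an image's Fourier amplitude spectrum (which carries the low-level feature statistics) while randomizing its phase spectrum (which carries the figural content). A minimal NumPy sketch of that generic technique, not the authors' exact pipeline:

```python
import numpy as np

def phase_scramble(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Fourier phase-scramble a grayscale image: amplitudes (low-level
    features) are preserved, phases (figural content) are randomized."""
    spectrum = np.fft.fft2(image)
    # Adding the phase field of real-valued white noise preserves the
    # conjugate symmetry needed for a (numerically) real output image.
    noise_phase = np.angle(np.fft.fft2(rng.standard_normal(image.shape)))
    scrambled_spectrum = np.abs(spectrum) * np.exp(1j * (np.angle(spectrum) + noise_phase))
    return np.real(np.fft.ifft2(scrambled_spectrum))

rng = np.random.default_rng(0)
face = rng.random((256, 256))        # placeholder for a face image
scrambled = phase_scramble(face, rng)
```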


2020, Vol 30 (10), pp. 5502-5516
Author(s): Chaim N. Katz, Kramay Patel, Omid Talakoub, David Groppe, Kari Hoffman, ...

Event-related potentials (ERPs) are a commonly used electrophysiological signature for studying mesial temporal lobe (MTL) function during visual memory tasks. The ERPs associated with the onset of visual stimuli (image onset) and with eye movements (saccades and fixations) provide insights into the mechanisms of their generation. We hypothesized that, because eye movements and image onset both provide MTL structures with salient visual information, they may engage similar neural mechanisms. To explore this question, we used intracranial electroencephalographic data from the MTLs of 11 patients with medically refractory epilepsy who participated in a visual search task. We characterized the electrophysiological responses of MTL structures to saccades, fixations, and image onset. We demonstrated that the image-onset response is an evoked/additive response with a low-frequency power increase. In contrast, ERPs following eye movements appeared to arise from phase resetting of higher frequencies than the image-onset ERP. Intriguingly, this reset was associated with saccade onset rather than termination (fixation), suggesting that it likely reflects an MTL response to a corollary discharge rather than to visual stimulation. We discuss the distinct mechanistic underpinnings of these responses, which shed light on the neural circuitry underlying visual memory processing.
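
The evoked-versus-phase-reset distinction is commonly tested with inter-trial phase clustering (ITPC): a phase reset aligns single-trial phases without necessarily adding power, whereas an additive evoked response raises both. A minimal sketch of ITPC with SciPy, assuming a (n_trials, n_times) array and a known sampling rate (the authors' actual analysis pipeline is not specified in the abstract):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def itpc(trials: np.ndarray, fs: float, band: tuple[float, float]) -> np.ndarray:
    """Inter-trial phase clustering per time point within a frequency band
    (0 = random phases across trials, 1 = perfectly aligned).
    `trials` has shape (n_trials, n_times)."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    phases = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# High post-event ITPC without a matching single-trial power increase
# points to phase resetting; a rise in both points to an additive response.
```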


2012, Vol 25 (0), pp. 170
Author(s): Georgiana Juravle, Tobias Heed, Charles Spence, Brigitte Roeder

Tactile information arriving at our sensory receptors is differentially processed over the various temporal phases of goal-directed movements. Using event-related potentials (ERPs), we investigated the neuronal correlates of tactile information processing during movement. Participants performed goal-directed reaches for an object placed centrally on the table in front of them. Tactile and visual stimuli were presented in separate trials during the different phases of the movement (i.e., preparation, execution, and post-movement). These stimuli were independently delivered to either the moving or the resting hand. In a control condition, participants only performed the movement while omission (movement-only) ERPs were recorded. Participants were told to ignore any sensory events and to concentrate solely on executing the movement. The results revealed enhanced ERPs between 80 and 200 ms after tactile stimulation and between 100 and 250 ms after visual stimulation. These modulations were greatest during the execution phase of the goal-directed movement; they were effector-based (i.e., significantly more negative for stimuli presented at the moving hand) and modality-independent (i.e., similar ERP enhancements were observed for both tactile and visual stimuli). The enhanced processing of sensory information during the execution phase suggests that incoming sensory information may be used to adjust the current motor plan. Moreover, these results indicate a tight interaction between attentional mechanisms and the sensorimotor system.
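
The reported effects are ERP amplitude differences within post-stimulus windows (80-200 ms tactile, 100-250 ms visual). A minimal sketch of the standard mean-window measurement, with an assumed sampling rate and data layout; the study additionally recorded movement-only (omission) ERPs for subtraction, which is omitted here for brevity:

```python
import numpy as np

FS = 500.0  # assumed sampling rate (Hz); epochs start `baseline_s` before onset

def mean_window_amplitude(epochs: np.ndarray, t0_s: float, t1_s: float,
                          baseline_s: float = 0.1) -> float:
    """Mean ERP amplitude in [t0, t1] after baseline correction.
    `epochs`: (n_trials, n_samples), stimulus onset at `baseline_s`."""
    n_base = int(baseline_s * FS)
    corrected = epochs - epochs[:, :n_base].mean(axis=1, keepdims=True)
    erp = corrected.mean(axis=0)  # average across trials
    i0, i1 = int((baseline_s + t0_s) * FS), int((baseline_s + t1_s) * FS)
    return float(erp[i0:i1].mean())

# e.g., tactile window: mean_window_amplitude(tactile_epochs, 0.080, 0.200)
```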


2019
Author(s): Milton A. V. Ávila, Rafael N. Ruggiero, João P. Leite, Lezio S. Bueno-Junior, Cristina M. Del-Ben

Audiovisual integration may improve unisensory perceptual performance and learning. Interestingly, this integration may occur even when one of the sensory modalities is not consciously perceived; for example, semantic auditory information may influence nonconscious visual perception. Studies have shown that the flow of nonconscious visual information is mostly restricted to early cortical processing, without reaching higher-order areas such as the parieto-frontal network. Because multisensory cortical interactions may already occur at early stages of processing, we hypothesized that nonconscious visual stimulation might facilitate auditory pitch learning. In this study we used a pitch learning paradigm in which individuals had to identify six pitches in a scale with constant intervals of 50 cents. Subjects were assigned to one of three training groups: the test group (auditory + congruent nonconscious visual, AV) and two control groups (auditory only, A, and auditory + incongruent nonconscious visual, AVi). Auditory-only tests were administered before and after training in all groups, and electroencephalography (EEG) was recorded throughout the experiment. Results show that the test group (AV, with congruent nonconscious visual stimuli) performed better during training and showed greater improvement from pre- to post-test. The control groups did not differ from one another. Changes in the AV group were mainly driven by performance on the first and last pitches of the scale. We also observed consistent EEG patterns associated with this performance improvement in the AV group, especially the maintenance of higher theta-band power after training in central and temporal areas and stronger theta-band synchrony between visual and auditory cortices. We therefore show that nonconscious multisensory interactions are powerful enough to boost auditory perceptual learning and that increased functional connectivity between early visual and auditory cortices after training might play a role in this effect. Moreover, we provide a methodological contribution for future studies on auditory perceptual learning, particularly those applied to relative and absolute pitch training.
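
The two EEG measures named above, band power and inter-site synchrony, have standard formulations. A minimal sketch of theta-band power and the phase-locking value (PLV) between two channels, assuming (n_trials, n_times) arrays, a sampling rate, and a 4-8 Hz theta definition (all assumptions; the study's exact parameters may differ):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250.0           # assumed sampling rate (Hz)
THETA = (4.0, 8.0)   # assumed theta band (Hz)

def band_analytic(x: np.ndarray) -> np.ndarray:
    """Analytic signal of the theta-filtered data; last axis is time."""
    b, a = butter(4, THETA, btype="bandpass", fs=FS)
    return hilbert(filtfilt(b, a, x, axis=-1), axis=-1)

def theta_power(x: np.ndarray) -> float:
    """Mean theta power of a (n_trials, n_times) array."""
    return float(np.mean(np.abs(band_analytic(x)) ** 2))

def plv(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Phase-locking value between two channels across trials, per time
    point (0 = no synchrony, 1 = perfect phase locking)."""
    dphi = np.angle(band_analytic(x)) - np.angle(band_analytic(y))
    return np.abs(np.mean(np.exp(1j * dphi), axis=0))
```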


2018
Author(s): Chaim N. Katz, Kramay Patel, Omid Talakoub, David Groppe, Kari Hoffman, ...

The electrophysiological signatures of encoding and retrieval recorded from mesial temporal lobe (MTL) structures are observed as event-related potentials (ERPs) during visual memory tasks. The waveforms of the ERPs associated with the onset of visual stimuli (image onset) and with eye movements (saccades and fixations) provide insights into the mechanisms of their generation. We hypothesized that, because eye movements and image onset (common methods of stimulus presentation when testing memory) both provide MTL structures with salient visual information, they may engage similar neural mechanisms. To explore this question, we used intracranial electroencephalographic (iEEG) data from the MTLs of 11 patients with medically refractory epilepsy who participated in a visual search task. We sought to characterize the electrophysiological responses of MTL structures to saccades, fixations, and image onset. We demonstrate that the image-onset response is an evoked/additive response with a low-frequency power increase and post-stimulus phase clustering. In contrast, ERPs following eye movements appeared to arise from phase resetting of higher frequencies than the image-onset ERP. Intriguingly, this reset was associated with saccade onset and not saccade termination (fixation), suggesting that it likely reflects an MTL response to a corollary discharge rather than to visual stimulation, in stark contrast to the image-onset response. The distinct mechanistic underpinnings of these two ERPs may help guide the future development of visual memory tasks.
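
Complementing the ITPC sketch given earlier for the published version of this study, the "evoked/additive" claim can be checked by comparing the power spectrum of the trial-averaged ERP (evoked power) against the average of single-trial power spectra (total power): a pure phase reset boosts evoked power through phase alignment without raising total power. A minimal sketch, assuming a (n_trials, n_times) array:

```python
import numpy as np

def evoked_vs_total_power(trials: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Evoked power (spectrum of the trial average) versus total power
    (average single-trial spectrum). An additive response raises both;
    a pure phase reset raises only the evoked spectrum."""
    evoked = np.abs(np.fft.rfft(trials.mean(axis=0))) ** 2
    total = np.mean(np.abs(np.fft.rfft(trials, axis=1)) ** 2, axis=0)
    return evoked, total
```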


2012, Vol 107 (12), pp. 3428-3432
Author(s): P.-J. Hsieh, J. T. Colas, N. Kanwisher

Recent findings suggest that neural representations in early auditory cortex reflect not only the physical properties of a stimulus but also high-level, top-down, and even cross-modal information. However, the nature of cross-modal information in auditory cortex remains poorly understood. Here, we used pattern analyses of fMRI data to ask whether early auditory cortex contains information about the visual environment. Our data show that (1) early auditory cortex contained information about a visual stimulus even when there was no bottom-up auditory signal, and (2) no influence of visual stimulation was observed in auditory cortex when the visual stimuli did not provide a context relevant to audition. Our findings attest to the capacity of auditory cortex to reflect high-level, top-down, and cross-modal information and indicate that the spatial patterns of activation in auditory cortex reflect contextual/implied auditory information rather than visual information per se.
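
"Pattern analyses" here refers to multivoxel pattern analysis (MVPA). A minimal sketch of the generic approach, not the authors' exact pipeline: cross-validated linear classification of the visual condition from auditory-cortex voxel patterns, with placeholder arrays standing in for real ROI data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder data: one activation pattern per trial from an assumed
# auditory-cortex ROI; real analyses would use beta estimates per trial.
rng = np.random.default_rng(0)
X = rng.standard_normal((80, 300))  # (n_trials, n_voxels) voxel patterns
y = rng.integers(0, 2, size=80)     # visual-condition label per trial

clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f}")  # ~0.50 (chance) for random data
```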


2009, Vol 23 (2), pp. 63-76
Author(s): Silke Paulmann, Sarah Jessen, Sonja A. Kotz

The multimodal nature of human communication is well established, yet few empirical studies have systematically examined the widely held belief that multimodal perception is facilitated in comparison to unimodal or bimodal perception. In the current experiment we first explored the processing of unimodally presented facial expressions. Auditory (prosodic and/or lexical-semantic) information was then presented together with the visual information to investigate the processing of bimodal (facial and prosodic cues) and multimodal (facial, lexical, and prosodic cues) human communication. Participants engaged in an identity identification task while event-related potentials (ERPs) were recorded to examine early processing mechanisms as reflected in the P200 and N300 components. The former component has repeatedly been linked to the processing of physical stimulus properties, the latter to more evaluative, "meaning-related" processing. The amplitudes of the P200 and N300 varied systematically with the number of information channels present: the multimodal condition elicited the smallest amplitude in both components, followed by a larger amplitude in each component for the bimodal condition, with the largest amplitude observed for the unimodal condition. These data suggest that multimodal information induces clear facilitation in comparison to unimodal or bimodal information. The advantage of multimodal perception, as reflected in the P200 and N300 components, may thus reflect one of the mechanisms allowing for fast and accurate information processing in human communication.
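
P200 and N300 amplitudes are typically quantified at the component peak within a search window. A minimal sketch of peak extraction from a trial-averaged ERP; the sampling rate and window boundaries below are illustrative assumptions, not the study's parameters:

```python
import numpy as np

FS = 500.0  # assumed sampling rate (Hz); `erp` time-locked to stimulus onset

def peak_in_window(erp: np.ndarray, t0_s: float, t1_s: float,
                   polarity: int) -> tuple[float, float]:
    """Peak amplitude and latency (s) of a component in [t0, t1].
    polarity: +1 for positive components (P200), -1 for negative (N300)."""
    i0, i1 = int(t0_s * FS), int(t1_s * FS)
    segment = polarity * erp[i0:i1]
    i_peak = int(np.argmax(segment))
    return float(erp[i0 + i_peak]), (i0 + i_peak) / FS

# e.g., assumed windows: peak_in_window(erp, 0.15, 0.25, +1)  # P200
#                        peak_in_window(erp, 0.25, 0.35, -1)  # N300
```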


2015, Vol 27 (3), pp. 492-508
Author(s): Nicholas E. Myers, Lena Walther, George Wallis, Mark G. Stokes, Anna C. Nobre

Working memory (WM) is strongly influenced by attention. In visual WM tasks, recall performance can be improved by an attention-guiding cue presented before encoding (precue) or during maintenance (retrocue). Although precues and retrocues recruit a similar frontoparietal control network, the two are likely to exhibit some processing differences, because precues invite anticipation of upcoming information whereas retrocues may guide prioritization, protection, and selection of information already in mind. Here we explored the behavioral and electrophysiological differences between precueing and retrocueing in a new visual WM task designed to permit a direct comparison between cueing conditions. We found marked differences in ERP profiles between the precue and retrocue conditions. In line with precues primarily generating an anticipatory shift of attention toward the location of an upcoming item, we found a robust lateralization in late cue-evoked potentials associated with target anticipation. Retrocues elicited a different pattern of ERPs that was compatible with an early selection mechanism, but not with stimulus anticipation. In contrast to the distinct ERP patterns, alpha-band (8–14 Hz) lateralization was indistinguishable between cue types (reflecting, in both conditions, the location of the cued item). We speculate that, whereas alpha-band lateralization after a precue is likely to enable anticipatory attention, lateralization after a retrocue may instead enable the controlled spatiotopic access to recently encoded visual information.
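
Alpha-band (8-14 Hz) lateralization is conventionally summarized as a normalized power difference between electrodes ipsilateral and contralateral to the cued location. A minimal sketch under assumed data shapes and sampling rate (the authors' precise computation is not given in the abstract):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250.0           # assumed sampling rate (Hz)
ALPHA = (8.0, 14.0)  # alpha band as defined in the abstract

def alpha_lateralization(contra: np.ndarray, ipsi: np.ndarray) -> np.ndarray:
    """Normalized lateralization index per time point, trial-averaged:
    (ipsi - contra) / (ipsi + contra). Positive values reflect the usual
    alpha suppression contralateral to the cued item.
    Inputs: (n_trials, n_times) arrays from mirror-symmetric electrodes."""
    b, a = butter(4, ALPHA, btype="bandpass", fs=FS)

    def power(x: np.ndarray) -> np.ndarray:
        return np.abs(hilbert(filtfilt(b, a, x, axis=-1), axis=-1)) ** 2

    p_contra = power(contra).mean(axis=0)
    p_ipsi = power(ipsi).mean(axis=0)
    return (p_ipsi - p_contra) / (p_ipsi + p_contra)
```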

