Auditory attention: Recently Published Documents


TOTAL DOCUMENTS: 586 (FIVE YEARS: 164)
H-INDEX: 43 (FIVE YEARS: 6)

2022
Author(s): Simon Geirnaert, Tom Francart, Alexander Bertrand

The goal of auditory attention decoding (AAD) is to determine which of multiple competing speakers a listener is attending to, based on brain signals recorded via, e.g., electroencephalography (EEG). AAD algorithms are a fundamental building block of so-called neuro-steered hearing devices, which would identify the speaker to be amplified from the listener's brain activity. A common approach is to train a subject-specific decoder that reconstructs the amplitude envelope of the attended speech signal. However, training this decoder requires a dedicated 'ground-truth' EEG recording of the subject under test, during which the attended speaker is known. Furthermore, this decoder remains fixed during operation and thus cannot adapt to changing conditions and situations. We therefore propose an online time-adaptive unsupervised stimulus reconstruction method that continuously and automatically adapts over time as new EEG and audio data stream in. The adaptive decoder does not require ground-truth attention labels obtained from a training session with the end user; instead, it can be initialized with a generic subject-independent decoder or even completely random values. We propose two implementations, a sliding-window and a recursive implementation, which we extensively validate on multiple performance metrics across three independent datasets. We show that the proposed time-adaptive unsupervised decoder outperforms a time-invariant supervised decoder, representing an important step towards practically applicable AAD algorithms for neuro-steered hearing devices.
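The stimulus-reconstruction approach described above lends itself to a compact illustration. Below is a minimal sketch of a backward (decoding) model: a ridge-regression decoder maps time-lagged EEG to the attended speech envelope, and attention is decided by which speaker's envelope correlates best with the reconstruction. The shapes, lag count, and regularization value are illustrative assumptions, and the paper's unsupervised time-adaptive updating is deliberately omitted.

```python
# Minimal sketch of backward-model auditory attention decoding (AAD).
# Assumptions (not from the paper): lag count, ridge value, single window.
import numpy as np

def lag_matrix(eeg, n_lags):
    """Stack time-lagged copies of each EEG channel: (T, C) -> (T, C * n_lags)."""
    T, C = eeg.shape
    X = np.zeros((T, C * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * C:(lag + 1) * C] = eeg[:T - lag]
    return X

def train_decoder(eeg, attended_env, n_lags=32, ridge=1e3):
    """Least-squares decoder d minimizing ||X d - envelope||^2 + ridge * ||d||^2."""
    X = lag_matrix(eeg, n_lags)
    R = X.T @ X + ridge * np.eye(X.shape[1])  # regularized autocorrelation
    r = X.T @ attended_env                    # cross-correlation with envelope
    return np.linalg.solve(R, r)

def decode_attention(eeg, env_a, env_b, decoder, n_lags=32):
    """Reconstruct the envelope from EEG; the better-correlating speaker wins."""
    rec = lag_matrix(eeg, n_lags) @ decoder
    corr = lambda a, b: np.corrcoef(a, b)[0, 1]
    return 'A' if corr(rec, env_a) > corr(rec, env_b) else 'B'
```

Roughly speaking, the recursive variant proposed in the paper would keep updating the correlation statistics (R and r in the sketch) with an exponential forgetting factor as new windows stream in, relabeling windows with its own predictions instead of ground-truth attention labels.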


Author(s): Chirag Ahuja, Divyashikha Setia

2021, Vol 12
Author(s): Michel Bürgel, Lorenzo Picinali, Kai Siedenburg

Listeners can attend to and track instruments or singing voices in complex musical mixtures, even though the acoustic energy of sounds from individual instruments may overlap in time and frequency. In popular music, lead vocals are often accompanied by sound mixtures from a variety of instruments, such as drums, bass, keyboards, and guitars. However, little is known about how the perceptual organization of such musical scenes is affected by selective attention, and which acoustic features play the most important role. To investigate these questions, we explored the role of auditory attention in a realistic musical scenario. We conducted three online experiments in which participants detected single cued instruments or voices in multi-track musical mixtures. Stimuli consisted of 2-s multi-track excerpts of popular music. In one condition, the target cue preceded the mixture, allowing listeners to selectively attend to the target. In another condition, the target was presented after the mixture, requiring a more “global” mode of listening. Performance differences between these two conditions were interpreted as effects of selective attention. In Experiment 1, detection performance depended on the target’s instrument category, but listeners were more accurate when the target was presented before the mixture than after it. Lead vocals were nearly unaffected by this change in presentation order and were detected most accurately of all instruments, suggesting a particular salience of vocal signals in musical mixtures. In Experiment 2, filtering was used to avoid potential spectral masking of target sounds. Although detection accuracy increased for all instruments, a similar pattern of instrument-specific differences between presentation orders was observed. In Experiment 3, adjusting the sound-level differences between the targets reduced the effect of presentation order but did not affect the differences between instruments. While both acoustic manipulations facilitated the detection of targets, vocal signals remained particularly salient, which suggests that the manipulated features do not account for vocal salience. These findings demonstrate that lead vocals serve as robust attractor points of auditory attention regardless of the manipulation of low-level acoustic cues.


Author(s): Lisa Straetmans, B. Holtze, Stefan Debener, Manuela Jaeger, Bojana Mirkovic

Abstract Objective. Neuro-steered assistive technologies have been suggested as a major advancement for future devices such as neuro-steered hearing aids. Auditory attention decoding (AAD) methods would in that case allow an attended speaker to be identified within complex auditory environments from neural data alone. So far, decoding the attended speaker from neural information has only been done in controlled laboratory settings, yet it is known that ever-present factors such as distraction and movement are reflected in the attention-related parameters of the neural signal. Approach. In the current study we therefore applied a two-competing-speaker paradigm to investigate the performance of a commonly applied EEG-based AAD model outside the laboratory, during leisurely walking and distraction. Unique environmental sounds were added to the auditory scene and served as distractor events. Main results. The current study shows, for the first time, that the attended speaker can be accurately decoded during natural movement. Decoding was significantly above chance level at a temporal resolution as short as 5 s, without artifact attenuation. Further, as hypothesized, we found a decrease in attention to both the to-be-attended and the to-be-ignored speech streams after the occurrence of a salient event. Additionally, we demonstrate that neural correlates of distraction can be predicted with a computational model of auditory saliency based on acoustic features. Conclusion. Taken together, our study shows that auditory attention tracking outside the laboratory in ecologically valid conditions is feasible, a step towards the development of future neuro-steered hearing aids.
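For context on what "significantly above chance" means at such short decision windows: with two competing speakers, a common approach (assumed here, not taken from the paper) treats per-window decisions under the null hypothesis as fair coin flips and derives the significance threshold from a binomial distribution.

```python
# Minimal sketch of an above-chance accuracy threshold for two-speaker AAD.
# n_windows is hypothetical; e.g., 10 minutes of EEG split into 5-s windows.
from scipy.stats import binom

n_windows = 120
chance_p = 0.5  # two competing speakers -> 50% chance per window
threshold = binom.ppf(0.95, n_windows, chance_p) / n_windows
print(f"accuracy must exceed {threshold:.3f} to be significant at p < 0.05")
```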


2021, Vol 2021, pp. 1-13
Author(s): Wu Yufei, Wang Dandan, Zhu Yanwei

Digital sensors use biotechnology and information-processing technology to strengthen the processing of relevant visual and auditory information, helping to ensure that the receiver obtains more accurate information, thereby improving the learning effect and reducing environmental interference. This paper designs an experiment to explore the role of digital sensors in audio-visual language teaching, providing a reference for future applications of digital sensors. The impulse response function from sensor technology is introduced, and the speech time-domain envelope and the time-varying mouth area captured by the sensor device are computed. Auditory attention-shift detection based on gaze-rotation estimation is carried out by fusing auditory attention decoding with a sensor-based attention-switch detection method, and the characteristics of the sensor's HEOG (horizontal electrooculogram) signal are analyzed. The results show that the proposed algorithm performs well.
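Of the quantities mentioned above, the speech time-domain envelope is the most standard to compute. A minimal sketch follows, using the magnitude of the Hilbert analytic signal with low-pass smoothing; the filter order and cutoff are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of speech time-domain envelope extraction.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def speech_envelope(audio, fs, cutoff_hz=8.0):
    """Instantaneous amplitude, low-pass filtered to keep slow modulations."""
    env = np.abs(hilbert(audio))            # magnitude of the analytic signal
    b, a = butter(2, cutoff_hz / (fs / 2))  # 2nd-order low-pass (assumed order/cutoff)
    return filtfilt(b, a, env)              # zero-phase smoothing
```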


2021
Author(s): Ryan J Morrill, James Bigelow, Jefferson DeKloe, Andrea R Hasenstaub

In everyday behavior, sensory systems are in constant competition for attentional resources, but the cellular and circuit-level mechanisms of modality-selective attention remain largely uninvestigated. We conducted translaminar recordings in mouse auditory cortex (AC) during an audiovisual (AV) attention shifting task. Attending to sound elements in an AV stream reduced both pre-stimulus and stimulus-evoked spiking activity, primarily in deep layer neurons. Despite reduced spiking, stimulus decoder accuracy was preserved, suggesting improved sound encoding efficiency. Similarly, task-irrelevant probe stimuli during intertrial intervals evoked fewer spikes without impairing stimulus encoding, indicating that these attention influences generalized beyond training stimuli. Importantly, these spiking reductions predicted trial-to-trial behavioral accuracy during auditory attention, but not visual attention. Together, these findings suggest auditory attention facilitates sound discrimination by filtering sound-irrelevant spiking in AC, and that the deepest cortical layers may serve as a hub for integrating extramodal contextual information.
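To make the phrase "stimulus decoder accuracy" concrete, here is a minimal sketch of the generic kind of analysis involved (assumed for illustration, not the authors' pipeline): classifying stimulus identity from trial-wise spike counts with cross-validation, on simulated data.

```python
# Minimal sketch of cross-validated stimulus decoding from spike counts.
# All data here are simulated; shapes and rates are arbitrary assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 40
labels = rng.integers(0, 2, n_trials)               # two hypothetical stimuli
rates = 5 + 2 * labels[:, None]                     # stimulus-dependent firing rate
counts = rng.poisson(rates, (n_trials, n_neurons))  # simulated spike counts

acc = cross_val_score(LinearDiscriminantAnalysis(), counts, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")
```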


2021, pp. 095679762110164
Author(s): Ronan McGarrigle, Sarah Knight, Benjamin W. Y. Hornsby, Sven Mattys

Listening-related fatigue is a potentially serious negative consequence of an aging auditory and cognitive system. However, the impact of age on listening-related fatigue and the factors underpinning any such effect remain unexplored. Using data from a large sample of adults (N = 281), we conducted a conditional process analysis to examine potential mediators and moderators of age-related changes in listening-related fatigue. Mediation analyses revealed opposing effects of age on listening-related fatigue: Older adults with greater perceived hearing impairment tended to report increased listening-related fatigue. However, aging was otherwise associated with decreased listening-related fatigue via reductions in both mood disturbance and sensory-processing sensitivity. Results suggested that the effect of auditory attention ability on listening-related fatigue was moderated by sensory-processing sensitivity; for individuals with high sensory-processing sensitivity, better auditory attention ability was associated with increased fatigue. These findings shed light on the perceptual, cognitive, and psychological factors underlying age-related changes in listening-related fatigue.
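As a concrete illustration of the moderation component of such a conditional process analysis, the sketch below tests whether a simulated sensory-processing-sensitivity variable moderates the effect of attention ability on fatigue via an interaction term in ordinary least squares. The variable names and simulated effect sizes are hypothetical, not the study's data or model.

```python
# Minimal sketch of a moderation test via an OLS interaction term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 281
df = pd.DataFrame({
    "attention": rng.normal(size=n),  # auditory attention ability (z-scored)
    "sps": rng.normal(size=n),        # sensory-processing sensitivity
})
# Simulated outcome in which the attention effect depends on SPS.
df["fatigue"] = 0.3 * df["sps"] + 0.4 * df["attention"] * df["sps"] + rng.normal(size=n)

model = smf.ols("fatigue ~ attention * sps", data=df).fit()
print(model.summary().tables[1])  # the attention:sps row is the moderation test
```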


2021
Author(s): Winko W. An, Alexander Pei, Abigail L. Noyce, Barbara Shinn-Cunningham

2021
Author(s): Enze Su, Siqi Cai, Peiwen Li, Longhan Xie, Haizhou Li
