Babble Noise: Recently Published Documents

TOTAL DOCUMENTS: 50 (five years: 17)
H-INDEX: 7 (five years: 0)

2021 ◽  
Vol 15 ◽  
Author(s):  
Liping Zhang ◽  
Friederike Schlaghecken ◽  
James Harte ◽  
Katherine L. Roberts

Objectives: Auditory perceptual learning studies tend to focus on the nature of the target stimuli. However, features of the background noise can also have a significant impact on the amount of benefit that participants obtain from training. This study explores whether perceptual learning of speech in background babble noise generalizes to other, real-life environmental background noises (car and rain), and whether the benefits are sustained over time.

Design: Normal-hearing native English speakers were randomly assigned to a training (n = 12) or control group (n = 12). Both groups completed a pre- and post-test session in which they identified Bamford-Kowal-Bench (BKB) target words in babble, car, or rain noise. The training group completed speech-in-babble-noise training on three consecutive days between the pre- and post-tests. A follow-up session was conducted between 8 and 18 weeks after the post-test session (training group: n = 9; control group: n = 7).

Results: Participants who received training had significantly higher post-test word identification accuracy than control participants for all three types of noise, although benefits were greatest for the babble-noise condition and weaker for the car- and rain-noise conditions. Both training and control groups maintained their pre- to post-test improvement over a period of several weeks for speech in babble noise, but returned to pre-test accuracy for speech in car and rain noise.

Conclusion: The findings show that training benefits can generalize from speech in babble noise to speech in other types of environmental noise. Both groups sustained their learning over a period of several weeks for speech in babble noise. As the control group received equal exposure to all three noise types, the sustained learning with babble noise, but not the other noises, implies that a structural feature of babble noise was conducive to the sustained improvement. These findings emphasize the importance of considering the background noise as well as the target stimuli in auditory perceptual learning studies.


PLoS ONE ◽  
2021 ◽  
Vol 16 (2) ◽  
pp. e0246842
Author(s):  
Joseph C. Toscano ◽  
Cheyenne M. Toscano

Face masks are an important tool for preventing the spread of COVID-19. However, it is unclear how different types of masks affect speech recognition in different levels of background noise. To address this, we investigated the effects of four masks (a surgical mask, N95 respirator, and two cloth masks) on recognition of spoken sentences in multi-talker babble. In low levels of background noise, masks had little to no effect, with no more than a 5.5% decrease in mean accuracy compared to a no-mask condition. In high levels of noise, mean accuracy was 2.8-18.2% lower than the no-mask condition, but the surgical mask continued to show no significant difference. The results demonstrate that different types of masks generally yield similar accuracy in low levels of background noise, but differences between masks become more apparent in high levels of noise.
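The low- and high-noise conditions in studies like this are created by mixing the target speech with multi-talker babble at a chosen signal-to-noise ratio. The paper does not publish its mixing procedure; the sketch below shows the standard power-scaling step, with synthetic signals standing in for real recordings (an illustrative assumption only).

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
    then return the mixture. Both inputs are equal-length 1-D arrays."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Required noise power for the target SNR: SNR_dB = 10*log10(Ps / Pn)
    target_noise_power = p_speech / (10 ** (snr_db / 10))
    noise_scaled = noise * np.sqrt(target_noise_power / p_noise)
    return speech + noise_scaled

# Synthetic stand-ins for a speech recording and a babble recording
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)
babble = rng.standard_normal(16000)
mix = mix_at_snr(speech, babble, snr_db=0.0)  # equal speech and noise power
```

Lower `snr_db` values (e.g. -5 or -10) correspond to the harder, high-noise conditions.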


2021 ◽  
Vol 11 (2) ◽  
pp. 277
Author(s):  
Miseung Koo ◽  
Jihui Jeon ◽  
Hwayoung Moon ◽  
Myungwhan Suh ◽  
Junho Lee ◽  
...  

This preliminary study assessed the effects of noise and stimulus presentation order on the recall of spoken words, and recorded pupil sizes while normal-hearing listeners were trying to encode a series of words for a subsequent recall task. In three listening conditions (stationary noise in Experiment 1; quiet versus four-talker babble in Experiment 2), participants were instructed to remember as many words as possible and to recall them in any order after each list of seven sentences. In the two noise conditions, lists of sentences fixed at 65 dB SPL were presented at an easily audible level via a loudspeaker. Reading span (RS) scores were used as a grouping variable, based on a median split. The primacy effect was present regardless of the noise interference, and the high-RS group significantly outperformed the low-RS group at free recall in both the quiet and four-talker babble noise conditions. RS scores were positively correlated with free-recall scores. In both the quiet and four-talker babble noise conditions, sentence baselines, after correction to the initial stimulus baseline, increased significantly with increasing memory load. Larger sentence baselines but smaller peak pupil dilations appeared to be associated with noise interference. The analysis method of pupil dilation used in this study is likely to provide a more thorough understanding of how listeners respond to a later recall task than previously used methods. Further studies are needed to confirm the applicability of our method in people with impaired hearing, using multiple repetitions to estimate the allocation of relevant cognitive resources.
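The "sentence baselines after correction to the initial stimulus baseline" and "peak pupil dilations" in this abstract correspond to two standard subtractive pupillometry measures. The study's exact analysis windows are not given, so the function below is a generic sketch; the window length and example trace are illustrative assumptions.

```python
import numpy as np

def pupil_metrics(trace, initial_baseline, pre_window):
    """Two common pupillometry measures for a single sentence:
    - sentence baseline: mean pupil size over the pre-sentence window,
      expressed relative to the initial stimulus baseline
    - peak pupil dilation: maximum of the trace after subtracting the
      sentence's own pre-window mean
    `trace` is a 1-D array of pupil diameters; `pre_window` is the
    number of leading samples forming the pre-sentence period."""
    pre_mean = np.mean(trace[:pre_window])
    sentence_baseline = pre_mean - initial_baseline
    peak_dilation = np.max(trace[pre_window:]) - pre_mean
    return sentence_baseline, peak_dilation

# Illustrative trace: flat 3.0 mm pre-window, then a 0.5 mm dilation peak
trace = np.concatenate([np.full(10, 3.0), 3.0 + 0.5 * np.hanning(51)])
base, peak = pupil_metrics(trace, initial_baseline=2.8, pre_window=10)
```

A growing sentence baseline across a list, as reported here, would show up as `base` increasing with list position at a fixed `initial_baseline`.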


2021 ◽  
Vol 12 ◽  
Author(s):  
Margot Buyle ◽  
Viktoria Azoidou ◽  
Marousa Pavlou ◽  
Vincent Van Rompaey ◽  
Doris-Eva Bamiou

Background: The ageing process may degrade an individual's balance control, hearing capacity, and cognitive function. Older adults perform worse on simultaneously executed balance and secondary tasks (i.e., dual-task performance) than younger adults and may be more vulnerable to auditory distraction.

Aim: The purpose of this study was to determine the effect of passive listening on functional gait in healthy older vs. younger adults, and to investigate the effect of age, functional gait, hearing ability, and cognitive functioning on dual-task performance.

Methods: Twenty young and 20 older healthy adults were recruited. Functional gait (Functional Gait Assessment in silent and noisy conditions), hearing function (audiogram; Speech in Babble test), and cognitive ability (Cambridge Neuropsychological Test Automated Battery) were measured.

Results: Overall, a significant difference between functional gait performance in the silent vs. noisy condition was found (p = 0.022), with no significant difference in dual-task cost between the two groups (p = 0.11). Correlations were found between increasing age, worse functional gait performance, poorer hearing capacity, and lower performance on cognitive function tasks. Interestingly, worse performance on attention tasks appeared to be associated with worse functional gait performance in the noisy condition.

Conclusion: Passive listening to multi-talker babble noise can affect functional gait in both young and older adults. This effect could result from the cognitive load of the babble noise, due to the engagement of attention networks by the unattended speech.
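The dual-task cost compared between groups quantifies how much functional gait degrades when the listening condition is added. The abstract does not state the formula used, so the relative-cost definition below is an assumption, though a common one in the dual-task literature:

```python
def dual_task_cost(single_task_score, dual_task_score):
    """Relative dual-task cost (%) for a measure where higher is better.
    Positive values mean performance dropped under the dual task."""
    return (single_task_score - dual_task_score) / single_task_score * 100.0

# Hypothetical Functional Gait Assessment scores (max 30):
# 28 in the silent condition vs. 25 while listening to babble
cost = dual_task_cost(28, 25)
```

With these made-up scores the cost is roughly 11%; comparing this quantity rather than raw scores controls for baseline differences between the age groups.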


2021 ◽  
Vol 2021 (2) ◽  
pp. 130-150
Author(s):  
Yuchen Liu ◽  
Ziyu Xiang ◽  
Eun Ji Seong ◽  
Apu Kapadia ◽  
Donald S. Williamson

Voice-activated commands have become a key feature of popular devices such as smartphones, home assistants, and wearables. For convenience, many people configure their devices to be 'always on' and listening for voice commands from the user using a trigger phrase such as "Hey Siri," "Okay Google," or "Alexa." However, false positives for these triggers often result in privacy violations, with conversations being inadvertently uploaded to the cloud. In addition, malware that can record one's conversations remains a significant threat to privacy. Unlike with cameras, which people can physically obscure and be assured of their privacy, people have no way of knowing whether their microphone is indeed off and are left with no tangible defenses against voice-based attacks. We envision a general-purpose physical defense that uses a speaker to inject specialized obfuscating 'babble noise' into the microphones of devices to protect against automated and human-based attacks. We present a comprehensive study of how specially crafted, personalized 'babble' noise ('MyBabble') can be effective at moderate signal-to-noise ratios and can provide a viable defense against microphone-based eavesdropping attacks.
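The paper's personalized 'MyBabble' construction is not described in this abstract; generic multi-talker babble, however, is simply a sum of single-talker recordings. A minimal sketch, assuming equal-length mono signals (the synthetic 'talkers' below stand in for real speech):

```python
import numpy as np

def make_babble(talker_signals):
    """Sum several single-talker recordings (equal-length 1-D arrays)
    into multi-talker babble, normalised to unit RMS so the playback
    level can be set independently afterwards."""
    babble = np.sum(np.stack(talker_signals), axis=0)
    return babble / np.sqrt(np.mean(babble ** 2))

# Four synthetic 'talkers' standing in for real speech recordings
rng = np.random.default_rng(1)
talkers = [rng.standard_normal(8000) for _ in range(4)]
babble = make_babble(talkers)
```

As the talker count grows, the sum approaches a speech-shaped noise, which is why babble is attractive as an obfuscating masker: it occupies the same spectro-temporal region as the speech being protected.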


Author(s):  
Hanani Abdul Manan ◽  
Noorazrul Azmie Yahya ◽  
Ahmad Nazlim Yusoff

2021 ◽  
Vol 25 ◽  
pp. 233121652110237
Author(s):  
Mengfan Wu ◽  
Oscar M. Cañete ◽  
Jesper Hvass Schmidt ◽  
Michal Fereczkowski ◽  
Tobias Neher

Hearing aid (HA) users differ greatly in their speech-in-noise (SIN) outcomes. This could be because the degree to which current HA fittings can address individual listening needs differs across users and listening situations. In two earlier studies, an auditory test battery and a data-driven method were developed for classifying HA candidates into four distinct auditory profiles differing in audiometric hearing loss and suprathreshold hearing abilities. This study explored aided SIN outcome for three of these profiles in different noise scenarios. Thirty-one older habitual HA users and six young normal-hearing listeners participated. Two SIN tasks were administered: a speech recognition task and a “just follow conversation” task requiring the participants to self-adjust the target-speech level. Three noise conditions were tested: stationary speech-shaped noise, speech-shaped babble noise, and speech-shaped babble noise with competing dialogues. Each HA user was fitted with three HAs from different manufacturers using their recommended procedures. Real-ear measurements were performed to document the final gain settings. The results showed that HA users with mild hearing deficits performed better than HA users with pronounced hearing deficits on the speech recognition task but not the just follow conversation task. Moreover, participants with pronounced hearing deficits obtained different SIN outcomes with the tested HAs, which appeared to be related to differences in HA gain. Overall, these findings imply that current proprietary fitting strategies are limited in their ability to ensure good SIN outcomes, especially for users with pronounced hearing deficits, for whom the choice of device seems most consequential.


2020 ◽  
Vol 21 (6) ◽  
pp. 527-544
Author(s):  
H. C. Stronks ◽  
J. J. Briaire ◽  
J. H. M. Frijns

Cochlear implant (CI) users have more difficulty understanding speech in temporally modulated noise than in steady-state (SS) noise. This is thought to be caused by the limited low-frequency information that CIs provide, as well as by the envelope coding in CIs that discards the temporal fine structure (TFS). Contralateral amplification with a hearing aid, referred to as bimodal hearing, can potentially provide CI users with TFS cues to complement the envelope cues provided by the CI signal. In this study, we investigated whether the use of a CI alone provides access to only envelope cues and whether acoustic amplification can provide additional access to TFS cues. To this end, we evaluated speech recognition in bimodal listeners, using SS noise and two amplitude-modulated noise types, namely babble noise and amplitude-modulated steady-state (AMSS) noise. We hypothesized that speech recognition in noise depends on the envelope of the noise, but not on its TFS, when listening with a CI. Secondly, we hypothesized that the amount of benefit gained by the addition of a contralateral hearing aid depends on both the envelope and TFS of the noise. The two amplitude-modulated noise types decreased speech recognition more effectively than SS noise. Against expectations, however, we found that babble noise decreased speech recognition more effectively than AMSS noise in the CI-only condition. Therefore, we rejected our hypothesis that TFS is not available to CI users. In line with expectations, we found that the bimodal benefit was highest in babble noise. However, there was no significant difference between the bimodal benefit obtained in SS and AMSS noise. Our results suggest that a CI alone can provide TFS cues and that bimodal benefits in noise depend on TFS, but not on the envelope of the noise.
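The AMSS noise contrasted with babble here is steady-state noise given a fluctuating envelope while keeping the carrier's spectrum. The study's modulator parameters are not stated in the abstract, so the sinusoidal modulator below (rate, depth) is an illustrative assumption:

```python
import numpy as np

def amplitude_modulate(carrier, mod_rate_hz, mod_depth, fs):
    """Impose a sinusoidal amplitude envelope on a steady-state noise
    carrier. A depth of 0 leaves the carrier unchanged (steady state);
    a depth of 1 gives full modulation."""
    t = np.arange(len(carrier)) / fs
    envelope = 1.0 + mod_depth * np.sin(2.0 * np.pi * mod_rate_hz * t)
    return carrier * envelope

rng = np.random.default_rng(2)
ss_noise = rng.standard_normal(16000)  # stand-in for speech-shaped noise
amss = amplitude_modulate(ss_noise, mod_rate_hz=8.0, mod_depth=0.75, fs=16000)
```

Unlike babble, the modulated carrier has a regular envelope and no residual fine-structure speech cues, which is what lets the study separate envelope effects from TFS effects.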


2020 ◽  
pp. 030573562095361
Author(s):  
Ebtesam Sajjadi ◽  
Ali Mohammadzadeh ◽  
Nushin Sayadi ◽  
Ahmadreza Nazeri ◽  
Seyyed Mehdi Tabatabai

Everyday communication mostly occurs in the presence of various background noises and competing talkers. Studies have shown that musical training can have a positive effect on auditory processing, particularly in challenging listening situations. To our knowledge, no group has specifically studied the advantage of musical training for the perception of consonants in the presence of background noise. We hypothesized that the musician advantage in speech-in-noise processing may also result in enhanced perception of speech units such as consonants in noise. Therefore, this study aimed to compare the recognition of stops and fricatives, which constitute the largest share of Persian consonants, in the presence of 12-talker babble noise between musicians and non-musicians. For this purpose, stops and fricatives were presented in consonant-vowel-consonant format, embedded in noise at three signal-to-noise ratios of 0, −5, and −10 dB. The study was conducted on 40 young listeners (20 musicians and 20 non-musicians) with normal hearing. Our outcomes indicated that musicians outperformed non-musicians in the recognition of stops and fricatives at all three signal-to-noise ratios. These findings provide important evidence about the impact of musical instruction on the processing of consonants and highlight the role of musical training in perceptual abilities.


2020 ◽  
Vol 40 (3) ◽  
pp. 300-325
Author(s):  
Lian van Berkel-van Hoof ◽  
Daan Hermans ◽  
Harry Knoors ◽  
Ludo Verhoeven

Previous research found a beneficial effect of augmentative signs (signs from a sign language used alongside speech) on spoken word learning by signing deaf and hard-of-hearing (DHH) children. The present study compared oral DHH children with hearing children tested in babble noise, in order to investigate whether prolonged experience with limited auditory access is required for a sign effect to occur. Nine- to 11-year-old children participated in a word learning task in which half of the words were presented with an augmentative sign. Non-signing DHH children (N = 19) were trained in normal sound, whereas a control group of hearing peers (N = 38) was trained in multi-speaker babble noise. The researchers also measured verbal short-term memory (STM). For the DHH children, there was a sign effect on the speed of spoken word recognition, but not on accuracy, and no interaction between the sign effect in reaction times and verbal STM. The hearing children showed no sign effect for either speed or accuracy. These results suggest that it is not necessarily sign language knowledge, but rather prolonged experience with limited auditory access, that is required for children to benefit from signs for spoken word learning, regardless of children's verbal STM.

