Effects of Additional Low-Pass–Filtered Speech on Listening Effort for Noise-Band–Vocoded Speech in Quiet and in Noise

2019 ◽  
Vol 40 (1) ◽  
pp. 3-17 ◽  
Author(s):  
Carina Pals ◽  
Anastasios Sarampalis ◽  
Mart van Dijk ◽  
Deniz Başkent


Author(s):  
Martin Chavant ◽  
Alexis Hervais-Adelman ◽  
Olivier Macherey

Purpose An increasing number of individuals with residual or even normal contralateral hearing are being considered for cochlear implantation. It remains unknown whether the presence of contralateral hearing is beneficial or detrimental to their perceptual learning of cochlear implant (CI)–processed speech. The aim of this experiment was to provide a first insight into this question using acoustic simulations of CI processing. Method Sixty normal-hearing listeners took part in an auditory perceptual learning experiment. Each subject was randomly assigned to one of three groups of 20 referred to as NORMAL, LOWPASS, and NOTHING. The experiment consisted of two test phases separated by a training phase. In the test phases, all subjects were tested on recognition of monosyllabic words passed through a six-channel “PSHC” vocoder presented to a single ear. In the training phase, which consisted of listening to a 25-min audio book, all subjects were also presented with the same vocoded speech in one ear but the signal they received in their other ear differed across groups. The NORMAL group was presented with the unprocessed speech signal, the LOWPASS group with a low-pass filtered version of the speech signal, and the NOTHING group with no sound at all. Results The improvement in speech scores following training was significantly smaller for the NORMAL than for the LOWPASS and NOTHING groups. Conclusions This study suggests that the presentation of normal speech in the contralateral ear reduces or slows down perceptual learning of vocoded speech but that an unintelligible low-pass filtered contralateral signal does not have this effect. Potential implications for the rehabilitation of CI patients with partial or full contralateral hearing are discussed.


2020 ◽  
Vol 24 ◽  
pp. 233121652097563
Author(s):  
Christopher F. Hauth ◽  
Simon C. Berning ◽  
Birger Kollmeier ◽  
Thomas Brand

The equalization-cancellation model is often used to predict the binaural masking level difference. Previously, its application to speech in noise required separate knowledge of the speech and noise signals in order to maximize the signal-to-noise ratio (SNR). Here, a novel, blind equalization-cancellation model is introduced that can operate on the mixed signals. This approach does not require any assumptions about particular sound source directions. It uses different strategies for positive and negative SNRs, with the switching between the two steered by a blind decision stage that exploits modulation cues. The output of the model is a single-channel signal with enhanced SNR, which was analyzed using the speech intelligibility index to generate speech intelligibility predictions. In a first experiment, the model was tested on experimental data obtained in a scenario with spatially separated target and masker signals. Predicted speech recognition thresholds agreed well with measured speech recognition thresholds, with a root-mean-square error of less than 1 dB. A second experiment investigated signals at positive SNRs, achieved using time-compressed and low-pass filtered speech. The results demonstrated that binaural unmasking of speech occurs at positive SNRs and that the modulation-based switching strategy can predict the experimental results.
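The core equalization-cancellation operation can be sketched in a few lines. This toy version is not the authors' blind model: it assumes the masker's interaural delay is known, delays one ear to equalize the masker across ears, and subtracts the ears to cancel it, so that a diotic target survives the subtraction with an improved SNR.

```python
import numpy as np

def ec_cancel(left, right, masker_delay):
    """Equalization-cancellation: time-align the masker across the two
    ears (equalization), then subtract one ear from the other
    (cancellation)."""
    return left - np.roll(right, -masker_delay)

# Diotic target; masker lags by 8 samples in the right ear
rng = np.random.default_rng(1)
target = rng.standard_normal(4000)
masker = rng.standard_normal(4000)
left = target + masker
right = target + np.roll(masker, 8)

out = ec_cancel(left, right, 8)
# With a circular shift the masker component cancels exactly,
# while the (misaligned) target component survives
residual_masker = ec_cancel(masker, np.roll(masker, 8), 8)
```

In the real model the equalization parameters are estimated blindly from the mixtures rather than given, and internal noise limits how completely the masker can be cancelled.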


2017 ◽  
Vol 28 (09) ◽  
pp. 823-837 ◽  
Author(s):  
Marc A. Brennan ◽  
Dawna Lewis ◽  
Ryan McCreery ◽  
Judy Kopun ◽  
Joshua M. Alexander

Nonlinear frequency compression (NFC) can improve the audibility of high-frequency sounds by lowering them to a frequency where audibility is better; however, this lowering results in spectral distortion. Consequently, performance is a combination of the effects of increased access to high-frequency sounds and the detrimental effects of spectral distortion. Previous work has demonstrated positive benefits of NFC on speech recognition when NFC is set to improve audibility while minimizing distortion. However, the extent to which NFC impacts listening effort is not well understood, especially for children with sensorineural hearing loss (SNHL). Purpose To examine the impact of NFC on recognition and listening effort for speech in adults and children with SNHL. Research Design Within-subject, quasi-experimental study. Participants listened to amplified nonsense words that were (1) frequency-lowered using NFC, (2) low-pass filtered at 5 kHz to simulate the restricted bandwidth (RBW) of conventional hearing aid processing, or (3) low-pass filtered at 10 kHz to simulate extended-bandwidth (EBW) amplification. Study Sample Fourteen children (8–16 yr) and 14 adults (19–65 yr) with mild-to-severe SNHL. Method Participants listened to speech processed by a hearing aid simulator that amplified input signals to fit a prescriptive target. Participants were blinded to the type of processing. Participants' responses to each nonsense word were analyzed for accuracy and verbal response time (VRT; a measure of listening effort). A multivariate analysis of variance and a linear mixed model were used to determine the effect of hearing aid signal processing on nonsense-word recognition and VRT. Results Both children and adults identified the nonsense words and initial consonants better with EBW and NFC than with RBW. The type of processing did not affect the identification of the vowels or final consonants. There was no effect of age on recognition of the nonsense words, initial consonants, medial vowels, or final consonants. VRT did not change significantly with the type of processing or age. Conclusions Both adults and children demonstrated improved speech recognition with access to the high-frequency sounds in speech. Listening effort as measured by VRT was not affected by access to high-frequency sounds.
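A common formulation of nonlinear frequency compression leaves frequencies below a cutoff untouched and compresses the log-frequency distance above the cutoff by a fixed ratio. The sketch below uses this formulation with an illustrative cutoff and compression ratio, which are assumptions rather than the settings used in the study.

```python
def nfc_map(f_in, cutoff=2000.0, ratio=2.0):
    """Map an input frequency (Hz) to its frequency-lowered output.
    Frequencies at or below the cutoff pass unchanged; above it, the
    log-frequency distance from the cutoff is divided by `ratio`."""
    if f_in <= cutoff:
        return f_in
    return cutoff * (f_in / cutoff) ** (1.0 / ratio)
```

With these example settings, an 8 kHz component maps to 4 kHz, bringing it inside a bandwidth where a listener with high-frequency loss may still have usable audibility; the cost is that formerly distinct high frequencies are squeezed closer together, which is the spectral distortion the abstract describes.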


1994 ◽  
Vol 78 (1) ◽  
pp. 348-350 ◽  
Author(s):  
Donald Fucci ◽  
Steve Domyan ◽  
Lee Ellis ◽  
Daniel Harris

Seventeen subjects provided magnitude estimations, in the form of quality judgments, of a filtered speech stimulus: a nonsense sentence from Fairbanks containing all of the consonants of English. The stimulus was presented at 8 high-pass and 8 low-pass filtering conditions. Magnitude estimations of the filtered stimulus were consistent and similar for both conditions. Also, for both conditions, subjects' numerical responses consistently increased in value as stimulus quality was judged to be poorer.


2019 ◽  
Vol 28 (3S) ◽  
pp. 756-761 ◽  
Author(s):  
Fatima Tangkhpanya ◽  
Morgane Le Carrour ◽  
Félicia Doucet ◽  
Jean-Pierre Gagné

Speech processing is more effortful under difficult listening conditions. Dual-task studies have shown that older adults deploy more listening effort than younger adults when performing a speech recognition task in noise. Purpose The primary purpose of this study was to investigate whether a dual-task paradigm could be used to investigate differences in listening effort for an audiovisual speech comprehension task. If so, it was predicted that older adults would expend more listening effort than younger adults. Method Three groups of participants took part in the investigation: (a) young normal-hearing adults, (b) young normal-hearing adults listening to the speech material low-pass filtered at 3 kHz, and (c) older adults with normal hearing sensitivity for their age or better. A dual-task paradigm was used to measure listening effort. The primary task consisted of comprehending a short documentary presented at 63 dBA in a background noise that consisted of a 4-talker speech babble presented at 69 dBA. The participants had to answer a set of 15 questions related to the content of the documentary. The secondary task was a tactile detection task presented at random time intervals over a 12-min period (approximately 8 stimuli/min). Each task was performed separately and concurrently. Results The younger participants who performed the listening task under the low-pass filtered condition displayed significantly more listening effort than the 2 other groups of participants. Conclusion First, the study confirmed that the dual-task paradigm used in this study was sufficiently sensitive to reveal significant differences in listening effort for a speech comprehension task across 3 groups of participants. Second, contrary to our prediction, it was the group of young normal-hearing participants who listened to the documentaries under the low-pass filtered condition that displayed significantly more listening effort than the other 2 groups of listeners.


1981 ◽  
Vol 90 (6) ◽  
pp. 543-545 ◽  
Author(s):  
Kimberly Hoffman-Lawless ◽  
Robert W. Keith ◽  
Robin T. Cotton

The present study was designed to determine whether auditory processing disorders are present in children with documented middle ear effusion (MEE) that required surgical treatment. Children with previous MEE and control subjects, in two age groups, were studied using five tests of auditory processing abilities: low-pass filtered speech, the staggered spondaic word test, speech in noise, auditory sequential memory, and sound blending. Results showed a group difference at mean age 7 on the filtered speech test, but no statistically significant differences on any other test at age 7 or on any test at mean age 9. The results indicate that well-managed MEE appears to have no long-term effects on children's auditory processing abilities.


2015 ◽  
Vol 58 (3) ◽  
pp. 590-600 ◽  
Author(s):  
Maria V. Kondaurova ◽  
Tonya R. Bergeson ◽  
Huiping Xu ◽  
Christine Kitamura

Purpose The affective properties of infant-directed speech influence the attention of infants with normal hearing to speech sounds. This study explored the affective quality of maternal speech to infants with hearing impairment (HI) during the 1st year after cochlear implantation as compared to speech to infants with normal hearing. Method Mothers of infants with HI and mothers of infants with normal hearing matched by age (NH-AM) or hearing experience (NH-EM) were recorded playing with their infants during 3 sessions over a 12-month period. Speech samples of 25 s were low-pass filtered, leaving intonation but not segmental speech information intact. Sixty adults rated the stimuli along 5 scales: positive/negative affect and intention to express affection, to encourage attention, to comfort/soothe, and to direct behavior. Results Low-pass filtered speech to the HI and NH-EM groups was rated as more positive, affective, and comforting than such speech to the NH-AM group. Speech to infants with HI and with NH-AM was rated as more directive than speech to the NH-EM group. Mothers decreased affective qualities in speech to all infants but increased directive qualities in speech to infants with NH-EM over time. Conclusions Mothers fine-tune communicative intent in speech to their infant's developmental stage. They adjust affective qualities to infants' hearing experience rather than to chronological age but adjust directive qualities of speech to the chronological age of their infants.
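Low-pass filtering at a few hundred hertz is a standard way to preserve the intonation (F0) contour while removing most segmental speech information, as in the rating stimuli above. The sketch below uses a 400 Hz cutoff as an illustrative assumption; the abstract does not report the study's actual filter settings.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass_prosody(signal, fs, cutoff=400.0, order=6):
    """Zero-phase low-pass filter that keeps the F0/intonation region
    of speech and strongly attenuates higher-frequency (segmental)
    content."""
    sos = butter(order, cutoff, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)
```

Zero-phase filtering (forward and backward passes) is used here so the pitch contour is not time-shifted relative to the original recording.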

