Speech intelligibility and listening effort in university classrooms for native and non-native Italian listeners

2019 ◽  
Vol 26 (4) ◽  
pp. 275-291
Author(s):  
Chiara Visentin ◽  
Nicola Prodi ◽  
Francesca Cappelletti ◽  
Simone Torresin ◽  
Andrea Gasparella

Listening effort describes the allocation of attentional and cognitive resources for successful listening. In adverse conditions, the mental demands of listening increase, interfering with other cognitive functions. This is especially relevant in learning spaces, where students perform complex tasks that recruit more cognitive resources (e.g. memorizing and comprehending information). This study focuses on the case of university classrooms and investigates the effects of different types of masking noise on both speech intelligibility and listening effort. Speech-in-noise tests in the Italian language were presented to 25 young adults with normal hearing (13 native and 12 non-native listeners) in an existing university classroom in Bozen-Bolzano (Italy). The tests were presented in three listening conditions (quiet, stationary noise, and fluctuating noise), with the listeners grouped around two locations within the classroom. Task performance was assessed using speech intelligibility and two proxy measures of listening effort: response time and subjective ratings of effort. Longer response times and higher subjective ratings were taken to reflect increased listening effort. Results in the noisy conditions were compared with the quiet condition. Non-native listeners showed an accuracy disadvantage compared with native listeners; concerning response time, when the target signal was masked by fluctuating noise, non-native listeners required additional processing time compared with their native peers. This interaction was not captured by the subjective ratings, supporting the hypothesis that the two proxy measures of listening effort have different sensitivities to the listening conditions.

2018 ◽  
Vol 25 (1) ◽  
pp. 35-42 ◽  
Author(s):  
Alice Lam ◽  
Murray Hodgson ◽  
Nicola Prodi ◽  
Chiara Visentin

This study evaluates the speech reception performance of native (L1) and non-native (L2) normal-hearing young adults in acoustical conditions containing varying amounts of reverberation and background noise. Two metrics were used and compared: the intelligibility score and the response time, taken as a behavioral measure of listening effort. Listening tests were conducted in auralized acoustical environments with L1 and L2 English-speaking university students. It was found that even though the two groups achieved the same, near-maximum accuracy, L2 participants manifested longer response times in every acoustical condition, suggesting an increased involvement of cognitive resources in the speech reception process.


2021 ◽  
Author(s):  
Claire Guang ◽  
Emmett Lefkowitz ◽  
Naseem Dillman-Hasso ◽  
Violet Aurora Brown ◽  
Julia Feld Strand

Introduction: The presence of masking noise can impair speech intelligibility and increase the attentional and cognitive resources necessary to understand speech. The first study to demonstrate the negative cognitive effects of noisy speech found that participants had poorer recall for aurally presented digits early in a list when later digits were presented in noise relative to quiet (Rabbitt, 1968). However, despite being cited nearly 500 times and providing the foundation for a wealth of subsequent research on the topic, the original study has never been directly replicated. Methods: This study replicated Rabbitt (1968) with a large online sample and tested its robustness to a variety of analytical and scoring techniques. Results: We replicated Rabbitt’s key finding that listening to speech in noise impairs recall for items that came earlier in the list. The results were consistent whether we used the original analytical technique (an ANOVA) and scoring method, the original analytical technique with a more lenient scoring method, or a more powerful analytical technique (generalized linear mixed effects models) that was not available when the original paper was published. Discussion: These findings support the claim that effortful listening can impair encoding or rehearsal of previously presented information.
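The strict versus lenient scoring distinction mentioned above can be illustrated in a few lines. This is a minimal sketch of the general idea (the function name and exact scoring rules are assumptions for illustration, not taken from the paper): strict scoring credits a digit only in its original serial position, while lenient scoring credits it anywhere in the response.

```python
def score_recall(presented, recalled, lenient=False):
    """Score a recalled digit list against the presented list.

    Strict scoring counts a digit as correct only in its original
    serial position; lenient scoring counts a presented digit as
    correct if it was recalled anywhere in the response.
    """
    if lenient:
        # Credit each presented digit at most once, wherever it appears.
        pool = list(recalled)
        hits = 0
        for d in presented:
            if d in pool:
                pool.remove(d)
                hits += 1
        return hits
    # Strict: position-by-position comparison.
    return sum(p == r for p, r in zip(presented, recalled))

presented = [3, 1, 4, 1, 5, 9, 2, 6]
recalled = [3, 4, 1, 1, 5, 9, 6, 2]
print(score_recall(presented, recalled))                # strict: 4
print(score_recall(presented, recalled, lenient=True))  # lenient: 8
```

Because lenient scoring ignores order errors, it typically yields higher recall scores than strict scoring on the same responses, which is one way an analysis can differ across scoring methods.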


2021 ◽  
Vol 25 ◽  
pp. 233121652110144
Author(s):  
Ilja Reinten ◽  
Inge De Ronde-Brons ◽  
Rolph Houben ◽  
Wouter Dreschler

Single microphone noise reduction (NR) in hearing aids can provide a subjective benefit even when there is no objective improvement in speech intelligibility. A possible explanation lies in a reduction of listening effort. Previously, we showed that response times (a proxy for listening effort) to an auditory-only dual-task were reduced by NR in normal-hearing (NH) listeners. In this study, we investigate if the results from NH listeners extend to the hearing-impaired (HI), the target group for hearing aids. In addition, we assess the relevance of the outcome measure for studying and understanding listening effort. Twelve HI subjects were asked to sum two digits of a digit triplet in noise. We measured response times to this task, as well as subjective listening effort and speech intelligibility. Stimuli were presented at three signal-to-noise ratios (SNR; –5, 0, +5 dB) and in quiet. Stimuli were processed with ideal or nonideal NR, or unprocessed. The effect of NR on response times in HI listeners was significant only in conditions where speech intelligibility was also affected (–5 dB SNR). This is in contrast to the previous results with NH listeners. There was a significant effect of SNR on response times for HI listeners. The response time measure was reasonably correlated (R² = 0.54) to subjective listening effort and showed a sufficient test–retest reliability. This study thus presents an objective, valid, and reliable measure for evaluating an aspect of listening effort of HI listeners.


2013 ◽  
Vol 56 (4) ◽  
pp. 1075-1084 ◽  
Author(s):  
Carina Pals ◽  
Anastasios Sarampalis ◽  
Deniz Başkent

Purpose Fitting a cochlear implant (CI) for optimal speech perception does not necessarily optimize listening effort. This study aimed to show that listening effort may change between CI processing conditions for which speech intelligibility remains constant. Method Nineteen normal-hearing participants listened to CI simulations with varying numbers of spectral channels. A dual-task paradigm combining an intelligibility task with either a linguistic or nonlinguistic visual response-time (RT) task measured intelligibility and listening effort. The simultaneously performed tasks compete for limited cognitive resources; changes in effort associated with the intelligibility task are reflected in changes in RT on the visual task. A separate self-report scale provided a subjective measure of listening effort. Results All measures showed significant improvements with increasing spectral resolution up to 6 channels. However, only the RT measure of listening effort continued improving up to 8 channels. The effects were stronger for RTs recorded during listening than for RTs recorded between listening. Conclusion The results suggest that listening effort decreases with increased spectral resolution. Moreover, these improvements are best reflected in objective measures of listening effort, such as RTs on a secondary task, rather than intelligibility scores or subjective effort measures.


2020 ◽  
Vol 31 (01) ◽  
pp. 017-029
Author(s):  
Paul Reinhart ◽  
Pavel Zahorik ◽  
Pamela Souza

Digital noise reduction (DNR) processing is used in hearing aids to enhance perception in noise by classifying and suppressing the noise acoustics. However, the efficacy of DNR processing is not known under reverberant conditions, where the speech-in-noise acoustics are further degraded by reverberation. The purpose of this study was to investigate the acoustic and perceptual effects of DNR processing across a range of reverberant conditions for individuals with hearing impairment. The study used an experimental design to investigate the effects of varying reverberation on speech-in-noise processed with DNR. Twenty-six listeners with mild-to-moderate sensorineural hearing impairment participated. Speech stimuli were combined with unmodulated broadband noise at several signal-to-noise ratios (SNRs). A range of reverberant conditions with realistic parameters was simulated, as well as an anechoic control condition without reverberation. Reverberant speech-in-noise signals were processed using a spectral subtraction DNR simulation. Signals were acoustically analyzed using a phase inversion technique to quantify the improvement in SNR produced by DNR processing. Sentence intelligibility and subjective ratings of listening effort, speech naturalness, and background noise comfort were examined with and without DNR processing across the conditions. Improvement in SNR was greatest in the anechoic control condition and decreased as the ratio of direct to reverberant energy decreased. There was no significant effect of DNR processing on speech intelligibility in the anechoic control condition, but there was a significant decrease in speech intelligibility with DNR processing in all of the reverberant conditions. Subjectively, listeners reported greater listening effort and lower speech naturalness with DNR processing in some of the reverberant conditions. Listeners reported higher background noise comfort with DNR processing only in the anechoic control condition. Results suggest that reverberation degrades spectral subtraction DNR processing, decreasing its ability to reduce noise without distorting the speech acoustics. Overall, DNR processing may be most beneficial in environments with little reverberation, whereas its use in highly reverberant environments may actually produce adverse perceptual effects. Further research is warranted using commercial hearing aids in realistic reverberant environments.
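The phase inversion technique mentioned in this abstract separates the speech and noise components at the output of a processing stage by running the mixture twice, once with the noise polarity inverted, and taking the sum and difference of the two outputs. A minimal sketch of the idea (assuming approximately linear, time-invariant processing across the two runs; this is an illustration, not the study's implementation):

```python
import numpy as np

def phase_inversion_snr(process, speech, noise):
    """Estimate the output SNR of a processing stage via phase inversion.

    Runs process() on speech + noise and on speech - noise; for
    quasi-linear processing, half the sum of the outputs estimates the
    processed speech and half the difference estimates the processed
    noise, from which the output SNR (in dB) follows.
    """
    out_pos = process(speech + noise)
    out_neg = process(speech - noise)
    speech_est = 0.5 * (out_pos + out_neg)
    noise_est = 0.5 * (out_pos - out_neg)
    p_s = np.mean(speech_est ** 2)
    p_n = np.mean(noise_est ** 2)
    return 10.0 * np.log10(p_s / p_n)

# Sanity check with a pure linear gain: gain scales both components
# equally, so the estimated output SNR equals the input SNR.
rng = np.random.default_rng(1)
s = rng.standard_normal(8000)
n = 0.5 * rng.standard_normal(8000)
snr_out = phase_inversion_snr(lambda x: 0.25 * x, s, n)
```

The SNR improvement attributed to a noise reduction algorithm is then the difference between this output SNR and the input SNR; the decomposition becomes approximate to the extent the processing is nonlinear or time-varying between the two runs.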


2019 ◽  
Vol 62 (11) ◽  
pp. 4179-4195 ◽  
Author(s):  
Nicola Prodi ◽  
Chiara Visentin

Purpose This study examines the effects of reverberation and noise fluctuation on the response time (RT) to auditory stimuli in a speech reception task. Method The speech reception task was presented to 76 young adults with normal hearing in 3 simulated listening conditions (1 anechoic, 2 reverberant). Speechlike stationary and fluctuating noise were used as maskers, in a wide range of signal-to-noise ratios. The speech-in-noise tests were presented in a closed-set format; data on speech intelligibility and RT (time elapsed from the offset of the auditory stimulus to the response selection) were collected. A slowing down in RTs was interpreted as an increase in listening effort. Results RTs slowed down at the more challenging signal-to-noise ratios, with increasing reverberation, and for stationary compared to fluctuating noise, consistent with a fluctuating masking release scheme. When speech intelligibility was held fixed, the estimated RTs were similar or faster for stationary compared to fluctuating noise, depending on the amount of reverberation. Conclusions The current findings add to the literature on listening effort for listeners with normal hearing by indicating that the addition of reverberation to fluctuating noise increases RT in a speech reception task. The results support the importance of integrating noise and reverberation to provide accurate predictors of real-world performance in clinical settings.


2021 ◽  
Vol 25 ◽  
pp. 233121652098470
Author(s):  
Ilze Oosthuizen ◽  
Erin M. Picou ◽  
Lidia Pottas ◽  
Hermanus C. Myburgh ◽  
De Wet Swanepoel

Technology options for children with limited hearing unilaterally that improve the signal-to-noise ratio are expected to improve speech recognition and also reduce listening effort in challenging listening situations, although previous studies have not confirmed this. Employing behavioral and subjective indices of listening effort, this study aimed to evaluate the effects of two intervention options, a remote microphone system (RMS) and a contralateral routing of signal (CROS) system, in school-aged children with limited hearing unilaterally. Nineteen children (aged 7–12 years) with limited hearing unilaterally completed a digit triplet recognition task in three loudspeaker conditions (midline, monaural direct, and monaural indirect) with three intervention options (unaided, RMS, and CROS system). Verbal response times were interpreted as a behavioral measure of listening effort. Participants provided subjective ratings immediately following the behavioral measures. The RMS significantly improved digit triplet recognition across loudspeaker conditions and reduced verbal response times in the midline and indirect conditions. The CROS system improved speech recognition and listening effort only in the indirect condition. Analyses of the subjective ratings revealed that significantly more participants indicated that the remote microphone made it easier for them to listen and to stay motivated. Behavioral and subjective indices of listening effort indicated that an RMS provided the most consistent benefit for speech recognition and listening effort for children with limited unilateral hearing. RMSs could therefore be a beneficial technology option in classrooms for children with limited hearing unilaterally.


2017 ◽  
Vol 21 ◽  
pp. 233121651771684 ◽  
Author(s):  
Maj van den Tillaart-Haverkate ◽  
Inge de Ronde-Brons ◽  
Wouter A. Dreschler ◽  
Rolph Houben

2018 ◽  
Vol 61 (6) ◽  
pp. 1497-1516 ◽  
Author(s):  
Chiara Visentin ◽  
Nicola Prodi

Purpose The primary aim of this study was to develop and examine the potential of a new speech-in-noise test to discriminate the favorable listening conditions targeted in the acoustical design of communication spaces. The test is based on the recognition and recall of disyllabic word sequences. A secondary aim was to compare the test with current speech-in-noise tests, assessing its benefits and limitations. Method Young adults (19–40 years old), self-reporting normal hearing, were presented with the newly developed Words Sequence Test (WST; 16 participants, Experiment 1) and with a consonant confusion test and a sentence recognition test (Experiment 2, 36 participants randomly assigned to the 2 tests). Participants performing the WST were presented with word sequences of different lengths (from 2 up to 6 words). Two listening conditions were selected: (a) no noise and no reverberation, and (b) reverberant, steady-state noise (Speech Transmission Index: 0.47). The tests were presented in a closed-set format; data on the number of words correctly recognized (speech intelligibility, IS) and the response times (RTs) were collected (onset RT, single words' RT). Results It was found that a sequence composed of 4 disyllabic words ensured both the full recognition score in quiet conditions and a significant decrease in IS results when noise and reverberation degraded the speech signal. RTs increased as the listening conditions worsened and with the number of words in the sequence. The greatest onset RT variation was found when using a sequence of 4 words. In the comparison with current speech-in-noise tests, it was found that the WST maximized both the IS difference between the selected listening conditions and the RT increase. Conclusions Overall, the results suggest that the new speech-in-noise test has good potential for discriminating conditions with near-ceiling accuracy. Compared with current speech-in-noise tests, the WST with a 4-word sequence allows for a finer mapping of the acoustical design target conditions of public spaces through accuracy and onset RT data.


2021 ◽  
Vol 25 ◽  
pp. 233121652110180
Author(s):  
Cynthia R. Hunter

A sequential dual-task design was used to assess the impacts of spoken sentence context and cognitive load on listening effort. Young adults with normal hearing listened to sentences masked by multitalker babble in which sentence-final words were either predictable or unpredictable. Each trial began with visual presentation of a short (low-load) or long (high-load) sequence of to-be-remembered digits. Words were identified more quickly and accurately in predictable than unpredictable sentence contexts. In addition, digits were recalled more quickly and accurately on trials on which the sentence was predictable, indicating reduced listening effort for predictable compared to unpredictable sentences. For word and digit recall response time but not for digit recall accuracy, the effect of predictability remained significant after exclusion of trials with incorrect word responses and was thus independent of speech intelligibility. In addition, under high cognitive load, words were identified more slowly and digits were recalled more slowly and less accurately than under low load. Participants’ working memory and vocabulary were not correlated with the sentence context benefit in either word recognition or digit recall. Results indicate that listening effort is reduced when sentences are predictable and that cognitive load affects the processing of spoken words in sentence contexts.

