How Reliable are 11- to 13-Year-Olds’ Self-Ratings of Effort in Noisy Conditions?

2021 ◽  
Vol 7 ◽  
Author(s):  
Chiara Visentin ◽  
Nicola Prodi

Performing a task in noisy conditions is effortful. This is especially relevant for children in classrooms as the effort involved could impair their learning and academic achievements. Numerous studies have investigated how to use behavioral and physiological methods to measure effort, but limited data are available on how well school-aged children rate effort in their classrooms. This study examines whether and how self-ratings can be used to describe the effort children perceive while working in a noisy classroom. This is done by assessing the effect of listening condition on self-rated effort in a group of 182 children 11–13 years old. The children performed three tasks typical of daily classroom activities (speech perception, sentence comprehension, and mental calculation) in three listening conditions (quiet, traffic noise, and classroom noise). After completing each task, they rated their perceived task-related effort on a five-point scale. Their task accuracy and response times (RTs) were recorded (the latter as a behavioral measure of task-related effort). Participants scored higher (more effort) on their self-ratings in the noisy conditions than in quiet. Their self-ratings were also sensitive to the type of background noise, but only for the speech perception task, suggesting that children might not be fully aware of the disruptive effect of background noise. A repeated-measures correlation analysis was run to explore the possible relationship between the three study outcomes (accuracy, self-ratings, and RTs). Self-ratings correlated with accuracy (in all tasks) and with RTs (only in the speech perception task), suggesting that the relationship between different measures of listening effort might depend on the task. Overall, the present findings indicate that self-reports could be useful for measuring changes in school-aged children’s perceived listening effort. 
More research is needed to better understand, and consequently manage, the individual factors that might affect children’s self-ratings (e.g., motivation) and to devise an appropriate response format.
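The repeated-measures correlation used above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' analysis code: it centers each child's scores on that child's own mean before correlating, which is the core of the repeated-measures correlation (Bakdash & Marusich), and the data are made up so that self-rated effort tracks response time within each subject.

```python
# Hedged sketch of a repeated-measures correlation: remove between-subject
# differences by centering each subject's scores on their own means, then
# compute a Pearson r on the pooled within-subject deviations.
from statistics import mean

def rm_corr(subjects, x, y):
    """Pearson r between x and y after within-subject mean-centering."""
    ids = set(subjects)
    mx = {s: mean(xi for s2, xi in zip(subjects, x) if s2 == s) for s in ids}
    my = {s: mean(yi for s2, yi in zip(subjects, y) if s2 == s) for s in ids}
    dx = [xi - mx[s] for s, xi in zip(subjects, x)]
    dy = [yi - my[s] for s, yi in zip(subjects, y)]
    sxy = sum(a * b for a, b in zip(dx, dy))
    sxx = sum(a * a for a in dx)
    syy = sum(b * b for b in dy)
    return sxy / (sxx * syy) ** 0.5

# Three hypothetical subjects, three listening conditions each:
# self-rated effort (1-5) rises with response time within every subject.
subj = ["A", "A", "A", "B", "B", "B", "C", "C", "C"]
effort = [2, 3, 4, 1, 2, 3, 3, 4, 5]
rt_ms = [900, 1100, 1300, 700, 950, 1200, 1000, 1250, 1450]
print(round(rm_corr(subj, effort, rt_ms), 3))  # close to 1: strong within-subject link
```

Note that an ordinary Pearson correlation on the pooled data would mix within-subject effects with stable between-subject differences; the centering step is what isolates the within-subject relationship the abstract refers to.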

2018 ◽  
Vol 29 (09) ◽  
pp. 802-813 ◽  
Author(s):  
Allison Biever ◽  
Jan Gilden ◽  
Teresa Zwolan ◽  
Megan Mears ◽  
Anne Beiter

Abstract: The Nucleus® 6 sound processor is now compatible with the Nucleus® 22 (CI22M), Cochlear's first-generation cochlear implant. The Nucleus 6 offers three new signal processing algorithms that purportedly facilitate improved hearing in background noise. These studies were designed to evaluate listening performance and user satisfaction with the Nucleus 6 sound processor. The research design was a prospective, single-participant, repeated-measures design. A group of 80 participants implanted with various Nucleus internal implant devices (CI22M, CI24M, Freedom® CI24RE, CI422, and CI512) was recruited from a total of six North American sites. Participants had their external sound processor upgraded to the Nucleus 6 sound processor. Final speech perception testing in noise and subjective questionnaires were completed after four or 12 weeks of take-home use with the Nucleus 6. Speech perception testing in noise showed significant improvement, and participants reported increased satisfaction with the Nucleus 6. These studies demonstrated the benefit of the new algorithms in the Nucleus 6 over previous generations of sound processors.


2011 ◽  
Vol 22 (09) ◽  
pp. 623-632 ◽  
Author(s):  
René H. Gifford ◽  
Amy P. Olund ◽  
Melissa DeJong

Background: Current cochlear implant recipients are achieving increasingly higher levels of speech recognition; however, the presence of background noise continues to significantly degrade speech understanding for even the best performers. Newer generation Nucleus cochlear implant sound processors can be programmed with SmartSound strategies that have been shown to improve speech understanding in noise for adult cochlear implant recipients. The applicability of these strategies for use in children, however, is not fully understood nor widely accepted. Purpose: To assess speech perception for pediatric cochlear implant recipients in the presence of a realistic restaurant simulation generated by an eight-loudspeaker (R-SPACE™) array in order to determine whether Nucleus sound processor SmartSound strategies yield improved sentence recognition in noise for children who learn language through the implant. Research Design: Single subject, repeated measures design. Study Sample: Twenty-two experimental subjects with cochlear implants (mean age 11.1 yr) and 25 control subjects with normal hearing (mean age 9.6 yr) participated in this prospective study. Intervention: Speech reception thresholds (SRT) in semidiffuse restaurant noise originating from an eight-loudspeaker array were assessed with the experimental subjects’ everyday program incorporating Adaptive Dynamic Range Optimization (ADRO) as well as with the addition of Autosensitivity control (ASC). Data Collection and Analysis: Adaptive SRTs with the Hearing In Noise Test (HINT) sentences were obtained for all 22 experimental subjects, and performance—in percent correct—was assessed in a fixed +6 dB SNR (signal-to-noise ratio) for a six-subject subset. Statistical analysis using a repeated-measures analysis of variance (ANOVA) evaluated the effects of the SmartSound setting on the SRT in noise. 
Results: The primary findings mirrored those reported previously with adult cochlear implant recipients in that the addition of ASC to ADRO significantly improved speech recognition in noise for pediatric cochlear implant recipients. The mean degree of improvement in the SRT with the addition of ASC to ADRO was 3.5 dB for a mean SRT of 10.9 dB SNR. Thus, despite the fact that these children have acquired auditory/oral speech and language through the use of their cochlear implant(s) equipped with ADRO, the addition of ASC significantly improved their ability to recognize speech in high levels of diffuse background noise. The mean SRT for the control subjects with normal hearing was 0.0 dB SNR. Given that the mean SRT for the experimental group was 10.9 dB SNR, despite the improvements in performance observed with the addition of ASC, cochlear implants still do not completely overcome the speech perception deficit encountered in noisy environments accompanying the diagnosis of severe-to-profound hearing loss. Conclusion: SmartSound strategies currently available in latest generation Nucleus cochlear implant sound processors are able to significantly improve speech understanding in a realistic, semidiffuse noise for pediatric cochlear implant recipients. Despite the reluctance of pediatric audiologists to utilize SmartSound settings for regular use, the results of the current study support the addition of ASC to ADRO for everyday listening environments to improve speech perception in a child's typical everyday program.
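The adaptive SRT measure above can be illustrated with a toy staircase simulation. This is a hedged sketch, not the actual HINT protocol: the 1-down/1-up rule, step size, trial count, and the simulated listener (a logistic psychometric function) are all assumptions chosen for demonstration.

```python
# Illustrative adaptive tracking for an SRT in noise: the SNR is lowered
# after a correct response and raised after an error (1-down/1-up), which
# converges on the 50%-correct point. Parameters are made up.
import math, random

def simulated_listener(snr_db, srt_true, slope=0.5):
    """Probability of repeating the sentence correctly at a given SNR."""
    return 1.0 / (1.0 + math.exp(-slope * (snr_db - srt_true)))

def run_staircase(srt_true, trials=40, start_snr=20.0, step=2.0, seed=1):
    rng = random.Random(seed)
    snr, history = start_snr, []
    for _ in range(trials):
        history.append(snr)
        correct = rng.random() < simulated_listener(snr, srt_true)
        snr += -step if correct else step
    # Estimate the SRT as the mean SNR over the second half of the track,
    # after the staircase has settled near threshold.
    return sum(history[trials // 2:]) / (trials - trials // 2)

print(round(run_staircase(srt_true=10.9), 1))  # estimate lands near the simulated SRT
```

In practice clinical adaptive procedures refine this idea (larger initial steps, reversal-based averaging), but the convergence logic is the same.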


2018 ◽  
Vol 29 (09) ◽  
pp. 814-825 ◽  
Author(s):  
Patti M. Johnstone ◽  
Kristen E. T. Mills ◽  
Elizabeth Humphrey ◽  
Kelly R. Yeager ◽  
Emily Jones ◽  
...  

Abstract: Cochlear implant (CI) users are affected more than their normal hearing (NH) peers by the negative consequences of background noise on speech understanding. Research has shown that adult CI users can improve their speech recognition in challenging listening environments by using dual-microphone beamformers, such as adaptive directional microphones (ADMs) and wireless remote microphones (RMs). The suitability of these microphone technologies for use in children with CIs is not well understood nor widely accepted. These studies aimed to assess the benefit of ADM or RM technology on speech perception in background noise in children and adolescents with CIs who had no previous or current use of an ADM or RM. The research design was a mixed, repeated-measures design. Twenty children participated in this prospective study: ten CI users (mean age 14.3 yr) who used Advanced Bionics HiRes90K implants with research Naida processors, and ten age-matched NH controls. CI users listened with an ear-canal-level microphone, the T-Mic (TM), an ADM, and a wireless RM at different audio-mixing ratios. Speech understanding with five microphone settings (TM 100%, ADM, RM + TM 50/50, RM + TM 75/25, RM 100%) was evaluated in quiet and in noise. Speech perception was measured using children's spondee words to obtain a speech recognition threshold for 80% accuracy (SRT80%) in 20-talker babble, with the listener seated in a sound booth 1 m (3.28′) from the target speech (front) and the noise (behind). Group performance-intensity functions were computed for each listening condition to show the effects of microphone configuration with respect to signal-to-noise ratio (SNR). A difference score (CI group minus NH group) was computed to show the effect of microphone technology at different SNRs relative to NH.
Statistical analysis using a repeated-measures analysis of variance evaluated the effects of the microphone configurations on SRT80% and on performance at fixed SNRs, and a between-groups analysis of variance was used to compare the CI group with the NH group. Speech recognition was significantly poorer for children with CIs than for children with NH in quiet and in noise when using the TM alone. Adding the ADM or RM provided a significant improvement in speech recognition for the CI group over the TM alone in noise (the mean dB advantage ranged from 5.8 for the ADM to 16 for RM 100%). When children with CIs used RM + TM 75/25 or RM 100% in background babble, their speech recognition was not statistically different from that of the group with NH. Speech recognition in noise improved with the use of the ADM and RM 100% or RM + TM 75/25 over the TM alone for children with CIs. Although children with CIs remain at a disadvantage compared with NH children in quiet and at more favorable SNRs, microphone technology can enhance performance for some children with CIs to match that of NH peers in contexts with negative SNRs.
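The audio-mixing ratios tested above (e.g., RM + TM 75/25) amount to a weighted sum of the two microphone streams. The sketch below is purely illustrative: the sample values are made up, and real processors apply the mix in their own firmware with calibrated gains.

```python
# Hedged illustration of an audio-mixing ratio such as "RM + TM 75/25":
# weight the remote-microphone (RM) and T-Mic (TM) signals, then sum them
# into a single stream. Signals here are toy sample sequences.
def mix(rm_samples, tm_samples, rm_weight):
    """Weighted sum of two equal-length sample sequences."""
    tm_weight = 1.0 - rm_weight
    return [rm_weight * r + tm_weight * t
            for r, t in zip(rm_samples, tm_samples)]

rm = [1.0, 1.0, 1.0, 1.0]    # clean speech from the remote microphone
tm = [0.2, -0.4, 0.3, -0.1]  # noisier signal at the ear-level T-Mic

print(mix(rm, tm, 0.75))  # RM + TM 75/25: mostly RM, some local signal
print(mix(rm, tm, 1.0))   # RM 100%: the T-Mic contributes nothing
```

The trade-off the study probes follows directly from the weights: a higher RM share improves the effective SNR of the target talker, while a higher TM share preserves awareness of nearby sound.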


2021 ◽  
Vol 25 ◽  
pp. 233121652098470
Author(s):  
Ilze Oosthuizen ◽  
Erin M. Picou ◽  
Lidia Pottas ◽  
Hermanus C. Myburgh ◽  
De Wet Swanepoel

Technology options for children with limited hearing unilaterally that improve the signal-to-noise ratio are expected to improve speech recognition and also reduce listening effort in challenging listening situations, although previous studies have not confirmed this. Employing behavioral and subjective indices of listening effort, this study aimed to evaluate the effects of two intervention options, remote microphone system (RMS) and contralateral routing of signal (CROS) system, in school-aged children with limited hearing unilaterally. Nineteen children (aged 7–12 years) with limited hearing unilaterally completed a digit triplet recognition task in three loudspeaker conditions: midline, monaural direct, and monaural indirect with three intervention options: unaided, RMS, and CROS system. Verbal response times were interpreted as a behavioral measure of listening effort. Participants provided subjective ratings immediately following behavioral measures. The RMS significantly improved digit triplet recognition across loudspeaker conditions and reduced verbal response times in the midline and indirect conditions. The CROS system improved speech recognition and listening effort only in the indirect condition. Subjective ratings analyses revealed that significantly more participants indicated that the remote microphone made it easier for them to listen and to stay motivated. Behavioral and subjective indices of listening effort indicated that an RMS provided the most consistent benefit for speech recognition and listening effort for children with limited unilateral hearing. RMSs could therefore be a beneficial technology option in classrooms for children with limited hearing unilaterally.


2018 ◽  
Vol 25 (1) ◽  
pp. 35-42 ◽  
Author(s):  
Alice Lam ◽  
Murray Hodgson ◽  
Nicola Prodi ◽  
Chiara Visentin

This study evaluates the speech reception performance of native (L1) and non-native (L2) normal-hearing young adults in acoustical conditions containing varying amounts of reverberation and background noise. Two metrics were used and compared: the intelligibility score and the response time, taken as a behavioral measure of listening effort. Listening tests were conducted in auralized acoustical environments with L1 and L2 English-speaking university students. It was found that even though the two groups achieved the same near-maximum accuracy, L2 participants manifested longer response times in every acoustical condition, suggesting an increased involvement of cognitive resources in the speech reception process.


2015 ◽  
Vol 26 (02) ◽  
pp. 145-154 ◽  
Author(s):  
Sterling W. Sheffield ◽  
Kelly Jahn ◽  
René H. Gifford

Background: With improved surgical techniques and electrode design, an increasing number of cochlear implant (CI) recipients have preserved acoustic hearing in the implanted ear, thereby resulting in bilateral acoustic hearing. There are currently no guidelines, however, for clinicians with respect to audiometric criteria and the recommendation of amplification in the implanted ear. The acoustic bandwidth necessary to obtain speech perception benefit from acoustic hearing in the implanted ear is unknown. Additionally, it is important to determine if, and in which listening environments, acoustic hearing in both ears provides more benefit than hearing in just one ear, even with limited residual hearing. Purpose: The purposes of this study were to (1) determine whether acoustic hearing in an ear with a CI provides as much speech perception benefit as an equivalent bandwidth of acoustic hearing in the nonimplanted ear, and (2) determine whether acoustic hearing in both ears provides more benefit than hearing in just one ear. Research Design: A repeated-measures, within-participant design was used to compare performance across listening conditions. Study Sample: Seven adults with CIs and bilateral residual acoustic hearing (hearing preservation) were recruited for the study. Data Collection and Analysis: Consonant-nucleus-consonant word recognition was tested in four conditions: CI alone, CI + acoustic hearing in the nonimplanted ear, CI + acoustic hearing in the implanted ear, and CI + bilateral acoustic hearing. A series of low-pass filters were used to examine the effects of acoustic bandwidth through an insert earphone with amplification. Benefit was defined as the difference among conditions. The benefit of bilateral acoustic hearing was tested in both diffuse and single-source background noise. Results were analyzed using repeated-measures analysis of variance. Results: Similar benefit was obtained for equivalent acoustic frequency bandwidth in either ear. 
Acoustic hearing in the nonimplanted ear provided more benefit than the implanted ear only in the wideband condition, most likely because of better audiometric thresholds (>500 Hz) in the nonimplanted ear. Bilateral acoustic hearing provided more benefit than unilateral hearing in either ear alone, but only in diffuse background noise. Conclusions: Results support use of amplification in the implanted ear if residual hearing is present. The benefit of bilateral acoustic hearing (hearing preservation) should not be tested in quiet or with spatially coincident speech and noise, but rather in spatially separated speech and noise (e.g., diffuse background noise).
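The bandwidth manipulation above rests on low-pass filtering the acoustic signal. As a rough illustration only (the study's actual filter design is not specified here), a one-pole low-pass shows how content above a cutoff is attenuated while lower frequencies pass; the cutoff and sample rate are assumptions.

```python
# A minimal one-pole IIR low-pass filter, sketching how an acoustic
# bandwidth limit removes high-frequency content. Not the study's filter.
import math

def one_pole_lowpass(samples, cutoff_hz, fs_hz):
    """First-order low-pass: attenuates content above cutoff_hz."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / fs_hz)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)  # exponential smoothing toward the input
        out.append(y)
    return out

fs = 16000
t = [n / fs for n in range(fs // 10)]                    # 100 ms of signal
low = [math.sin(2 * math.pi * 250 * ti) for ti in t]     # 250 Hz: passes
high = [math.sin(2 * math.pi * 4000 * ti) for ti in t]   # 4 kHz: attenuated

filt_low = one_pole_lowpass(low, 500, fs)
filt_high = one_pole_lowpass(high, 500, fs)
rms = lambda s: (sum(x * x for x in s) / len(s)) ** 0.5
print(round(rms(filt_low), 2), round(rms(filt_high), 2))
```

Research filtering would typically use steeper filters (e.g., higher-order Butterworth designs) so the passband edge is sharper, but the principle of restricting audible bandwidth is the same.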


2016 ◽  
Vol 27 (02) ◽  
pp. 085-102 ◽  
Author(s):  
Bernadette Rakszawski ◽  
Rose Wright ◽  
Jamie H. Cadieux ◽  
Lisa S. Davidson ◽  
Christine Brenner

Background: Cochlear implants (CIs) have been shown to improve children’s speech recognition over traditional amplification when severe-to-profound sensorineural hearing loss is present. Despite improvements, understanding speech at low-level intensities or in the presence of background noise remains difficult. In an effort to improve speech understanding in challenging environments, Cochlear Ltd. offers preprocessing strategies that apply various algorithms before mapping the signal to the internal array. Two of these strategies are Autosensitivity Control™ (ASC) and Adaptive Dynamic Range Optimization (ADRO®). Based on previous research, the manufacturer’s default preprocessing strategy for pediatric users’ everyday programs combines ASC + ADRO®. Purpose: The purpose of this study is to compare pediatric speech perception performance across various preprocessing strategies while applying a specific programming protocol using increased threshold levels to ensure access to very low-level sounds. Research Design: This was a prospective, cross-sectional, observational study. Participants completed speech perception tasks in four preprocessing conditions: no preprocessing, ADRO®, ASC, and ASC + ADRO®. Study Sample: Eleven pediatric Cochlear Ltd. CI users were recruited: six bilateral, one unilateral, and four bimodal. Intervention: Four programs, with the participants’ everyday map, were loaded into the processor with a different preprocessing strategy applied in each of the four programs: no preprocessing, ADRO®, ASC, and ASC + ADRO®. Data Collection and Analysis: Participants repeated consonant–nucleus–consonant (CNC) words presented at 50 and 70 dB SPL in quiet and Hearing in Noise Test (HINT) sentences presented adaptively with competing R-Space™ noise at 60 and 70 dB SPL. Each measure was completed as participants listened with each of the four preprocessing strategies listed above. Test order and conditions were randomized. 
A repeated-measures analysis of variance was used to compare each preprocessing strategy for the group. Critical differences were used to determine significant score differences between each preprocessing strategy for individual participants. Results: For CNC words presented at 50 dB SPL, the group data revealed significantly better scores using ASC + ADRO® compared to all other preprocessing conditions, while ASC resulted in poorer scores compared to ADRO® and ASC + ADRO®. Group data for HINT sentences presented in 70 dB SPL of R-Space™ noise revealed significantly improved scores using ASC and ASC + ADRO® compared to no preprocessing, with ASC + ADRO® scores being better than ADRO®-alone scores. Group data for CNC words presented at 70 dB SPL and adaptive HINT sentences presented in 60 dB SPL of R-Space™ noise showed no significant difference among conditions. Individual data showed that the preprocessing strategy yielding the best scores varied across measures and participants. Conclusions: Group data reveal an advantage with ASC + ADRO® for speech perception presented at lower levels and in higher levels of background noise. Individual data revealed that the optimal preprocessing strategy varied among participants, indicating that a variety of preprocessing strategies should be explored for each CI user considering his or her performance in challenging listening environments.
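The per-participant "critical differences" above ask whether two word scores differ by more than test-retest variability. A simple way to frame this (clinical tables such as Thornton and Raffin's are built on a related binomial/arcsine model; this simplified two-proportion version is for illustration only) is:

```python
# Hedged sketch: under a binomial model, two percent-correct scores on
# n-word lists differ significantly only if their gap exceeds ~1.96
# standard errors of the difference.
import math

def critical_difference(p1, p2, n_words, z=1.96):
    """True if the score difference exceeds the 95% critical difference."""
    se = math.sqrt(p1 * (1 - p1) / n_words + p2 * (1 - p2) / n_words)
    return abs(p1 - p2) > z * se

# 50-word CNC lists: 60% with one preprocessing strategy vs. 80% with another
print(critical_difference(0.60, 0.80, 50))  # True: beyond measurement error
print(critical_difference(0.60, 0.66, 50))  # False: within measurement error
```

This illustrates why individual-level conclusions in the study require fairly large score gaps: with only 50 items per list, the binomial variability of a single score is substantial.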


2021 ◽  
Vol 25 ◽  
pp. 233121652110144
Author(s):  
Ilja Reinten ◽  
Inge De Ronde-Brons ◽  
Rolph Houben ◽  
Wouter Dreschler

Single microphone noise reduction (NR) in hearing aids can provide a subjective benefit even when there is no objective improvement in speech intelligibility. A possible explanation lies in a reduction of listening effort. Previously, we showed that response times (a proxy for listening effort) to an auditory-only dual-task were reduced by NR in normal-hearing (NH) listeners. In this study, we investigate if the results from NH listeners extend to the hearing-impaired (HI), the target group for hearing aids. In addition, we assess the relevance of the outcome measure for studying and understanding listening effort. Twelve HI subjects were asked to sum two digits of a digit triplet in noise. We measured response times to this task, as well as subjective listening effort and speech intelligibility. Stimuli were presented at three signal-to-noise ratios (SNR; –5, 0, +5 dB) and in quiet. Stimuli were processed with ideal or nonideal NR, or unprocessed. The effect of NR on response times in HI listeners was significant only in conditions where speech intelligibility was also affected (–5 dB SNR). This is in contrast to the previous results with NH listeners. There was a significant effect of SNR on response times for HI listeners. The response time measure was reasonably correlated (r(142) = 0.54) to subjective listening effort and showed a sufficient test–retest reliability. This study thus presents an objective, valid, and reliable measure for evaluating an aspect of listening effort of HI listeners.
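The fixed SNR conditions above (–5, 0, +5 dB) imply scaling the noise relative to the speech before mixing. As a hedged illustration (toy sample arrays, not calibrated stimuli), the noise gain needed to hit a target SNR follows directly from the RMS levels:

```python
# Illustrative only: rescale a noise signal so the speech-to-noise ratio
# in dB equals a target value, as in fixed-SNR listening conditions.
import math

def rms(samples):
    return (sum(x * x for x in samples) / len(samples)) ** 0.5

def scale_noise_to_snr(speech, noise, target_snr_db):
    """Rescale noise so that 20*log10(rms(speech)/rms(noise)) hits the target."""
    gain = rms(speech) / (rms(noise) * 10 ** (target_snr_db / 20))
    return [gain * x for x in noise]

speech = [0.5, -0.5, 0.5, -0.5]    # toy speech signal, RMS 0.5
noise = [0.1, -0.2, 0.15, -0.05]   # toy noise signal
for snr in (-5, 0, 5):
    scaled = scale_noise_to_snr(speech, noise, snr)
    achieved = 20 * math.log10(rms(speech) / rms(scaled))
    print(snr, round(achieved, 1))
```

In an actual experiment the same relationship is realized through calibrated presentation levels rather than per-stimulus digital scaling, but the dB arithmetic is identical.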


2020 ◽  
Vol 32 (6) ◽  
pp. 1092-1103 ◽  
Author(s):  
Dan Kennedy-Higgins ◽  
Joseph T. Devlin ◽  
Helen E. Nuttall ◽  
Patti Adank

Successful perception of speech in everyday listening conditions requires effective listening strategies to overcome common acoustic distortions, such as background noise. Convergent evidence from neuroimaging and clinical studies identify activation within the temporal lobes as key to successful speech perception. However, current neurobiological models disagree on whether the left temporal lobe is sufficient for successful speech perception or whether bilateral processing is required. We addressed this issue using TMS to selectively disrupt processing in either the left or right superior temporal gyrus (STG) of healthy participants to test whether the left temporal lobe is sufficient or whether both left and right STG are essential. Participants repeated keywords from sentences presented in background noise in a speech reception threshold task while receiving online repetitive TMS separately to the left STG, right STG, or vertex or while receiving no TMS. Results show an equal drop in performance following application of TMS to either left or right STG during the task. A separate group of participants performed a visual discrimination threshold task to control for the confounding side effects of TMS. Results show no effect of TMS on the control task, supporting the notion that the results of Experiment 1 can be attributed to modulation of cortical functioning in STG rather than to side effects associated with online TMS. These results indicate that successful speech perception in everyday listening conditions requires both left and right STG and thus have ramifications for our understanding of the neural organization of spoken language processing.

