Listening Effort in Native and Nonnative English-Speaking Children Using Low Linguistic Single- and Dual-Task Paradigms

2020 ◽  
Vol 63 (6) ◽  
pp. 1979-1989
Author(s):  
Ilze Oosthuizen ◽  
Erin M. Picou ◽  
Lidia Pottas ◽  
Hermanus Carel Myburgh ◽  
De Wet Swanepoel

Purpose It is not clear whether behavioral indices of listening effort are sensitive to changes in signal-to-noise ratio (SNR) for young children (7–12 years old) from multilingual backgrounds. The purpose of this study was to explore the effects of SNR on listening effort in multilingual school-aged children (native English, nonnative English) as measured with a single- and a dual-task paradigm with low-linguistic speech stimuli (digits). The study also aimed to explore age effects on digit triplet recognition and response times (RTs). Method Sixty children with normal hearing participated, 30 per language group. Participants completed single- and dual-task paradigms at three SNRs (quiet, −10 dB, and −15 dB). Speech stimuli for both tasks were digit triplets. Verbal RTs were the listening effort measure during the single-task paradigm. A visual monitoring task was the secondary task during the dual-task paradigm. Results Significant effects of SNR on RTs were evident during both single- and dual-task paradigms. As expected, language background did not affect the pattern of RTs. The data also demonstrated a maturation effect for triplet recognition during both tasks and for RTs during the dual task only. Conclusions Both single- and dual-task paradigms were sensitive to changes in SNR for school-aged children between 7 and 12 years of age. Language background (English as native language vs. English as nonnative language) had no significant effect on triplet recognition or RTs, demonstrating the practical utility of low-linguistic stimuli for testing children from multilingual backgrounds.

2021 ◽  
Vol 25 ◽  
pp. 233121652098470
Author(s):  
Ilze Oosthuizen ◽  
Erin M. Picou ◽  
Lidia Pottas ◽  
Hermanus C. Myburgh ◽  
De Wet Swanepoel

Technology options that improve the signal-to-noise ratio for children with limited hearing unilaterally are expected to improve speech recognition and reduce listening effort in challenging listening situations, although previous studies have not confirmed this. Employing behavioral and subjective indices of listening effort, this study aimed to evaluate the effects of two intervention options, a remote microphone system (RMS) and a contralateral routing of signal (CROS) system, in school-aged children with limited hearing unilaterally. Nineteen children (aged 7–12 years) with limited hearing unilaterally completed a digit triplet recognition task in three loudspeaker conditions (midline, monaural direct, and monaural indirect) with three intervention options (unaided, RMS, and CROS system). Verbal response times were interpreted as a behavioral measure of listening effort. Participants provided subjective ratings immediately following the behavioral measures. The RMS significantly improved digit triplet recognition across loudspeaker conditions and reduced verbal response times in the midline and indirect conditions. The CROS system improved speech recognition and listening effort only in the indirect condition. Analyses of subjective ratings revealed that significantly more participants indicated that the remote microphone made it easier for them to listen and to stay motivated. Behavioral and subjective indices of listening effort indicated that an RMS provided the most consistent benefit for speech recognition and listening effort for children with limited unilateral hearing. RMSs could therefore be a beneficial technology option in classrooms for children with limited hearing unilaterally.


2021 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Sofie Degeest ◽  
Katrien Kestens ◽  
Hannah Keppler

2017 ◽  
Vol 21 ◽  
pp. 233121651668728 ◽  
Author(s):  
Jean-Pierre Gagné ◽  
Jana Besser ◽  
Ulrike Lemke

2019 ◽  
Author(s):  
Violet Aurora Brown ◽  
Julia Feld Strand

It is widely accepted that seeing a talker improves a listener’s ability to understand what a talker is saying in background noise (e.g., Erber, 1969; Sumby & Pollack, 1954). The literature is mixed, however, regarding the influence of the visual modality on the listening effort required to recognize speech (e.g., Fraser, Gagné, Alepins, & Dubois, 2010; Sommers & Phelps, 2016). Here, we present data showing that even when the visual modality robustly benefits recognition, processing audiovisual speech can still result in greater cognitive load than processing speech in the auditory modality alone. We show using a dual-task paradigm that the costs associated with audiovisual speech processing are more pronounced in easy listening conditions, in which speech can be recognized at high rates in the auditory modality alone—indeed, effort did not differ between audiovisual and audio-only conditions when the background noise was presented at a more difficult level. Further, we show that though these effects replicate with different stimuli and participants, they do not emerge when effort is assessed with a recall paradigm rather than a dual-task paradigm. Together, these results suggest that the widely cited audiovisual recognition benefit may come at a cost under more favorable listening conditions, and add to the growing body of research suggesting that various measures of effort may not be tapping into the same underlying construct (Strand et al., 2018).


2020 ◽  
Vol 73 (9) ◽  
pp. 1431-1443 ◽  
Author(s):  
Violet A Brown ◽  
Drew J McLaughlin ◽  
Julia F Strand ◽  
Kristin J Van Engen

In noisy settings or when listening to an unfamiliar talker or accent, it can be difficult to understand spoken language. This difficulty typically results in reductions in speech intelligibility, but may also increase the effort necessary to process the speech even when intelligibility is unaffected. In this study, we used a dual-task paradigm and pupillometry to assess the cognitive costs associated with processing fully intelligible accented speech, predicting that rapid perceptual adaptation to an accent would result in decreased listening effort over time. The behavioural and physiological paradigms provided converging evidence that listeners expend greater effort when processing nonnative- relative to native-accented speech, and both experiments also revealed an overall reduction in listening effort over the course of the experiment. Only the pupillometry experiment, however, revealed greater adaptation to nonnative- relative to native-accented speech. An exploratory analysis of the dual-task data that attempted to minimise practice effects revealed weak evidence for greater adaptation to the nonnative accent. These results suggest that even when speech is fully intelligible, resolving deviations between the acoustic input and stored lexical representations incurs a processing cost, and adaptation may attenuate this cost.


2019 ◽  
Vol 28 (3S) ◽  
pp. 756-761 ◽  
Author(s):  
Fatima Tangkhpanya ◽  
Morgane Le Carrour ◽  
Félicia Doucet ◽  
Jean-Pierre Gagné

Speech processing is more effortful under difficult listening conditions. Using a dual-task paradigm, it has been shown that older adults deploy more listening effort than younger adults when performing a speech recognition task in noise. Purpose The primary purpose of this study was to investigate whether a dual-task paradigm could be used to investigate differences in listening effort for an audiovisual speech comprehension task. If so, it was predicted that older adults would expend more listening effort than younger adults. Method Three groups of participants took part in the investigation: (a) young normal-hearing adults, (b) young normal-hearing adults listening to the speech material low-pass filtered above 3 kHz, and (c) older adults with normal hearing sensitivity for their age or better. A dual-task paradigm was used to measure listening effort. The primary task consisted of comprehending a short documentary presented at 63 dBA in a background noise that consisted of a 4-talker speech babble presented at 69 dBA. The participants had to answer a set of 15 questions related to the content of the documentary. The secondary task was a tactile detection task presented at random time intervals over a 12-min period (approximately 8 stimuli/min). Each task was performed separately and concurrently. Results The younger participants who performed the listening task under the low-pass filtered condition displayed significantly more listening effort than the 2 other groups of participants. Conclusion First, the study confirmed that the dual-task paradigm used in this study was sufficiently sensitive to reveal significant differences in listening effort for a speech comprehension task across 3 groups of participants. Contrary to our prediction, it was the group of young normal-hearing participants who listened to the documentaries under the low-pass filtered condition that displayed significantly more listening effort than the other 2 groups of listeners.


