Listening Effort Measured Using a Dual-task Paradigm in Adults With Different Amounts of Noise Exposure

2021 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Sofie Degeest ◽  
Katrien Kestens ◽  
Hannah Keppler

2017 ◽  
Vol 21 ◽  
pp. 233121651668728 ◽  
Author(s):  
Jean-Pierre Gagné ◽  
Jana Besser ◽  
Ulrike Lemke

2019 ◽  
Author(s):  
Violet Aurora Brown ◽  
Julia Feld Strand

It is widely accepted that seeing a talker improves a listener’s ability to understand what that talker is saying in background noise (e.g., Erber, 1969; Sumby & Pollack, 1954). The literature is mixed, however, regarding the influence of the visual modality on the listening effort required to recognize speech (e.g., Fraser, Gagné, Alepins, & Dubois, 2010; Sommers & Phelps, 2016). Here, we present data showing that even when the visual modality robustly benefits recognition, processing audiovisual speech can still result in greater cognitive load than processing speech in the auditory modality alone. Using a dual-task paradigm, we show that the costs associated with audiovisual speech processing are more pronounced in easy listening conditions, in which speech can be recognized at high rates in the auditory modality alone—indeed, effort did not differ between audiovisual and audio-only conditions when the background noise was presented at a more difficult level. Further, we show that though these effects replicate with different stimuli and participants, they do not emerge when effort is assessed with a recall paradigm rather than a dual-task paradigm. Together, these results suggest that the widely cited audiovisual recognition benefit may come at a cost under more favorable listening conditions, and add to the growing body of research suggesting that various measures of effort may not be tapping into the same underlying construct (Strand et al., 2018).
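The dual-task logic underlying this abstract can be sketched in code. In this minimal Python example (the function name and all RT values are illustrative, not data from the study), listening effort is indexed by how much secondary-task responses slow when the secondary task is performed concurrently with speech recognition:

```python
from statistics import mean

def dual_task_cost(single_rts_ms, dual_rts_ms):
    """Effort proxy: mean secondary-task RT slowing (ms) under dual-task load."""
    return mean(dual_rts_ms) - mean(single_rts_ms)

# Illustrative RTs (ms): secondary-task responses slow when paired with speech.
single_task = [310, 295, 330, 305]
dual_task_audiovisual = [420, 455, 430, 415]
cost_ms = dual_task_cost(single_task, dual_task_audiovisual)  # 120.0 ms in this toy example
```

Under this logic, a larger audiovisual cost in easy noise than in hard noise would reproduce the pattern the abstract reports.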


2020 ◽  
Vol 73 (9) ◽  
pp. 1431-1443 ◽  
Author(s):  
Violet A Brown ◽  
Drew J McLaughlin ◽  
Julia F Strand ◽  
Kristin J Van Engen

In noisy settings or when listening to an unfamiliar talker or accent, it can be difficult to understand spoken language. This difficulty typically results in reductions in speech intelligibility, but may also increase the effort necessary to process the speech even when intelligibility is unaffected. In this study, we used a dual-task paradigm and pupillometry to assess the cognitive costs associated with processing fully intelligible accented speech, predicting that rapid perceptual adaptation to an accent would result in decreased listening effort over time. The behavioural and physiological paradigms provided converging evidence that listeners expend greater effort when processing nonnative- relative to native-accented speech, and both experiments also revealed an overall reduction in listening effort over the course of the experiment. Only the pupillometry experiment, however, revealed greater adaptation to nonnative- relative to native-accented speech. An exploratory analysis of the dual-task data that attempted to minimise practice effects revealed weak evidence for greater adaptation to the nonnative accent. These results suggest that even when speech is fully intelligible, resolving deviations between the acoustic input and stored lexical representations incurs a processing cost, and adaptation may attenuate this cost.


2019 ◽  
Vol 28 (3S) ◽  
pp. 756-761 ◽  
Author(s):  
Fatima Tangkhpanya ◽  
Morgane Le Carrour ◽  
Félicia Doucet ◽  
Jean-Pierre Gagné

Speech processing is more effortful under difficult listening conditions. Using a dual-task paradigm, it has been shown that older adults deploy more listening effort than younger adults when performing a speech recognition task in noise. Purpose The primary purpose of this study was to investigate whether a dual-task paradigm could be used to investigate differences in listening effort for an audiovisual speech comprehension task. If so, it was predicted that older adults would expend more listening effort than younger adults. Method Three groups of participants took part in the investigation: (a) young normal-hearing adults, (b) young normal-hearing adults listening to the speech material low-pass filtered at 3 kHz, and (c) older adults with hearing sensitivity normal for their age or better. A dual-task paradigm was used to measure listening effort. The primary task consisted of comprehending a short documentary presented at 63 dBA in a background noise that consisted of a 4-talker speech babble presented at 69 dBA. The participants had to answer a set of 15 questions related to the content of the documentary. The secondary task was a tactile detection task in which stimuli were presented at random intervals over a 12-min period (approximately 8 stimuli/min). Each task was performed separately and concurrently. Results The younger participants who performed the listening task under the low-pass filtered condition displayed significantly more listening effort than the 2 other groups of participants. Conclusion First, the study confirmed that the dual-task paradigm used in this study was sufficiently sensitive to reveal significant differences in listening effort for a speech comprehension task across the 3 groups of participants. Contrary to our prediction, it was the group of young normal-hearing participants who listened to the documentaries under the low-pass filtered condition that displayed significantly more listening effort than the other 2 groups of listeners.
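The presentation levels in the method imply a fixed signal-to-noise ratio. A minimal Python check (the levels come from the abstract; the function name and probe-count estimate are mine) shows the documentary was presented at −6 dB SNR, with roughly 96 tactile probes expected over the 12-min task:

```python
def snr_db(speech_level_dba, noise_level_dba):
    # When both levels are expressed in dB, SNR is simply their difference.
    return speech_level_dba - noise_level_dba

snr = snr_db(63, 69)        # documentary at 63 dBA in 69 dBA babble -> -6 dB SNR
expected_probes = 12 * 8    # ~8 tactile stimuli/min over 12 min -> ~96 probes
```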


2019 ◽  
Author(s):  
Violet Aurora Brown ◽  
Drew Jordan McLaughlin ◽  
Julia Feld Strand ◽  
Kristin J. Van Engen

In noisy settings or when listening to an unfamiliar talker or accent, it may be difficult to recognize spoken language. This difficulty typically results in reductions in speech intelligibility, but may also increase the effort necessary to process the speech. In the current study, we used a dual-task paradigm and pupillometry to assess the cognitive costs associated with processing fully intelligible accented speech, predicting that rapid perceptual adaptation to an accent would result in decreased listening effort over time. Both paradigms revealed greater effort for nonnative- relative to native-accented speech, as well as an overall reduction in listening effort over the course of the experiment, but only the pupillometry experiment revealed greater adaptation to nonnative- relative to native-accented speech. An exploratory analysis of the dual-task data that attempted to minimize practice effects, however, revealed weak evidence for greater adaptation to the nonnative accent. These results suggest that even when speech is fully intelligible, resolving deviations between the acoustic input and stored lexical representations incurs a processing cost, and adaptation may attenuate this cost.


2020 ◽  
Vol 63 (6) ◽  
pp. 1979-1989 ◽  
Author(s):  
Ilze Oosthuizen ◽  
Erin M. Picou ◽  
Lidia Pottas ◽  
Hermanus Carel Myburgh ◽  
De Wet Swanepoel

Purpose It is not clear if behavioral indices of listening effort are sensitive to changes in signal-to-noise ratio (SNR) for young children (7–12 years old) from multilingual backgrounds. The purpose of this study was to explore the effects of SNR on listening effort in multilingual school-aged children (native English, nonnative English) as measured with a single- and a dual-task paradigm with low-linguistic speech stimuli (digits). The study also aimed to explore age effects on digit triplet recognition and response times (RTs). Method Sixty children with normal hearing participated, 30 per language group. Participants completed single and dual tasks in three listening conditions (quiet, −10 dB SNR, and −15 dB SNR). Speech stimuli for both tasks were digit triplets. Verbal RTs were the listening effort measure during the single-task paradigm. A visual monitoring task was the secondary task during the dual-task paradigm. Results Significant effects of SNR on RTs were evident during both single- and dual-task paradigms. As expected, language background did not affect the pattern of RTs. The data also demonstrated a maturation effect for triplet recognition during both tasks and for RTs during the dual-task paradigm only. Conclusions Both single- and dual-task paradigms were sensitive to changes in SNR for school-aged children between 7 and 12 years of age. Language background (English as native language vs. English as nonnative language) had no significant effect on triplet recognition or RTs, demonstrating the practical utility of low-linguistic stimuli for testing children from multilingual backgrounds.
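An analysis like the one above aggregates verbal RTs per condition before testing for SNR effects. A hypothetical Python helper sketches that step (all trial values are invented for illustration; slower mean RTs at poorer SNRs would indicate greater listening effort):

```python
from statistics import mean

def mean_rt_by_condition(trials):
    """trials: iterable of (condition, paradigm, rt_ms) tuples.
    Returns the mean RT for each (condition, paradigm) cell."""
    cells = {}
    for condition, paradigm, rt_ms in trials:
        cells.setdefault((condition, paradigm), []).append(rt_ms)
    return {cell: mean(rts) for cell, rts in cells.items()}

trials = [
    ("quiet", "single", 820), ("quiet", "single", 840),
    ("-10 dB", "single", 910), ("-10 dB", "single", 930),
    ("-15 dB", "single", 1010), ("-15 dB", "single", 990),
]
cell_means = mean_rt_by_condition(trials)
```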


2019 ◽  
Author(s):  
Violet Aurora Brown ◽  
Julia Feld Strand ◽  
Kristin J. Van Engen

Objectives. Perceiving spoken language in noise can be a cognitively demanding task, particularly for older adults and those with hearing impairment. The current research assessed whether an abstract visual stimulus—a circle that modulates with the acoustic amplitude envelope of the speech—can affect speech processing in older adults. We hypothesized that, in line with recent research on younger adults, the circle would reduce listening effort during a word identification task. Given that older adults have slower processing speeds and poorer auditory temporal sensitivity than young adults, we expected that the abstract visual stimulus may have additional benefits for older adults, as it provides another source of information to compensate for limitations in auditory processing. Thus, we further hypothesized that, in contrast to the results from research on young adults, the circle would also improve word identification in noise for older adults. Design. Sixty-five older adults ages 65 to 83 (M = 71.11; SD = 4.01) with age-appropriate hearing completed four blocks of trials: two blocks (one with the modulating circle, one without) with a word identification task in two-talker babble, followed by two more word identification blocks that also included a simultaneous dual-task paradigm to assess listening effort. Results. Relative to an audio-only condition, the presence of the modulating circle substantially reduced listening effort (as indicated by faster responses to the secondary task in the dual-task paradigm) and also moderately improved spoken word intelligibility. Conclusions. Seeing the face of the talker substantially improves spoken word identification, but this is the first demonstration that another form of visual input—an abstract modulating circle—can also provide modest intelligibility benefits and substantial reductions in listening effort.
These findings could have clinical or practical applications, as the modulating circle can be generated in real time to accompany speech in noisy situations, thereby improving speech intelligibility and reducing effort or fatigue for individuals who may have particular difficulty recognizing speech in background noise.
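Generating such a stimulus requires extracting the speech amplitude envelope and mapping it onto the circle's size. A rough Python/NumPy sketch under my own assumptions (20-ms RMS windows, a linear radius mapping, and a synthetic tone in place of speech; the study's actual parameters are not given in the abstract):

```python
import numpy as np

def amplitude_envelope(signal, fs, win_ms=20):
    """Short-time RMS amplitude envelope: one value per non-overlapping window."""
    win = max(1, int(fs * win_ms / 1000))
    n_windows = len(signal) // win
    frames = signal[:n_windows * win].reshape(n_windows, win)
    return np.sqrt((frames ** 2).mean(axis=1))

def circle_radii(envelope, r_min=20.0, r_max=100.0):
    """Map a peak-normalized envelope linearly onto a circle radius (pixels)."""
    peak = envelope.max()
    env = envelope / peak if peak > 0 else envelope
    return r_min + env * (r_max - r_min)

# Demo: a 1-s amplitude-modulated tone stands in for a speech signal.
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 200 * t) * np.sin(2 * np.pi * 2 * t) ** 2
radii = circle_radii(amplitude_envelope(tone, fs))  # one radius per 20-ms frame
```

For real-time use, the same RMS computation could be run per audio buffer as it arrives, redrawing the circle once per frame.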

