Rapid adaptation to fully intelligible nonnative-accented speech reduces listening effort

2020 ◽  
Vol 73 (9) ◽  
pp. 1431-1443 ◽  
Author(s):  
Violet A Brown ◽  
Drew J McLaughlin ◽  
Julia F Strand ◽  
Kristin J Van Engen

In noisy settings or when listening to an unfamiliar talker or accent, it can be difficult to understand spoken language. This difficulty typically results in reductions in speech intelligibility, but may also increase the effort necessary to process the speech even when intelligibility is unaffected. In this study, we used a dual-task paradigm and pupillometry to assess the cognitive costs associated with processing fully intelligible accented speech, predicting that rapid perceptual adaptation to an accent would result in decreased listening effort over time. The behavioural and physiological paradigms provided converging evidence that listeners expend greater effort when processing nonnative- relative to native-accented speech, and both experiments also revealed an overall reduction in listening effort over the course of the experiment. Only the pupillometry experiment, however, revealed greater adaptation to nonnative- relative to native-accented speech. An exploratory analysis of the dual-task data that attempted to minimise practice effects revealed weak evidence for greater adaptation to the nonnative accent. These results suggest that even when speech is fully intelligible, resolving deviations between the acoustic input and stored lexical representations incurs a processing cost, and adaptation may attenuate this cost.
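To make the pupillometric measure concrete, here is a minimal sketch of how trial-level listening effort is commonly derived from pupil traces. This is not the authors' analysis code: the traces, the baseline-window length, and the units are all hypothetical.

```python
# Sketch of a standard pupillometric effort index: baseline-correct each
# trial's pupil trace against a pre-stimulus window, then take the mean
# dilation after stimulus onset. Larger dilation indexes greater effort.

def baseline_correct(trace, n_baseline):
    """Subtract the mean pupil size in the pre-stimulus window."""
    baseline = sum(trace[:n_baseline]) / n_baseline
    return [sample - baseline for sample in trace]

def mean_dilation(trace, n_baseline):
    """Mean baseline-corrected dilation over the post-onset samples."""
    corrected = baseline_correct(trace, n_baseline)
    post = corrected[n_baseline:]
    return sum(post) / len(post)

# Hypothetical single-trial traces (arbitrary units): 3 baseline samples,
# then 4 samples during the sentence. The nonnative-accent trial dilates more.
native_trial = [4.0, 4.0, 4.0, 4.1, 4.2, 4.2, 4.1]
nonnative_trial = [4.0, 4.0, 4.0, 4.3, 4.5, 4.4, 4.3]

print(mean_dilation(native_trial, 3))     # smaller dilation: less effort
print(mean_dilation(nonnative_trial, 3))  # larger dilation: more effort
```

Adaptation would show up as the accent-related dilation difference shrinking across trials; in practice, trial-level measures like these typically feed into mixed-effects or growth-curve models rather than simple means.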

2019 ◽  
Author(s):  
Violet Aurora Brown ◽  
Drew Jordan McLaughlin ◽  
Julia Feld Strand ◽  
Kristin J. Van Engen

In noisy settings or when listening to an unfamiliar talker or accent, it may be difficult to recognize spoken language. This difficulty typically results in reductions in speech intelligibility, but may also increase the effort necessary to process the speech. In the current study, we used a dual-task paradigm and pupillometry to assess the cognitive costs associated with processing fully intelligible accented speech, predicting that rapid perceptual adaptation to an accent would result in decreased listening effort over time. Both paradigms revealed greater effort for nonnative- relative to native-accented speech, as well as an overall reduction in listening effort over the course of the experiment, but only the pupillometry experiment revealed greater adaptation to nonnative- relative to native-accented speech. An exploratory analysis of the dual-task data that attempted to minimize practice effects, however, revealed weak evidence for greater adaptation to the nonnative accent. These results suggest that even when speech is fully intelligible, resolving deviations between the acoustic input and stored lexical representations incurs a processing cost, and adaptation may attenuate this cost.


2019 ◽  
Author(s):  
Violet Aurora Brown ◽  
Julia Feld Strand ◽  
Kristin J. Van Engen

Objectives. Perceiving spoken language in noise can be a cognitively demanding task, particularly for older adults and those with hearing impairment. The current research assessed whether an abstract visual stimulus—a circle that modulates with the acoustic amplitude envelope of the speech—can affect speech processing in older adults. We hypothesized that, in line with recent research on younger adults, the circle would reduce listening effort during a word identification task. Given that older adults have slower processing speeds and poorer auditory temporal sensitivity than young adults, we expected that the abstract visual stimulus may have additional benefits for older adults, as it provides another source of information to compensate for limitations in auditory processing. Thus, we further hypothesized that, in contrast to the results from research on young adults, the circle would also improve word identification in noise for older adults. Design. Sixty-five older adults ages 65 to 83 (M = 71.11; SD = 4.01) with age-appropriate hearing completed four blocks of trials: two blocks (one with the modulating circle, one without) with a word identification task in two-talker babble, followed by two more word identification blocks that also included a simultaneous dual-task paradigm to assess listening effort. Results. Relative to an audio-only condition, the presence of the modulating circle substantially reduced listening effort (as indicated by faster responses to the secondary task in the dual-task paradigm) and also moderately improved spoken word intelligibility. Conclusions. Seeing the face of the talker substantially improves spoken word identification, but this is the first demonstration that another form of visual input—an abstract modulating circle—can also provide modest intelligibility benefits and substantial reductions in listening effort.
These findings could have clinical or practical applications, as the modulating circle can be generated in real time to accompany speech in noisy situations, thereby improving speech intelligibility and reducing effort or fatigue for individuals who may have particular difficulty recognizing speech in background noise.
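A stimulus like the modulating circle can be sketched in a few lines. This is a minimal illustration, not the authors' stimulus code: the envelope estimate (rectify and smooth with a moving average), the window length, and the radius mapping are all assumptions.

```python
# Sketch of driving a circle's radius from a speech amplitude envelope:
# rectify the waveform, smooth it with a short moving average, and map
# the smoothed envelope to a radius. Real-time versions would do this
# frame by frame on incoming audio.

def amplitude_envelope(samples, window):
    """Rectify the waveform and smooth with a trailing moving average."""
    rectified = [abs(s) for s in samples]
    env = []
    for i in range(len(rectified)):
        lo = max(0, i - window + 1)
        env.append(sum(rectified[lo:i + 1]) / (i + 1 - lo))
    return env

def circle_radius(envelope_value, base=20.0, gain=100.0):
    """Map an envelope value to a circle radius (hypothetical pixels)."""
    return base + gain * envelope_value

# A toy "waveform": silence, a loud burst, then decay.
waveform = [0.0, 0.0, 0.5, -0.8, 0.6, -0.3, 0.1, 0.0]
radii = [circle_radius(e) for e in amplitude_envelope(waveform, window=3)]
print(radii)  # the circle swells during the burst and shrinks afterwards
```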


2013 ◽  
Vol 56 (4) ◽  
pp. 1075-1084 ◽  
Author(s):  
Carina Pals ◽  
Anastasios Sarampalis ◽  
Deniz Başkent

Purpose Fitting a cochlear implant (CI) for optimal speech perception does not necessarily optimize listening effort. This study aimed to show that listening effort may change between CI processing conditions for which speech intelligibility remains constant. Method Nineteen normal-hearing participants listened to CI simulations with varying numbers of spectral channels. A dual-task paradigm combining an intelligibility task with either a linguistic or nonlinguistic visual response-time (RT) task measured intelligibility and listening effort. The simultaneously performed tasks compete for limited cognitive resources; changes in effort associated with the intelligibility task are reflected in changes in RT on the visual task. A separate self-report scale provided a subjective measure of listening effort. Results All measures showed significant improvements with increasing spectral resolution up to 6 channels. However, only the RT measure of listening effort continued improving up to 8 channels. The effects were stronger for RTs recorded during listening than for RTs recorded between listening. Conclusion The results suggest that listening effort decreases with increased spectral resolution. Moreover, these improvements are best reflected in objective measures of listening effort, such as RTs on a secondary task, rather than intelligibility scores or subjective effort measures.
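The dual-task logic here can be made concrete with a small sketch. The response times below are hypothetical, not the study's data: effort is indexed by how much secondary-task RTs slow under dual-task conditions relative to a single-task baseline.

```python
# Sketch of the dual-task effort measure: the RT cost is the mean
# secondary-task RT while also doing the listening task, minus the mean
# RT when the secondary task is performed alone. A larger cost implies
# more cognitive resources were diverted to listening.

def rt_cost(dual_task_rts, single_task_rts):
    """Mean RT under dual-task minus mean RT under single-task (ms)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(dual_task_rts) - mean(single_task_rts)

single = [420, 430, 410, 440]    # hypothetical: visual RT task alone
dual_4ch = [560, 580, 570, 590]  # hypothetical: with a 4-channel simulation
dual_8ch = [500, 510, 505, 495]  # hypothetical: with an 8-channel simulation

print(rt_cost(dual_4ch, single))  # larger cost: more listening effort
print(rt_cost(dual_8ch, single))  # smaller cost at higher spectral resolution
```

The pattern sketched here mirrors the reported finding: intelligibility can plateau while the RT-based effort cost keeps shrinking as spectral resolution increases.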


2021 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Sofie Degeest ◽  
Katrien Kestens ◽  
Hannah Keppler

2017 ◽  
Vol 21 ◽  
pp. 233121651668728 ◽  
Author(s):  
Jean-Pierre Gagné ◽  
Jana Besser ◽  
Ulrike Lemke

2019 ◽  
Author(s):  
Violet Aurora Brown ◽  
Julia Feld Strand

It is widely accepted that seeing a talker improves a listener’s ability to understand what that talker is saying in background noise (e.g., Erber, 1969; Sumby & Pollack, 1954). The literature is mixed, however, regarding the influence of the visual modality on the listening effort required to recognize speech (e.g., Fraser, Gagné, Alepins, & Dubois, 2010; Sommers & Phelps, 2016). Here, we present data showing that even when the visual modality robustly benefits recognition, processing audiovisual speech can still result in greater cognitive load than processing speech in the auditory modality alone. Using a dual-task paradigm, we show that the costs associated with audiovisual speech processing are more pronounced in easy listening conditions, in which speech can be recognized at high rates in the auditory modality alone—indeed, effort did not differ between audiovisual and audio-only conditions when the background noise was presented at a more difficult level. Further, we show that though these effects replicate with different stimuli and participants, they do not emerge when effort is assessed with a recall paradigm rather than a dual-task paradigm. Together, these results suggest that the widely cited audiovisual recognition benefit may come at a cost under more favorable listening conditions, and add to the growing body of research suggesting that various measures of effort may not be tapping into the same underlying construct (Strand et al., 2018).


2019 ◽  
Vol 62 (4) ◽  
pp. 1068-1081 ◽  
Author(s):  
Z. Ellen Peng ◽  
Lily M. Wang

Purpose Understanding speech in complex realistic acoustic environments requires effort. In everyday listening situations, speech quality is often degraded due to adverse acoustics, such as excessive background noise level (BNL) and reverberation time (RT), or talker characteristics such as foreign accent (Mattys, Davis, Bradlow, & Scott, 2012). In addition to factors affecting the quality of the input acoustic signals, listeners' individual characteristics such as language abilities can also make it more difficult and effortful to understand speech. Based on the Framework for Understanding Effortful Listening (Pichora-Fuller et al., 2016), factors such as adverse acoustics, talker accent, and listener language abilities can all contribute to increasing listening effort. In this study, using both a dual-task paradigm and a self-report questionnaire, we seek to understand listening effort in a wide range of realistic classroom acoustic conditions as well as varying talker accent and listener English proficiency. Method One hundred fifteen native and nonnative adult listeners with normal hearing were tested in a dual task of speech comprehension and adaptive pursuit rotor (APR) under 15 acoustic conditions from combinations of BNLs and RTs. Listeners provided responses on the NASA Task Load Index (TLX) questionnaire immediately after completing the dual task under each acoustic condition. The NASA TLX surveyed 6 dimensions of perceived listening effort: mental demand, physical demand, temporal demand, effort, frustration, and perceived performance. Fifty-six listeners were tested with speech produced by native American English talkers; the other 59 listeners, with speech from native Mandarin Chinese talkers. Based on their 1st language learned during childhood, 3 groups of listeners were recruited: listeners who were native English speakers, native Mandarin Chinese speakers, and native speakers of other languages (e.g., Hindi, Korean, and Portuguese).
Results Listening effort was measured objectively through the APR task performance and subjectively using the NASA TLX questionnaire. Performance on the APR task did not vary with changing acoustic conditions, but it did suggest increased listening effort for native listeners of other languages compared to the 2 other listener groups. From the NASA TLX, listeners reported feeling more frustrated and less successful in understanding Chinese-accented speech. Nonnative listeners reported more listening effort (i.e., physical demand, temporal demand, and effort) than native listeners in speech comprehension under adverse acoustics. When listeners' English proficiency was controlled, higher BNL was strongly related to a decrease in perceived performance, whereas such a relationship with RT was much weaker. Nonnative listeners who shared the foreign talkers' accent reported no change in listening effort, whereas other listeners reported more difficulty in understanding the accented speech. Conclusions Adverse acoustics required more effortful listening, as measured subjectively with the self-report NASA TLX. This subjective scale was more sensitive than the dual task, whose speech comprehension component went beyond simple sentence recall; it was better at capturing the negative impacts on listening effort from acoustic factors (i.e., both BNL and RT), talker accent, and listener language abilities.
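For concreteness, the unweighted ("raw") NASA TLX score commonly used in analyses of this kind can be sketched as follows. The ratings are hypothetical; this assumes the standard 0–100 subscales, with the performance scale anchored from good (0) to poor (100) so that no reversal is needed before averaging.

```python
# Sketch of a raw (unweighted) NASA TLX score: the mean of the six
# subscale ratings. Higher values indicate greater perceived workload.

TLX_DIMENSIONS = ["mental", "physical", "temporal",
                  "performance", "effort", "frustration"]

def raw_tlx(ratings):
    """Unweighted NASA TLX: mean of the six 0-100 subscale ratings.
    Assumes performance is scored on its standard good(0)-poor(100)
    anchors, so no reversal is applied here."""
    return sum(ratings[d] for d in TLX_DIMENSIONS) / len(TLX_DIMENSIONS)

# Hypothetical responses after one acoustic condition.
condition_ratings = {"mental": 70, "physical": 20, "temporal": 50,
                     "performance": 60, "effort": 65, "frustration": 45}

print(raw_tlx(condition_ratings))
```

The full TLX procedure also offers a weighted variant, in which pairwise comparisons among the six dimensions determine per-dimension weights; studies often report the raw average instead, as sketched here.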


2019 ◽  
Vol 28 (3S) ◽  
pp. 756-761 ◽  
Author(s):  
Fatima Tangkhpanya ◽  
Morgane Le Carrour ◽  
Félicia Doucet ◽  
Jean-Pierre Gagné

Speech processing is more effortful under difficult listening conditions. Using a dual-task paradigm, it has been shown that older adults deploy more listening effort than younger adults when performing a speech recognition task in noise. Purpose The primary purpose of this study was to investigate whether a dual-task paradigm could be used to investigate differences in listening effort for an audiovisual speech comprehension task. If so, it was predicted that older adults would expend more listening effort than younger adults. Method Three groups of participants took part in the investigation: (a) young normal-hearing adults, (b) young normal-hearing adults listening to the speech material low-pass filtered at 3 kHz, and (c) older adults with normal hearing sensitivity for their age or better. A dual-task paradigm was used to measure listening effort. The primary task consisted of comprehending a short documentary presented at 63 dBA in a background noise that consisted of a 4-talker speech babble presented at 69 dBA. The participants had to answer a set of 15 questions related to the content of the documentary. The secondary task was a tactile detection task presented at a random time interval, over a 12-min period (approximately 8 stimuli/min). Each task was performed separately and concurrently. Results The younger participants who performed the listening task under the low-pass filtered condition displayed significantly more listening effort than the 2 other groups of participants. Conclusion First, the study confirmed that the dual-task paradigm used in this study was sufficiently sensitive to reveal significant differences in listening effort for a speech comprehension task across 3 groups of participants. Contrary to our prediction, it was the group of young normal-hearing participants who listened to the documentaries under the low-pass filtered condition that displayed significantly more listening effort than the other 2 groups of listeners.

