Evaluating the Effort Expended to Understand Speech in Noise Using a Dual-Task Paradigm: The Effects of Providing Visual Speech Cues

2010 ◽  
Vol 53 (1) ◽  
pp. 18-33 ◽  
Author(s):  
Sarah Fraser ◽  
Jean-Pierre Gagné ◽  
Marjolaine Alepins ◽  
Pascale Dubois
2019 ◽  
Vol 62 (10) ◽  
pp. 3860-3875 ◽  
Author(s):  
Kaylah Lalonde ◽  
Lynne A. Werner

Purpose This study assessed the extent to which 6- to 8.5-month-old infants and 18- to 30-year-old adults detect and discriminate auditory syllables in noise better in the presence of visual speech than in auditory-only conditions. In addition, we examined whether visual cues to the onset and offset of the auditory signal account for this benefit. Method Sixty infants and 24 adults were randomly assigned to speech detection or discrimination tasks and were tested using a modified observer-based psychoacoustic procedure. Each participant completed 1–3 conditions: auditory-only, with visual speech, and with a visual signal that only cued the onset and offset of the auditory syllable. Results Mixed linear modeling indicated that infants and adults benefited from visual speech on both tasks. Adults relied on the onset–offset cue for detection, but the same cue did not improve their discrimination. The onset–offset cue benefited infants for both detection and discrimination. Whereas the onset–offset cue improved detection similarly for infants and adults, the full visual speech signal benefited infants to a lesser extent than adults on the discrimination task. Conclusions These results suggest that infants' use of visual onset–offset cues is mature, but their ability to use more complex visual speech cues is still developing. Additional research is needed to explore differences in audiovisual enhancement (a) of speech discrimination across speech targets and (b) with increasingly complex tasks and stimuli.
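
The mixed linear modeling analysis described above can be illustrated with a short sketch. This is not the authors' analysis code: the column names (threshold_snr, condition, age_group, subject) and all data values below are hypothetical and simulated, chosen only to show how fixed effects for condition and age group, plus a random intercept per participant, might be specified.

```python
# Hypothetical sketch of a mixed linear model in the spirit of the study:
# fixed effects for condition (auditory-only, onset-offset cue, full visual
# speech) and age group, with a random intercept per participant.
# All data here are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
conditions = ["auditory_only", "onset_offset", "visual_speech"]
rows = []
for group, n in [("infant", 60), ("adult", 24)]:
    for s in range(n):
        subject = f"{group}_{s}"
        base = rng.normal(0, 2)  # per-participant random intercept
        for c in conditions:
            # Invented benefits: visual conditions lower the SNR needed
            # to detect or discriminate the syllable.
            benefit = {"auditory_only": 0, "onset_offset": -2, "visual_speech": -4}[c]
            rows.append({"subject": subject, "age_group": group, "condition": c,
                         "threshold_snr": 5 + base + benefit + rng.normal(0, 1)})
df = pd.DataFrame(rows)

# The condition x age-group interaction tests whether audiovisual benefit
# differs between infants and adults, as the abstract reports.
model = smf.mixedlm("threshold_snr ~ condition * age_group", df, groups=df["subject"])
print(model.fit().summary())
```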


2019 ◽  
Author(s):  
Stefan Huijser ◽  
Niels Anne Taatgen ◽  
Marieke K. van Vugt

Preparing for the future during ongoing activities is an essential skill. Yet it is currently unclear to what extent we can prepare for the future in parallel with another task. In two experiments, we investigated how characteristics of a present task influenced whether and when participants prepared for the future, as well as how useful that preparation was. We focused on the influence of concurrent working memory load, assuming that working memory would interfere most strongly with preparation. In both experiments, participants performed a novel sequential dual-task paradigm in which they could voluntarily prepare for a second task while performing a first task. We identified task preparation by means of eye tracking, detecting when participants switched their gaze from the first to the second task. The results showed that participants prepared productively, as evidenced by faster RTs on the second task, with only a small cost to the present task. The probability of preparation and its productiveness decreased as the overall difficulty of the present task increased. Contrary to our prediction, we found some, but not consistent, support for an influence of concurrent working memory load on preparation: only under a high concurrent working memory load (i.e., two items in memory) did we observe strong interference with preparation. We conclude that preparation is affected by present task difficulty, potentially because of decreased opportunities for preparation and changes in multitasking strategy. Furthermore, the interference from holding two items may reflect that concurrent preparation is compromised when both processes require working memory integration.
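
The gaze-based measure of preparation (the moment gaze first moves from the first task to the second) can be sketched as a simple area-of-interest classification. The screen layout, sample rate, and gaze samples below are invented for illustration, not the paradigm's actual configuration.

```python
# Hypothetical sketch of detecting task preparation from gaze samples:
# classify each sample into an area of interest (AOI) and report the time
# of the first gaze switch from the first task's AOI to the second's.

# AOIs as (x_min, y_min, x_max, y_max) in pixels; assumed side-by-side layout.
AOIS = {"task1": (0, 0, 640, 1024), "task2": (640, 0, 1280, 1024)}
SAMPLE_RATE_HZ = 250  # assumed eye-tracker sample rate

def classify(x, y):
    """Return the name of the AOI containing the gaze point, or None."""
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

def first_preparation_time(samples):
    """Return the time (s) of the first task1 -> task2 gaze switch,
    or None if the participant never looked ahead to the second task."""
    seen_task1 = False
    for i, (x, y) in enumerate(samples):
        aoi = classify(x, y)
        if aoi == "task1":
            seen_task1 = True
        elif aoi == "task2" and seen_task1:
            return i / SAMPLE_RATE_HZ
    return None

# Toy trial: gaze dwells on task 1 for 500 samples, then jumps to task 2.
trial = [(300, 500)] * 500 + [(900, 500)] * 100
print(first_preparation_time(trial))  # 2.0 seconds into the trial
```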


1997 ◽  
Vol 40 (2) ◽  
pp. 432-443 ◽  
Author(s):  
Karen S. Helfer

Research has shown that speaking in a deliberately clear manner can improve the accuracy of auditory speech recognition. Allowing listeners access to visual speech cues also enhances speech understanding. Whether the nature of information provided by speaking clearly and by using visual speech cues is redundant has not been determined. This study examined how speaking mode (clear vs. conversational) and presentation mode (auditory vs. auditory-visual) influenced the perception of words within nonsense sentences. In Experiment 1, 30 young listeners with normal hearing responded to videotaped stimuli presented audiovisually in the presence of background noise at one of three signal-to-noise ratios. In Experiment 2, 9 participants returned for an additional assessment using auditory-only presentation. Results of these experiments showed significant effects of speaking mode (clear speech was easier to understand than was conversational speech) and presentation mode (auditory-visual presentation led to better performance than did auditory-only presentation). The benefit of clear speech was greater for words occurring in the middle of sentences than for words at either the beginning or end of sentences for both auditory-only and auditory-visual presentation, whereas the greatest benefit from supplying visual cues was for words at the end of sentences spoken both clearly and conversationally. The total benefit from speaking clearly and supplying visual cues was equal to the sum of these two effects. Overall, the results suggest that speaking clearly and providing visual speech information provide complementary (rather than redundant) information.
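
The additivity finding in the last sentences (total benefit equals clear-speech benefit plus visual-cue benefit) can be made concrete with a small worked example. The percentage scores below are invented for illustration, not the study's data.

```python
# Hypothetical illustration of the additivity finding: the combined benefit
# of clear speech and visual cues equals the sum of the two individual
# benefits. All scores (% words correct) are invented.
scores = {
    ("conversational", "auditory"):     40.0,  # baseline
    ("clear",          "auditory"):     55.0,  # clear speech only
    ("conversational", "audiovisual"):  60.0,  # visual cues only
    ("clear",          "audiovisual"):  75.0,  # both cues combined
}

baseline = scores[("conversational", "auditory")]
clear_benefit = scores[("clear", "auditory")] - baseline               # 15 points
visual_benefit = scores[("conversational", "audiovisual")] - baseline  # 20 points
combined_benefit = scores[("clear", "audiovisual")] - baseline         # 35 points

# Additivity: the combined benefit matches the sum of the individual
# benefits, consistent with the two cues being complementary.
print(clear_benefit + visual_benefit == combined_benefit)  # True
```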


2021 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Sofie Degeest ◽  
Katrien Kestens ◽  
Hannah Keppler

Author(s):  
Sangheeta Roy ◽  
Oishee Mazumder ◽  
Kingshuk Chakravarty ◽  
Debatri Chatterjee ◽  
Aniruddha Sinha

2018 ◽  
Vol 37 (7) ◽  
pp. 772-778 ◽  
Author(s):  
Ting-Ting Yeh ◽  
Hsiao-Yun Chang ◽  
Yan-Ying Ju ◽  
Hui-Ya Chen
