Audio-visual speech in noise perception in dyslexia

2016, Vol. 21(1), e12504
Author(s): Thijs van Laarhoven, Mirjam Keetels, Lemmy Schakel, Jean Vroomen

2016, Vol. 59(1), pp. 1-14
Author(s): Victoria C. P. Knowland, Sam Evans, Caroline Snell, Stuart Rosen

Purpose The purpose of the study was to assess the ability of children with developmental language learning impairments (LLIs) to use visual speech cues from the talking face. Method In this cross-sectional study, 41 typically developing children (mean age: 8 years 0 months, range: 4 years 5 months to 11 years 10 months) and 27 children with diagnosed LLI (mean age: 8 years 10 months, range: 5 years 2 months to 11 years 6 months) completed a silent speechreading task and a speech-in-noise task with and without visual support from the talking face. The speech-in-noise task involved the identification of a target word in a carrier sentence with a single competing speaker as a masker. Results Children in the LLI group showed a deficit in speechreading when compared with their typically developing peers. Beyond the single-word level, this deficit became more apparent in older children. On the speech-in-noise task, a substantial benefit of visual cues was found regardless of age or group membership, although the LLI group showed an overall developmental delay in speech perception. Conclusion Although children with LLI were less accurate than their peers on the speechreading and speech-in-noise tasks, both groups were able to make equivalent use of visual cues to boost performance accuracy when listening in noise.


Author(s): Julie Beadle, Jeesun Kim, Chris Davis

Purpose: Listeners understand significantly more speech in noise when the talker's face can be seen (visual speech) in comparison to an auditory-only baseline (a visual speech benefit). This study investigated whether the visual speech benefit is reduced when the correspondence between auditory and visual speech is uncertain and whether any reduction is affected by listener age (older vs. younger) and by how severely the auditory signal is masked. Method: Older and younger adults completed a speech recognition in noise task that included an auditory-only condition and four auditory–visual (AV) conditions in which one, two, four, or six silent talking face videos were presented. One face always matched the auditory signal; the other face(s) did not. Auditory speech was presented in noise at −6 and −1 dB signal-to-noise ratio (SNR). Results: When the SNR was −6 dB, for both age groups, the standard-sized visual speech benefit was reduced as more talking faces were presented. When the SNR was −1 dB, younger adults received the standard-sized visual speech benefit even when two talking faces were presented, whereas older adults did not. Conclusions: The size of the visual speech benefit obtained by older adults was always smaller when AV correspondence was uncertain; this was not the case for younger adults. Difficulty establishing AV correspondence may be a factor that limits older adults' speech recognition in noisy AV environments. Supplemental Material: https://doi.org/10.23641/asha.16879549
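As a point of reference for the noise levels reported above, the sketch below shows one conventional way to mix a speech signal with a masker at a target SNR (e.g., the −6 and −1 dB conditions). The function name, the RMS-based scaling, and the placeholder signals are illustrative assumptions, not the authors' stimulus code.

```python
# Minimal sketch: mixing speech and noise at a target SNR (assumed RMS-based scaling).
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale the noise so that the speech-to-noise power ratio equals snr_db."""
    noise = noise[: len(speech)]                      # trim the masker to the target length
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    # Gain such that 20 * log10(rms(speech) / rms(gain * noise)) == snr_db
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return speech + gain * noise

# Example: embed a 1-s, 16-kHz placeholder "syllable" in white noise at -6 dB SNR
fs = 16000
speech = 0.1 * np.random.randn(fs)
noise = np.random.randn(fs)
mixture = mix_at_snr(speech, noise, snr_db=-6.0)
```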


1996, Vol. 39(6), pp. 1159-1170
Author(s): Lawrence D. Rosenblum, Jennifer A. Johnson, Helena M. Saldaña

Seeing a talker's face can improve the perception of speech in noise. Little is known about which characteristics of the face are useful for enhancing the degraded signal. In this study, a point-light technique was employed to help isolate the salient kinematic aspects of a visible articulating face. In this technique, fluorescent dots were arranged on the lips, teeth, tongue, cheeks, and jaw of an actor. The actor was videotaped speaking in the dark, so that when shown to observers, only the moving dots were seen. To test whether these reduced images could contribute to the perception of degraded speech, noise-embedded sentences were dubbed with the point-light images at various signal-to-noise ratios. It was found that these images could significantly improve comprehension for adults with normal hearing and that the images became more effective as participants gained experience with the stimuli. These results have implications for uncovering salient visual speech information as well as for the development of telecommunication systems for listeners who are hearing impaired.


2019, Vol. 62(10), pp. 3860-3875
Author(s): Kaylah Lalonde, Lynne A. Werner

Purpose This study assessed the extent to which 6- to 8.5-month-old infants and 18- to 30-year-old adults detect and discriminate auditory syllables in noise better in the presence of visual speech than in auditory-only conditions. In addition, we examined whether visual cues to the onset and offset of the auditory signal account for this benefit. Method Sixty infants and 24 adults were randomly assigned to speech detection or discrimination tasks and were tested using a modified observer-based psychoacoustic procedure. Each participant completed 1–3 conditions: auditory-only, with visual speech, and with a visual signal that only cued the onset and offset of the auditory syllable. Results Mixed linear modeling indicated that infants and adults benefited from visual speech on both tasks. Adults relied on the onset–offset cue for detection, but the same cue did not improve their discrimination. The onset–offset cue benefited infants for both detection and discrimination. Whereas the onset–offset cue improved detection similarly for infants and adults, the full visual speech signal benefited infants to a lesser extent than adults on the discrimination task. Conclusions These results suggest that infants' use of visual onset–offset cues is mature, but their ability to use more complex visual speech cues is still developing. Additional research is needed to explore differences in audiovisual enhancement (a) of speech discrimination across speech targets and (b) with increasingly complex tasks and stimuli.
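As an illustration of the mixed linear modeling mentioned above, here is a minimal sketch using statsmodels. The long-format data frame, its column names (subject, age_group, condition, score), and the file name are hypothetical placeholders, not the authors' analysis code.

```python
# Minimal sketch of a mixed linear model with a random intercept per participant.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x condition, with columns
#   subject, age_group ('infant' / 'adult'), condition ('AO' / 'AV' / 'onset_offset'), score
df = pd.read_csv("av_benefit_scores.csv")             # hypothetical file name

model = smf.mixedlm(
    "score ~ condition * age_group",                  # fixed effects and their interaction
    data=df,
    groups=df["subject"],                             # random intercept for each participant
)
print(model.fit().summary())
```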


2020
Author(s): Aisling E. O’Sullivan, Michael J. Crosse, Giovanni M. Di Liberto, Alain de Cheveigné, Edmund C. Lalor

Abstract
Seeing a speaker’s face benefits speech comprehension, especially in challenging listening conditions. This perceptual benefit is thought to stem from the neural integration of visual and auditory speech at multiple stages of processing, whereby movement of a speaker’s face provides temporal cues to auditory cortex, and articulatory information from the speaker’s mouth can aid the recognition of specific linguistic units (e.g., phonemes, syllables). However, it remains unclear how the integration of these cues varies as a function of listening conditions. Here we sought to provide insight into these questions by examining EEG responses to natural audiovisual, audio, and visual speech in quiet and in noise. Specifically, we represented our speech stimuli in terms of their spectrograms and their phonetic features, and then quantified the strength of the encoding of those features in the EEG using canonical correlation analysis. The encoding of both spectrotemporal and phonetic features was shown to be more robust in audiovisual speech responses than would be expected from the summation of the audio and visual speech responses, consistent with the literature on multisensory integration. Furthermore, the strength of this multisensory enhancement was more pronounced at the level of phonetic processing for speech in noise relative to speech in quiet, indicating that listeners rely more on articulatory details from visual speech in challenging listening conditions. These findings support the notion that the integration of audio and visual speech is a flexible, multistage process that adapts to optimize comprehension based on the current listening conditions.

Significance Statement
During conversation, visual cues impact our perception of speech. Integration of auditory and visual speech is thought to occur at multiple stages of speech processing and vary flexibly depending on the listening conditions. Here we examine audiovisual integration at two stages of speech processing using the speech spectrogram and a phonetic representation, and test how audiovisual integration adapts to degraded listening conditions. We find significant integration at both of these stages regardless of listening conditions, and when the speech is noisy, we find enhanced integration at the phonetic stage of processing. These findings provide support for the multistage integration framework and demonstrate its flexibility in terms of a greater reliance on visual articulatory information in challenging listening conditions.
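Below is a generic sketch of how canonical correlation analysis can quantify how strongly stimulus features (e.g., spectrogram bands or phonetic features) are encoded in multichannel EEG, in the spirit of the analysis described above. The lag range, array shapes, and random placeholder data are assumptions; this is not the authors' pipeline.

```python
# Generic sketch: canonical correlation analysis between time-lagged stimulus
# features and multichannel EEG, as a measure of stimulus encoding strength.
import numpy as np
from sklearn.cross_decomposition import CCA

def lagged(x: np.ndarray, n_lags: int) -> np.ndarray:
    """Stack time-lagged copies of the features (wrap-around at the edges is ignored here)."""
    return np.hstack([np.roll(x, lag, axis=0) for lag in range(n_lags)])

rng = np.random.default_rng(0)
n_samples = 10_000
stim = rng.standard_normal((n_samples, 16))           # placeholder: 16 spectrogram bands
eeg = rng.standard_normal((n_samples, 64))            # placeholder: 64 EEG channels

X = lagged(stim, n_lags=32)                           # assumed lag window (~250 ms at 128 Hz)
cca = CCA(n_components=4)
cca.fit(X, eeg)
X_c, Y_c = cca.transform(X, eeg)

# Canonical correlations: larger values indicate stronger encoding of the
# stimulus features in the EEG response.
corrs = [np.corrcoef(X_c[:, k], Y_c[:, k])[0, 1] for k in range(X_c.shape[1])]
print(corrs)
```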


Author(s): Florian Destoky, Julie Bertels, Maxime Niesen, Vincent Wens, Marc Vander Ghinst, et al.

Abstract
Humans’ propensity to acquire literacy relates to several factors, among which is the ability to understand speech in noise (SiN). Still, the nature of the relation between reading and SiN perception abilities remains poorly understood. Here, we dissect the interplay between (i) reading abilities, (ii) classical behavioral predictors of reading (phonological awareness, phonological memory, and lexical access), and (iii) electrophysiological markers of SiN perception in 99 elementary school children (26 with dyslexia). We demonstrate that cortical representation of the phrasal content of SiN relates to the development of the lexical (but not sublexical) reading strategy. In contrast, classical behavioral predictors of reading abilities and the ability to benefit from visual speech to represent the syllabic content of SiN account for global reading performance (i.e., speed and accuracy of lexical and sublexical reading). Finally, we found that individuals with dyslexia properly integrate visual speech information to optimize processing of syntactic information, but not to sustain acoustic/phonemic processing. These results clarify the nature of the relation between SiN perception and reading abilities in typical and dyslexic child readers, and identify novel electrophysiological markers of emergent literacy.
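For readers unfamiliar with such markers, the sketch below illustrates one common way to quantify cortical tracking of speech in electrophysiological data: coherence between the speech temporal envelope and a sensor signal, summarized in delta (roughly phrasal-rate) and theta (roughly syllabic-rate) bands. It is a generic stand-in under an assumed sampling rate and placeholder signals, not the authors' analysis.

```python
# Generic sketch: speech-brain coherence in delta and theta bands as a marker
# of phrasal- and syllabic-rate tracking of speech.
import numpy as np
from scipy.signal import coherence, hilbert

fs = 200                                              # assumed sampling rate (Hz)
n = 60 * fs                                           # one minute of placeholder data
speech = np.random.randn(n)                           # placeholder audio waveform
envelope = np.abs(hilbert(speech))                    # broadband temporal envelope
sensor = np.random.randn(n)                           # placeholder EEG/MEG sensor signal

f, coh = coherence(envelope, sensor, fs=fs, nperseg=4 * fs)
delta_tracking = coh[(f >= 0.5) & (f < 4)].mean()     # roughly phrasal-rate band
theta_tracking = coh[(f >= 4) & (f <= 8)].mean()      # roughly syllabic-rate band
print(delta_tracking, theta_tracking)
```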


2020
Author(s): Anne-Marie Muller, Tyler C. Dalal, Ryan A. Stevenson

Multisensory integration, the process by which sensory information from different sensory modalities is bound together, is hypothesized to contribute to perceptual symptomatology in schizophrenia, including hallucinations and aberrant speech perception. Differences in multisensory integration and temporal processing, an important component of multisensory integration, have been consistently found among individuals with schizophrenia. Evidence is emerging that these differences extend across the schizophrenia spectrum, including individuals in the general population with higher levels of schizotypal traits. In the current study, we measured (1) multisensory integration using an audiovisual speech-in-noise task and the McGurk task. Using the speech-in-noise task, we also assessed (2) susceptibility to distracting auditory speech, to test the hypothesis that increased perception of distracting speech that is subsequently bound with mismatching visual speech contributes to hallucination-like experiences. As a measure of (3) temporal processing, we used the ternary synchrony judgment task. We measured schizotypal traits using the Schizotypal Personality Questionnaire (SPQ), hypothesizing that higher levels of schizotypal traits, specifically on the Unusual Perceptual Experiences and Odd Speech subscales, would be associated with (1) decreased multisensory integration, (2) increased susceptibility to distracting auditory speech, and (3) less precise temporal processing. Surprisingly, neither subscale was associated with any of the measures. These results suggest that these perceptual differences may not be present across the schizophrenia spectrum.
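A minimal sketch of the kind of trait-measure association tested above is given below, assuming hypothetical column names, a hypothetical data file, and a rank-based (Spearman) correlation; it is not the authors' analysis code.

```python
# Minimal sketch: correlating SPQ subscale scores with multisensory measures.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("schizotypy_multisensory.csv")       # hypothetical data file
subscales = ["unusual_perceptual_experiences", "odd_speech"]
measures = ["av_speech_in_noise_benefit", "mcgurk_rate", "sj_window_width"]

for subscale in subscales:
    for measure in measures:
        rho, p = spearmanr(df[subscale], df[measure])
        print(f"{subscale} vs. {measure}: rho = {rho:.2f}, p = {p:.3f}")
```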

