The self-advantage in visual speech processing enhances audiovisual speech recognition in noise

2014 · Vol 22 (4) · pp. 1048-1053
Author(s): Nancy Tye-Murray, Brent P. Spehar, Joel Myerson, Sandra Hale, Mitchell S. Sommers
2019
Author(s): Jonathan Henry Venezia, Robert Sandlin, Leon Wojno, Anthony Duc Tran, Gregory Hickok, ...

Static and dynamic visual speech cues contribute to audiovisual (AV) speech recognition in noise. Static cues (e.g., “lipreading”) provide complementary information that enables perceivers to ascertain ambiguous acoustic-phonetic content. The role of dynamic cues is less clear, but one suggestion is that temporal covariation between facial motion trajectories and the speech envelope enables perceivers to recover a more robust representation of the time-varying acoustic signal. Modeling studies show this is computationally feasible, though it has not been confirmed experimentally. We conducted two experiments to determine whether AV speech recognition depends on the magnitude of cross-sensory temporal coherence (AVC). In Experiment 1, sentence-keyword recognition in steady-state noise (SSN) was assessed across a range of signal-to-noise ratios (SNRs) for auditory and AV speech. The auditory signal was unprocessed or filtered to remove 3-7 Hz temporal modulations. Filtering severely reduced AVC (magnitude-squared coherence of lip trajectories with cochlear-narrowband speech envelopes), but did not reduce the magnitude of the AV advantage (AV > A; ~ 4 dB). This did not depend on the presence of static cues, manipulated via facial blurring. Experiment 2 assessed AV speech recognition in SSN at a fixed SNR (-10.5 dB) for subsets of Exp. 1 stimuli with naturally high or low AVC. A small effect (~ 5% correct; high-AVC > low-AVC) was observed. A computational model of AV speech intelligibility based on AVC yielded good overall predictions of performance, but over-predicted the differential effects of AVC. These results suggest the role and/or computational characterization of AVC must be re-conceptualized.
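The AVC metric described above can be illustrated with a short sketch. The Python snippet below computes Welch-averaged magnitude-squared coherence between a lip trajectory and a speech envelope and summarizes it in the 3-7 Hz band probed in Experiment 1. The sampling rate, toy signals, and band-averaging step are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch of an AVC-style metric: magnitude-squared coherence between a
# lip-aperture trajectory and a speech amplitude envelope. All signal
# parameters here are assumed for illustration.
import numpy as np
from scipy.signal import coherence

fs = 100.0  # Hz; both signals resampled to a common rate (assumed)

# Toy stand-ins: in practice, lip_traj would come from face tracking and
# envelope from a cochlear-narrowband filterbank applied to the audio.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)
shared = np.sin(2 * np.pi * 4 * t)  # shared ~4 Hz (syllabic-rate) modulation
lip_traj = shared + 0.5 * rng.standard_normal(t.size)
envelope = shared + 0.5 * rng.standard_normal(t.size)

# Magnitude-squared coherence, Welch-averaged across segments.
f, Cxy = coherence(lip_traj, envelope, fs=fs, nperseg=256)

# Summarize coherence in the 3-7 Hz band targeted by the filtering manipulation.
band = (f >= 3) & (f <= 7)
avc = Cxy[band].mean()
print(f"AVC (mean MSC, 3-7 Hz): {avc:.2f}")
```

On these toy signals the shared 4 Hz component drives coherence high in the target band; removing 3-7 Hz modulations from one signal, as in the filtered condition, would drive this value toward zero.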


2021
Author(s): Daniel Senkowski, James K. Moran

Abstract

Objectives: People with schizophrenia (SZ) show deficits in auditory and audiovisual speech recognition. It is possible that these deficits are related to aberrant early sensory processing, combined with an impaired ability to utilize visual cues to improve speech recognition. In this electroencephalography study, we tested this by having SZ and healthy controls (HC) identify different unisensory auditory and bisensory audiovisual syllables at different auditory noise levels.

Methods: SZ (N = 24) and HC (N = 21) identified one of three different syllables (/da/, /ga/, /ta/) at three different noise levels (no, low, high). Half the trials were unisensory auditory; the other half provided additional visual input of moving lips. Task-evoked mediofrontal N1 and P2 brain potentials triggered to the onset of the auditory syllables were derived and related to behavioral performance.

Results: In comparison to HC, SZ showed speech recognition deficits for unisensory and bisensory stimuli. These deficits were primarily found in the no-noise condition. Paralleling these observations, reduced N1 amplitudes to unisensory and bisensory stimuli in SZ were found in the no-noise condition. In HC, the N1 amplitudes were positively related to speech recognition performance, whereas no such relationships were found in SZ. Moreover, no group differences in multisensory speech recognition benefits or N1 suppression effects for bisensory stimuli were observed.

Conclusion: Our study shows that reduced N1 amplitudes relate to auditory and audiovisual speech processing deficits in SZ. The findings that the amplitude effects were confined to salient speech stimuli and that the relationship with behavioral performance was attenuated, compared to HC, indicate diminished decoding of auditory speech signals in SZ. Our study also revealed intact multisensory benefits in SZ, which indicates that the observed auditory and audiovisual speech recognition deficits were primarily related to aberrant auditory speech processing.

Highlights:
- Speech processing deficits in schizophrenia are related to reduced N1 amplitudes.
- The audiovisual suppression effect on the N1 is preserved in schizophrenia.
- Participants with schizophrenia showed weakened P2 components specifically in audiovisual processing.
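As a rough illustration of the ERP measures reported above, the sketch below extracts mean N1 and P2 amplitudes in conventional latency windows from baseline-corrected, stimulus-locked epochs at a mediofrontal channel. The sampling rate, epoch length, channel, and time windows are generic assumptions, not the study's actual analysis parameters.

```python
# Sketch of N1/P2 mean-amplitude extraction from epoched EEG.
# All parameters are assumed for illustration.
import numpy as np

fs = 500.0                             # sampling rate in Hz (assumed)
times = np.arange(-0.2, 0.6, 1 / fs)   # epoch from -200 to 600 ms

# epochs: (n_trials, n_times) at a mediofrontal channel (e.g., FCz),
# baseline-corrected and time-locked to auditory syllable onset.
rng = np.random.default_rng(1)
epochs = rng.standard_normal((60, times.size))  # toy data

erp = epochs.mean(axis=0)              # trial-averaged ERP

def mean_amplitude(erp, times, t_min, t_max):
    """Mean ERP amplitude within a latency window (seconds)."""
    win = (times >= t_min) & (times <= t_max)
    return erp[win].mean()

# Typical auditory N1 (~80-120 ms) and P2 (~160-250 ms) windows (assumed).
n1 = mean_amplitude(erp, times, 0.08, 0.12)
p2 = mean_amplitude(erp, times, 0.16, 0.25)
print(f"N1: {n1:.2f} µV, P2: {p2:.2f} µV")
```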


2016 · Vol 31 (4) · pp. 380-389
Author(s): Nancy Tye-Murray, Brent Spehar, Joel Myerson, Sandra Hale, Mitchell Sommers

2005 · Vol 118 (3) · pp. 1962-1962
Author(s): Rachael Holt, Karen Kirk, David Pisoni, Lisa Burckhartzmeyer, Anna Lin
