Early evoked brain activity underlies auditory and audiovisual speech recognition deficits in schizophrenia

2021 · Author(s): Daniel Senkowski, James K. Moran

Abstract

Objectives: People with schizophrenia (SZ) show deficits in auditory and audiovisual speech recognition. It is possible that these deficits are related to aberrant early sensory processing, combined with an impaired ability to use visual cues to improve speech recognition. In this electroencephalography study, we tested this by having SZ and healthy controls (HC) identify different unisensory auditory and bisensory audiovisual syllables at different auditory noise levels.

Methods: SZ (N = 24) and HC (N = 21) identified one of three syllables (/da/, /ga/, /ta/) at three noise levels (no, low, high). Half of the trials were unisensory auditory; the other half provided additional visual input of moving lips. Task-evoked mediofrontal N1 and P2 brain potentials, time-locked to the onset of the auditory syllables, were derived and related to behavioral performance.

Results: Compared with HC, SZ showed speech recognition deficits for unisensory and bisensory stimuli. These deficits were primarily found in the no-noise condition. Paralleling these observations, reduced N1 amplitudes to unisensory and bisensory stimuli in SZ were found in the no-noise condition. In HC, N1 amplitudes were positively related to speech recognition performance, whereas no such relationship was found in SZ. Moreover, no group differences in multisensory speech recognition benefits or in N1 suppression effects for bisensory stimuli were observed.

Conclusion: Our study shows that reduced N1 amplitudes relate to auditory and audiovisual speech processing deficits in SZ. The finding that the amplitude effects were confined to salient speech stimuli, together with the attenuated relationship with behavioral performance relative to HC, indicates diminished decoding of auditory speech signals in SZ. Our study also revealed intact multisensory benefits in SZ, which indicates that the observed auditory and audiovisual speech recognition deficits were primarily related to aberrant auditory speech processing.

Highlights:
- Speech processing deficits in schizophrenia are related to reduced N1 amplitudes.
- The audiovisual suppression effect in N1 is preserved in schizophrenia.
- SZ showed weakened P2 components specifically in audiovisual processing.
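The Methods describe deriving onset-locked N1 and P2 potentials at a mediofrontal site. Below is a minimal sketch of how such components might be extracted with MNE-Python; the file name, event codes, channel label (FCz), and latency windows are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch (assumed workflow, not the authors' pipeline): derive
# syllable-onset-locked N1/P2 amplitudes at a mediofrontal channel with
# MNE-Python. File name, event codes, and windows are hypothetical.
import mne

raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)  # hypothetical file
raw.filter(0.1, 30.0)                      # typical ERP band-pass
events = mne.find_events(raw)              # triggers at auditory syllable onset
epochs = mne.Epochs(raw, events,
                    event_id={"auditory": 1, "audiovisual": 2},  # assumed codes
                    tmin=-0.2, tmax=0.5, baseline=(None, 0), preload=True)

evoked = epochs["auditory"].average()      # task-evoked potential
times = evoked.times
fcz = evoked.get_data(picks="FCz")[0]      # mediofrontal site (assumed label)

# Peak amplitudes in conventional windows (assumed: N1 80-150 ms, P2 150-250 ms)
n1 = fcz[(times >= 0.08) & (times <= 0.15)].min()  # N1: negative deflection
p2 = fcz[(times >= 0.15) & (times <= 0.25)].max()  # P2: positive deflection
print(f"N1: {n1 * 1e6:.2f} uV, P2: {p2 * 1e6:.2f} uV")
```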

2020 · Vol 24 · pp. 233121652096060 · Author(s): Anna R. Tinnemore, Sandra Gordon-Salant, Matthew J. Goupell

Speech recognition in complex environments involves focusing on the most relevant speech signal while ignoring distractions. Difficulties can arise from the incoming signal's characteristics (e.g., accented pronunciation, background noise, distortion) or the listener's characteristics (e.g., hearing loss, advancing age, cognitive abilities). Listeners who use cochlear implants (CIs) must overcome these difficulties while listening to an impoverished version of the signals available to listeners with normal hearing (NH). In the real world, listeners often attempt tasks concurrent with, but unrelated to, speech recognition. This study sought to reveal the effects of visual distraction and of performing a simultaneous visual task on audiovisual speech recognition. Two groups, those with CIs and those with NH listening to vocoded speech, were presented with videos of unaccented and accented talkers, with and without visual distractions, and with a secondary task. It was hypothesized that, compared with those with NH, listeners with CIs would be less influenced by visual distraction or a secondary visual task because their prolonged reliance on visual cues to aid auditory perception improves their ability to suppress irrelevant information. Results showed that visual distractions alone did not significantly decrease speech recognition performance for either group, but adding a secondary task did. Speech recognition was significantly poorer for accented than for unaccented speech, and this difference was greater for CI listeners. These results suggest that speech recognition performance depends more on the characteristics of the incoming signal than on any difference in strategies for managing distractions between those who listen with and without a CI.
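The NH group listened to vocoded speech, a standard way of simulating the spectrally degraded signal a CI delivers. Below is a minimal noise-vocoder sketch under assumed parameters (channel count and band edges are illustrative, not the study's exact settings).

```python
# Minimal noise-vocoder sketch: split speech into log-spaced bands, extract
# each band's envelope, and use it to modulate band-limited noise.
# Parameters below are assumptions for illustration only.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Return a noise-vocoded version of signal x sampled at fs Hz."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))                    # Hilbert envelope
        carrier = sosfiltfilt(sos, np.random.randn(len(x)))  # band-limited noise
        out += env * carrier
    return out / np.max(np.abs(out))                   # normalize to avoid clipping

# usage (hypothetical input): vocoded = noise_vocode(speech_samples, fs=16000)
```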


2014 · Vol 22 (4) · pp. 1048-1053 · Author(s): Nancy Tye-Murray, Brent P. Spehar, Joel Myerson, Sandra Hale, Mitchell S. Sommers

2016 · Vol 31 (4) · pp. 380-389 · Author(s): Nancy Tye-Murray, Brent Spehar, Joel Myerson, Sandra Hale, Mitchell Sommers

2003 · Vol 12 (1) · pp. 41-51 · Author(s): Paula Henry, Todd Ricketts

Improving the signal-to-noise ratio (SNR) for individuals with hearing loss who are listening to speech in noise provides an obvious benefit. Although binaural hearing provides the greatest advantage over monaural hearing in noise, some individuals with symmetrical hearing loss choose to wear only one hearing aid. The present study tested the hypothesis that individuals with symmetrical hearing loss fit with one hearing aid would demonstrate improved speech recognition in background noise as head turn angle increases. Fourteen individuals were fit monaurally with a Starkey Gemini in-the-ear (ITE) hearing aid with directional and omnidirectional microphone modes. Speech recognition performance in noise was tested using the audiovisual version of the Connected Speech Test (CST v.3). The test was administered in auditory-only conditions as well as with the addition of visual cues for each of three head angles: 0°, 20°, and 40°. Results indicated improved speech recognition performance in the auditory-only presentation mode at the 20° and 40° head angles compared with 0°. For the auditory + visual mode, speech recognition improved at the 20° head angle but declined at the 40° head angle compared with 0°. These results support a speech recognition advantage for listeners fit with one ITE hearing aid at a close listener-to-speaker distance when they turn their head slightly to increase the signal intensity at the aided ear.
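The head-turn logic above is simple level arithmetic: turning toward the talker raises the speech level at the aided ear while the background noise stays roughly constant, so the SNR rises by the same number of decibels. A toy calculation under assumed (not measured) levels:

```python
# Toy SNR arithmetic for the head-turn advantage; all values are assumed
# for illustration and are not the study's measured data.
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio in decibels."""
    return 10.0 * math.log10(signal_power / noise_power)

# Assumed powers at the aided ear with the head at 0 degrees:
sig, noise = 2.0, 1.0
print(f"SNR at 0 deg:  {snr_db(sig, noise):.1f} dB")   # ~3.0 dB

# Assume a 20-degree head turn adds +2 dB to the speech level at that ear
# (power factor 10**(2/10) ~= 1.58) while the noise is unchanged:
print(f"SNR at 20 deg: {snr_db(sig * 10 ** (2 / 10), noise):.1f} dB")  # ~5.0 dB
```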


2005 · Vol 118 (3) · pp. 1962-1962 · Author(s): Rachael Holt, Karen Kirk, David Pisoni, Lisa Burckhartzmeyer, Anna Lin
