Lexical Access in the Face of Degraded Speech: Adapting to Moment-by-Moment Uncertainty
Listeners often process speech in adverse conditions. One challenge is spectral degradation, in which information is missing from the signal. Lexical competition dynamics change when listeners process degraded speech, but it is unclear why and how these changes occur. We ask whether these changes are driven solely by the quality of the input from the auditory periphery, or whether they are also modulated by cognitive mechanisms. Across two experiments, we used the visual world paradigm to investigate changes in lexical processing. Listeners heard different levels of noise-vocoded speech (4- or 15-channel vocoding) and matched the auditory input to pictures of a target word and its phonological competitors. In Experiment 1, levels of vocoding were either blocked consistently or randomly interleaved from trial to trial. Listeners in the blocked condition showed more differentiation between the two levels of vocoding, suggesting that some form of learning allows listeners to adapt to the varying levels of uncertainty in the input. Exploratory analyses suggested that when less intelligible speech is processed, there is a cost to switching processing modes. In Experiment 2, levels of vocoding were always randomly interleaved, and a visual cue was added to inform listeners of the difficulty of the upcoming speech. This cue was enough to attenuate both the effects of interleaving and the switch cost. These experiments support a role for central processing in dealing with degraded speech: listeners may actively form expectations about the level of degradation they will encounter and alter the dynamics of lexical access accordingly.