Contra assertions, feedback improves word recognition

2021 ◽  
Author(s):  
James Magnuson ◽  
Samantha Grubb ◽  
Anne Marie Crinnion ◽  
Sahil Luthra ◽  
Phoebe Gaston

Norris and Cutler (in press) revisit their arguments that (lexical-to-sublexical) feedback cannot improve word recognition performance, based on the assumption that feedback must boost signal and noise equally. They also argue that demonstrations that feedback improves performance (Magnuson, Mirman, Luthra, Strauss, & Harris, 2018) in the TRACE model of spoken word recognition (McClelland & Elman, 1986) were artifacts of converting activations to response probabilities. We first evaluate their claim that feedback in an interactive activation model must boost noise and signal equally. This is not true in a fully interactive activation model such as TRACE, where the feedback signal does not simply mirror the feedforward signal; it is instead shaped by joint probabilities over lexical patterns and by the dynamics of lateral inhibition. Thus, even under high levels of noise, lexical feedback will selectively boost signal more than noise. We demonstrate that feedback promotes faster word recognition and preserves accuracy under noise whether one uses raw activations or response probabilities. We then document that lexical feedback selectively boosts signal (i.e., lexically-coherent series of phonemes) more than noise by tracking sublexical (phoneme) activations under noise with and without feedback. Thus, feedback in a model like TRACE does improve word recognition, exactly by selective reinforcement of lexically-coherent signal. We conclude that whether lexical feedback is integral to human speech processing is an empirical question, and briefly review a growing body of work at behavioral and neural levels that is consistent with feedback and inconsistent with autonomous (non-feedback) architectures.
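To make the mechanism concrete, the sketch below sets up a toy fully interactive activation network (one phoneme layer, one word layer, lateral inhibition among words) and compares phoneme activations under noise with and without lexical feedback. The lexicon, parameters, and update rule are illustrative assumptions for exposition only, not TRACE's actual architecture or the simulations reported in the abstract.

```python
# Toy interactive-activation sketch (NOT TRACE): does lexical feedback boost
# lexically coherent phonemes more than noise? Lexicon and parameters are
# arbitrary illustrative choices.
import numpy as np

phonemes = list("abcdkt")                                # toy phoneme inventory
lexicon = {"cat": "kat", "cab": "kab", "tab": "tab"}     # toy lexicon

# W[i, j] = 1 if phoneme j occurs in word i (feedforward and feedback weights).
W = np.zeros((len(lexicon), len(phonemes)))
for i, spelling in enumerate(lexicon.values()):
    for p in spelling:
        W[i, phonemes.index(p)] = 1.0

def simulate(feedback_gain, n_cycles=10, noise_sd=0.3):
    """Run a few activation cycles on a noisy input for the word 'cat'."""
    rng = np.random.default_rng(0)           # identical noise in both runs
    phon = np.zeros(len(phonemes))
    words = np.zeros(len(lexicon))
    bottom_up = W[0] + rng.normal(0.0, noise_sd, len(phonemes))
    for _ in range(n_cycles):
        words = words + 0.1 * (W @ phon)                  # feedforward excitation
        words = words - 0.05 * (words.sum() - words)      # lateral inhibition
        words = np.clip(words, 0.0, 1.0)
        phon = phon + 0.05 * bottom_up + feedback_gain * (W.T @ words)
        phon = np.clip(phon, 0.0, 1.0)
    return phon

no_fb = simulate(feedback_gain=0.0)
with_fb = simulate(feedback_gain=0.05)
target = W[0].astype(bool)                   # phonemes of the target word
print("mean feedback boost, target phonemes:", (with_fb - no_fb)[target].mean())
print("mean feedback boost, other phonemes: ", (with_fb - no_fb)[~target].mean())
```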

2018 ◽  
Vol 13 (3) ◽  
pp. 333-353
Author(s):  
Stéphan Tulkens ◽  
Dominiek Sandra ◽  
Walter Daelemans

Abstract An oft-cited shortcoming of Interactive Activation as a psychological model of word reading is that it lacks the ability to simultaneously represent words of different lengths. We present an implementation of the Interactive Activation model, which we call Metameric, that can simulate words of different lengths, and show that there is nothing inherent to Interactive Activation which prevents it from simultaneously representing multiple word lengths. We provide an in-depth analysis of which specific factors need to be present, and show that the inclusion of three specific adjustments, all of which have been published in various models before, leads to an Interactive Activation model which is fully capable of representing words of different lengths. Finally, we show that our implementation is fully capable of representing all words between 2 and 11 letters in length from the English Lexicon Project (31,416 words) in a single model. Our implementation is completely open source, heavily optimized, and includes both command-line and graphical user interfaces, but is also agnostic to specific input data or problems. It can therefore be used to simulate a myriad of other models, e.g., models of spoken word recognition. The implementation can be accessed at www.github.com/clips/metameric.
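For readers unfamiliar with the length problem, the sketch below shows one generic way a slot-based interactive-activation lexicon can hold words of several lengths: pad each word into a fixed slot frame with an explicit end-of-word filler so that letter-to-word connectivity stays rectangular. This is only an illustration of the general idea; it is not Metameric's actual encoding scheme (see the repository above for that).

```python
# Minimal sketch: a padded, slot-based letter encoding that lets words of
# different lengths share one connectivity matrix. Not Metameric's scheme.
import numpy as np
import string

PAD = "#"                                  # hypothetical end-of-word filler
ALPHABET = string.ascii_lowercase + PAD
MAX_LEN = 11

def word_to_slots(word, max_len=MAX_LEN):
    """One-hot encode a word into max_len letter slots, padding with '#'."""
    padded = word.lower() + PAD * (max_len - len(word))
    vec = np.zeros((max_len, len(ALPHABET)))
    for slot, letter in enumerate(padded):
        vec[slot, ALPHABET.index(letter)] = 1.0
    return vec.ravel()

lexicon = ["at", "cat", "chart", "charter"]       # words of several lengths
# Letter-slot-to-word weights: each word connects to exactly the slot/letter
# units it contains, including its padding units.
W = np.stack([word_to_slots(w) for w in lexicon])
print(W.shape)   # (4 words, 11 slots * 27 letter units)
```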


Author(s):  
Cynthia G. Clopper ◽  
Janet B. Pierrehumbert ◽  
Terrin N. Tamati

Abstract Lexical neighborhood density is a well-known factor affecting phonological categorization in spoken word recognition. The current study examined the interaction between lexical neighborhood density and dialect variation in spoken word recognition in noise. The stimulus materials were real English words produced in two regional American English dialects. To manipulate lexical neighborhood density, target words were selected so that predicted phonological confusions across dialects resulted in real English words in the word-competitor condition and did not result in real English words in the nonword-competitor condition. Word and vowel recognition performance were more accurate in the nonword-competitor condition than the word-competitor condition for both talker dialects. An examination of the responses to specific vowels revealed the role of dialect variation in eliciting this effect. When the predicted phonological confusions were real lexical neighbors, listeners could respond with either the target word or the confusable minimal pair, and were more likely than expected to produce a minimal pair differing from the target by one vowel. When the predicted phonological confusions were not real words, however, the listeners exhibited less lexical competition and responded with the target word or a minimal pair differing by one consonant.
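The stimulus-selection logic can be stated compactly: apply the predicted cross-dialect confusion to a target word and check whether the result is itself a word. The sketch below illustrates this with a toy lexicon and a hypothetical vowel confusion; neither reflects the study's actual materials.

```python
# Toy sketch of the word-competitor vs. nonword-competitor classification.
LEXICON = {"b i d", "b e d", "b ae d", "p i t", "p ae t"}   # toy phonemic lexicon

def competitor_condition(target, confusion):
    """Does the predicted confusion turn the target into another real word?"""
    old, new = confusion
    predicted = target.replace(old, new)
    return "word-competitor" if predicted in LEXICON else "nonword-competitor"

# e.g., a dialect that raises /e/ toward /i/ makes "bed" confusable with "bid"
print(competitor_condition("b e d", ("e", "i")))   # word-competitor
print(competitor_condition("p i t", ("i", "e")))   # nonword-competitor in this toy lexicon
```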


Author(s):  
Sahil Luthra ◽  
Monica Y. C. Li ◽  
Heejo You ◽  
Christian Brodbeck ◽  
James S. Magnuson

Abstract Pervasive behavioral and neural evidence for predictive processing has led to claims that language processing depends upon predictive coding. Formally, predictive coding is a computational mechanism where only deviations from top-down expectations are passed between levels of representation. In many cognitive neuroscience studies, a reduction of signal for expected inputs is taken as being diagnostic of predictive coding. In the present work, we show that despite not explicitly implementing prediction, the TRACE model of speech perception exhibits this putative hallmark of predictive coding, with reductions in total lexical activation, total lexical feedback, and total phoneme activation when the input conforms to expectations. These findings may indicate that interactive activation is functionally equivalent to, or an approximation of, predictive coding, or that caution is warranted in interpreting neural signal reduction as diagnostic of predictive coding.
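As a point of reference, the sketch below shows the core prediction-error computation that defines predictive coding in the formal sense used above: only the deviation between bottom-up input and a top-down expectation is passed forward, so the total forwarded signal shrinks when the input is expected. The vectors are toy values, not TRACE quantities.

```python
# Toy prediction-error sketch: forwarded signal = input minus expectation.
import numpy as np

def error_signal(bottom_up, top_down_expectation):
    """What a predictive-coding layer passes up: the deviation only."""
    return bottom_up - top_down_expectation

expectation      = np.array([1.0, 0.0, 0.0])   # top-down prediction
expected_input   = np.array([1.0, 0.0, 0.0])   # conforms to the expectation
unexpected_input = np.array([0.0, 1.0, 0.0])   # violates it

# Total forwarded signal is smaller for the expected input: the putative
# "hallmark" that the abstract shows TRACE also produces without explicit prediction.
print(np.abs(error_signal(expected_input, expectation)).sum())    # 0.0
print(np.abs(error_signal(unexpected_input, expectation)).sum())  # 2.0
```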


2012 ◽  
Vol 23 (06) ◽  
pp. 464-475 ◽  
Author(s):  
Karen Iler Kirk ◽  
Lindsay Prusick ◽  
Brian French ◽  
Chad Gotch ◽  
Laurie S. Eisenberg ◽  
...  

Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate “real-world” stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss.


2000 ◽  
Vol 23 (3) ◽  
pp. 347-347
Author(s):  
Louisa M. Slowiaczek

Hesitations about whole-heartedly accepting Norris et al.'s suggestion to abandon feedback in speech processing models concern (1) whether accounting for all available data justifies additional layers of complexity in the model and (2) whether characterizing Merge as non-interactive is valid. Spoken word recognition studies support the nature of Merge's lexical level and suggest that phonemes should comprise the prelexical level.


2009 ◽  
Vol 37 (4) ◽  
pp. 817-840 ◽  
Author(s):  
VIRGINIA A. MARCHMAN ◽  
ANNE FERNALD ◽  
NEREYDA HURTADO

Abstract Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n=26; aged 2;6). Between-language associations were weak: vocabulary size in Spanish was uncorrelated with vocabulary in English, and children's facility in online comprehension in Spanish was unrelated to their facility in English. Instead, efficiency of online processing in one language was significantly related to vocabulary size in that language, after controlling for processing speed and vocabulary size in the other language. These links between efficiency of lexical access and vocabulary knowledge in bilinguals parallel those previously reported for Spanish and English monolinguals, suggesting that children's ability to abstract information from the input in building a working lexicon relates fundamentally to mechanisms underlying the construction of language.
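The control analysis described above amounts to regressing vocabulary size in one language on processing efficiency in that language while partialling out the other-language measures. A minimal sketch with simulated data and hypothetical column names is given below; it illustrates the form of the analysis only, not the study's actual data or model.

```python
# Sketch of a regression with other-language covariates (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 26
df = pd.DataFrame({
    "vocab_spanish": rng.normal(size=n),
    "rt_spanish": rng.normal(size=n),      # online processing efficiency, Spanish
    "vocab_english": rng.normal(size=n),
    "rt_english": rng.normal(size=n),      # online processing efficiency, English
})

# Does Spanish processing efficiency predict Spanish vocabulary after
# controlling for the English measures?
model = smf.ols("vocab_spanish ~ rt_spanish + rt_english + vocab_english", data=df).fit()
print(model.params)
```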


2020 ◽  
Author(s):  
Priscila Borges

Word recognition performance is significantly affected by semantic diversity (SemD), a corpus-based measure that indexes the degree to which the contexts associated with a word are similar in meaning. Due to the prominence of SemD as a determinant of behaviour, it is important to understand its neural correlates, but these remain underexplored. To address this gap, this study examines whether and how SemD information is reflected in alpha-beta power dynamics during spoken word recognition. Given previous evidence linking stronger alpha-beta power decreases to semantically richer words, high-SemD words were predicted to elicit stronger alpha-beta power decreases relative to low-SemD words. Electroencephalographic data were recorded while 13 older adults performed a word-picture verification task. Average alpha-beta (10–20 Hz) power around 400–600 ms post-word onset served as the dependent variable in linear mixed models whose fixed effects included SemD and other psycholinguistic variables. Results showed that SemD was not a significant predictor when posterior sites were considered. However, when anterior sites and a later time window were examined, a significant effect of SemD was found, with higher scores predicting stronger alpha-beta power decreases. Additional analyses on event-related potential responses around 300–500 ms post-stimulus showed no effects of SemD. These findings provide the first insights into the electrophysiological signature of SemD and corroborate previous reports of stronger alpha-beta power decreases when more lexical-semantic information needs to be retrieved from memory. The null results are discussed in view of a few methodological aspects, which could be explored in future studies.
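A minimal sketch of the analysis described above is given below: average alpha-beta power as the dependent variable in a linear mixed model with SemD and other psycholinguistic predictors as fixed effects and participant as a grouping factor. Column names and the simulated data are illustrative assumptions, not the study's dataset or exact model specification.

```python
# Sketch of a linear mixed model predicting alpha-beta power from SemD
# (simulated data, hypothetical predictor names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_trials = 400
df = pd.DataFrame({
    "alpha_beta_power": rng.normal(size=n_trials),   # 10-20 Hz power, 400-600 ms window
    "semd": rng.normal(size=n_trials),               # semantic diversity
    "word_freq": rng.normal(size=n_trials),
    "word_length": rng.normal(size=n_trials),
    "participant": rng.integers(1, 14, size=n_trials).astype(str),
})

# Fixed effects: SemD plus other psycholinguistic variables; random intercepts
# grouped by participant.
model = smf.mixedlm("alpha_beta_power ~ semd + word_freq + word_length",
                    data=df, groups=df["participant"]).fit()
print(model.summary())
```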


2007 ◽  
Vol 34 (2) ◽  
pp. 227-249 ◽  
Author(s):  
NEREYDA HURTADO ◽  
VIRGINIA A. MARCHMAN ◽  
ANNE FERNALD

Research on the development of efficiency in spoken language understanding has focused largely on middle-class children learning English. Here we extend this research to Spanish-learning children (n=49; M=2;0; range=1;3–3;1) living in the USA in Latino families from primarily low socioeconomic backgrounds. Children looked at pictures of familiar objects while listening to speech naming one of the objects. Analyses of eye movements revealed developmental increases in the efficiency of speech processing. Older children and children with larger vocabularies were more efficient at processing spoken language as it unfolds in real time, as previously documented with English learners. Children whose mothers had less education tended to be slower and less accurate than children of comparable age and vocabulary size whose mothers had more schooling, consistent with previous findings of slower rates of language learning in children from disadvantaged backgrounds. These results add to the cross-linguistic literature on the development of spoken word recognition and to the study of the impact of socioeconomic status (SES) factors on early language development.

