The neural dynamics of semantic diversity in spoken word recognition: The role of alpha-beta power

2020
Author(s):  
Priscila Borges

Word recognition performance is significantly affected by semantic diversity (SemD), a corpus-based measure that indexes the degree to which the contexts associated with a word are similar in meaning. Due to the prominence of SemD as a determinant of behaviour, it is important to understand its neural correlates, but these remain underexplored. To address this gap, this study examines whether and how SemD information is reflected in alpha-beta power dynamics during spoken word recognition. Given previous evidence linking stronger alpha-beta power decreases to semantically richer words, high-SemD words were predicted to elicit stronger alpha-beta power decreases relative to low-SemD words. Electroencephalographic data were recorded while 13 older adults performed a word-picture verification task. Average alpha-beta (10–20 Hz) power around 400–600 ms post-word onset served as the dependent variable in linear mixed models whose fixed effects included SemD and other psycholinguistic variables. Results showed that SemD was not a significant predictor when posterior sites were considered. However, when anterior sites and a later time window were examined, a significant effect of SemD was found, with higher scores predicting stronger alpha-beta power decreases. Additional analyses on event-related potential responses around 300–500 ms post-stimulus showed no effects of SemD. These findings provide the first insights into the electrophysiological signature of SemD and corroborate previous reports of stronger alpha-beta power decreases when more lexical-semantic information needs to be retrieved from memory. The null results are discussed in view of a few methodological aspects, which could be explored in future studies.
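
As a concrete illustration of this analysis style (a minimal sketch, not the authors' code), the following Python snippet fits a linear mixed model of single-trial alpha-beta power on SemD with statsmodels; the file name, the column names (power, semd, freq, subject, word), and the choice of word frequency as the extra covariate are all assumptions:

```python
# Hypothetical long-format table: one row per trial, with columns
# 'power' (mean 10-20 Hz power, 400-600 ms), 'semd', 'freq',
# 'subject', and 'word'. None of these names come from the paper.
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("alpha_beta_trials.csv")  # assumed file

# Random intercepts for participants; items (words) enter as a
# variance component so both grouping factors are respected.
model = smf.mixedlm(
    "power ~ semd + freq",
    data=trials,
    groups="subject",
    vc_formula={"word": "0 + C(word)"},
)
result = model.fit(reml=True)
print(result.summary())
# Under this coding, a negative 'semd' coefficient would correspond to
# the predicted pattern: higher SemD, stronger alpha-beta power decrease.
```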

2020
Vol 34 (2)
pp. 69-80
Author(s):  
Xian-Jun Huang,
Kun Lv,
Jin-Chen Yang

Abstract. Word-initial phonological mismatches during spoken word recognition often elicit an event-related potential (ERP) component, the phonological mapping negativity (PMN), in cross-modal priming studies or in studies using sentences as context. However, recent studies have reported a phonological P2, rather than a PMN, for Mandarin Chinese spoken word recognition in unimodal word-matching and meaning-matching experiments, in which both the prime and target words were presented auditorily. In the present study, the same pairs of disyllabic Mandarin Chinese words as in the prior unimodal studies were used as stimuli to investigate whether the phonological P2 effect is modulated by prime modality and whether it can be replicated in a cross-modal design (i.e., written primes followed by spoken targets). Both the phonological and semantic relations between primes and targets were manipulated. Participants were instructed to judge whether the meanings of the two words were the same. An enhanced PMN between 250 and 320 ms was elicited by word-initial phonological mismatches. In later time windows, centro-parietally distributed early and late N400s were elicited in the semantically unrelated conditions. The presence of a PMN instead of a P2 in the current study implies that ERP markers of word-initial phonological mismatches during spoken word recognition are modulated by the modality of primes at the level of phonological analysis.
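
To make the dependent measure concrete, here is a hedged sketch (not the authors' pipeline) of the mean-amplitude analysis underlying a PMN effect: average voltage in the 250–320 ms window, compared across matching and mismatching primes. The sampling rate, epoch onset, file names, and pooling over all channels are simplifying assumptions:

```python
# Per-condition epoch arrays shaped (trials, channels, samples),
# e.g. exported from any EEG toolbox. Files and parameters are assumed.
import numpy as np
from scipy import stats

SRATE = 500    # Hz, assumed sampling rate
PRESTIM = 0.1  # s, assumed pre-stimulus interval in each epoch

def window_mean(epochs, start=0.250, stop=0.320):
    """Mean amplitude per trial in a post-stimulus window (here, the PMN window)."""
    i0 = int((PRESTIM + start) * SRATE)
    i1 = int((PRESTIM + stop) * SRATE)
    return epochs[:, :, i0:i1].mean(axis=(1, 2))  # pool channels and samples

match = np.load("match_epochs.npy")        # hypothetical files
mismatch = np.load("mismatch_epochs.npy")

# A more negative mean for mismatches in this window is the PMN signature.
t, p = stats.ttest_ind(window_mean(mismatch), window_mean(match))
print(f"t = {t:.2f}, p = {p:.3f}")
```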


Author(s):  
Cynthia G. Clopper,
Janet B. Pierrehumbert,
Terrin N. Tamati

Abstract. Lexical neighborhood density is a well-known factor affecting phonological categorization in spoken word recognition. The current study examined the interaction between lexical neighborhood density and dialect variation in spoken word recognition in noise. The stimulus materials were real English words produced in two regional American English dialects. To manipulate lexical neighborhood density, target words were selected so that predicted phonological confusions across dialects resulted in real English words in the word-competitor condition and did not result in real English words in the nonword-competitor condition. Word and vowel recognition performance was more accurate in the nonword-competitor condition than in the word-competitor condition for both talker dialects. An examination of the responses to specific vowels revealed the role of dialect variation in eliciting this effect. When the predicted phonological confusions were real lexical neighbors, listeners could respond with either the target word or the confusable minimal pair, and were more likely than expected to produce a minimal pair differing from the target by one vowel. When the predicted phonological confusions were not real words, however, the listeners exhibited less lexical competition and responded with the target word or a minimal pair differing by one consonant.
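
The notion of a lexical neighbor at work here is the standard one-phoneme-edit definition (one substitution, addition, or deletion). A small self-contained sketch, with a toy lexicon rather than the study's materials:

```python
# One-phoneme-edit neighbors over phoneme sequences; the four-word
# lexicon below is purely illustrative.
def is_neighbor(a, b):
    """True if phoneme tuples a and b differ by exactly one
    substitution, addition, or deletion."""
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) == 1:
        longer, shorter = (a, b) if len(a) > len(b) else (b, a)
        return any(longer[:i] + longer[i + 1:] == shorter
                   for i in range(len(longer)))
    return False

LEXICON = {
    "cat":  ("k", "ae", "t"),
    "bat":  ("b", "ae", "t"),
    "cut":  ("k", "ah", "t"),
    "cast": ("k", "ae", "s", "t"),
}

def neighborhood_density(word):
    """Count of lexicon entries one phoneme edit away from `word`."""
    return sum(is_neighbor(LEXICON[word], other)
               for w, other in LEXICON.items() if w != word)

print(neighborhood_density("cat"))  # 3: bat, cut (substitutions), cast (one added phoneme)
```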


2012
Vol 23 (06)
pp. 464-475
Author(s):  
Karen Iler Kirk,
Lindsay Prusick,
Brian French,
Chad Gotch,
Laurie S. Eisenberg,
...  

Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate “real-world” stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss.


2018
Vol 61 (11)
pp. 2796-2803
Author(s):  
Wei Shen,
Zhao Li,
Xiuhong Tong

Purpose: This study aimed to investigate the time course of meaning activation of the 2nd morpheme of compound words during Chinese spoken word recognition, using an eye-tracking technique with the printed-word paradigm.
Method: In the printed-word paradigm, participants were instructed to listen to a spoken target word (e.g., “大方”, /da4fang1/, generous) while presented with a visual display composed of 3 words: a morphemic competitor (e.g., “圆形”, /yuan2xing2/, circle), which was semantically related to the 2nd morpheme (e.g., “方”, /fang1/, square) of the spoken target word; a whole-word competitor (e.g., “吝啬”, /lin4se4/, stingy), which was semantically related to the spoken target word at the whole-word level; and a distractor, which was semantically related to neither the 2nd morpheme nor the whole target word. Participants were asked to indicate whether or not the spoken target word was present in the visual display, and their eye movements were recorded.
Results: The logit mixed-model analysis revealed both morphemic-competitor and whole-word-competitor effects: both competitors attracted more fixations than the distractor. More importantly, the 2nd-morphemic competitor effect occurred in a later time window (i.e., 1000–1500 ms) than the whole-word competitor effect (i.e., 200–1000 ms).
Conclusion: Findings in this study suggest that semantic information of both the 2nd morpheme and the whole word of a compound was activated in spoken word recognition and that the meaning activation of the 2nd morpheme followed the activation of the whole word.
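
For readers unfamiliar with logit mixed models over fixation data, the sketch below shows the shape of the competitor-versus-distractor comparison described above, using statsmodels' Bayesian binomial mixed GLM; the file name, column names, and the restriction to the 1000–1500 ms window are illustrative assumptions, not the authors' code:

```python
# One row per subject x item x time-bin x display object, with a 0/1
# 'fixated' outcome. All names here are assumed for illustration.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

fix = pd.read_csv("fixations.csv")  # hypothetical file

# Later window, where the abstract reports the morphemic-competitor effect.
win = fix[(fix["time_ms"] >= 1000) & (fix["time_ms"] < 1500)]
win = win[win["aoi"].isin(["morphemic_competitor", "distractor"])]

# Fixation probability by display object, with random intercepts for
# subjects and items; the distractor is the reference level.
model = BinomialBayesMixedGLM.from_formula(
    "fixated ~ C(aoi, Treatment('distractor'))",
    vc_formulas={"subject": "0 + C(subject)", "item": "0 + C(item)"},
    data=win,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())  # a positive competitor coefficient = competitor effect
```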


2021
Author(s):  
James Magnuson,
Samantha Grubb,
Anne Marie Crinnion,
Sahil Luthra,
Phoebe Gaston

Norris and Cutler (in press) revisit their arguments that (lexical-to-sublexical) feedback cannot improve word recognition performance, based on the assumption that feedback must boost signal and noise equally. They also argue that demonstrations that feedback improves performance (Magnuson, Mirman, Luthra, Strauss, & Harris, 2018) in the TRACE model of spoken word recognition (McClelland & Elman, 1986) were artifacts of converting activations to response probabilities. We first evaluate their claim that feedback in an interactive activation model must boost noise and signal equally. This is not true in a fully interactive activation model such as TRACE, where the feedback signal does not simply mirror the feedforward signal; it is instead shaped by joint probabilities over lexical patterns and by the dynamics of lateral inhibition. Thus, even under high levels of noise, lexical feedback will selectively boost signal more than noise. We demonstrate that feedback promotes faster word recognition and preserves accuracy under noise whether one uses raw activations or response probabilities. We then document that lexical feedback selectively boosts signal (i.e., lexically coherent series of phonemes) more than noise by tracking sublexical (phoneme) activations under noise with and without feedback. Thus, feedback in a model like TRACE does improve word recognition, precisely by selective reinforcement of the lexically coherent signal. We conclude that whether lexical feedback is integral to human speech processing is an empirical question, and we briefly review a growing body of work at behavioral and neural levels that is consistent with feedback and inconsistent with autonomous (non-feedback) architectures.
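
The conversion at issue is the Luce choice rule commonly used to map TRACE's lexical activations onto response probabilities; below is a minimal sketch with illustrative numbers (the constant k and the activation values are not taken from either paper):

```python
import numpy as np

def luce_choice(activations, k=7.0):
    """Luce choice rule: exponentiate and normalize activations.
    Larger k makes responses more deterministic."""
    scaled = np.exp(k * np.asarray(activations, dtype=float))
    return scaled / scaled.sum()

# A target word slightly more active than two competitors under noise.
print(luce_choice([0.55, 0.45, 0.30]).round(3))  # -> [0.599 0.297 0.104]
# The nonlinearity can amplify small activation advantages, which is why
# the rebuttal reports results for raw activations as well as probabilities.
```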

