Spoken Word Recognition
Recently Published Documents


TOTAL DOCUMENTS: 677 (five years: 112)
H-INDEX: 56 (five years: 2)

2022 · Vol 8 (1) · pp. 299–319
Author(s): Terrin N. Tamati, David B. Pisoni, Aaron C. Moberly

Cochlear implants (CIs) represent a significant engineering and medical milestone in the treatment of hearing loss for both adults and children. In this review, we provide a brief overview of CI technology, describe the benefits that CIs can provide to adults and children who receive them, and discuss the specific limitations and issues faced by CI users. We emphasize the relevance of CIs to the linguistics community by demonstrating how CIs successfully provide access to spoken language. Furthermore, CI research can inform our basic understanding of spoken word recognition in adults and spoken language development in children. Linguistics research can also help us address the major clinical issue of outcome variability and motivate the development of new clinical tools to assess the unique challenges of adults and children with CIs, as well as novel interventions for individuals with poor outcomes.


2022 · Vol 12
Author(s): Ting Zou, Yutong Liu, Huiting Zhong

This study investigated the relative roles of sub-syllabic components (initial consonant, rime, and tone) in spoken word recognition of Mandarin Chinese, using an eye-tracking experiment with a visual world paradigm. Native Mandarin speakers (all born and raised in Beijing) were presented with four pictures and an auditory stimulus. They were asked to click on the picture corresponding to the stimulus they heard, and their eye movements were tracked during this process. For a target word (e.g., tang2 “candy”), nine competitor conditions were constructed according to the amount of phonological overlap with the target: consonant competitor (e.g., ti1 “ladder”), rime competitor (e.g., lang4 “wave”), tone competitor (e.g., niu2 “cow”), consonant-plus-rime competitor (e.g., tang1 “soup”), consonant-plus-tone competitor (e.g., tou2 “head”), rime-plus-tone competitor (e.g., yang2 “sheep”), cohort competitor (e.g., ta3 “tower”), cohort-plus-tone competitor (e.g., tao2 “peach”), and baseline competitor (e.g., xue3 “snow”). A growth curve analysis was conducted on fixations to competitors, targets, and distractors. The results showed that (1) competitors with consonant or rime overlap can be adequately activated, whereas tone overlap plays a weaker role: additional tonal information strengthens the competitive effect only when it is added to a candidate that already shares substantial phonological similarity with the target; (2) Mandarin words are processed incrementally over the time course of word recognition, since different partially overlapping competitors could be activated immediately; and (3) as in English, both cohort and rime competitors were activated and competed for lexical activation, but these two competitors were not temporally distinct and differed mainly in the size of their competitive effects. Overall, the gradation of activation based on the phonological similarity between target and candidates found in this study is in line with continuous mapping models and may reflect a strategy of native speakers shaped by the informative characteristics of the interaction among different sub-syllabic components.
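For readers unfamiliar with the growth curve analysis mentioned above, the sketch below illustrates the general shape of such an analysis: polynomial time terms are fit to binned fixation proportions in a mixed-effects model. The simulated data, column names, and the use of Python/statsmodels are illustrative assumptions, not the authors' actual model specification.

```python
# A minimal sketch of a growth-curve-style analysis of fixation proportions.
# All data here are simulated; the condition labels and effect sizes are
# hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated fixation proportions: subjects x competitor conditions x time bins.
subjects = [f"s{i}" for i in range(10)]
conditions = ["cohort", "rime", "baseline"]
time_bins = np.arange(20)                      # e.g., 50 ms bins over 0-1000 ms
rows = []
for s in subjects:
    for c in conditions:
        base = {"cohort": 0.35, "rime": 0.30, "baseline": 0.25}[c]
        for t in time_bins:
            rows.append({"subject": s, "condition": c, "time": t,
                         "fix_prop": base + 0.01 * t + rng.normal(0, 0.03)})
df = pd.DataFrame(rows)

# Legendre-style polynomial time terms (linear + quadratic), standing in for
# the orthogonal time terms of a growth curve analysis: they capture the
# overall slope and curvature of the fixation curves.
centered = 2 * (df["time"] - df["time"].min()) / (df["time"].max() - df["time"].min()) - 1
df["ot1"] = centered                            # linear trend
df["ot2"] = 0.5 * (3 * centered**2 - 1)         # quadratic trend (Legendre P2)

# Mixed-effects model: condition effects on the intercept and on both time
# terms, with by-subject random intercepts.
model = smf.mixedlm("fix_prop ~ (ot1 + ot2) * condition",
                    df, groups=df["subject"]).fit()
print(model.summary())
```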


2021
Author(s): Kelsey Klein, Elizabeth Walker, Bob McMurray

Objective: The objective of this study was to characterize the dynamics of real-time lexical access, including lexical competition among phonologically similar words and semantic activation, in school-age children with hearing aids (HAs) and children with cochlear implants (CIs). We hypothesized that developing spoken language via degraded auditory input would lead children with HAs or CIs to adapt their approach to spoken word recognition, especially by slowing down lexical access.

Design: Participants were children ages 9–12 years with normal hearing (NH), HAs, or CIs. Participants completed a Visual World Paradigm task in which they heard a spoken word and selected the matching picture from four options. Competitor items were either phonologically similar, semantically similar, or unrelated to the target word. As the target word unfolded, children’s fixations to the target word, cohort competitor, rhyme competitor, semantically related item, and unrelated item were recorded as indices of ongoing lexical and semantic activation.

Results: Children with HAs and children with CIs showed slower fixations to the target, reduced fixations to the cohort, and increased fixations to the rhyme, relative to children with NH. This wait-and-see profile was more pronounced in the children with CIs than in the children with HAs. Children with HAs and children with CIs also showed delayed fixations to the semantically related item, though this delay was attributable to their delay in activating words in general, not to a distinct semantic source.

Conclusions: Children with HAs and children with CIs showed qualitatively similar patterns of real-time spoken word recognition. Findings suggest that developing spoken language via degraded auditory input causes long-term cognitive adaptations to how listeners recognize spoken words, regardless of the type of hearing device used. Delayed lexical activation directly led to delayed semantic activation in children with HAs and CIs. This delay in semantic processing may impact these children’s ability to understand connected speech in everyday life.
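As context for the fixation measures described above, here is a minimal sketch of how proportion-of-fixations curves are commonly derived from Visual World Paradigm data: each eye-tracking sample is coded for the fixated object, and the proportion of looks to each object is computed per time bin relative to word onset. The column names and toy data are hypothetical; this is not the authors' analysis pipeline.

```python
# Minimal sketch: turning per-sample fixation codes into proportion curves.
import pandas as pd

def fixation_proportions(samples: pd.DataFrame, bin_ms: int = 50) -> pd.DataFrame:
    """`samples` has one row per eye-tracking sample with columns:
    'trial', 'time_ms' (relative to target-word onset), and 'object'
    (one of 'target', 'cohort', 'rhyme', 'semantic', 'unrelated')."""
    samples = samples.copy()
    samples["bin"] = (samples["time_ms"] // bin_ms) * bin_ms
    # Count looks to each object in each time bin, then normalize within bin.
    counts = (samples.groupby(["bin", "object"]).size()
                     .unstack(fill_value=0))
    return counts.div(counts.sum(axis=1), axis=0)

# Tiny fabricated example; real data would have far finer sampling.
demo = pd.DataFrame({
    "trial": [1] * 6 + [2] * 6,
    "time_ms": list(range(0, 600, 100)) * 2,
    "object": ["unrelated", "cohort", "cohort", "target", "target", "target",
               "rhyme", "rhyme", "target", "target", "target", "target"],
})
print(fixation_proportions(demo, bin_ms=200))
```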


Author(s): Mona Roxana Botezatu, Judith F. Kroll, Morgan Trachsel, Taomei Guo

We investigated whether fluent language production is associated with greater skill in resolving lexical competition during spoken word recognition and ignoring irrelevant information in non-linguistic tasks. Native English monolinguals and native English L2 learners, who varied on measures of discourse/verbal fluency and cognitive control, identified spoken English words from dense (e.g., BAG) and sparse (e.g., BALL) phonological neighborhoods in moderate noise. Participants were slower in recognizing spoken words from denser neighborhoods. The inhibitory effect of phonological neighborhood density was smaller for English monolinguals and L2 learners with higher speech production fluency, but was unrelated to cognitive control as indexed by performance on the Simon task. Converging evidence from within-language effects in monolinguals and cross-language effects in L2 learners suggests that fluent language production involves a competitive selection process that may not engage all domain-general control mechanisms. Results suggest that language experience may capture individual variation in lexical competition resolution.
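The dense versus sparse manipulation above rests on phonological neighborhood density, conventionally the count of lexicon entries that differ from a word by a single phoneme substitution, insertion, or deletion. The sketch below illustrates that count with a toy lexicon and made-up ARPAbet-style transcriptions; it is not the database or metric the authors used.

```python
# Minimal sketch of a one-phoneme-edit neighborhood density count.
def is_neighbor(a: list[str], b: list[str]) -> bool:
    """True if phoneme sequences a and b differ by exactly one
    substitution, insertion, or deletion."""
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) != 1:
        return False
    longer, shorter = (a, b) if len(a) > len(b) else (b, a)
    # Try deleting each phoneme of the longer sequence.
    return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))

def neighborhood_density(word: list[str], lexicon: dict[str, list[str]]) -> int:
    return sum(is_neighbor(word, other) for other in lexicon.values())

# Toy lexicon with hypothetical transcriptions, illustrating a dense word
# (BAG) versus a sparse word (BALL).
lexicon = {
    "bag":  ["B", "AE", "G"],
    "bat":  ["B", "AE", "T"],
    "rag":  ["R", "AE", "G"],
    "big":  ["B", "IH", "G"],
    "ball": ["B", "AO", "L"],
    "bald": ["B", "AO", "L", "D"],
}
for w in ("bag", "ball"):
    others = {k: v for k, v in lexicon.items() if k != w}
    print(w, neighborhood_density(lexicon[w], others))
```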


2021 · Vol 11 (12) · pp. 1628
Author(s): Michael S. Vitevitch, Gavin J. D. Mullin

Cognitive network science is an emerging approach that uses the mathematical tools of network science to map the relationships among representations stored in memory and to examine how that structure might influence processing. In the present study, we used computer simulations to compare a well-known model of spoken word recognition, TRACE, with a cognitive network model that uses a spreading-activation-like process, in their ability to account for the findings of several previously published behavioral studies of language processing. In all four simulations, the TRACE model failed to retrieve a sufficient number of words to assess whether it could replicate the behavioral findings. The cognitive network model successfully replicated the behavioral findings in Simulations 1 and 2. In Simulation 3a, however, the cognitive network did not replicate the behavioral findings, perhaps because an additional mechanism was not implemented in the model. In Simulation 3b, when the decay parameter in spreadr was manipulated to model this mechanism, the cognitive network model successfully replicated the behavioral findings. The results suggest that models of cognition need to take into account the multi-scale structure that exists among representations in memory, and how that structure can influence processing.
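To make the spreading-activation-with-decay idea concrete, the sketch below passes activation around a toy phonological neighbor network, retaining part at each node and removing a fixed proportion per step via a decay parameter. This is an illustrative re-implementation of the general mechanism, not the spreadr package's actual algorithm or parameter names.

```python
# Minimal sketch of spreading activation with retention and decay.
from collections import defaultdict

def spread(network: dict[str, list[str]], activation: dict[str, float],
           retention: float = 0.5, decay: float = 0.1, steps: int = 3):
    act = defaultdict(float, activation)
    for _ in range(steps):
        nxt = defaultdict(float)
        for node, a in act.items():
            neighbors = network.get(node, [])
            keep = a * retention              # portion the node keeps
            give = a * (1 - retention)        # portion it passes along
            nxt[node] += keep
            if neighbors:
                for nb in neighbors:
                    nxt[nb] += give / len(neighbors)
            else:
                nxt[node] += give             # nowhere to spread; keep it
        # Decay removes a fixed proportion of all activation each step.
        act = defaultdict(float, {n: v * (1 - decay) for n, v in nxt.items()})
    return dict(act)

# Toy phonological neighbor network (edges are illustrative only).
net = {
    "speech": ["peach", "speed"],
    "peach":  ["speech", "peace"],
    "speed":  ["speech"],
    "peace":  ["peach"],
}
print(spread(net, {"speech": 1.0}, retention=0.6, decay=0.2, steps=4))
```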


2021
Author(s): Katrina Sue McClannahan, Amelia Mainardi, Austin Luor, Yi-Fang Chiu, Mitchell S. Sommers, ...

Background: Difficulty understanding speech is a common complaint of older adults. In quiet, speech perception is often assumed to be relatively automatic. In background noise, however, higher-level cognitive processes play a more substantial role in successful communication. Cognitive resources are often limited in adults with dementia, which may therefore hamper word recognition.

Objective: The goal of this study was to determine the impact of mild dementia on spoken word recognition in quiet and in noise.

Methods: Participants were adults aged 53–86 years with (n=16) or without (n=32) dementia symptoms, as classified by a clinical dementia rating scale. Participants performed a word identification task with two levels of neighborhood density, in quiet and in speech-shaped noise at two signal-to-noise ratios (SNRs), +6 dB and +3 dB. Our hypothesis was that listeners with mild dementia would have more difficulty with speech perception in noise under conditions that tax cognitive resources.

Results: Listeners with mild dementia had poorer speech perception accuracy in both quiet and noise, a difference that held after accounting for differences in age and hearing level. Notably, even in quiet, adults with dementia symptoms correctly identified words only about 80% of the time. However, phonological neighborhood density did not affect identification performance for either group.

Conclusion: These results affirm the difficulty that listeners with mild dementia have with spoken word recognition, both in quiet and in background noise, consistent with a key role of cognitive resources in spoken word identification. However, the impact of neighborhood density in these listeners is less clear.
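For context on the +6 dB and +3 dB conditions, the sketch below shows a common way to set a target signal-to-noise ratio: rescale the noise so that the ratio of signal power to noise power, in decibels, equals the desired SNR before adding it to the speech. The synthetic waveforms are placeholders, not the study's stimuli or procedure.

```python
# Minimal sketch of mixing speech and noise at a target SNR.
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the speech-to-noise power ratio equals snr_db,
    then return the mixture (same length as `speech`)."""
    noise = noise[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Required noise power for the target SNR: p_speech / 10**(snr_db / 10).
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(0)
fs = 16000
t = np.arange(fs) / fs
speech = 0.1 * np.sin(2 * np.pi * 220 * t)       # placeholder "speech"
noise = rng.normal(0, 0.05, size=fs)             # placeholder noise
for snr in (6, 3):
    mix = mix_at_snr(speech, noise, snr)
    measured = 10 * np.log10(np.mean(speech ** 2) /
                             np.mean((mix - speech) ** 2))
    print(f"target {snr:+d} dB, measured {measured:+.2f} dB")
```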


2021 · Vol 12
Author(s): Youxi Wang, Xuelian Zang, Hua Zhang, Wei Shen

In the current study, two experiments were conducted to investigate the processing of the second syllable (considered the rhyme at the word level) during Chinese disyllabic spoken word recognition, using a printed-word paradigm. In Experiment 1, participants heard a spoken target word and were simultaneously presented with a visual display of four printed words: the target word, a phonological competitor, and two unrelated distractors. The phonological competitors were manipulated to share with targets either full phonemic overlap of the second syllable (the syllabic overlap condition; e.g., 小篆, xiao3zhuan4, “calligraphy” vs. 公转, gong1zhuan4, “revolution”) or only the initial phonemic overlap of the second syllable (the sub-syllabic overlap condition; e.g., 圆柱, yuan2zhu4, “cylinder” vs. 公转, gong1zhuan4, “revolution”). Participants were asked to select the target words while their eye movements were recorded. The results did not show any phonological competition effect in either the syllabic overlap condition or the sub-syllabic overlap condition. In Experiment 2, to maximize the likelihood of observing the phonological competition effect, a target-absent version of the printed-word paradigm was adopted, in which target words were removed from the visual display. The results of Experiment 2 showed significant phonological competition effects in both conditions: more fixations were made to the phonological competitors than to the distractors. Moreover, the phonological competition effect was larger in the syllabic overlap condition than in the sub-syllabic overlap condition. These findings shed light on the competition effect of the second syllable at the word level during spoken word recognition and, more importantly, showed that the initial phonemes of the second syllable at the syllabic level are also accessed during Chinese disyllabic spoken word recognition.
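As a concrete illustration of the two competitor conditions above, the sketch below classifies a word pair by the overlap of their second syllables, operating on (onset, rime, tone) pinyin segmentations. The segmentation scheme, function, and toy entries are illustrative assumptions, not the authors' stimulus-construction procedure.

```python
# Minimal sketch of the syllabic vs. sub-syllabic overlap distinction.
def second_syllable_overlap(word_a: list[tuple[str, str, str]],
                            word_b: list[tuple[str, str, str]]) -> str:
    """Each word is a list of (onset, rime, tone) tuples, one per syllable.
    Which member of the pair is the spoken target and which the printed
    competitor does not matter for this overlap check."""
    a_syl, b_syl = word_a[1], word_b[1]
    if a_syl == b_syl:
        return "syllabic overlap"          # full second-syllable match, tone included
    a_seg = a_syl[0] + a_syl[1]            # onset + rime, ignoring tone
    b_seg = b_syl[0] + b_syl[1]
    if a_seg.startswith(b_seg) or b_seg.startswith(a_seg):
        return "sub-syllabic overlap"      # shared initial phonemes of syllable 2
    return "no overlap"

# Toy pinyin segmentations (onset, rime, tone) for the example words above.
gongzhuan = [("g", "ong", "1"), ("zh", "uan", "4")]   # 公转 "revolution"
xiaozhuan = [("x", "iao", "3"), ("zh", "uan", "4")]   # 小篆 "calligraphy"
yuanzhu   = [("y", "uan", "2"), ("zh", "u", "4")]     # 圆柱 "cylinder"
print(second_syllable_overlap(gongzhuan, xiaozhuan))  # syllabic overlap
print(second_syllable_overlap(gongzhuan, yuanzhu))    # sub-syllabic overlap
```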

