Cognitive Predictors of Spoken Word Recognition in Children With and Without Developmental Language Disorders

2018, Vol. 61(6), pp. 1409–1425
Author(s):  
Julia L. Evans ◽  
Ronald B. Gillam ◽  
James W. Montgomery

Purpose This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children. Method Participants included 234 children (aged 7;0–11;11 [years;months]), 117 with DLD and 117 TD children, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine cognitive factors hypothesized to influence spoken word recognition, including phonological working memory, updating, attention shifting, and interference inhibition. Results Spoken word recognition at both initial and final accept gate points did not differ for children with DLD and TD controls after controlling for target word knowledge in both groups. The 2 groups also did not differ on measures of updating, attention shifting, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of speed of spoken word recognition for children in the TD group. Conclusion Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ for children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize a target word in a stream of speech differed qualitatively for children with and without DLD.

Author(s):  
Christina Blomquist ◽  
Rochelle S. Newman ◽  
Yi Ting Huang ◽  
Jan Edwards

Purpose Children with cochlear implants (CIs) are more likely to struggle with spoken language than their age-matched peers with normal hearing (NH), and new language processing literature suggests that these challenges may be linked to delays in spoken word recognition. The purpose of this study was to investigate whether children with CIs use language knowledge via semantic prediction to facilitate recognition of upcoming words and help compensate for uncertainties in the acoustic signal. Method Five- to 10-year-old children with CIs heard sentences with an informative verb (draws) or a neutral verb (gets) preceding a target word (picture). The target referent was presented on a screen, along with a phonologically similar competitor (pickle). Children's eye gaze was recorded to quantify efficiency of access of the target word and suppression of phonological competition. Performance was compared to both an age-matched group and a vocabulary-matched group of children with NH. Results Children with CIs, like their peers with NH, demonstrated use of informative verbs to look more quickly to the target word and look less to the phonological competitor. However, children with CIs demonstrated less efficient use of semantic cues relative to their peers with NH, even when matched for vocabulary ability. Conclusions Children with CIs use semantic prediction to facilitate spoken word recognition but do so to a lesser extent than children with NH. Children with CIs experience challenges in predictive spoken language processing above and beyond limitations from delayed vocabulary development. Children with CIs with better vocabulary ability demonstrate more efficient use of lexical-semantic cues. Clinical interventions focusing on building knowledge of words and their associations may support efficiency of spoken language processing for children with CIs. Supplemental Material https://doi.org/10.23641/asha.14417627


Author(s):  
Cynthia G. Clopper ◽  
Janet B. Pierrehumbert ◽  
Terrin N. Tamati

Abstract
Lexical neighborhood density is a well-known factor affecting phonological categorization in spoken word recognition. The current study examined the interaction between lexical neighborhood density and dialect variation in spoken word recognition in noise. The stimulus materials were real English words produced in two regional American English dialects. To manipulate lexical neighborhood density, target words were selected so that predicted phonological confusions across dialects resulted in real English words in the word-competitor condition and did not result in real English words in the nonword-competitor condition. Word and vowel recognition performance was more accurate in the nonword-competitor condition than the word-competitor condition for both talker dialects. An examination of the responses to specific vowels revealed the role of dialect variation in eliciting this effect. When the predicted phonological confusions were real lexical neighbors, listeners could respond with either the target word or the confusable minimal pair, and were more likely than expected to produce a minimal pair differing from the target by one vowel. When the predicted phonological confusions were not real words, however, the listeners exhibited less lexical competition and responded with the target word or a minimal pair differing by one consonant.


2020, Vol. 6(1)
Author(s):  
Kristin J. Van Engen ◽  
Avanti Dey ◽  
Nichole Runge ◽  
Brent Spehar ◽  
Mitchell S. Sommers ◽  
...  

This study assessed the effects of age, word frequency, and background noise on the time course of lexical activation during spoken word recognition. Participants (41 young adults and 39 older adults) performed a visual world word recognition task while we monitored their gaze position. On each trial, four phonologically unrelated pictures appeared on the screen. A target word was presented auditorily following a carrier phrase (“Click on ________”), at which point participants were instructed to use the mouse to click on the picture that corresponded to the target word. High- and low-frequency words were presented in quiet to half of the participants. The other half heard the words in a low level of noise in which the words were still readily identifiable. Results showed that, even in the absence of phonological competitors in the visual array, high-frequency words were fixated more quickly than low-frequency words by both listener groups. Young adults were generally faster to fixate on targets compared to older adults, but the pattern of interactions among noise, word frequency, and listener age showed that older adults’ lexical activation largely matches that of young adults in a modest amount of noise.


2021
Author(s):  
Florian Hintz ◽  
Cesko Voeten ◽  
James McQueen ◽  
Odette Scharenborg

Using the visual-word paradigm, the present study investigated the effects of word onset and offset masking on the time course of non-native spoken-word recognition in the presence of background noise. In two experiments, Dutch non-native listeners heard English target words, preceded by carrier sentences that were noise-free (Experiment 1) or contained intermittent noise (Experiment 2). Target words were either onset-masked, offset-masked, or not masked at all. Results showed that onset masking delayed target word recognition more than offset masking did, suggesting that, like native listeners, non-native listeners strongly rely on word onset information during word recognition in noise.


2021
Author(s):  
Kelsey Klein ◽  
Elizabeth Walker ◽  
Bob McMurray

Objective: The objective of this study was to characterize the dynamics of real-time lexical access, including lexical competition among phonologically similar words, and semantic activation in school-age children with hearing aids (HAs) and children with cochlear implants (CIs). We hypothesized that developing spoken language via degraded auditory input would lead children with HAs or CIs to adapt their approach to spoken word recognition, especially by slowing down lexical access. Design: Participants were children ages 9-12 years old with normal hearing (NH), HAs, or CIs. Participants completed a Visual World Paradigm task in which they heard a spoken word and selected the matching picture from four options. Competitor items were either phonologically similar, semantically similar, or unrelated to the target word. As the target word unfolded, children's fixations to the target word, cohort competitor, rhyme competitor, semantically related item, and unrelated item were recorded as indices of ongoing lexical and semantic activation. Results: Children with HAs and children with CIs showed slower fixations to the target, reduced fixations to the cohort, and increased fixations to the rhyme, relative to children with NH. This wait-and-see profile was more pronounced in the children with CIs than the children with HAs. Children with HAs and children with CIs also showed delayed fixations to the semantically related item, though this delay was attributable to their delay in activating words in general, not to a distinct semantic source. Conclusions: Children with HAs and children with CIs showed qualitatively similar patterns of real-time spoken word recognition. Findings suggest that developing spoken language via degraded auditory input causes long-term cognitive adaptations to how listeners recognize spoken words, regardless of the type of hearing device used. Delayed lexical activation directly led to delayed semantic activation in children with HAs and CIs.
This delay in semantic processing may impact these children’s ability to understand connected speech in everyday life.


2020
Author(s):  
Kristin J. Van Engen ◽  
Avanti Dey ◽  
Nichole Runge ◽  
Brent Spehar ◽  
Mitchell S. Sommers ◽  
...  

This study assessed the effects of age, lexical frequency, and noise on the time course of lexical activation during spoken word recognition. Participants (41 young adults and 39 older adults) performed a visual world word recognition task while we monitored their gaze position. On each trial, four phonologically unrelated pictures appeared on the screen. A target word was presented following a carrier phrase (“Click on the ________”), at which point participants were instructed to use the mouse to click on the picture that corresponded to the target word. High- and low-frequency words were presented in quiet and in noise at a signal-to-noise ratio (SNR) of +3 dB. Results showed that, even in the absence of phonological competitors in the visual array, high-frequency words were fixated more quickly than low-frequency words by both listener groups. Young adults were generally faster to fixate on targets compared to older adults, but the pattern of interactions among noise, lexical frequency, and listener age showed that the behavior of young adults in a small amount of noise largely matches older adult behavior.


2018, Vol. 40(2), pp. 351–372
Author(s):  
SOPHIE DUFOUR ◽  
YU-YING CHUANG ◽  
NOËL NGUYEN

Abstract
In two semantic priming experiments, this study examined how southern French speakers process the standard French [o] variant in closed syllables in comparison to their own variant [ɔ]. In Experiment 1, southern French speakers showed facilitation in the processing of the associated target word VIOLET whether the word prime mauve was pronounced by a standard French speaker ([mov]) or a southern French speaker ([mɔv]). More importantly, Experiment 1 also revealed that words of the mauve type, which are subject to dialectal variation, behave in exactly the same way as words of the gomme type, which are pronounced with [ɔ] by both southern and standard French speakers, and for which we also found no modulation in the magnitude of the priming effect as a function of the dialect of the speaker. Experiment 2 replicated the priming effect found with the standard French variant [mov], and failed to show a priming effect with nonwords such as [mœv] that also differ from the southern French variant [mɔv] by only one phonetic feature. Our study thus provides further evidence for efficient processing of dialectal variants during spoken word recognition, even if these variants are not part of the speaker’s own productions.


2018, Vol. 61(11), pp. 2796–2803
Author(s):  
Wei Shen ◽  
Zhao Li ◽  
Xiuhong Tong

Purpose This study aimed to investigate the time course of meaning activation of the 2nd morpheme of compound words during Chinese spoken word recognition, using an eye-tracking technique with the printed-word paradigm. Method In the printed-word paradigm, participants were instructed to listen to a spoken target word (e.g., “大方”, /da4fang1/, generous) while presented with a visual display composed of 3 words: a morphemic competitor (e.g., “圆形”, /yuan2xing2/, circle), which was semantically related to the 2nd morpheme (e.g., “方”, /fang1/, square) of the spoken target word; a whole-word competitor (e.g., “吝啬”, /lin4se4/, stingy), which was semantically related to the spoken target word at the whole-word level; and a distractor, which was semantically related to neither the morpheme nor the whole target word. Participants were asked to respond whether the spoken target word was on the visual display, and their eye movements were recorded. Results The logit mixed-model analysis showed both a morphemic competitor effect and a whole-word competitor effect: both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the 2nd-morphemic competitor effect occurred in a relatively later time window (i.e., 1000–1500 ms) than the whole-word competitor effect (i.e., 200–1000 ms). Conclusion Findings in this study suggest that semantic information of both the 2nd morpheme and the whole word of a compound was activated in spoken word recognition and that the meaning activation of the 2nd morpheme followed the activation of the whole word.
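An aside on the analysis approach: logit mixed models of the kind reported above are typically fit to fixation counts aggregated into time bins, after an empirical-logit transform that keeps bins with 0% or 100% looks finite. The sketch below is illustrative only (simulated samples and an arbitrary bin size, not this study's pipeline):

```python
import numpy as np

def empirical_logit(y, n):
    """Empirical logit of y looks out of n samples per bin.

    The +0.5 corrections keep the transform finite when a bin
    contains 0% or 100% looks to the interest area.
    """
    y = np.asarray(y, dtype=float)
    n = np.asarray(n, dtype=float)
    return np.log((y + 0.5) / (n - y + 0.5))

def bin_fixations(sample_times, on_target, bin_ms=500):
    """Aggregate per-sample looks (booleans) into counts per time bin."""
    sample_times = np.asarray(sample_times)
    on_target = np.asarray(on_target, dtype=float)
    bins = (sample_times // bin_ms).astype(int)
    counts = np.bincount(bins, weights=on_target)
    totals = np.bincount(bins)
    return counts, totals

# Simulated trial: 1500 ms of eye-tracking samples every 2 ms,
# with looks to the competitor beginning after 600 ms.
times = np.arange(0, 1500, 2)
looks = times > 600
counts, totals = bin_fixations(times, looks, bin_ms=500)
elogit = empirical_logit(counts, totals)
```

The `elogit` values per bin (and per subject/item) would then serve as the dependent variable in a linear mixed model, or raw counts could feed a logistic mixed model directly.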


2021, Vol. 12
Author(s):  
Youxi Wang ◽  
Xuelian Zang ◽  
Hua Zhang ◽  
Wei Shen

In the current study, two experiments were conducted to investigate the processing of the second syllable (considered the rhyme at the word level) during Chinese disyllabic spoken word recognition, using a printed-word paradigm. In Experiment 1, participants heard a spoken target word and were simultaneously presented with a visual display of four printed words: a target word, a phonological competitor, and two unrelated distractors. The phonological competitors were manipulated to share with targets either full phonemic overlap of the second syllable (the syllabic overlap condition; e.g., 小篆, xiao3zhuan4, “calligraphy” vs. 公转, gong1zhuan4, “revolution”) or initial phonemic overlap of the second syllable (the sub-syllabic overlap condition; e.g., 圆柱, yuan2zhu4, “cylinder” vs. 公转, gong1zhuan4, “revolution”). Participants were asked to select the target words, and their eye movements were simultaneously recorded. The results did not show any phonological competition effect in either the syllabic overlap condition or the sub-syllabic overlap condition. In Experiment 2, to maximize the likelihood of observing the phonological competition effect, a target-absent version of the printed-word paradigm was adopted, in which target words were removed from the visual display. The results of Experiment 2 showed significant phonological competition effects in both conditions, i.e., more fixations were made to the phonological competitors than to the distractors. Moreover, the phonological competition effect was larger in the syllabic overlap condition than in the sub-syllabic overlap condition. These findings shed light on the competition effect of the second syllable at the word level during spoken word recognition and, more importantly, showed that the initial phonemes of the second syllable at the syllabic level are also accessed during Chinese disyllabic spoken word recognition.


2021
Author(s):  
Drew Jordan McLaughlin ◽  
Maggie Zink ◽  
Lauren Gaunt ◽  
Brent Spehar ◽  
Kristin J. Van Engen ◽  
...  

In most contemporary activation-competition frameworks for spoken word recognition, candidate words compete against phonological “neighbors” with similar acoustic properties (e.g., “cap” vs. “cat”). Thus, processing words with more competitors should come at a greater cognitive cost than processing words with fewer competitors, due to increased demands for selecting the correct item and inhibiting incorrect candidates. Importantly, these processes should operate even in the absence of differences in accuracy. In the present study, we tested this proposal by examining differences in processing costs associated with neighborhood density for highly intelligible items presented in the absence of noise. A second goal was to examine whether the cognitive demands associated with increased neighborhood density were greater for older adults compared with young adults. Using pupillometry as an index of cognitive processing load, we compared the cognitive demands associated with spoken word recognition for words from dense and sparse neighborhoods, presented in quiet, for young (n = 67) and older (n = 69) adult listeners. Growth curve analysis of the pupil data indicated that older adults showed a greater evoked pupil response for spoken words than did young adults, consistent with increased cognitive load during spoken word recognition. Words from dense neighborhoods were marginally more demanding to process than words from sparse neighborhoods. There was also an interaction between age and neighborhood density, indicating larger effects of density in young adult listeners. These results highlight the importance of assessing both cognitive demands and accuracy when investigating the mechanisms underlying spoken word recognition.
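For readers unfamiliar with growth curve analysis as applied to evoked pupil responses: it models the time course with orthogonal polynomial time terms (intercept, linear, quadratic, ...) as fixed-effect predictors. The sketch below is a minimal, self-contained illustration on simulated data, not the study's actual pipeline (which would add mixed-effects structure over subjects and items):

```python
import numpy as np

def orthogonal_time_polys(n_bins, degree=2):
    """Build orthogonal polynomial time terms of the kind used as
    predictors in growth curve analysis."""
    t = np.arange(n_bins, dtype=float)
    # Vandermonde matrix [1, t, t^2, ...]; QR orthogonalizes its columns.
    V = np.vander(t, degree + 1, increasing=True)
    Q, _ = np.linalg.qr(V)
    return Q  # columns are mutually orthogonal with unit norm

# Fit a quadratic growth curve to a toy evoked pupil response:
# a rise-and-plateau dilation curve with a small wiggle.
n_bins = 50
X = orthogonal_time_polys(n_bins, degree=2)
t = np.linspace(0, 1, n_bins)
pupil = 0.4 * t - 0.3 * t**2 + 0.01 * np.sin(9 * t)
beta, *_ = np.linalg.lstsq(X, pupil, rcond=None)
```

Because the time terms are orthogonal, each coefficient in `beta` can be interpreted independently: the intercept term tracks overall pupil size (the group difference reported above), while the linear and quadratic terms capture the slope and curvature of the evoked response.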

