Effects of Age, Word Frequency, and Noise on the Time Course of Spoken Word Recognition

2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Kristin J. Van Engen ◽  
Avanti Dey ◽  
Nichole Runge ◽  
Brent Spehar ◽  
Mitchell S. Sommers ◽  
...  

This study assessed the effects of age, word frequency, and background noise on the time course of lexical activation during spoken word recognition. Participants (41 young adults and 39 older adults) performed a visual world word recognition task while we monitored their gaze position. On each trial, four phonologically unrelated pictures appeared on the screen. A target word was presented auditorily following a carrier phrase (“Click on ________”), at which point participants were instructed to use the mouse to click on the picture that corresponded to the target word. High- and low-frequency words were presented in quiet to half of the participants. The other half heard the words in a low level of noise in which the words were still readily identifiable. Results showed that, even in the absence of phonological competitors in the visual array, high-frequency words were fixated more quickly than low-frequency words by both listener groups. Young adults were generally faster to fixate on targets compared to older adults, but the pattern of interactions among noise, word frequency, and listener age showed that older adults’ lexical activation largely matches that of young adults in a modest amount of noise.
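
As a rough illustration of the visual-world analysis described above (not the authors' pipeline), the sketch below computes proportion-of-fixations-to-target curves from binned gaze samples; the column names (group, frequency, time_ms, on_target) are hypothetical placeholders.

```python
# Minimal sketch: proportion-of-fixations-to-target curves from
# visual-world eye-tracking samples. Column names are hypothetical.
import pandas as pd

def fixation_proportions(samples: pd.DataFrame, bin_ms: int = 50) -> pd.DataFrame:
    """Bin gaze samples relative to target-word onset and compute the
    proportion of samples on the target picture per group x frequency x bin."""
    df = samples.copy()
    df["time_bin"] = (df["time_ms"] // bin_ms) * bin_ms
    return (
        df.groupby(["group", "frequency", "time_bin"])["on_target"]
          .mean()
          .rename("prop_target")
          .reset_index()
    )

# Example: curves for young vs. older adults, high- vs. low-frequency words.
# samples = pd.read_csv("gaze_samples.csv")   # hypothetical file
# curves = fixation_proportions(samples)
```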


2021 ◽  
Author(s):  
Florian Hintz ◽  
Cesko Voeten ◽  
James McQueen ◽  
Odette Scharenborg

Using the visual-world paradigm, the present study investigated the effects of word onset and offset masking on the time course of non-native spoken-word recognition in the presence of background noise. In two experiments, Dutch non-native listeners heard English target words, preceded by carrier sentences that were noise-free (Experiment 1) or contained intermittent noise (Experiment 2). Target words were either onset-masked, offset-masked, or not masked at all. Results showed that onset masking delayed target word recognition more than offset masking did, suggesting that – similar to native listeners – non-native listeners rely strongly on word onset information during word recognition in noise.
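
To make the masking manipulation concrete, here is a minimal sketch of adding noise over the onset or offset of a word recording at a chosen SNR. This is an assumption-based illustration, not the authors' stimulus-preparation code; the segment duration and SNR value are placeholders.

```python
# Minimal sketch: mask the onset or offset of a word waveform with noise
# at a target SNR. Duration and SNR are illustrative assumptions.
import numpy as np

def mask_segment(word: np.ndarray, noise: np.ndarray, sr: int,
                 where: str = "onset", dur_ms: int = 200,
                 snr_db: float = 0.0) -> np.ndarray:
    """Add noise over the first ('onset') or last ('offset') dur_ms of a word."""
    n = int(sr * dur_ms / 1000)
    sl = slice(0, n) if where == "onset" else slice(-n, None)
    seg = word[sl]
    noise_seg = noise[:len(seg)]  # assumes the noise is at least as long
    # Scale the noise so the masked segment has the requested SNR.
    sig_pow = np.mean(seg ** 2)
    noise_pow = np.mean(noise_seg ** 2)
    scale = np.sqrt(sig_pow / (noise_pow * 10 ** (snr_db / 10)))
    out = word.copy()
    out[sl] = seg + scale * noise_seg
    return out
```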


2018 ◽  
Vol 61 (11) ◽  
pp. 2796-2803
Author(s):  
Wei Shen ◽  
Zhao Li ◽  
Xiuhong Tong

Purpose This study aimed to investigate the time course of meaning activation of the 2nd morpheme of compound words during Chinese spoken word recognition, using an eye-tracking technique with the printed-word paradigm. Method In the printed-word paradigm, participants were instructed to listen to a spoken target word (e.g., “大方”, /da4fang1/, generous) while presented with a visual display composed of 3 words: a morphemic competitor (e.g., “圆形”, /yuan2xing2/, circle), which was semantically related to the 2nd morpheme (e.g., “方”, /fang1/, square) of the spoken target word; a whole-word competitor (e.g., “吝啬”, /lin4se4/, stingy), which was semantically related to the spoken target word at the whole-word level; and a distractor, which was semantically related to neither the morpheme nor the whole target word. Participants were asked to respond whether or not the spoken target word was on the visual display, and their eye movements were recorded. Results The logit mixed-model analysis showed both morphemic competitor and whole-word competitor effects: both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the 2nd-morpheme competitor effect occurred in a relatively later time window (i.e., 1000–1500 ms) than the whole-word competitor effect (i.e., 200–1000 ms). Conclusion Findings in this study suggest that semantic information of both the 2nd morpheme and the whole word of a compound was activated in spoken word recognition and that the meaning activation of the 2nd morpheme followed the activation of the whole word.
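
The time-window analysis can be sketched as follows. The authors report a logit mixed model; for brevity, this illustrative example fits a plain logistic regression (no random effects for participants or items), and the column names (window, object_type, fixated) are hypothetical.

```python
# Illustrative sketch only: does object type (morphemic competitor,
# whole-word competitor, distractor) predict whether an object was fixated
# within a given analysis window? Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def fit_window(df: pd.DataFrame, window: str):
    """Logistic regression on fixation (0/1) by object type, distractor as baseline."""
    sub = df[df["window"] == window]
    model = smf.logit("fixated ~ C(object_type, Treatment('distractor'))", data=sub)
    return model.fit()

# fixations = pd.read_csv("fixations_by_window.csv")  # hypothetical file
# print(fit_window(fixations, "200-1000ms").summary())
# print(fit_window(fixations, "1000-1500ms").summary())
```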


2021 ◽  
Author(s):  
Drew Jordan McLaughlin ◽  
Maggie Zink ◽  
Lauren Gaunt ◽  
Brent Spehar ◽  
Kristin J. Van Engen ◽  
...  

In most contemporary activation-competition frameworks for spoken word recognition, candidate words compete against phonological “neighbors” with similar acoustic properties (e.g., “cap” vs. “cat”). Thus, processing words with more competitors should come at a greater cognitive cost than processing words with fewer competitors, due to increased demands for selecting the correct item and inhibiting incorrect candidates. Importantly, these processes should operate even in the absence of differences in accuracy. In the present study, we tested this proposal by examining differences in processing costs associated with neighborhood density for highly intelligible items presented in the absence of noise. A second goal was to examine whether the cognitive demands associated with increased neighborhood density were greater for older adults than for young adults. Using pupillometry as an index of cognitive processing load, we compared the cognitive demands associated with spoken word recognition for words from dense and sparse neighborhoods, presented in quiet, for young (n = 67) and older (n = 69) adult listeners. Growth curve analysis of the pupil data indicated that older adults showed a greater evoked pupil response for spoken words than did young adults, consistent with increased cognitive load during spoken word recognition. Words from dense neighborhoods were marginally more demanding to process than words from sparse neighborhoods. There was also an interaction between age and neighborhood density, indicating larger effects of density in young adult listeners. These results highlight the importance of assessing both cognitive demands and accuracy when investigating the mechanisms underlying spoken word recognition.
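
A minimal sketch of the growth-curve idea follows, assuming a long-format table of baseline-corrected pupil samples with hypothetical column names; a full analysis would also include random effects by participant and item, which are omitted here.

```python
# Minimal sketch: orthogonal-polynomial (Legendre) time terms for a
# growth-curve-style model of the pupil response. Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def add_time_polys(df: pd.DataFrame, degree: int = 2) -> pd.DataFrame:
    """Add Legendre polynomial time terms ot1..otN over the analysis window,
    with time rescaled to [-1, 1]."""
    t = df["time_ms"].to_numpy(dtype=float)
    x = 2 * (t - t.min()) / (t.max() - t.min()) - 1
    basis = np.polynomial.legendre.legvander(x, degree)  # columns: P0, P1, P2, ...
    for k in range(1, degree + 1):
        df[f"ot{k}"] = basis[:, k]
    return df

# pupil = add_time_polys(pd.read_csv("pupil_timecourse.csv"))  # hypothetical file
# fit = smf.ols("pupil_size ~ (ot1 + ot2) * age_group * density", data=pupil).fit()
```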


2020 ◽  
Vol 1 (4) ◽  
pp. 452-473
Author(s):  
Chad S. Rogers ◽  
Michael S. Jones ◽  
Sarah McConkey ◽  
Brent Spehar ◽  
Kristin J. Van Engen ◽  
...  

Understanding spoken words requires the rapid matching of a complex acoustic stimulus with stored lexical representations. The degree to which brain networks supporting spoken word recognition are affected by adult aging remains poorly understood. In the current study we used fMRI to measure the brain responses to spoken words in two conditions: an attentive listening condition, in which no response was required, and a repetition task. Listeners were 29 young adults (aged 19–30 years) and 32 older adults (aged 65–81 years) without self-reported hearing difficulty. We found largely similar patterns of activity during word perception for both young and older adults, centered on the bilateral superior temporal gyrus. As expected, the repetition condition resulted in significantly more activity in areas related to motor planning and execution (including the premotor cortex and supplemental motor area) compared to the attentive listening condition. Importantly, however, older adults showed significantly less activity in probabilistically defined auditory cortex than young adults when listening to individual words in both the attentive listening and repetition tasks. Age differences in auditory cortex activity were seen selectively for words (no age differences were present for 1-channel vocoded speech, used as a control condition), and could not be easily explained by accuracy on the task, movement in the scanner, or hearing sensitivity (available on a subset of participants). These findings indicate largely similar patterns of brain activity for young and older adults when listening to words in quiet, but suggest less recruitment of auditory cortex by the older adults.
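
As a toy illustration of the group comparison described above (not the authors' fMRI pipeline), the sketch below assumes each participant's mean beta within a probabilistic auditory-cortex mask has already been extracted into a table with hypothetical column names, and tests the young versus older difference for one condition.

```python
# Toy sketch: ROI-level group comparison of auditory-cortex activity.
# Column names (condition, age_group, roi_beta) are hypothetical.
import pandas as pd
from scipy import stats

def roi_group_difference(df: pd.DataFrame, condition: str = "attentive_listening"):
    """Welch's t-test on mean ROI betas: young vs. older adults."""
    sub = df[df["condition"] == condition]
    young = sub.loc[sub["age_group"] == "young", "roi_beta"]
    older = sub.loc[sub["age_group"] == "older", "roi_beta"]
    return stats.ttest_ind(young, older, equal_var=False)

# betas = pd.read_csv("auditory_roi_betas.csv")  # hypothetical file
# print(roi_group_difference(betas, "repetition"))
```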


1997 ◽  
Author(s):  
Paul D. Allopenna ◽  
James S. Magnuson ◽  
Michael K. Tanenhaus

2018 ◽  
Vol 61 (6) ◽  
pp. 1409-1425 ◽  
Author(s):  
Julia L. Evans ◽  
Ronald B. Gillam ◽  
James W. Montgomery

Purpose This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children. Method Participants included 234 children (aged 7;0–11;11 years;months), 117 with DLD and 117 TD children, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine cognitive factors hypothesized to influence spoken word recognition, including phonological working memory, updating, attention shifting, and interference inhibition. Results Spoken word recognition at both initial and final accept gate points did not differ for children with DLD and TD controls after controlling for target word knowledge in both groups. The 2 groups also did not differ on measures of updating, attention switching, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of speed of spoken word recognition for the children in the TD group. Conclusion Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ for children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize the target word in a stream of speech differed qualitatively for children with and without DLD.
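
A rough sketch of 1:1 propensity matching on the listed covariates is shown below, under the assumption that the covariates are already numerically coded; it is illustrative only, matches with replacement, and is not the authors' exact matching procedure.

```python
# Rough sketch: 1:1 nearest-neighbor propensity matching of DLD to TD children.
# Column names (group, age, gender, ses, maternal_ed) are hypothetical and
# assumed to be numeric.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def propensity_match(df: pd.DataFrame, covariates: list) -> pd.DataFrame:
    """Match each DLD child to the TD child with the closest propensity score."""
    X, y = df[covariates], (df["group"] == "DLD").astype(int)
    pscore = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    df = df.assign(pscore=pscore)
    dld, td = df[df["group"] == "DLD"], df[df["group"] == "TD"]
    nn = NearestNeighbors(n_neighbors=1).fit(td[["pscore"]])
    _, idx = nn.kneighbors(dld[["pscore"]])
    matched_td = td.iloc[idx.ravel()]
    return pd.concat([dld, matched_td])

# children = pd.read_csv("participants.csv")  # hypothetical file
# matched = propensity_match(children, ["age", "gender", "ses", "maternal_ed"])
```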

