Age-Related Differences in Auditory Cortex Activity During Spoken Word Recognition

2020, Vol 1 (4), pp. 452-473
Author(s): Chad S. Rogers, Michael S. Jones, Sarah McConkey, Brent Spehar, Kristin J. Van Engen, ...

Understanding spoken words requires the rapid matching of a complex acoustic stimulus with stored lexical representations. The degree to which brain networks supporting spoken word recognition are affected by adult aging remains poorly understood. In the current study we used fMRI to measure the brain responses to spoken words in two conditions: an attentive listening condition, in which no response was required, and a repetition task. Listeners were 29 young adults (aged 19–30 years) and 32 older adults (aged 65–81 years) without self-reported hearing difficulty. We found largely similar patterns of activity during word perception for both young and older adults, centered on the bilateral superior temporal gyrus. As expected, the repetition condition resulted in significantly more activity in areas related to motor planning and execution (including the premotor cortex and supplemental motor area) compared to the attentive listening condition. Importantly, however, older adults showed significantly less activity in probabilistically defined auditory cortex than young adults when listening to individual words in both the attentive listening and repetition tasks. Age differences in auditory cortex activity were seen selectively for words (no age differences were present for 1-channel vocoded speech, used as a control condition), and could not be easily explained by accuracy on the task, movement in the scanner, or hearing sensitivity (available on a subset of participants). These findings indicate largely similar patterns of brain activity for young and older adults when listening to words in quiet, but suggest less recruitment of auditory cortex by the older adults.
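As a rough illustration of the kind of region-of-interest comparison this abstract describes (not the authors' actual pipeline), the following Python sketch averages contrast-map values within a probabilistically defined auditory-cortex mask per participant and compares the two age groups; all file names and the probability threshold are assumptions.

```python
# Illustrative sketch: compare mean activation in a probabilistically defined
# auditory-cortex ROI between young and older adults. File names and the 50%
# probability threshold are assumptions for the example.
import numpy as np
import nibabel as nib
from scipy import stats

def mean_roi_activation(contrast_path, roi_mask):
    """Average contrast-map values (e.g., words > baseline) inside the ROI."""
    data = nib.load(contrast_path).get_fdata()
    return data[roi_mask].mean()

# Probabilistic auditory cortex map thresholded at 50% (assumed threshold).
prob_map = nib.load("auditory_cortex_probability.nii.gz").get_fdata()
roi_mask = prob_map >= 0.5

young_files = [f"sub-young{i:02d}_words_contrast.nii.gz" for i in range(1, 30)]
older_files = [f"sub-older{i:02d}_words_contrast.nii.gz" for i in range(1, 33)]

young = np.array([mean_roi_activation(f, roi_mask) for f in young_files])
older = np.array([mean_roi_activation(f, roi_mask) for f in older_files])

# Two-sample t-test on ROI means (Welch's correction for unequal variances).
t, p = stats.ttest_ind(young, older, equal_var=False)
print(f"Young vs. older auditory-cortex activation: t = {t:.2f}, p = {p:.3f}")
```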

2020
Author(s): Chad S. Rogers, Michael S. Jones, Sarah McConkey, Brent Spehar, Kristin J. Van Engen, ...

Understanding spoken words requires the rapid matching of a complex acoustic stimulus with stored lexical representations. The degree to which the brain networks supporting spoken word recognition are affected by adult aging remains poorly understood. In the current study we used fMRI to measure the brain responses to spoken words in two conditions: an attentive listening condition, in which no response was required, and a repetition task. Listeners were 29 young adults (aged 19–30 years) and 32 older adults (aged 65–81 years) without self-reported hearing difficulty. We found largely similar patterns of activity during word perception for both young and older adults, centered on the bilateral superior temporal gyrus. As expected, the repetition condition resulted in significantly more activity in areas related to motor planning and execution (including the premotor cortex and supplemental motor area) compared to the attentive listening condition. Importantly, however, older adults showed significantly less activity in probabilistically defined auditory cortex than young adults when listening to individual words in both the attentive listening and repetition tasks. Age differences in auditory cortex activity were seen selectively for words (no age differences were present for 1-channel vocoded speech, used as a control condition), and could not be easily explained by accuracy on the task, movement in the scanner, or hearing sensitivity (available on a subset of participants). These findings indicate largely similar patterns of brain activity for young and older adults when listening to words in quiet, but suggest less recruitment of auditory cortex by the older adults.


2020, Vol 6 (1)
Author(s): Kristin J. Van Engen, Avanti Dey, Nichole Runge, Brent Spehar, Mitchell S. Sommers, ...

This study assessed the effects of age, word frequency, and background noise on the time course of lexical activation during spoken word recognition. Participants (41 young adults and 39 older adults) performed a visual world word recognition task while we monitored their gaze position. On each trial, four phonologically unrelated pictures appeared on the screen. A target word was presented auditorily following a carrier phrase (“Click on ________”), at which point participants were instructed to use the mouse to click on the picture that corresponded to the target word. High- and low-frequency words were presented in quiet to half of the participants. The other half heard the words in a low level of noise in which the words were still readily identifiable. Results showed that, even in the absence of phonological competitors in the visual array, high-frequency words were fixated more quickly than low-frequency words by both listener groups. Young adults were generally faster to fixate on targets compared to older adults, but the pattern of interactions among noise, word frequency, and listener age showed that older adults’ lexical activation largely matches that of young adults in a modest amount of noise.
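A minimal sketch of how fixation proportions in a visual world task might be computed from sample-level gaze data, assuming a simple tabular format (column names are hypothetical; this is not the study's analysis code):

```python
# Illustrative sketch: proportion of fixations to the target picture over time,
# separately for high- and low-frequency words, from sample-level gaze data.
import pandas as pd

# Assumed columns: subject, trial, frequency ("high"/"low"), time_ms (relative
# to target onset), fixated_object ("target", "distractor1", ...).
samples = pd.read_csv("gaze_samples.csv")

samples["on_target"] = (samples["fixated_object"] == "target").astype(int)
samples["time_bin"] = (samples["time_ms"] // 50) * 50  # 50 ms bins

curves = (
    samples.groupby(["frequency", "time_bin"])["on_target"]
    .mean()
    .rename("prop_target_fixations")
    .reset_index()
)
print(curves.head())
```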


2020
Author(s): Kristin J. Van Engen, Avanti Dey, Nichole Runge, Brent Spehar, Mitchell S. Sommers, ...

This study assessed the effects of age, lexical frequency, and noise on the time course of lexical activation during spoken word recognition. Participants (41 young adults and 39 older adults) performed a visual world word recognition task while we monitored their gaze position. On each trial, four phonologically unrelated pictures appeared on the screen. A target word was presented following a carrier phrase (“Click on the ________”), at which point participants were instructed to use the mouse to click on the picture that corresponded to the target word. High- and low-frequency words were presented in quiet and in noise at a signal-to-noise ratio (SNR) of +3 dB. Results showed that, even in the absence of phonological competitors in the visual array, high-frequency words were fixated more quickly than low-frequency words by both listener groups. Young adults were generally faster to fixate on targets compared to older adults, but the pattern of interactions among noise, lexical frequency, and listener age showed that the behavior of young adults in a small amount of noise largely matches that of older adults.
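The +3 dB SNR condition can be illustrated with a small stimulus-mixing sketch: the noise is rescaled relative to the RMS level of the speech so that the mixture reaches the desired ratio. File names are placeholders, and this is not the authors' stimulus-preparation code.

```python
# Illustrative sketch: mix a target word with noise at a +3 dB signal-to-noise ratio.
import numpy as np
import soundfile as sf  # pip install soundfile

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

speech, fs = sf.read("target_word.wav")
noise, fs_n = sf.read("speech_shaped_noise.wav")
assert fs == fs_n
noise = noise[: len(speech)]

snr_db = 3.0
# Scale the noise so that 20*log10(rms(speech)/rms(noise_scaled)) equals snr_db.
noise_scaled = noise * (rms(speech) / (rms(noise) * 10 ** (snr_db / 20)))

mixture = speech + noise_scaled
mixture /= max(1.0, np.abs(mixture).max())  # avoid clipping
sf.write("target_word_snr+3dB.wav", mixture, fs)
```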


1996, Vol 39 (4), pp. 724-733
Author(s): Nancy B. Marshall, Linda W. Duke, Amanda C. Walley

This study investigated the effects of normal aging and Alzheimer's disease on listeners' ability to recognize gated spoken words. Groups of healthy young adults, healthy older adults, and adults with Alzheimer's disease were presented with isolated gated spoken words. Theoretical predictions of the Cohort model of spoken word recognition (Marslen-Wilson, 1984) were tested, employing both between-group and within-group comparisons. The findings for the young adults supported the Cohort model's predictions. The findings for the older adult groups revealed different effects for age and disease. These results are interpreted in relation to the theoretical predictions, the findings of previous gating studies, and the differentiation of age-related from disease-related changes in spoken word recognition.
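For readers unfamiliar with the gating paradigm, the sketch below shows one way gated stimuli can be constructed by truncating a recorded word at successively longer onsets; the 50 ms increment is an assumption, not a detail reported in this abstract.

```python
# Illustrative sketch of gated-stimulus construction (not the original materials):
# truncate a recorded word at successively longer "gates" from word onset.
import soundfile as sf

word, fs = sf.read("spoken_word.wav")
gate_step_ms = 50  # assumed increment
step = int(fs * gate_step_ms / 1000)

# The final gate includes the whole word.
for i, end in enumerate(range(step, len(word) + step, step), start=1):
    gated = word[:end]
    sf.write(f"spoken_word_gate{i:02d}.wav", gated, fs)
```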


2021
Author(s): Drew Jordan McLaughlin, Maggie Zink, Lauren Gaunt, Brent Spehar, Kristin J. Van Engen, ...

In most contemporary activation-competition frameworks for spoken word recognition, candidate words compete against phonological “neighbors” with similar acoustic properties (e.g., “cap” vs. “cat”). Thus, processing words with more competitors should come at a greater cognitive cost than processing words with fewer competitors, due to increased demands for selecting the correct item and inhibiting incorrect candidates. Importantly, these processes should operate even in the absence of differences in accuracy. In the present study, we tested this proposal by examining differences in processing costs associated with neighborhood density for highly intelligible items presented in the absence of noise. A second goal was to examine whether the cognitive demands associated with increased neighborhood density were greater for older adults compared with young adults. Using pupillometry as an index of cognitive processing load, we compared the cognitive demands associated with spoken word recognition for words from dense and sparse neighborhoods, presented in quiet, for young (n = 67) and older (n = 69) adult listeners. Growth curve analysis of the pupil data indicated that older adults showed a greater evoked pupil response for spoken words than did young adults, consistent with increased cognitive load during spoken word recognition. Words from dense neighborhoods were marginally more demanding to process than words from sparse neighborhoods. There was also an interaction between age and neighborhood density, indicating larger effects of density in young adult listeners. These results highlight the importance of assessing both cognitive demands and accuracy when investigating the mechanisms underlying spoken word recognition.
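A simplified growth-curve-analysis sketch in Python (assumed column names, reduced random-effects structure; not the authors' model), showing how a pupil time course can be modeled with polynomial time terms crossed with age group and neighborhood density:

```python
# Illustrative growth-curve-analysis sketch with a mixed-effects model.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: subject, age_group ("young"/"older"), density ("dense"/"sparse"),
# time_ms (relative to word onset), pupil (baseline-corrected pupil size).
df = pd.read_csv("pupil_timecourse.csv")

# Centered linear and quadratic time terms (a simple stand-in for the orthogonal
# polynomials often used in growth curve analysis).
t = (df["time_ms"] - df["time_ms"].mean()) / df["time_ms"].std()
df["time1"], df["time2"] = t, t ** 2

model = smf.mixedlm(
    "pupil ~ (time1 + time2) * age_group * density",
    data=df,
    groups=df["subject"],  # random intercept per listener
)
print(model.fit().summary())
```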


Author(s): Debra Titone, Julie Mercier, Aruna Sudarshan, Irina Pivneva, Jason Gullifer, ...

We investigated whether bilingual older adults experience within- and cross-language competition during spoken word recognition similarly to younger adults matched on age of second language (L2) acquisition, objective and subjective L2 proficiency, and current L2 exposure. In a visual world eye-tracking paradigm, older and younger adults, who were French-dominant or English-dominant English-French bilinguals, listened to English words and looked at pictures including the target (field), a within-language competitor (feet) or cross-language (French) competitor (fille, “girl”), and unrelated filler pictures while their eye movements were monitored. Older adults showed evidence of greater within-language competition as a function of increased target and competitor phonological overlap. There was some evidence of age-related differences in cross-language competition; however, it was quite small overall and varied as a function of target language proficiency. These results suggest that greater within- and possibly cross-language lexical competition during spoken word recognition may underlie some of the communication difficulties encountered by healthy bilingual older adults.
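The notion of target-competitor phonological overlap can be illustrated with a small sketch that counts shared phonemes from word onset; the transcriptions below are simplified assumptions rather than the study's actual coding scheme.

```python
# Illustrative sketch: quantify target-competitor phonological overlap as the
# number of shared phonemes from word onset (simplified transcriptions).
def onset_overlap(target, competitor):
    """Count shared phonemes from the beginning of two phoneme lists."""
    n = 0
    for t, c in zip(target, competitor):
        if t != c:
            break
        n += 1
    return n

field = ["f", "i", "l", "d"]   # target
feet = ["f", "i", "t"]         # within-language competitor
fille = ["f", "i", "j"]        # cross-language (French) competitor

print(onset_overlap(field, feet))   # 2 shared onset phonemes
print(onset_overlap(field, fille))  # 2 shared onset phonemes
```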


2020, pp. 026765832096825
Author(s): Jeong-Im Han, Song Yi Kim

The present study investigated the influence of orthographic input on the recognition of second language (L2) spoken words with phonological variants when the first language (L1) and L2 have different orthographic structures. Lexical encoding for intermediate-to-advanced-level Mandarin learners of Korean was assessed using masked cross-modal and within-modal priming tasks. Given that Korean has obstruent nasalization in the syllable coda, prime-target pairs were created with and without such phonological variants, but the spellings provided in the cross-modal task reflected their unaltered, nonnasalized forms. The results indicate that when L2 learners are exposed to a transparent alphabetic orthography, they do not show a particular cost for spoken word recognition of L2 phonological variants as long as the variation is regular and rule-governed.
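A toy sketch of the coda obstruent nasalization rule mentioned above (romanized and simplified; not the study's stimulus code): a coda obstruent becomes the corresponding nasal when the next syllable begins with a nasal.

```python
# Illustrative sketch of Korean coda obstruent nasalization in romanized form,
# e.g., /kuk.mul/ "soup" -> [kung.mul].
NASALIZE = {"p": "m", "t": "n", "k": "ng"}
NASALS = {"m", "n"}

def nasalize(syllables):
    """Apply coda obstruent nasalization across (onset, vowel, coda) syllables."""
    out = []
    for i, (onset, vowel, coda) in enumerate(syllables):
        next_onset = syllables[i + 1][0] if i + 1 < len(syllables) else ""
        if coda in NASALIZE and next_onset in NASALS:
            coda = NASALIZE[coda]
        out.append((onset, vowel, coda))
    return out

# /kuk.mul/ -> [('k', 'u', 'ng'), ('m', 'u', 'l')]
print(nasalize([("k", "u", "k"), ("m", "u", "l")]))
```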

