Rapid Gains in Speed of Verbal Processing by Infants in the 2nd Year

1998 ◽  
Vol 9 (3) ◽  
pp. 228-231 ◽  
Author(s):  
Anne Fernald ◽  
John P. Pinto ◽  
Daniel Swingley ◽  
Amy Weinberg ◽  
Gerald W. McRoberts

Infants improve substantially in language ability during their 2nd year. Research on the early development of speech production shows that vocabulary begins to expand rapidly around the age of 18 months. During this period, infants also make impressive gains in understanding spoken language. We examined the time course of word recognition in infants ages 15 to 24 months, tracking their eye movements as they looked at pictures in response to familiar spoken words. The speed and efficiency of verbal processing increased dramatically over the 2nd year. Although 15-month-old infants did not orient to the correct picture until after the target word was spoken, 24-month-olds were significantly faster, shifting their gaze to the correct picture before the end of the spoken word. By 2 years of age, children are progressing toward the highly efficient performance of adults, making decisions about words based on incomplete acoustic information.

Author(s):  
Christina Blomquist ◽  
Rochelle S. Newman ◽  
Yi Ting Huang ◽  
Jan Edwards

Purpose: Children with cochlear implants (CIs) are more likely to struggle with spoken language than their age-matched peers with normal hearing (NH), and recent language-processing literature suggests that these challenges may be linked to delays in spoken word recognition. The purpose of this study was to investigate whether children with CIs use language knowledge via semantic prediction to facilitate recognition of upcoming words and help compensate for uncertainties in the acoustic signal. Method: Five- to 10-year-old children with CIs heard sentences with an informative verb (draws) or a neutral verb (gets) preceding a target word (picture). The target referent was presented on a screen, along with a phonologically similar competitor (pickle). Children's eye gaze was recorded to quantify efficiency of access to the target word and suppression of phonological competition. Performance was compared to both an age-matched group and a vocabulary-matched group of children with NH. Results: Children with CIs, like their peers with NH, demonstrated use of informative verbs to look more quickly to the target word and less to the phonological competitor. However, children with CIs demonstrated less efficient use of semantic cues relative to their peers with NH, even when matched for vocabulary ability. Conclusions: Children with CIs use semantic prediction to facilitate spoken word recognition but do so to a lesser extent than children with NH. Children with CIs experience challenges in predictive spoken language processing above and beyond limitations from delayed vocabulary development. Children with CIs with better vocabulary ability demonstrate more efficient use of lexical-semantic cues. Clinical interventions focusing on building knowledge of words and their associations may support efficiency of spoken language processing for children with CIs. Supplemental Material: https://doi.org/10.23641/asha.14417627


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Kristin J. Van Engen ◽  
Avanti Dey ◽  
Nichole Runge ◽  
Brent Spehar ◽  
Mitchell S. Sommers ◽  
...  

This study assessed the effects of age, word frequency, and background noise on the time course of lexical activation during spoken word recognition. Participants (41 young adults and 39 older adults) performed a visual world word recognition task while we monitored their gaze position. On each trial, four phonologically unrelated pictures appeared on the screen. A target word was presented auditorily following a carrier phrase (“Click on ________”), at which point participants were instructed to use the mouse to click on the picture that corresponded to the target word. High- and low-frequency words were presented in quiet to half of the participants. The other half heard the words in a low level of noise in which the words were still readily identifiable. Results showed that, even in the absence of phonological competitors in the visual array, high-frequency words were fixated more quickly than low-frequency words by both listener groups. Young adults were generally faster to fixate on targets compared to older adults, but the pattern of interactions among noise, word frequency, and listener age showed that older adults’ lexical activation largely matches that of young adults in a modest amount of noise.
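
For readers unfamiliar with how the "time course of lexical activation" is quantified in visual-world studies like this one, the following Python sketch shows one common way to turn gaze samples into a target-fixation curve. It is not the authors' analysis code; the data frame and its columns (subject, trial, time_ms relative to target-word onset, and aoi for the picture currently fixated) are assumptions for illustration.

```python
# Minimal sketch (not the authors' pipeline): proportion of looks to the target
# picture over time, from sample-by-sample visual-world eye-tracking data.
# Assumed columns of `samples`: subject, trial, time_ms (relative to target-word
# onset), aoi (which picture gaze is on: "target" or anything else).
import pandas as pd

def target_fixation_curve(samples: pd.DataFrame, bin_ms: int = 50) -> pd.DataFrame:
    """Proportion of looks to the target in successive time bins."""
    df = samples.copy()
    df["bin"] = (df["time_ms"] // bin_ms) * bin_ms            # e.g., 0, 50, 100, ...
    df["on_target"] = (df["aoi"] == "target").astype(int)
    # Average within trial first, then across trials, so long trials don't dominate.
    per_trial = df.groupby(["subject", "trial", "bin"])["on_target"].mean().reset_index()
    return (per_trial.groupby("bin")["on_target"].mean()
                     .rename("prop_target").reset_index())
```

Plotting prop_target against time for high- versus low-frequency targets, or for young versus older listeners, would show the frequency and age effects described above as differences in how early and how steeply the curves rise.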


2001 ◽  
Vol 42 (4) ◽  
pp. 317-367 ◽  
Author(s):  
Delphine Dahan ◽  
James S. Magnuson ◽  
Michael K. Tanenhaus

2009 ◽  
Vol 21 (1) ◽  
pp. 169-179 ◽  
Author(s):  
Chotiga Pattamadilok ◽  
Laetitia Perre ◽  
Stéphane Dufau ◽  
Johannes C. Ziegler

Literacy changes the way the brain processes spoken language. Most psycholinguists believe that orthographic effects on spoken language are either strategic or restricted to meta-phonological tasks. We used event-related brain potentials (ERPs) to investigate the locus and the time course of orthographic effects on spoken word recognition in a semantic task. Participants were asked to decide whether a given word belonged to a semantic category (body parts). On no-go trials, words were presented that were either orthographically consistent or inconsistent. Orthographic inconsistency (i.e., multiple spellings of the same phonology) could occur either in the first or the second syllable. The ERP data showed a clear orthographic consistency effect that preceded lexical access and semantic effects. Moreover, the onset of the orthographic consistency effect was time-locked to the arrival of the inconsistency in a spoken word, which suggests that orthography influences spoken language in a time-dependent manner. The present data join recent evidence from brain imaging showing orthographic activation in spoken language tasks. Our results extend those findings by showing that orthographic activation occurs early and affects spoken word recognition in a semantic task that does not require the explicit processing of orthographic or phonological structure.


2021 ◽  
Author(s):  
Florian Hintz ◽  
Cesko Voeten ◽  
James McQueen ◽  
Odette Scharenborg

Using the visual-word paradigm, the present study investigated the effects of word onset and offset masking on the time course of non-native spoken-word recognition in the presence of background noise. In two experiments, Dutch non-native listeners heard English target words, preceded by carrier sentences that were noise-free (Experiment 1) or contained intermittent noise (Experiment 2). Target words were either onset- or offset-masked or not masked at all. Results showed that onset masking delayed target word recognition more than offset masking did, suggesting that – similar to natives – non-native listeners strongly rely on word onset information during word recognition in noise.


2021 ◽  
Author(s):  
Kelsey Klein ◽  
Elizabeth Walker ◽  
Bob McMurray

Objective: The objective of this study was to characterize the dynamics of real-time lexical access, including lexical competition among phonologically similar words, and semantic activation in school-age children with hearing aids (HAs) and children with cochlear implants (CIs). We hypothesized that developing spoken language via degraded auditory input would lead children with HAs or CIs to adapt their approach to spoken word recognition, especially by slowing down lexical access. Design: Participants were children ages 9-12 years old with normal hearing (NH), HAs, or CIs. Participants completed a Visual World Paradigm task in which they heard a spoken word and selected the matching picture from four options. Competitor items were either phonologically similar, semantically similar, or unrelated to the target word. As the target word unfolded, children's fixations to the target word, cohort competitor, rhyme competitor, semantically related item, and unrelated item were recorded as indices of ongoing lexical and semantic activation. Results: Children with HAs and children with CIs showed slower fixations to the target, reduced fixations to the cohort, and increased fixations to the rhyme, relative to children with NH. This wait-and-see profile was more pronounced in the children with CIs than the children with HAs. Children with HAs and children with CIs also showed delayed fixations to the semantically related item, though this delay was attributable to their delay in activating words in general, not to a distinct semantic source. Conclusions: Children with HAs and children with CIs showed qualitatively similar patterns of real-time spoken word recognition. Findings suggest that developing spoken language via degraded auditory input causes long-term cognitive adaptations to how listeners recognize spoken words, regardless of the type of hearing device used. Delayed lexical activation directly led to delayed semantic activation in children with HAs and CIs. This delay in semantic processing may impact these children's ability to understand connected speech in everyday life.


2020 ◽  
Author(s):  
Kristin J. Van Engen ◽  
Avanti Dey ◽  
Nichole Runge ◽  
Brent Spehar ◽  
Mitchell S. Sommers ◽  
...  

This study assessed the effects of age, lexical frequency, and noise on the time course of lexical activation during spoken word recognition. Participants (41 young adults and 39 older adults) performed a visual world word recognition task while we monitored their gaze position. On each trial, four phonologically unrelated pictures appeared on the screen. A target word was presented following a carrier phrase (“Click on the ________”), at which point participants were instructed to use the mouse to click on the picture that corresponded to the target word. High- and low-frequency words were presented in quiet and in noise at a signal-to-noise ratio (SNR) of +3 dB. Results show that, even in the absence of phonological competitors in the visual array, high-frequency words were fixated more quickly than low-frequency words by both listener groups. Young adults were generally faster to fixate on targets compared to older adults, but the pattern of interactions among noise, lexical frequency, and listener age shows that the behavior of young adults in a small amount of noise largely matches older adult behavior.
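
To make the "+3 dB SNR" manipulation concrete, the sketch below shows one standard way of mixing speech with noise at a target signal-to-noise ratio by matching RMS levels. This is a generic illustration, not the authors' stimulus-preparation code; the function name and arguments are assumptions.

```python
# Generic illustration (not the authors' stimulus code): scale a noise waveform so
# that the speech-to-noise level difference equals a target SNR in dB, then mix.
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Return speech + noise, with the noise scaled to the requested SNR (dB)."""
    noise = noise[: len(speech)]                 # assume noise is at least as long
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20.0))
    return speech + gain * noise

# e.g., mixed = mix_at_snr(speech, noise, snr_db=3.0)  # the +3 dB condition
```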


2018 ◽  
Vol 61 (11) ◽  
pp. 2796-2803
Author(s):  
Wei Shen ◽  
Zhao Li ◽  
Xiuhong Tong

Purpose: This study aimed to investigate the time course of meaning activation of the 2nd morpheme of compound words during Chinese spoken word recognition, using an eye-tracking technique with the printed-word paradigm. Method: In the printed-word paradigm, participants were instructed to listen to a spoken target word (e.g., “大方”, /da4fang1/, generous) while presented with a visual display composed of 3 words: a morphemic competitor (e.g., “圆形”, /yuan2xing2/, circle), which was semantically related to the 2nd morpheme (e.g., “方”, /fang1/, square) of the spoken target word; a whole-word competitor (e.g., “吝啬”, /lin4se4/, stingy), which was semantically related to the spoken target word at the whole-word level; and a distractor, which was semantically related to neither the morpheme nor the whole target word. Participants were asked to indicate whether the spoken target word was present in the visual display, and their eye movements were recorded. Results: The logit mixed-model analysis showed both morphemic competitor and whole-word competitor effects: both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the 2nd-morphemic competitor effect occurred in a relatively later time window (i.e., 1000–1500 ms) compared with the whole-word competitor effect (i.e., 200–1000 ms). Conclusion: Findings in this study suggest that semantic information of both the 2nd morpheme and the whole word of a compound was activated in spoken word recognition and that the meaning activation of the 2nd morpheme followed the activation of the whole word.
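
To make the time-window analysis concrete, here is a hedged sketch of how looks to a competitor versus the distractor within one window could be compared. The study reports a logit mixed-model analysis; this sketch approximates it with a linear mixed model on empirical logits (a common simplification), and the data layout and column names are assumptions, not the authors' code.

```python
# Hedged sketch: compare fixations to a competitor vs. the distractor within one
# time window, using a linear mixed model on empirical logits as a stand-in for
# the reported logit mixed model. Assumed columns of `trials`: subject,
# window ("200-1000" or "1000-1500"), aoi ("morphemic", "wholeword", "distractor"),
# n_fix (samples on that word), n_total (all samples in the window).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def competitor_effect(trials: pd.DataFrame, window: str, competitor: str):
    d = trials[(trials["window"] == window)
               & (trials["aoi"].isin([competitor, "distractor"]))].copy()
    # Empirical logit: a transform of bounded fixation proportions built from counts.
    d["elog"] = np.log((d["n_fix"] + 0.5) / (d["n_total"] - d["n_fix"] + 0.5))
    model = smf.mixedlm("elog ~ C(aoi, Treatment('distractor'))", d, groups="subject")
    return model.fit()

# e.g., competitor_effect(trials, window="1000-1500", competitor="morphemic")
```

A positive coefficient for the competitor level would indicate more looks to that competitor than to the distractor in the chosen window.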


2012 ◽  
Vol 5 (1) ◽  
Author(s):  
Ramesh K. Mishra ◽  
Niharika Singh ◽  
Aparna Pandey ◽  
Falk Huettig

We investigated whether levels of reading ability attained through formal literacy are related to anticipatory language-mediated eye movements. Indian low and high literates listened to simple spoken sentences containing a target word (e.g., "door") while at the same time looking at a visual display of four objects (a target, i.e., the door, and three distractors). The spoken sentences were constructed in such a way that participants could use semantic, associative, and syntactic information from adjectives and particles (preceding the critical noun) to anticipate the visual target objects. High literates started to shift their eye gaze to the target objects well before target word onset. In the low-literacy group, this shift of eye gaze occurred only when the target noun (i.e., "door") was heard, more than a second later. Our findings suggest that formal literacy may be important for the fine-tuning of language-mediated anticipatory mechanisms, abilities that proficient language users can then exploit for other cognitive activities such as spoken language-mediated eye gaze. We discuss three potential mechanisms by which reading acquisition and practice may contribute to the differences in predictive spoken language processing between low and high literates.
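
To illustrate how anticipatory looking of this kind is typically measured, the sketch below computes, per trial, the latency of the first look to the target relative to target-word onset; negative values indicate looks launched before the noun was heard. It is a generic illustration with an assumed data layout, not the authors' analysis code.

```python
# Generic sketch (assumed layout, not the authors' analysis): latency of the first
# look to the target relative to target-word onset, per trial.
# Assumed columns of `samples`: group ("high"/"low" literacy), subject, trial,
# time_ms (relative to target-word onset), aoi (object currently fixated).
import pandas as pd

def first_target_look(samples: pd.DataFrame) -> pd.DataFrame:
    on_target = samples[samples["aoi"] == "target"]
    latency = on_target.groupby(["group", "subject", "trial"])["time_ms"].min()
    return latency.reset_index(name="first_look_ms")  # trials with no target look drop out

# e.g., first_target_look(samples).groupby("group")["first_look_ms"].median()
# would contrast anticipatory (negative) latencies in high vs. low literates.
```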

