Time course of word identification and semantic integration in spoken language.

Author(s):  
Cyma Van Petten ◽  
Seana Coulson ◽  
Susan Rubin ◽  
Elena Plante ◽  
Marjorie Parks
2000 ◽  
Vol 108 (5) ◽  
pp. 2643-2643
Author(s):  
Cyma Van Petten ◽  
Susan Rubin ◽  
Marjorie Parks ◽  
Elena Plante ◽  
Seana Coulson

2007 ◽  
Vol 363 (1493) ◽  
pp. 1055-1069 ◽  
Author(s):  
Peter Hagoort

This paper focuses on what electrical and magnetic recordings of human brain activity reveal about spoken language understanding. Based on the high temporal resolution of these recordings, a fine-grained temporal profile of different aspects of spoken language comprehension can be obtained. Crucial aspects of speech comprehension are lexical access, selection and semantic integration. Results show that for words spoken in context, there is no ‘magic moment’ when lexical selection ends and semantic integration begins. Irrespective of whether words have early or late recognition points, semantic integration processing is initiated before words can be identified on the basis of the acoustic information alone. Moreover, for one particular event-related brain potential (ERP) component (the N400), an equivalent impact of sentence- and discourse-semantic contexts is observed. This indicates that in comprehension, a spoken word is immediately evaluated relative to the widest interpretive domain available. In addition, this happens very quickly. Findings are discussed showing that an unfolding word can often be mapped onto discourse-level representations well before the end of the word. Overall, the time course of the ERP effects is compatible with the view that the different information types (lexical, syntactic, phonological, pragmatic) are processed in parallel and influence the interpretation process incrementally, that is, as soon as the relevant pieces of information are available. This is referred to as the immediacy principle.
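
For readers unfamiliar with the ERP methodology behind these claims: an N400 is obtained by averaging EEG epochs time-locked to word onset and measuring mean amplitude in a post-onset window against a prestimulus baseline. A minimal sketch of that computation (the sampling rate, window boundaries, and array names are illustrative assumptions, not details from the paper):

```python
import numpy as np

FS = 500            # assumed EEG sampling rate in Hz
EPOCH_START = -0.2  # assumed 200 ms prestimulus baseline

def erp_window_mean(epochs, fs=FS, epoch_start=EPOCH_START,
                    window=(0.30, 0.50)):
    """Baseline-correct single-trial epochs (n_trials x n_samples),
    average them into an ERP, and return the mean amplitude in a
    post-onset window (0.30-0.50 s is the range conventionally used
    for the N400)."""
    times = epoch_start + np.arange(epochs.shape[1]) / fs
    baseline = epochs[:, times < 0].mean(axis=1, keepdims=True)
    erp = (epochs - baseline).mean(axis=0)  # average over trials
    mask = (times >= window[0]) & (times < window[1])
    return erp[mask].mean()

# The "N400 effect" is then a difference between conditions, e.g.:
# n400_effect = erp_window_mean(incongruent) - erp_window_mean(congruent)
```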


2004 ◽  
Vol 16 (7) ◽  
pp. 1272-1288 ◽  
Author(s):  
Nicole Y. Y. Wicha ◽  
Eva M. Moreno ◽  
Marta Kutas

Recent studies indicate that the human brain attends to and uses grammatical gender cues during sentence comprehension. Here, we examine the nature and time course of the effect of gender on word-by-word sentence reading. Event-related brain potentials were recorded to an article and noun while native Spanish speakers read medium- to high-constraint Spanish sentences for comprehension. The noun either fit the sentence meaning or not, and matched the preceding article in gender or not; in addition, the preceding article was either expected or unexpected based on prior sentence context. Semantically anomalous nouns elicited an N400. Gender-disagreeing nouns elicited a posterior late positivity (P600), replicating previous findings for words. Gender agreement and semantic congruity interacted in both the N400 window—with a larger negativity frontally for double violations—and the P600 window—with a larger positivity for semantic anomalies, relative to the prestimulus baseline. Finally, unexpected articles elicited an enhanced positivity (500–700 msec post onset) relative to expected articles. Overall, our data indicate that readers anticipate and attend to the gender of both articles and nouns, and use gender in real time to maintain agreement and to build sentence meaning.
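
The interaction claims here boil down to a difference of differences across the 2 × 2 design (semantic congruity × gender agreement). A toy illustration of that arithmetic, with invented placeholder amplitudes rather than values from the study:

```python
# Mean N400-window amplitudes (microvolts) per condition -- invented
# placeholder numbers, NOT values reported by Wicha et al. (2004).
n400 = {
    ("congruent", "agree"): -1.0,
    ("congruent", "disagree"): -1.2,
    ("anomalous", "agree"): -3.0,
    ("anomalous", "disagree"): -5.0,  # double violation
}

# Simple effect of semantic anomaly at each level of gender agreement:
sem_at_agree = n400[("anomalous", "agree")] - n400[("congruent", "agree")]
sem_at_disagree = n400[("anomalous", "disagree")] - n400[("congruent", "disagree")]

# A nonzero difference of differences is the 2x2 interaction: here the
# anomaly effect is larger under gender disagreement, mirroring the
# reported larger negativity for double violations.
interaction = sem_at_disagree - sem_at_agree
print(interaction)  # -1.8 with these placeholder numbers
```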


2008 ◽  
Vol 20 (7) ◽  
pp. 1235-1249 ◽  
Author(s):  
Roel M. Willems ◽  
Aslı Özyürek ◽  
Peter Hagoort

Understanding language always occurs within a situational context and, therefore, often implies combining streams of information from different domains and modalities. One such combination is that of spoken language and visual information, which are perceived together in a variety of ways during everyday communication. Here we investigate whether and how words and pictures differ in terms of their neural correlates when they are integrated into a previously built-up sentence context. This is assessed in two experiments looking at the time course (measuring event-related potentials, ERPs) and the locus (using functional magnetic resonance imaging, fMRI) of this integration process. We manipulated the ease of semantic integration of word and/or picture to a previous sentence context to increase the semantic load of processing. In the ERP study, an increased semantic load led to an N400 effect that was similar for pictures and words in terms of latency and amplitude. In the fMRI study, we found overlapping activations to both picture and word integration in the left inferior frontal cortex. Specific activations for the integration of a word were observed in the left superior temporal cortex. We conclude that despite obvious differences in representational format, semantic information coming from pictures and words is integrated into a sentence context in similar ways in the brain. This study adds to the growing insight that the language system incorporates (semantic) information coming from linguistic and extralinguistic domains with the same neural time course and by recruitment of overlapping brain areas.
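
The reported fMRI overlap between picture and word integration is the kind of claim typically established with a conjunction analysis: a voxel counts as shared only when it survives threshold in both contrasts. A minimal numpy sketch under that assumption (map names and the threshold are illustrative, not the authors' pipeline):

```python
import numpy as np

Z_THRESHOLD = 3.1  # an illustrative cluster-forming threshold

def conjunction(zmap_words, zmap_pictures, thr=Z_THRESHOLD):
    """Boolean mask of voxels above threshold in BOTH contrasts,
    i.e. the kind of overlap reported here for left inferior
    frontal cortex."""
    return (zmap_words > thr) & (zmap_pictures > thr)

def word_specific(zmap_words, zmap_pictures, thr=Z_THRESHOLD):
    """Voxels above threshold for word integration but not picture
    integration, i.e. the word-specific activation reported in left
    superior temporal cortex."""
    return (zmap_words > thr) & ~(zmap_pictures > thr)
```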


Author(s):  
Madison S. Buntrock ◽  
Brittan A. Barker ◽  
Madison M. Gurries ◽  
Tyson S. Barrett

The familiar talker advantage is the finding that a listener’s ability to perceive and understand a talker is facilitated when the listener is familiar with the talker. However, it is unclear when the benefits of familiarity emerge and whether they strengthen over time. To better understand the time course of the familiar talker advantage, we assessed the effects of long-term, implicit voice learning on 89 young adults’ sentence recognition accuracy in the presence of four-talker babble. A university professor served as the target talker in the experiment. Half the participants were students of the professor and familiar with her voice. The professor was a stranger to the remaining participants. We manipulated the listeners’ degree of familiarity with the professor over the course of a semester. We used mixed effects modeling to test for the effects of the two independent variables: talker and hours of exposure. Analyses revealed a familiar talker advantage in the listeners after 16 weeks (∼32 h) of exposure to the target voice. These results imply that talker familiarity (outside the confines of a long-term, familial relationship) emerges more quickly as a reliable cue for bootstrapping spoken language perception than the previous literature suggested.
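
The mixed-effects analysis described here, with talker and hours of exposure as predictors, might look roughly like the following in statsmodels; the data frame, file name, and column names are assumptions for illustration, not the authors' actual code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long-format data: one row per listener x test session, with
# accuracy (proportion of sentences correct), talker ("professor" or
# "stranger"), hours (cumulative exposure), and listener (participant ID).
df = pd.read_csv("sentence_recognition.csv")  # hypothetical file

# Random intercept per listener; fixed effects of talker, exposure, and
# their interaction. A familiar talker advantage that builds with
# exposure should surface as a talker x hours interaction.
model = smf.mixedlm("accuracy ~ talker * hours", df, groups=df["listener"])
result = model.fit()
print(result.summary())
```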


2021 ◽  
Vol 12 ◽  
Author(s):  
Elif Canseza Kaplan ◽  
Anita E. Wagner ◽  
Paolo Toffanin ◽  
Deniz Başkent

Earlier studies have shown that musically trained individuals may have a benefit in adverse listening situations when compared to non-musicians, especially in speech-on-speech perception. However, the literature provides mostly conflicting results. In the current study, by employing different measures of spoken language processing, we aimed to test whether we could capture potential differences between musicians and non-musicians in speech-on-speech processing. We used an offline measure of speech perception (a sentence recall task), which reveals a post-task response, and online measures of real-time spoken language processing: gaze-tracking and pupillometry. We used stimuli of comparable complexity across both paradigms and tested the same groups of participants. In the sentence recall task, musicians recalled more words correctly than non-musicians. In the eye-tracking experiment, both groups showed reduced fixations to the target and competitor words’ images as the level of speech maskers increased. The time course of gaze fixations to the competitor did not differ between groups in the speech-in-quiet condition, while the time course dynamics did differ between groups once the two-talker masker was added to the target signal. As the level of the two-talker masker increased, musicians showed reduced lexical competition, as indicated by the gaze fixations to the competitor. The pupil dilation data showed differences mainly at one target-to-masker ratio, which does not allow us to draw conclusions regarding potential differences in the use of cognitive resources between groups. Overall, the eye-tracking measure enabled us to observe that musicians may be using a different strategy than non-musicians to attain spoken word recognition as the noise level increased. However, further investigation with more fine-grained alignment between the processes captured by online and offline measures is necessary to establish whether musicians differ due to better cognitive control or better sound processing.
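
Gaze data like these are usually reduced to fixation-proportion curves: for each time bin after target word onset, the share of samples landing on the target versus the competitor image, computed separately per masker level. A small sketch of that reduction (the sampling rate, bin width, and region-of-interest codes are assumptions):

```python
import numpy as np

FS = 60        # assumed eye-tracker sampling rate (Hz)
BIN_MS = 50    # assumed analysis bin width
TARGET, COMPETITOR = 1, 2  # assumed region-of-interest codes

def fixation_proportions(gaze, roi, fs=FS, bin_ms=BIN_MS):
    """gaze: (n_trials, n_samples) array of ROI codes per sample,
    time-locked to target word onset. Returns the proportion of samples
    on `roi` within each time bin, averaged over trials."""
    samples_per_bin = int(fs * bin_ms / 1000)
    n_bins = gaze.shape[1] // samples_per_bin
    trimmed = gaze[:, : n_bins * samples_per_bin]
    binned = trimmed.reshape(gaze.shape[0], n_bins, samples_per_bin)
    return (binned == roi).mean(axis=(0, 2))  # proportion per bin

# Lexical competition shows up in the competitor curve, e.g.:
# comp_curve = fixation_proportions(gaze_two_talker, COMPETITOR)
```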


2020 ◽  
Vol 5 (6) ◽  
pp. 1380-1387 ◽  
Author(s):  
Elaine R. Smolen ◽  
Maria C. Hartman ◽  
Ye Wang

Purpose: This longitudinal study explored the reading achievement of children with hearing loss who used listening and spoken language and examined their progress across reading domains over 1 year.
Method: Sixty-four children with hearing loss enrolled in prekindergarten through third grade in a large listening and spoken language program in the Southwest United States participated. Eight subtests of the Woodcock-Johnson IV Tests of Achievement were administered, and demographic information was collected. The same subtests were administered to 53 of the participants 1 year later.
Results: The mean subtest standard scores for participants in this study were all within the average range. Participants demonstrated relative strengths in basic reading skills, such as spelling, word and nonword reading, and comprehension of short passages. Relative weaknesses were found in the areas of oral reading and word- and sentence-reading fluency. When the participants were again assessed 1 year later, significant growth was found in their letter–word identification, sentence-reading fluency, and word-reading fluency.
Conclusions: While children with hearing loss have historically struggled to achieve age-appropriate reading skills in elementary school, the participants in this study achieved mean scores within the average range. Returning participants made more than 1 year's progress in 1 year's time in several areas of reading while enrolled in a specialized program. Clinical and educational implications, including strategies to develop reading fluency, are addressed.
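
Because Woodcock-Johnson standard scores are age-normed (mean 100, SD 15), "more than 1 year's progress in 1 year's time" shows up as an increase in standard score between timepoints, and a paired test over the returning participants is the natural check. A hedged sketch with simulated placeholder data, not the study's scores:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)

# Simulated standard scores (age-normed: mean 100, SD 15) for the 53
# returning participants at two timepoints -- illustrative only.
year1 = rng.normal(100, 15, size=53)
year2 = year1 + rng.normal(4, 6, size=53)  # assumed mean gain of 4 points

# The norms already absorb one year of typical growth, so a significant
# positive change in the *standard* score indicates more than one
# year's progress in one year's time.
t, p = ttest_rel(year2, year1)
print(f"mean gain = {np.mean(year2 - year1):.1f} points, t = {t:.2f}, p = {p:.4f}")
```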


2009 ◽  
Vol 21 (1) ◽  
pp. 169-179 ◽  
Author(s):  
Chotiga Pattamadilok ◽  
Laetitia Perre ◽  
Stéphane Dufau ◽  
Johannes C. Ziegler

Literacy changes the way the brain processes spoken language. Most psycholinguists believe that orthographic effects on spoken language are either strategic or restricted to meta-phonological tasks. We used event-related brain potentials (ERPs) to investigate the locus and the time course of orthographic effects on spoken word recognition in a semantic task. Participants were asked to decide whether a given word belonged to a semantic category (body parts). On no-go trials, words were presented that were either orthographically consistent or inconsistent. Orthographic inconsistency (i.e., multiple spellings of the same phonology) could occur either in the first or the second syllable. The ERP data showed a clear orthographic consistency effect that preceded lexical access and semantic effects. Moreover, the onset of the orthographic consistency effect was time-locked to the arrival of the inconsistency in a spoken word, which suggests that orthography influences spoken language in a time-dependent manner. The present data join recent evidence from brain imaging showing orthographic activation in spoken language tasks. Our results extend those findings by showing that orthographic activation occurs early and affects spoken word recognition in a semantic task that does not require the explicit processing of orthographic or phonological structure.
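
Claims about the onset of an ERP effect being time-locked to where the inconsistency arrives are typically backed by sample-by-sample tests on the two conditions: the effect's onset is taken as the first run of consecutive time points at which consistent and inconsistent words reliably diverge. A minimal sketch of that onset estimation (array shapes, alpha, and the run-length criterion are assumptions, not the authors' exact procedure):

```python
import numpy as np
from scipy.stats import ttest_ind

def effect_onset(cons, incons, times, alpha=0.05, min_run=10):
    """cons, incons: (n_trials, n_samples) epochs for orthographically
    consistent vs. inconsistent words; times: latency (s) per sample.
    Returns the latency of the first run of `min_run` consecutive
    significant samples, or None if no such run exists."""
    _, p = ttest_ind(cons, incons, axis=0)  # one t-test per time point
    sig = p < alpha
    run = 0
    for i, s in enumerate(sig):
        run = run + 1 if s else 0
        if run == min_run:
            return times[i - min_run + 1]  # start of the run
    return None
```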


2020 ◽  
Vol 140 ◽  
pp. 107383
Author(s):  
Xiaohong Yang ◽  
Xiuping Zhang ◽  
Ying Zhang ◽  
Qian Zhang ◽  
Xiaoqing Li
