Brain Oscillations during Spoken Sentence Processing

2012 ◽  
Vol 24 (5) ◽  
pp. 1149-1164 ◽  
Author(s):  
Marcela Peña ◽  
Lucia Melloni

Spoken sentence comprehension relies on rapid and effortless temporal integration of speech units displayed at different rates. Temporal integration refers to how chunks of information perceived at different time scales are linked together by the listener in mapping speech sounds onto meaning. The neural implementation of this integration remains unclear. This study explores the role of short and long windows of integration in accessing meaning from long samples of speech. In a cross-linguistic study, we explore the time course of oscillatory brain activity between 1 and 100 Hz, recorded using EEG, during the processing of native and foreign languages. We compare oscillatory responses in a group of Italian and Spanish native speakers while they attentively listen to Italian, Japanese, and Spanish utterances, played either forward or backward. The results show that both groups of participants display a significant increase in gamma band power (55–75 Hz) only when they listen to their native language played forward. The increase in gamma power starts around 1000 msec after the onset of the utterance and decreases by its end, resembling the time course of access to meaning during speech perception. In contrast, changes in low-frequency power show similar patterns for both native and foreign languages. We propose that gamma band power reflects a temporal binding phenomenon concerning the coordination of neural assemblies involved in accessing meaning of long samples of speech.
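The band-limited power measure central to this abstract can be illustrated with a minimal band-power computation. This is a generic sketch, not the study's actual analysis pipeline: the sampling rate, Welch parameters, and synthetic signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, f_lo, f_hi):
    """Mean power spectral density of x within [f_lo, f_hi] Hz,
    estimated with Welch's method."""
    freqs, psd = welch(x, fs=fs, nperseg=fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

# Synthetic check: a 65 Hz oscillation added to noise should raise
# power in the 55-75 Hz gamma band relative to noise alone.
fs = 500  # Hz, illustrative EEG sampling rate
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
noise = rng.standard_normal(t.size)
with_gamma = noise + np.sin(2 * np.pi * 65 * t)

print(band_power(with_gamma, fs, 55, 75) > band_power(noise, fs, 55, 75))
```

In practice, gamma-band power would be computed in sliding windows over each trial and baseline-corrected; the single-window version above only shows the core band-selection step.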

2017 ◽  
Vol 114 (18) ◽  
pp. E3669-E3678 ◽  
Author(s):  
Matthew J. Nelson ◽  
Imen El Karoui ◽  
Kristof Giber ◽  
Xiaofang Yang ◽  
Laurent Cohen ◽  
...  

Although sentences unfold sequentially, one word at a time, most linguistic theories propose that their underlying syntactic structure involves a tree of nested phrases rather than a linear sequence of words. Whether and how the brain builds such structures, however, remains largely unknown. Here, we used human intracranial recordings and visual word-by-word presentation of sentences and word lists to investigate how left-hemispheric brain activity varies during the formation of phrase structures. In a broad set of language-related areas, comprising multiple superior temporal and inferior frontal sites, high-gamma power increased with each successive word in a sentence but decreased suddenly whenever words could be merged into a phrase. Regression analyses showed that each additional word or multiword phrase contributed a similar amount of additional brain activity, providing evidence for a merge operation that applies equally to linguistic objects of arbitrary complexity. More superficial models of language, based solely on sequential transition probability over lexical and syntactic categories, only captured activity in the posterior middle temporal gyrus. Formal model comparison indicated that the model of multiword phrase construction provided a better fit than probability-based models at most sites in superior temporal and inferior frontal cortices. Activity in those regions was consistent with a neural implementation of a bottom-up or left-corner parser of the incoming language stream. Our results provide initial intracranial evidence for the neurophysiological reality of the merge operation postulated by linguists and suggest that the brain compresses syntactically well-formed sequences of words into a hierarchy of nested phrases.
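The phrase-structure predictor described above, a node count that climbs word by word and collapses whenever words merge into a phrase, can be sketched with a toy bracketed-sentence counter. This is an illustrative reconstruction of the idea, not the authors' actual regressor.

```python
def open_nodes(sentence):
    """Toy 'open node' trace for a bracketed sentence: each word adds
    one unmerged node; a closing bracket merges its phrase into a
    single node. Returns the total node count after each word."""
    stack = [0]            # open-node count at each level of nesting
    trace = []
    for tok in sentence.split():
        if tok == "(":
            stack.append(0)
        elif tok == ")":
            stack.pop()
            stack[-1] += 1  # the merged phrase now counts as one node
        else:
            stack[-1] += 1
            trace.append(sum(stack))
    return trace

# Node count ramps with each word and drops after a phrase closes,
# mirroring the build-and-collapse profile of high-gamma power.
print(open_nodes("( ( ten sad students ) ( of Bill Gates ) )"))
```

The drop after "students" (3 nodes down to 2) is the signature the intracranial recordings tie to the merge operation.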


2004 ◽  
Vol 16 (3) ◽  
pp. 503-522 ◽  
Author(s):  
Matthias M. Müller ◽  
Andreas Keil

In the present study, subjects selectively attended to the color of checkerboards in a feature-based attention paradigm. Induced gamma band responses (GBRs), the induced alpha band, and the event-related potential (ERP) were analyzed to uncover neuronal dynamics during selective feature processing. Replicating previous ERP findings, the selection negativity (SN) with a latency of about 160 msec was extracted. Furthermore, and similarly to previous EEG studies, a gamma band peak in a time window between 290 and 380 msec was found. This peak had its major energy in the 55- to 70-Hz range and was significantly larger for the attended color. Contrary to previous human induced gamma band studies, a much earlier 40- to 50-Hz peak in a time window between 160 and 220 msec after stimulus onset, and thus concurrent with the SN, was prominent, with significantly more energy for attended as opposed to unattended color. The induced alpha band (9.8–11.7 Hz), on the other hand, exhibited a marked suppression for attended color in a time window between 450 and 600 msec after stimulus onset. A comparison of the time course of the 40- to 50-Hz and 55- to 70-Hz induced GBRs, the induced alpha band, and the ERP revealed temporal coincidences for changes in the morphology of these brain responses. Despite these similarities in the time domain, the cortical source configuration was found to discriminate between induced GBRs and the SN. Our results suggest that large-scale synchronous high-frequency brain activity, as measured in the human GBR, plays a specific role in attentive processing of stimulus features.


2021 ◽  
Author(s):  
Andrew W Corcoran ◽  
Ricardo Perera ◽  
Matthieu Koroma ◽  
Sid Kouider ◽  
Jakob Hohwy ◽  
...  

Online speech processing imposes significant computational demands on the listening brain. Predictive coding provides an elegant account of the way this challenge is met through the exploitation of prior knowledge. While such accounts have accrued considerable evidence at the sublexical and word levels, relatively little is known about the predictive mechanisms that support sentence-level processing. Here, we exploit the 'pop-out' phenomenon (i.e. dramatic improvement in the intelligibility of degraded speech following prior information) to investigate the psychophysiological correlates of sentence comprehension. We recorded electroencephalography and pupillometry from 21 humans (10 females) while they rated the clarity of full sentences that had been degraded via noise-vocoding or sine-wave synthesis. Sentence pop-out was reliably elicited following visual presentation of the corresponding written sentence, despite participants never hearing the undistorted speech. No such effect was observed following incongruent or no written information. Pop-out was associated with improved reconstruction of the acoustic stimulus envelope from low-frequency EEG activity, implying that pop-out is mediated via top-down signals that enhance the precision of cortical speech representations. Spectral analysis revealed that pop-out was accompanied by a reduction in theta-band power, consistent with predictive coding accounts of acoustic filling-in and incremental sentence processing. Moreover, delta- and alpha-band power, as well as pupil diameter, were increased following the provision of any written information. We interpret these findings as evidence of a transition to a state of active listening, whereby participants selectively engage attentional and working memory processes to evaluate the congruence between expected and actual sensory input.
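The envelope-reconstruction analysis mentioned here (decoding the acoustic envelope from low-frequency EEG) is, at its core, a regularized linear backward model. A minimal sketch on synthetic data; the channel count, ridge penalty, and mixing model are arbitrary illustrative choices, not the study's decoder.

```python
import numpy as np

def ridge_decoder(eeg, envelope, lam=1.0):
    """Fit linear weights mapping multichannel EEG (samples x channels)
    to the speech envelope: a minimal backward stimulus-reconstruction
    model with ridge regularization."""
    return np.linalg.solve(eeg.T @ eeg + lam * np.eye(eeg.shape[1]),
                           eeg.T @ envelope)

# Toy data: the envelope is linearly mixed into 8 noisy "channels".
rng = np.random.default_rng(2)
env = rng.standard_normal(1000)
mixing = rng.standard_normal(8)
eeg = np.outer(env, mixing) + 0.1 * rng.standard_normal((1000, 8))

w = ridge_decoder(eeg, env)
recon = eeg @ w
print(np.corrcoef(recon, env)[0, 1])  # reconstruction accuracy
```

A real analysis would add time-lagged EEG features and cross-validate the correlation between reconstructed and actual envelopes, which is the quantity reported as "improved reconstruction" above.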


2020 ◽  
Author(s):  
Clara Cámara ◽  
Cristina de la Malla ◽  
Josep Marco-Pallarés ◽  
Joan López-Moliner

ABSTRACT Every time we use our smartphone, tablet, or other electronic devices, we are exposed to temporal delays between our actions and the sensory feedback. We can compensate for such delays by adjusting our motor commands, and doing so likely requires establishing new temporal mappings between motor areas and sensory predictions. However, little is known about the neural underpinnings that would support building new temporal correspondences between different brain areas. Here we address the possibility that communication through coherence, which is thought to support interareal neural communication, lies behind the neural processes accounting for how humans cope with additional delays between motor and sensory areas. We recorded EEG activity while participants intercepted moving targets using a cursor that followed their hand with a delay, rather than seeing their own hand. Participants adjusted their movements to the delayed visual feedback and intercepted the target with the cursor. The EEG data show a significant increase in coherence in the beta and gamma bands between visual and motor areas during the ongoing hand movement towards interception. However, when looking at differences between participants depending on the level of adaptation, only the increase in the gamma band correlated with the level of temporal adaptation. We describe the time course of the coherence using coupled oscillators, showing that the times at which high coherence is achieved fall within useful ranges for solving the task. Altogether, these results provide evidence for the functional relevance of brain coherence in a complex task where adapting to new delays is crucial.

AUTHOR SUMMARY Humans are often exposed to delays between their actions and the resulting sensory feedback. While there have been advances in understanding the conditions under which temporal adaptation can occur, little is known about the neural mechanisms enabling temporal adaptation. In the present study we measure brain activity (EEG) to investigate whether communication through coherence between motor and sensory areas might be responsible for one's ability to cope with externally imposed delays in an interception task. We show evidence that neural coherence in the gamma band between visual and motor areas is related to the degree of adaptation to temporal delays.
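The coherence measure these results rest on quantifies frequency-specific coupling between two signals. A minimal sketch with SciPy, using a shared 70 Hz drive as a stand-in for gamma-band inter-areal coupling; the sampling rate, frequencies, and noise levels are illustrative assumptions only.

```python
import numpy as np
from scipy.signal import coherence

fs = 500                       # Hz, illustrative sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

# Two "areas" sharing a 70 Hz drive, each with independent noise.
drive = np.sin(2 * np.pi * 70 * t)
visual = drive + rng.standard_normal(t.size)
motor = drive + rng.standard_normal(t.size)

freqs, coh = coherence(visual, motor, fs=fs, nperseg=fs)
gamma_coh = coh[np.argmin(np.abs(freqs - 70))]       # at the shared frequency
baseline = coh[(freqs >= 5) & (freqs <= 40)].mean()  # off-drive bands
print(gamma_coh, baseline)  # coherence peaks only where the drive is shared
```

Computing this in sliding windows over the movement, rather than over the whole recording, would yield the time course of coherence that the study relates to adaptation.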


2012 ◽  
Vol 34 (1) ◽  
pp. 5-44 ◽  
Author(s):  
LLORENÇ ANDREU ◽  
MÒNICA SANZ-TORRENT ◽  
JOHN C. TRUESWELL

ABSTRACT Twenty-five children with specific language impairment (SLI; age 5 years, 3 months [5;3]–8;2), 50 typically developing children (3;3–8;2), and 31 normal adults participated in three eye-tracking experiments of spoken language comprehension that were designed to investigate the use of verb information during real-time sentence comprehension in Spanish. In Experiment 1, participants heard sentences like El niño recorta con cuidado el papel (The boy trims carefully the paper) in the presence of four depicted objects, only one of which satisfied the semantic restrictions of the verb recorta (e.g., paper, clock, fox, and dinosaur). Eye movements revealed that children with SLI, like other groups, were able to recognize and retrieve the meaning of the verb rapidly enough to anticipate the upcoming semantically appropriate referent, prior to actually hearing the noun phrase el papel (the paper). Experiments 2 and 3 revealed that for all groups of participants, anticipatory eye movements were also modulated by the semantic fit of the object serving as the patient/theme of the verb. Relatively fine-grained semantic information of a verb was computed fast enough even by children with SLI to result in anticipatory eye movements to semantically appropriate referents. Children with SLI did differ from age-matched controls, but only slightly in terms of overall anticipatory looking at target objects; the time course of looking between these groups was quite similar. In addition, no differences were found between children with SLI and control children matched for mean length of utterance. Implications for theories that characterize SLI are discussed.


2020 ◽  
Vol 13 (1) ◽  
Author(s):  
Isabel D. Friesner ◽  
Erik Martinez ◽  
Haocheng Zhou ◽  
Jonathan Douglas Gould ◽  
Anna Li ◽  
...  

Abstract Chronic pain alters cortical and subcortical plasticity, causing enhanced sensory and affective responses to peripheral nociceptive inputs. Previous studies have shown that ketamine had the potential to inhibit abnormally amplified affective responses of single neurons by suppressing hyperactivity in the anterior cingulate cortex (ACC). However, the mechanism of this enduring effect has yet to be understood at the network level. In this study, we recorded local field potentials from the ACC of freely moving rats. Animals were injected with complete Freund’s adjuvant (CFA) to induce persistent inflammatory pain. Mechanical stimulations were administered to the hind paw before and after CFA administration. We found a significant increase in the high-gamma band (60–100 Hz) power in response to evoked pain after CFA treatment. Ketamine, however, reduced the high-gamma band power in response to evoked pain in CFA-treated rats. In addition, ketamine had a sustained effect on the high-gamma band power lasting up to five days after a single dose administration. These results demonstrate that ketamine has the potential to alter maladaptive neural responses in the ACC induced by chronic pain.


2021 ◽  
Author(s):  
Daniela Mertzen ◽  
Dario Paape ◽  
Brian Dillon ◽  
Ralf Engbert ◽  
Shravan Vasishth

A long-standing debate in the sentence processing literature concerns the time course of syntactic and semantic information in online sentence comprehension. The default assumption in cue-based models of parsing is that syntactic and semantic retrieval cues simultaneously guide dependency resolution. When retrieval cues match multiple items in memory, this leads to similarity-based interference. Both semantic and syntactic interference have been shown to occur in English. However, the relative timing of syntactic vs. semantic interference remains unclear. In this first-ever cross-linguistic investigation of the time course of syntactic vs. semantic interference, the data from two eye-tracking reading experiments (English and German) suggest that the two types of interference can in principle arise simultaneously during retrieval. However, the data also indicate that semantic cues may be evaluated with a small timing lag in German compared to English. This suggests that there may be cross-linguistic variation in how syntactic and semantic cues are used to resolve linguistic dependencies in real time.


Author(s):  
Margreet Vogelzang ◽  
Christiane M. Thiel ◽  
Stephanie Rosemann ◽  
Jochem W. Rieger ◽  
Esther Ruigendijk

Purpose Adults with mild-to-moderate age-related hearing loss typically exhibit issues with speech understanding, but their processing of syntactically complex sentences is not well understood. We test the hypothesis that the difficulties listeners with hearing loss have in comprehending and processing syntactically complex sentences arise because the processing of degraded input interferes with the successful processing of complex sentences. Method We performed a neuroimaging study with a sentence comprehension task, varying sentence complexity (through subject–object order and verb–arguments order) and cognitive demands (presence or absence of a secondary task) within subjects. Groups of older subjects with hearing loss (n = 20) and age-matched normal-hearing controls (n = 20) were tested. Results The comprehension data show effects of syntactic complexity and hearing ability, with normal-hearing controls outperforming listeners with hearing loss, seemingly more so on syntactically complex sentences. The secondary task did not influence off-line comprehension. The imaging data show effects of group, sentence complexity, and task, with listeners with hearing loss showing decreased activation in typical speech processing areas, such as the inferior frontal gyrus and superior temporal gyrus. No interactions between group, sentence complexity, and task were found in the neuroimaging data. Conclusions The results suggest that listeners with hearing loss process speech differently from their normal-hearing peers, possibly due to the increased demands of processing degraded auditory input. Increased cognitive demands by means of a secondary visual shape processing task influence neural sentence processing, but no evidence was found that they do so differently for listeners with hearing loss than for normal-hearing listeners.

