Grammatical predictors for fMRI time-courses

2019 ◽  
pp. 159-173
Author(s):  
Jixing Li ◽  
John Hale

This study examines several different time-series formalizations of sentence-processing effort with respect to their ability to predict the observed fMRI time-course in regions of the brain. These regressors formalize cognitive theories of language processing involving phrase-structure parsing, memory burden, lexical meaning, and other factors such as word-sequence probabilities. The results suggest that even in the presence of these covariates, a predictor based on minimalist grammars significantly improves a regression model of the BOLD signal in a posterior temporal region, roughly corresponding to Wernicke's area.
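
The regression logic behind such analyses can be sketched compactly. The following is a minimal illustration, not the authors' actual pipeline: it assumes a canonical double-gamma HRF, a word-aligned complexity predictor (the onsets and per-word costs below are invented), and an ordinary least-squares fit to a single ROI time-course.

```python
# Minimal sketch: regressing a BOLD time-course on an HRF-convolved,
# word-aligned complexity predictor (illustrative names and data).
import numpy as np
from scipy.stats import gamma

TR = 2.0                      # repetition time in seconds (assumed)
n_scans = 300

def canonical_hrf(tr, duration=32.0):
    """Double-gamma canonical HRF sampled at the scan rate."""
    t = np.arange(0, duration, tr)
    peak = gamma.pdf(t, 6)          # positive response peaking around 6 s
    undershoot = gamma.pdf(t, 16)   # late undershoot
    hrf = peak - 0.35 * undershoot
    return hrf / hrf.sum()

# Hypothetical word-level predictor: onset (s) and per-word parsing cost.
word_onsets = np.array([1.2, 1.8, 2.5, 3.1])   # one entry per word
word_costs = np.array([0.4, 1.7, 0.9, 2.2])    # e.g., derived node counts

# Project word events onto the scan grid, then convolve with the HRF.
stick = np.zeros(n_scans)
idx = (word_onsets / TR).astype(int)
np.add.at(stick, idx, word_costs)
regressor = np.convolve(stick, canonical_hrf(TR))[:n_scans]

# Fit the regressor (plus intercept) to an ROI time-course by least squares.
rng = np.random.default_rng(0)
bold = 0.5 * regressor + rng.normal(size=n_scans)   # placeholder signal
X = np.column_stack([np.ones(n_scans), regressor])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
print("estimated effect of the grammatical predictor:", beta[1])
```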

Author(s):  
Laura Roche Chapman ◽  
Brooke Hallowell

Purpose: Arousal and cognitive effort are relevant yet often overlooked components of attention during language processing. Pupillometry can be used to provide a psychophysiological index of arousal and cognitive effort. Given that much remains unknown about the relationships between cognition and the language deficits seen in people with aphasia (PWA), pupillometry may be uniquely suited to exploring those relationships. The purpose of this study was to examine arousal and the time course of the allocation of cognitive effort related to sentence processing in people with and without aphasia. Method: Nineteen PWA and age- and education-matched control participants listened to relatively easy (subject-relative) and relatively difficult (object-relative) sentences and were required to answer occasional comprehension questions. Tonic and phasic pupillary responses were used to index arousal and the unfolding of cognitive effort, respectively, while sentences were processed. Group differences in tonic and phasic responses were examined. Results: Group differences were observed for both tonic and phasic responses. PWA exhibited greater overall arousal throughout the task compared with controls, as evidenced by larger tonic pupil responses. Controls exhibited more effort (greater phasic responses) for difficult compared with easy sentences; PWA did not. Group differences in phasic responses were apparent during end-of-sentence and postsentence time windows. Conclusions: Results indicate that the attentional state of PWA in this study was not consistently supportive of adequate task engagement. PWA in our sample may have relatively limited attentional capacity or may have challenges with allocating existing capacity in ways that support adequate task engagement and performance. This work adds to the body of evidence supporting the validity of pupillometric tasks for the study of aphasia and contributes to a better understanding of the nature of language deficits in aphasia. Supplemental Material https://doi.org/10.23641/asha.16959376
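
For readers unfamiliar with the tonic/phasic distinction, the sketch below illustrates how the two indices are typically separated: the tonic level is the mean pupil diameter in a pretrial baseline window, and the phasic response is the baseline-corrected trace that unfolds during the sentence. The sampling rate, window lengths, and data are assumptions, not the authors' parameters.

```python
# Minimal sketch: separating tonic (baseline) and phasic (event-locked)
# pupil responses for one trial; sampling rate and windows are assumptions.
import numpy as np

FS = 60                      # pupillometer sampling rate in Hz (assumed)

def tonic_and_phasic(pupil, sentence_onset_s, baseline_s=1.0):
    """Return pretrial tonic level and baseline-corrected phasic trace."""
    onset = int(sentence_onset_s * FS)
    base = int(baseline_s * FS)
    tonic = np.nanmean(pupil[onset - base:onset])   # pretrial diameter
    phasic = pupil[onset:] - tonic                  # event-locked change
    return tonic, phasic

# Hypothetical trial: 1 s baseline followed by a slow dilation.
t = np.arange(0, 6, 1 / FS)
trace = 3.0 + 0.2 * np.clip(t - 1.0, 0, None) * np.exp(-(t - 1.0))
tonic, phasic = tonic_and_phasic(trace, sentence_onset_s=1.0)
print(f"tonic level: {tonic:.2f} mm, peak phasic: {np.nanmax(phasic):.2f} mm")
```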


2019 ◽  
Author(s):  
Sophie Arana ◽  
André Marquand ◽  
Annika Hultén ◽  
Peter Hagoort ◽  
Jan-Mathijs Schoffelen

The meaning of a sentence can be understood whether it is presented in written or spoken form. It is therefore highly probable that the brain processes supporting language comprehension are at least partly independent of sensory modality. To identify where and when in the brain language processing is independent of sensory modality, we directly compared neuromagnetic brain signals of 200 human subjects (102 males) either reading or listening to sentences. We used multiset canonical correlation analysis to align individual subject data in a way that boosts those aspects of the signal that are common to all subjects, allowing us to capture word-by-word signal variations that are consistent across subjects at a fine temporal scale. Quantifying this consistency in activation across both reading and listening tasks revealed a mostly left-hemispheric cortical network. Areas showing consistent activity patterns include not only areas previously implicated in higher-level language processing, such as left prefrontal, superior and middle temporal areas and the anterior temporal lobe, but also parts of the control network as well as subcentral and more posterior temporal-parietal areas. Activity in this supramodal sentence-processing network starts in temporal areas and rapidly spreads to the other regions involved. The findings not only indicate the involvement of a large network of brain areas in supramodal language processing but also show that the linguistic information contained in the unfolding sentences modulates brain activity in a word-specific manner across subjects.
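
The alignment idea can be illustrated with a compact variant of multiset CCA. The sketch below uses the whiten-and-concatenate (MAXVAR-style) shortcut: after per-subject whitening, the leading components of the stacked temporal bases maximize cross-subject correlation. All shapes and data are simulated, and the study's actual implementation may differ.

```python
# Minimal sketch of multiset CCA via the whiten-and-concatenate
# (MAXVAR-style) shortcut: per-subject whitening plus a joint SVD.
import numpy as np

def mcca(datasets, n_components=10, rank=20):
    """datasets: list of (n_times, n_channels) arrays, one per subject.
    Returns one (n_times, n_components) aligned time-course per subject."""
    whitened = []
    for X in datasets:
        X = X - X.mean(axis=0)
        U, _, _ = np.linalg.svd(X, full_matrices=False)
        whitened.append(U[:, :rank])      # orthonormal temporal basis
    # Leading components of the stacked bases maximize cross-subject
    # correlation under the per-subject whitening assumption.
    stacked = np.hstack(whitened)
    Uc, _, _ = np.linalg.svd(stacked, full_matrices=False)
    shared = Uc[:, :n_components]
    # Project each subject's subspace onto the shared temporal components.
    return [W @ (W.T @ shared) for W in whitened]

rng = np.random.default_rng(1)
common = rng.normal(size=(500, 10))       # signal shared across subjects
subs = [common @ rng.normal(size=(10, 30)) + 0.5 * rng.normal(size=(500, 30))
        for _ in range(5)]
aligned = mcca(subs)
r = np.corrcoef(aligned[0][:, 0], aligned[1][:, 0])[0, 1]
print(f"cross-subject correlation of first component: {r:.2f}")
```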


Author(s):  
Marta Kutas ◽  
Kara D. Federmeier

The intact human brain is the only known system that can interpret and respond to the varied visual and acoustic patterns that constitute language. Therefore, unlike researchers of other cognitive phenomena, (neuro)psycholinguists cannot avail themselves of invasive techniques in non-human animals to uncover the responsible mechanisms in the large parts of the (human) brain that have been implicated in language processing. Engagement of these different anatomical areas does, however, generate distinct patterns of biological activity (such as ion flow across neural membranes) that can be recorded inside and outside the heads of humans as they quickly, often seamlessly, and without much conscious reflection on the computations and linguistic regularities involved, understand spoken, written, or signed sentences. This article summarizes studies of event-related brain potentials and sentence processing. It discusses electrophysiology; language and the brain; processing language meaning; context effects in meaning processing; non-literal language processing; processing language form; parsing; slow potentials and the closure positive shift; and plasticity and learning.


2015 ◽  
Vol 27 (8) ◽  
pp. 1542-1551 ◽  
Author(s):  
Kristof Strijkers ◽  
Daisy Bertrand ◽  
Jonathan Grainger

We investigated how linguistic intention affects the time course of visual word recognition by comparing the brain's electrophysiological response to a word's lexical frequency, a well-established psycholinguistic marker of lexical access, when participants actively retrieve the meaning of the written input (semantic categorization) versus a situation where no language processing is necessary (ink-color categorization). In the semantic task, the ERPs elicited by high-frequency words started to diverge from those elicited by low-frequency words as early as 120 msec after stimulus onset. By contrast, when participants categorized the colored font of the very same words in the color task, word frequency did not modulate ERPs until some 100 msec later (220 msec poststimulus onset), and did so for a shorter period and with a smaller scalp distribution. The results demonstrate that, although written words indeed elicit automatic recognition processes in the brain, the speed and quality of lexical processing critically depend on the top-down intention to engage in a linguistic task.
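
The timing claim rests on locating when the two ERP waveforms first diverge. A minimal sketch of one common heuristic follows: pointwise t-tests with a consecutive-significance criterion. The simulated data, sampling rate, and threshold choices are illustrative, and published analyses typically use cluster-based statistics instead.

```python
# Minimal sketch: estimating when high- and low-frequency word ERPs diverge,
# using pointwise t-tests with a consecutive-significance criterion.
import numpy as np
from scipy import stats

FS = 500                                   # samples per second (assumed)
times = np.arange(-0.1, 0.6, 1 / FS)       # epoch from -100 to 600 ms

rng = np.random.default_rng(2)
n_trials = 80
noise = lambda: rng.normal(scale=1.5, size=(n_trials, times.size))
effect = np.where(times > 0.12, 1.0, 0.0)  # simulated divergence at 120 ms
erp_high = noise() + effect                # high-frequency words
erp_low = noise()                          # low-frequency words

t_vals, p_vals = stats.ttest_ind(erp_high, erp_low, axis=0)

def divergence_onset(p, times, alpha=0.05, min_consecutive=10):
    """First time point starting a run of consecutively significant samples."""
    run = 0
    for i, significant in enumerate(p < alpha):
        run = run + 1 if significant else 0
        if run == min_consecutive:
            return times[i - min_consecutive + 1]
    return None

onset = divergence_onset(p_vals, times)
print(f"estimated divergence onset: {onset * 1000:.0f} ms" if onset is not None
      else "no divergence detected")
```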


2021 ◽  
pp. 1-62
Author(s):  
Orsolya B Kolozsvári ◽  
Weiyong Xu ◽  
Georgia Gerike ◽  
Tiina Parviainen ◽  
Lea Nieminen ◽  
...  

Speech perception is dynamic and shows changes across development. In parallel, functional differences in brain development over time have been well documented, and these differences may interact with changes in speech perception during infancy and childhood. Further, there is evidence that the two hemispheres contribute unequally to speech segmentation at the sentence and phonemic levels. To disentangle those contributions, we studied the cortical tracking of speech units of various sizes that are crucial for spoken language processing in children (4.7- to 9.3-year-olds, N=34) and adults (N=19). We measured participants' magnetoencephalogram (MEG) responses to syllables, words and sentences, calculated the coherence between the speech signal and MEG responses at the level of words and sentences, and further examined auditory evoked responses to syllables. Age-related differences were found for coherence values at the delta and theta frequency bands. Both frequency bands showed an effect of stimulus type, although this was attributed to the length of the stimulus and not to linguistic unit size. There was no difference between hemispheres at the source level, either in coherence values for word or sentence processing or in evoked responses to syllables. The results highlight the importance of the lower frequencies for speech tracking in the brain across different lexical units. Further, stimulus length affects the speech-brain associations, suggesting that methodological approaches should be selected carefully when studying speech envelope processing at the neural level. Speech tracking in the brain seems decoupled from the more general maturation of the auditory cortex.
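
Speech-brain coherence of the kind reported here is conceptually simple: extract the speech envelope, compute spectral coherence with the neural signal, and average within frequency bands. The sketch below does this for one simulated MEG channel; the sampling rate, filter settings, and band edges are assumptions rather than the study's parameters.

```python
# Minimal sketch: coherence between a speech envelope and one MEG channel,
# summarized in the delta and theta bands (simulated signals, assumed rates).
import numpy as np
from scipy.signal import coherence, hilbert, butter, filtfilt

FS = 200                                   # common sampling rate (assumed)
t = np.arange(0, 120, 1 / FS)              # two minutes of signal

rng = np.random.default_rng(3)
carrier = rng.normal(size=t.size)
envelope = 1 + 0.5 * np.sin(2 * np.pi * 3 * t)      # 3 Hz modulation
speech = carrier * envelope
meg = 0.3 * envelope + rng.normal(size=t.size)       # channel tracks envelope

# Extract the broadband speech envelope via the Hilbert transform.
b, a = butter(4, 10 / (FS / 2), btype="low")
speech_env = filtfilt(b, a, np.abs(hilbert(speech)))

f, cxy = coherence(speech_env, meg, fs=FS, nperseg=FS * 4)
delta = cxy[(f >= 0.5) & (f < 4)].mean()
theta = cxy[(f >= 4) & (f < 8)].mean()
print(f"delta coherence: {delta:.2f}, theta coherence: {theta:.2f}")
```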


2004 ◽  
Vol 16 (2) ◽  
pp. 167-177 ◽  
Author(s):  
M. Allison Cato ◽  
Bruce Crosson ◽  
Didem Gökçay ◽  
David Soltysik ◽  
Christina Wierenga ◽  
...  

Responses of rostral frontal and retrosplenial cortices to the emotional significance of words were measured using functional magnetic resonance imaging (fMRI). Twenty-six strongly right-handed participants engaged in a language task that alternated between silent word generation to categories with positive, negative, or neutral emotional connotation and a baseline task of silent repetition of emotionally neutral words. Activation uniquely associated with word generation to categories with positive or negative versus neutral emotional connotation occurred bilaterally in rostral frontal and retrosplenial cortices. Furthermore, the time courses of activity in these areas differed, indicating that they subserve different functions in processing the emotional connotation of words. Namely, the retrosplenial cortex appears to be involved in evaluating the emotional salience of information from external sources, whereas the rostral frontal cortex also plays a role in the internal generation of words with emotional connotation. In both areas, activity associated with positive or negative emotional connotation was more extensive in the left hemisphere than the right, regardless of valence, presumably due to the language demands of word generation. The present findings localize specific areas involved in processing the emotional meaning of words within the brain's distributed semantic system. In addition, the time course analysis reveals diverging mechanisms in anterior and posterior cortical areas during the processing of words with emotional significance.
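
The block-design contrast underlying such results can be sketched as follows, assuming invented block timings and a simulated ROI time-course; this illustrates the general GLM logic rather than the authors' analysis.

```python
# Minimal sketch: a block-design contrast of emotional vs. neutral word
# generation against a repetition baseline (illustrative timings and data).
import numpy as np
from scipy.stats import gamma

TR, n_scans = 2.0, 240
scan_times = np.arange(n_scans) * TR
hrf_t = np.arange(0, 32, TR)
hrf = gamma.pdf(hrf_t, 6) - 0.35 * gamma.pdf(hrf_t, 16)  # double-gamma HRF

def block_regressor(onsets, duration):
    """HRF-convolved boxcar: 1 during blocks starting at `onsets` (s)."""
    box = np.zeros(n_scans)
    for on in onsets:
        box[(scan_times >= on) & (scan_times < on + duration)] = 1
    return np.convolve(box, hrf)[:n_scans]

# Alternating 30 s generation blocks (assumed timings).
emotional = block_regressor(onsets=[30, 150, 270, 390], duration=30)
neutral = block_regressor(onsets=[90, 210, 330, 450], duration=30)

rng = np.random.default_rng(4)
roi = 1.0 * emotional + 0.3 * neutral + rng.normal(size=n_scans)

X = np.column_stack([np.ones(n_scans), emotional, neutral])
beta, *_ = np.linalg.lstsq(X, roi, rcond=None)
contrast = beta[1] - beta[2]       # emotional > neutral generation
print(f"emotional vs. neutral contrast estimate: {contrast:.2f}")
```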


1992 ◽  
Vol 12 (4) ◽  
pp. 535-545 ◽  
Author(s):  
Thomas McLaughlin ◽  
Bruce Steinberg ◽  
Birger Christensen ◽  
Ian Law ◽  
Agnete Parving ◽  
...  

We used changes in regional cerebral blood flow (rCBF) to disclose regions involved in central auditory and language processing in the normal brain. rCBF was quantified with a fast-rotating, single-photon emission computerized tomograph (SPECT) and inhalation of 133Xe. rCBF data were obtained simultaneously from parallel, transverse slices of the brain. The lower slice was positioned to include both Broca's and Wernicke's areas. The upper slice included regions generally regarded by neurobehaviorists as less related to primary auditory or linguistic functions. We presented three types of auditory stimuli to ten healthy, young volunteers: (a) diotically presented Danish speech, (b) dichotic word stimulation, and (c) white noise. Wilcoxon's signed-rank test revealed increased rCBF in language-related areas of cortex, viz., Wernicke's area and its right-sided homologous area, as well as in Broca's area (left hemisphere), when subjects listened to narrative speech compared to white noise (baseline). No significant rCBF differences were detected with this test during dichotic stimulation vs. white noise. A more sophisticated statistical method, factor analysis, disclosed patterns of functionally intercorrelated regions, reducing the highly intercorrelated rCBF measures from 28 regions of interest to a set of three independent factors. These factors accounted for 77% of the total variation in rCBF values and appeared to represent statistical analogues of independent brain networks involved in (I) auditory/linguistic, (II) attentional, and (III) visual imaging activity.
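
Both analyses map onto standard tools. The sketch below runs a paired Wilcoxon signed-rank test on one region and then reduces 28 simulated regional rCBF measures to three factors; the region structure, effect sizes, and data are all invented.

```python
# Minimal sketch: a paired Wilcoxon signed-rank test per region, then a
# factor analysis across 28 regions of interest (simulated rCBF data).
import numpy as np
from scipy.stats import wilcoxon
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
n_subjects, n_regions = 10, 28

# Speech vs. white-noise rCBF per subject, with a boost in one region
# standing in for Wernicke's area (region 0; purely illustrative).
noise_cbf = rng.normal(50, 5, size=(n_subjects, n_regions))
speech_cbf = noise_cbf + rng.normal(0, 2, size=(n_subjects, n_regions))
speech_cbf[:, 0] += 4

stat, p = wilcoxon(speech_cbf[:, 0], noise_cbf[:, 0])
print(f"Wilcoxon signed-rank test, region 0: p = {p:.3f}")

# Reduce the 28 intercorrelated regional measures to three factors.
fa = FactorAnalysis(n_components=3, random_state=0)
fa.fit(np.vstack([noise_cbf, speech_cbf]))
loadings = fa.components_            # shape: (3 factors, 28 regions)
print("regions loading most on factor 1:",
      np.argsort(-np.abs(loadings[0]))[:5])
```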


2018 ◽  
Vol 36 (1) ◽  
pp. 14-20 ◽  
Author(s):  
Lingmin Jin ◽  
Jinbo Sun ◽  
Ziliang Xu ◽  
Xuejuan Yang ◽  
Peng Liu ◽  
...  

Objective To use a promising analytical method, namely intersubject synchronisation (ISS), to evaluate the brain activity associated with the instant effects of acupuncture, and to compare the findings with traditional general linear model (GLM) methods. Methods 30 healthy volunteers were recruited for this study. Block-designed manual acupuncture stimuli were delivered at SP6, and de qi sensations were measured after acupuncture stimulation. All subjects underwent functional MRI (fMRI) scanning during the acupuncture stimuli. The fMRI data were analysed separately by ISS and traditional GLM methods. Results All subjects experienced de qi sensations. ISS analysis showed that the regions activated during acupuncture stimulation at SP6 fell into five main clusters based on their time courses. The time courses of clusters 1 and 2 were in line with the acupuncture stimulation pattern, and the active regions were mainly involved in the sensorimotor system and salience network. Clusters 3, 4 and 5 displayed time courses almost contrary to the stimulation pattern; the brain regions activated included the default mode network, the descending pain modulation pathway and visual cortices. GLM analysis indicated that the brain responses associated with the instant effects of acupuncture were largely implicated in sensory and motor processing and sensory integration. Conclusion The ISS analysis considered the sustained effect of acupuncture and uncovered additional information not shown by GLM analysis. We suggest that ISS may be a suitable approach for investigating the brain responses associated with the instant effects of acupuncture.
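
ISS itself is straightforward to compute: correlate each voxel's time course across every pair of subjects and average. The sketch below pairs that with a k-means clustering of the group-mean time courses, echoing the five-cluster description above; the block paradigm, threshold, and data are simulated assumptions.

```python
# Minimal sketch: intersubject synchronisation (ISS) as voxelwise pairwise
# correlation across subjects, then k-means clustering of mean time courses.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
n_subjects, n_voxels, n_scans = 30, 200, 180
stim = np.tile(np.repeat([1.0, 0.0], 15), n_scans // 30)   # block paradigm

# Half the voxels follow the stimulation pattern, half run contrary to it.
signal = np.where(np.arange(n_voxels) < 100, 1.0, -1.0)[:, None] * stim
data = signal[None] + rng.normal(size=(n_subjects, n_voxels, n_scans))

def iss(data):
    """Mean pairwise intersubject correlation per voxel."""
    z = (data - data.mean(-1, keepdims=True)) / data.std(-1, keepdims=True)
    corr = np.einsum("ivt,jvt->ijv", z, z) / data.shape[-1]
    i, j = np.triu_indices(data.shape[0], k=1)
    return corr[i, j].mean(axis=0)

sync = iss(data)
synced_voxels = sync > 0.1                 # arbitrary illustrative threshold

# Cluster synchronized voxels by the shape of the group-mean time course.
mean_tc = data.mean(axis=0)[synced_voxels]
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(mean_tc)
print("voxels per cluster:", np.bincount(labels))
```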


2018 ◽  
Author(s):  
Lin Wang ◽  
Gina Kuperberg ◽  
Ole Jensen

Previous studies suggest that people generate predictions during language comprehension at multiple linguistic levels. It has been hypothesized that, under some circumstances, this can result in the pre-activation of specific lexico-semantic representations. We asked whether such representationally specific semantic pre-activation can be detected in the brain ahead of encountering bottom-up input. We measured MEG activity as participants read highly constraining sentences in which the final word could be predicted. We found that both spatial and temporal patterns of the brain activity prior to the onset of this word were more similar when the same words were predicted than when different words were predicted. This pre-activation was transient and engaged a left inferior and medial temporal region. These results suggest that unique spatial patterns of neural activity associated with the pre-activation of distributed semantic representations can be detected prior to the appearance of new sensory input, and that the left inferior and medial temporal regions may play a role in temporally binding such representations, giving rise to specific lexico-semantic predictions.
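
The similarity logic can be illustrated directly: pre-stimulus sensor patterns from trials where the same word is predicted should correlate more strongly than patterns from trials where different words are predicted. The sketch below simulates that comparison; pattern shapes, noise levels, and trial counts are invented.

```python
# Minimal sketch of the pattern-similarity logic: pre-stimulus sensor
# patterns correlate more when the *same* word is predicted than when
# different words are predicted (simulated patterns, assumed shapes).
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
n_sensors, n_words, trials_per_word = 100, 10, 8

# Each predicted word gets its own hypothetical pre-activation pattern.
word_patterns = rng.normal(size=(n_words, n_sensors))
trials, labels = [], []
for w in range(n_words):
    for _ in range(trials_per_word):
        trials.append(word_patterns[w] + 2.0 * rng.normal(size=n_sensors))
        labels.append(w)
trials, labels = np.array(trials), np.array(labels)

same, diff = [], []
for i, j in combinations(range(len(trials)), 2):
    r = np.corrcoef(trials[i], trials[j])[0, 1]
    (same if labels[i] == labels[j] else diff).append(r)

print(f"same-prediction similarity:      {np.mean(same):.3f}")
print(f"different-prediction similarity: {np.mean(diff):.3f}")
```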


Author(s):  
Jennifer M. Roche ◽  
Arkady Zgonnikov ◽  
Laura M. Morett

Purpose The purpose of the current study was to evaluate the social and cognitive underpinnings of miscommunication during an interactive listening task. Method An eye and computer mouse–tracking visual-world paradigm was used to investigate how a listener's cognitive effort (local and global) and decision-making processes were affected by a speaker's use of ambiguity that led to a miscommunication. Results Experiments 1 and 2 found that an environmental cue that made a miscommunication more or less salient impacted listener language processing effort (eye-tracking). Experiment 2 also indicated that listeners may develop different processing heuristics dependent upon the speaker's use of ambiguity that led to a miscommunication, exerting a significant impact on cognition and decision making. We also found that perspective-taking effort and decision-making complexity metrics (computer mouse tracking) predict language processing effort, indicating that instances of miscommunication produced cognitive consequences of indecision, thinking, and cognitive pull. Conclusion Together, these results indicate that listeners behave both reciprocally and adaptively when miscommunications occur, but the way they respond is largely dependent upon the type of ambiguity and how often it is produced by the speaker.
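
Mouse-tracking effort metrics of the kind used here are easy to compute from raw trajectories. The sketch below implements two common ones, maximum deviation from the direct path and x-flips, and regresses a simulated gaze-based effort score on them; the specific metrics and all data are assumptions, not the authors' measures.

```python
# Minimal sketch: two common mouse-trajectory metrics regressed against a
# fixation-based effort measure (all data simulated; metrics are assumed).
import numpy as np

rng = np.random.default_rng(8)

def max_deviation(xy):
    """Largest perpendicular distance from the start-to-end straight line."""
    start, end = xy[0], xy[-1]
    direction = (end - start) / np.linalg.norm(end - start)
    rel = xy - start
    along = rel @ direction
    perp = rel - np.outer(along, direction)
    return np.linalg.norm(perp, axis=1).max()

def x_flips(xy):
    """Number of reversals in horizontal movement direction."""
    dx = np.diff(xy[:, 0])
    dx = dx[dx != 0]
    return int(np.sum(np.sign(dx[1:]) != np.sign(dx[:-1])))

# Simulate 50 trials of 100-sample trajectories plus a gaze effort score.
md, flips, effort = [], [], []
for _ in range(50):
    wobble = rng.normal(scale=5, size=(100, 2)).cumsum(axis=0)
    xy = np.linspace([0, 0], [400, 300], 100) + wobble
    md.append(max_deviation(xy))
    flips.append(x_flips(xy))
    effort.append(0.01 * md[-1] + 0.1 * flips[-1] + rng.normal())

X = np.column_stack([np.ones(50), md, flips])
beta, *_ = np.linalg.lstsq(X, np.array(effort), rcond=None)
print("effort ~ max deviation, x-flips:", np.round(beta[1:], 3))
```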

