Syntactic and Semantic Specialization and Integration in 5- to 6-Year-Old Children during Auditory Sentence Processing

2020, Vol 32 (1), pp. 36-49
Author(s): Jin Wang, Mabel L. Rice, James R. Booth

Previous studies have found specialized syntactic and semantic processes in the adult brain during language comprehension. Young children already command sophisticated semantic and syntactic aspects of language, yet many previous fMRI studies failed to detect this specialization, possibly due to experimental design and analytical methods. In the current study, 5- to 6-year-old children completed a syntactic task and a semantic task designed to dissociate these two processes. Multivoxel pattern analysis was used to examine the correlation of activation patterns within a task (between runs) or across tasks. We found that the left middle temporal gyrus showed more similar patterns within the semantic task than across tasks, whereas there was no difference in correlation within the syntactic task compared with across tasks, suggesting its specialization in semantic processing. Moreover, the left superior temporal gyrus showed more similar patterns within both the semantic task and the syntactic task than across tasks, suggesting its role in the integration of semantic and syntactic information. In contrast to the temporal lobe, we did not find specialization or integration effects in either the opercular or triangular part of the inferior frontal gyrus. Overall, our study showed that 5- to 6-year-old children have already developed specialization and integration in the temporal lobe, but not in the frontal lobe, consistent with developmental neurocognitive models of language comprehension in typically developing young children.
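As a rough illustration of the within- versus across-task pattern-correlation logic described above, the sketch below computes correlations between run-wise voxel patterns for one region of interest. The array names, sizes, and random data are assumptions for illustration only, not the authors' actual analysis pipeline.

```python
import numpy as np

def pattern_correlation(a, b):
    """Pearson correlation between two voxel activation patterns."""
    return np.corrcoef(a, b)[0, 1]

# Hypothetical per-run activation patterns for one ROI (e.g., left MTG):
# each array holds (n_voxels,) response estimates from one run of one task.
rng = np.random.default_rng(0)
sem_run1, sem_run2 = rng.normal(size=200), rng.normal(size=200)
syn_run1, syn_run2 = rng.normal(size=200), rng.normal(size=200)

# Within-task similarity: correlate patterns from different runs of the same task.
within_sem = pattern_correlation(sem_run1, sem_run2)
within_syn = pattern_correlation(syn_run1, syn_run2)

# Across-task similarity: correlate patterns from different tasks,
# averaged over the two run pairings.
across = np.mean([
    pattern_correlation(sem_run1, syn_run2),
    pattern_correlation(sem_run2, syn_run1),
])

# Semantic specialization in this ROI would appear as within_sem > across while
# within_syn does not exceed across; integration would appear as both
# within-task correlations exceeding the across-task correlation.
print(f"within semantic: {within_sem:.3f}, within syntactic: {within_syn:.3f}, across: {across:.3f}")
```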

2007, Vol 19 (10), pp. 1609-1623
Author(s): Jens Brauer, Angela D. Friederici

The functional neuroanatomy of language in the adult brain separates semantic and syntactic processes in the superior temporal gyrus (STG) and in the inferior frontal cortex. It is unknown whether a similar specialization is present in the developing brain. Semantic and syntactic aspects of sentence processing were investigated in 5- to 6-year-old children and in adults using functional magnetic resonance imaging. Although adults demonstrated function-specific activations in the STG and the frontal operculum, children showed a large activation overlap for these two language aspects in the STG. Compared to adults, they engaged additional areas in the left and right inferior frontal gyrus, which are known to support resource-demanding processes. Thus, in the developing brain, the language networks for semantic and syntactic processes are not yet specialized as they are in adults.


2019
Author(s): Thomas Cope, Yury Shtyrov, Lucy MacGregor, Rachel Holland, Friedemann Pulvermüller, ...

In the healthy human brain, the processing of spoken words is strongly left-lateralised, while the processing of complex non-linguistic sounds recruits brain regions bilaterally. Here we asked whether the left anterior temporal lobe, strongly implicated in semantic processing, is critical to this special treatment of linguistic stimuli. Nine patients with semantic dementia (SD) and fourteen age-matched controls underwent magnetoencephalography and structural MRI. Voxel-based morphometry demonstrated the stereotypical pattern of SD: severe grey matter loss restricted to the left anterior temporal lobe. During magnetoencephalography, participants listened to word sets in which identity and meaning were ambiguous until utterance completion, for example "played" vs. "plate". Whereas left-hemispheric responses were similar across groups, patients demonstrated increased right-hemisphere activity 174-294 ms after stimulus disambiguation. Source reconstructions confirmed recruitment of right-sided analogues of language regions in SD: atrophy of the left anterior temporal lobe was associated with increased activity in the right temporal pole, middle temporal gyrus, inferior frontal gyrus and supramarginal gyrus. Moreover, only healthy controls had differential responses to words versus non-words in right auditory cortex and planum temporale. Overall, the results indicate that the anterior temporal lobe is necessary for normal and efficient processing of word identity in the rest of the language network.


2006, Vol 18 (11), pp. 1789-1798
Author(s): Angela Bartolo, Francesca Benuzzi, Luca Nocetti, Patrizia Baraldi, Paolo Nichelli

Humor is a unique ability in human beings. Suls [A two-stage model for the appreciation of jokes and cartoons. In J. H. Goldstein & P. E. McGhee (Eds.), The psychology of humor: Theoretical perspectives and empirical issues. New York: Academic Press, 1972, pp. 81–100] proposed a two-stage model of humor: detection and resolution of incongruity. Incongruity is generated when a prediction is not confirmed in the final part of a story. To comprehend humor, it is necessary to revisit the story, transforming an incongruous situation into a funny, congruous one. Patient and neuroimaging studies carried out to date have led to different outcomes. In particular, patient studies found that right brain-lesion patients have difficulties in humor comprehension, whereas neuroimaging studies suggested a major involvement of the left hemisphere in both humor detection and comprehension. To prevent activation of the left hemisphere due to language processing, we devised a nonverbal task comprising cartoon pairs. Our findings demonstrate activation of both the left and the right hemispheres when comparing funny versus nonfunny cartoons. In particular, we found activation of the right inferior frontal gyrus (BA 47), the left superior temporal gyrus (BA 38), the left middle temporal gyrus (BA 21), and the left cerebellum. These areas were also activated in a nonverbal task exploring attribution of intention [Brunet, E., Sarfati, Y., Hardy-Bayle, M. C., & Decety, J. A PET investigation of the attribution of intentions with a nonverbal task. Neuroimage, 11, 157–166, 2000]. We hypothesize that the resolution of incongruity might occur through a process of intention attribution. We also asked subjects to rate the funniness of each cartoon pair. A parametric analysis showed that the left amygdala was activated in relation to subjective amusement. We hypothesize that the amygdala plays a key role in giving humor an emotional dimension.
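In general terms, a parametric analysis of the kind mentioned above is a regression in which trial-wise ratings modulate the amplitude of the stimulus regressor. The following minimal Python sketch illustrates that idea with simulated data; it ignores HRF convolution and other standard fMRI modelling steps, and none of the variable names reflect the authors' actual analysis.

```python
import numpy as np

# Hypothetical trial-level data: stimulus onsets coded as a boxcar over scans,
# and one funniness rating per cartoon-pair trial.
rng = np.random.default_rng(1)
n_scans, n_trials = 200, 20
onsets = rng.choice(n_scans - 5, size=n_trials, replace=False)
ratings = rng.integers(1, 8, size=n_trials).astype(float)

stim = np.zeros(n_scans)        # main effect of viewing a cartoon pair
modulated = np.zeros(n_scans)   # amplitude scaled by the mean-centred rating
for onset, rating in zip(onsets, ratings):
    stim[onset:onset + 3] = 1.0
    modulated[onset:onset + 3] = rating - ratings.mean()

# Design matrix: intercept, stimulus regressor, parametric modulator.
X = np.column_stack([np.ones(n_scans), stim, modulated])

# Simulated voxel time series whose response grows with rated amusement.
y = 0.5 * stim + 0.8 * modulated + rng.normal(scale=1.0, size=n_scans)

# Ordinary least squares; the third beta estimates the rating (amusement) effect.
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"parametric (amusement) beta: {betas[2]:.2f}")
```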


2015, Vol 122 (2), pp. 250-261
Author(s): Edward F. Chang, Kunal P. Raygor, Mitchel S. Berger

Classic models of language organization posited that separate motor and sensory language foci existed in the inferior frontal gyrus (Broca's area) and superior temporal gyrus (Wernicke's area), respectively, and that connections between these sites (arcuate fasciculus) allowed for auditory-motor interaction. These theories have predominated for more than a century, but advances in neuroimaging and stimulation mapping have provided a more detailed description of the functional neuroanatomy of language. New insights have shaped modern network-based models of speech processing composed of parallel and interconnected streams involving both cortical and subcortical areas. Recent models emphasize processing in “dorsal” and “ventral” pathways, mediating phonological and semantic processing, respectively. Phonological processing occurs along a dorsal pathway, from the posterosuperior temporal to the inferior frontal cortices. On the other hand, semantic information is carried in a ventral pathway that runs from the temporal pole to the basal occipitotemporal cortex, with anterior connections. Functional MRI has poor positive predictive value in determining critical language sites and should only be used as an adjunct for preoperative planning. Cortical and subcortical mapping should be used to define functional resection boundaries in eloquent areas and remains the clinical gold standard. In tracing the historical advancements in our understanding of speech processing, the authors hope to not only provide practicing neurosurgeons with additional information that will aid in surgical planning and prevent postoperative morbidity, but also underscore the fact that neurosurgeons are in a unique position to further advance our understanding of the anatomy and functional organization of language.


2021
Author(s): Constantijn L van der Burght, Ole Numssen, Benito Schlaak, Tomás Goucha, Gesa Hartwigsen

Auditory language comprehension involves processing the content (semantics), grammar (syntax), and intonation (prosody) of a sentence. Sentence processing guided by prosody has been shown to involve the left inferior frontal gyrus (IFG). Prosodic cues are known to interact closely with both syntax and semantics, yet whether these two processing domains can be attributed to separate subregions within the left IFG is highly debated. We probed the causal role of the posterior IFG (pIFG) for syntactic processing and the anterior IFG (aIFG) for semantic processing in a task that required the interpretation of the sentence’s prosodic realisation. Healthy participants performed a sentence completion task with syntactic and semantic decisions while receiving 10 Hz repetitive transcranial magnetic stimulation (rTMS) over either the left aIFG, the left pIFG, or the vertex (control site). Although the behavioural analysis showed no significant interaction between rTMS site and decision type, electrical field simulations revealed a task-specific facilitation effect: stronger pIFG stimulation led to faster syntactic processing without significantly modulating semantic decisions. In contrast, aIFG stimulation had an unspecific inhibitory effect. These results provide evidence for the functional relevance of the left pIFG in grammatical processing guided by intonation. The unspecific inhibitory effect of aIFG rTMS highlights this subregion’s role in domain-general processes.


2015, Vol 27 (7), pp. 1388-1396
Author(s): Rebecca L. Jackson, Matthew A. Lambon Ralph, Gorana Pobric

Despite indications that regions within the anterior temporal lobe (ATL) might make a crucial contribution to pan-modal semantic representation, to date there have been no investigations of when during semantic processing the ATL plays a critical role. To test the timing of ATL involvement in semantic processing, we studied the effect of double-pulse TMS on behavioral responses in semantic and difficulty-matched control tasks. Chronometric TMS was delivered over the left ATL (10 mm from the tip of the temporal pole along the middle temporal gyrus). During each trial, two pulses of TMS (40 msec apart) were delivered either at baseline (before stimulus presentation) or at one of four experimental time points: 100, 250, 400, or 800 msec poststimulus onset. A significant disruption to performance was identified from 400 msec onward in the semantic task but not in the control task. Our results not only reinforce the key role of the left ATL in semantic representation but also indicate that its contribution is especially important around 400 msec poststimulus onset. Together, these findings suggest that the ATL may be one of the neural sources of the N400 ERP component.


2012, Vol 24 (1), pp. 133-147
Author(s): Carin Whitney, Marie Kirk, Jamie O'Sullivan, Matthew A. Lambon Ralph, Elizabeth Jefferies

To understand the meanings of words and objects, we need to have knowledge about these items themselves plus executive mechanisms that compute and manipulate semantic information in a task-appropriate way. The neural basis for semantic control remains controversial. Neuroimaging studies have focused on the role of the left inferior frontal gyrus (LIFG), whereas neuropsychological research suggests that damage to a widely distributed network elicits impairments of semantic control. There is also debate about the relationship between semantic and executive control more widely. We used TMS in healthy human volunteers to create “virtual lesions” in structures typically damaged in patients with semantic control deficits: LIFG, left posterior middle temporal gyrus (pMTG), and intraparietal sulcus (IPS). The influence of TMS on tasks varying in semantic and nonsemantic control demands was examined for each region within this hypothesized network to gain insights into (i) their functional specialization (i.e., involvement in semantic representation, controlled retrieval, or selection) and (ii) their domain dependence (i.e., semantic or cognitive control). The results revealed that LIFG and pMTG jointly support both the controlled retrieval and selection of semantic knowledge. IPS specifically participates in semantic selection and responds to manipulations of nonsemantic control demands. These observations are consistent with a large-scale semantic control network, as predicted by lesion data, that draws on semantic-specific (LIFG and pMTG) and domain-independent executive components (IPS).


2014, Vol 369 (1651), pp. 20130296
Author(s): Aslı Özyürek

As we speak, we use not only the arbitrary form–meaning mappings of the speech channel but also motivated form–meaning correspondences, i.e. iconic gestures that accompany speech (e.g. an inverted V-shaped hand wiggling across gesture space to depict walking). This article reviews what we know about the processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological N400 component, which is sensitive to the ease of semantic integration of a word into its previous context, and recruitment of the left-lateralized frontal–posterior temporal network (left inferior frontal gyrus (IFG), middle temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, we integrate the information coming from both channels by recruiting brain areas such as the left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are specific to gestures or are shared with actions, other visual accompaniments to speech (e.g. lips), or other visual symbols such as pictures is discussed, as well as the implications for a multimodal view of language.


2009, Vol 21 (11), pp. 2085-2099
Author(s): Cathelijne M. J. Y. Tesink, Karl Magnus Petersson, Jos J. A. van Berkum, Daniëlle van den Brink, Jan K. Buitelaar, ...

When interpreting a message, a listener takes into account several sources of linguistic and extralinguistic information. Here we focused on one particular form of extralinguistic information: speaker characteristics conveyed by the voice. Using functional magnetic resonance imaging, we examined the neural structures involved in the unification of sentence meaning and voice-based inferences about the speaker's age, sex, or social background. We found enhanced activation in the inferior frontal gyrus bilaterally (BA 45/47) during listening to sentences whose meaning was incongruent with inferred speaker characteristics. Furthermore, our results showed an overlap between the brain regions involved in the unification of speaker-related information and those used for the unification of semantic and world knowledge information [the inferior frontal gyrus bilaterally (BA 45/47) and the left middle temporal gyrus (BA 21)]. These findings provide evidence for a shared neural unification system for linguistic and extralinguistic sources of information and extend existing knowledge about the role of the inferior frontal cortex as a crucial component for unification during language comprehension.


2012, Vol 24 (8), pp. 1766-1778
Author(s): Maya Visser, Elizabeth Jefferies, Karl V. Embleton, Matthew A. Lambon Ralph

Most contemporary theories of semantic memory assume that concepts are formed from the distillation of information arising in distinct sensory and verbal modalities. The neural basis of this distillation or convergence of information was the focus of this study. Specifically, we explored two commonly posed hypotheses: (a) that the human middle temporal gyrus (MTG) provides a crucial semantic interface, given that it is interposed between the auditory and visual processing streams, and (b) that the anterior temporal region, especially its ventral surface (vATL), provides a critical region for the multimodal integration of information. By utilizing distortion-corrected fMRI and an established semantic association assessment (commonly used in neuropsychological investigations), we compared the activation patterns observed for the verbal and nonverbal versions of the same task. The results are consistent with both hypotheses simultaneously: both MTG and vATL are activated in common for word and picture semantic processing. Additional planned ROI analyses show that this result follows from two principal axes of convergence in the temporal lobe: lateral (toward MTG) and longitudinal (toward the anterior temporal lobe).

