Neural bases of phonological and semantic processing in early childhood

2019 ◽  
Author(s):  
Avantika Mathur ◽  
Douglas Schultz ◽  
Yingying Wang

Abstract
During the early period of reading development, children gain phonological knowledge (letter-to-sound mapping) and semantic knowledge (storage and retrieval of word meaning). Their reading ability changes rapidly, accompanied by learning-induced brain plasticity as they learn to read. This study aims to identify the specialization of phonological and semantic processing in early childhood using a combination of univariate and multivariate pattern analysis. Nineteen typically developing children between the ages of five and seven performed visual word-level phonological (rhyming) and semantic (related meaning) judgment tasks during functional magnetic resonance imaging (fMRI) scans. Our multivariate analysis showed that young children with good reading ability have already recruited the left hemispheric regions in the brain for phonological processing, including the inferior frontal gyrus (IFG), superior and middle temporal gyrus, and fusiform gyrus. Additionally, our multivariate results suggested that the sub-regions of the left IFG were specialized for different tasks. Our results suggest the left lateralization of fronto-temporal regions for phonological processing and bilateral activations of parietal regions for semantic processing during early childhood. Our findings indicate that the neural bases of reading have already begun to be shaped in early childhood for typically developing children, which can be used as a control baseline for comparison of children at risk for reading difficulties.

Author(s):  
Elizabeth Jefferies ◽  
Xiuyi Wang

Semantic processing is a defining feature of human cognition, central not only to language, but also to object recognition, the generation of appropriate actions, and the capacity to use knowledge in reasoning, planning, and problem-solving. Semantic memory refers to our repository of conceptual or factual knowledge about the world. This semantic knowledge base is typically viewed as including “general knowledge” as well as schematic representations of objects and events distilled from multiple experiences and retrieved independently from their original spatial or temporal context. Semantic cognition refers to our ability to flexibly use this knowledge to produce appropriate thoughts and behaviors. Semantic cognition includes at least two interactive components: a long-term store of semantic knowledge and semantic control processes, each supported by a different network. Conceptual representations are organized according to the semantic relationships between items, with different theories proposing different key organizational principles, including sensory versus functional features, domain-specific theory, embodied distributed concepts, and hub-and-spoke theory, in which distributed features are integrated within a heteromodal hub in the anterior temporal lobes. The activity within the network for semantic representation must often be controlled to ensure that the system generates representations and inferences that are suited to the immediate task or context. Semantic control is thought to include both controlled retrieval processes, in which knowledge relevant to the goal or context is accessed in a top-down manner when automatic retrieval is insufficient for the task, and post-retrieval selection to resolve competition between simultaneously active representations. 
Control of semantic retrieval is supported by a strongly left-lateralized brain network, which partially overlaps with the bilateral network that supports domain-general control, but extends beyond these sites to include regions not typically associated with executive control, including anterior inferior frontal gyrus and posterior middle temporal gyrus. The interaction of semantic control processes with conceptual representations allows meaningful thoughts and behavior to emerge, even when the context requires non-dominant features of the concept to be brought to the fore.


2020 ◽  
pp. 026461962091525
Author(s):  
Jonathan Waddington ◽  
Jade S Pickering ◽  
Timothy Hodgson

Five table-top tasks were developed to test the visual search ability of children and young people in a real-world context, and to assess the transfer of training-related improvements in visual search on computerised tasks to real-world activities. Each task involved searching for a set of target objects among distracting objects on a table-top. Performance on the Table-top Visual Search Ability Test for Children (TVSAT-C) was measured as the time spent searching for targets divided by the number of targets found. A total of 108 typically developing children (3–11 years old) and eight children with vision impairment (7–12 years old) participated in the study. A significant correlation was found between log-transformed age and log-transformed performance (R2 = .65, p = 4 × 10−26) in our normative sample, indicating a monomial power law relationship between age and performance with an exponent of −1.67, 95% CI [−1.90, −1.43]. We calculated age-dependent percentiles, and receiver operating characteristic curve analysis indicated the third percentile as the optimal cut-off for detecting a visual search deficit, giving a specificity of 97.2%, 95% CI [92.2%, 99.1%], and a sensitivity of 87.5%, 95% CI [52.9%, 97.8%], for the test. Further studies are required to calculate measures of reliability and external validity, to confirm sensitivity for visual search deficits, and to investigate the most appropriate response modes for participants with conditions that affect manual dexterity. In addition, more work is needed to assess construct validity where semantic knowledge is required that younger children may not have experience with. We have made the protocol and age-dependent normative data available for those interested in using the test in research or practice, and to illustrate the smooth developmental trajectory of visual search ability during childhood.
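The monomial power law reported in this abstract can be sketched in a few lines. The exponent (−1.67) is taken from the abstract; the scale constant `K` is a hypothetical placeholder, since the abstract does not report it, chosen only to illustrate the shape of the relationship (performance here is time per target found, so lower is better):

```python
# Illustrative sketch of the reported power-law model, not the study's code:
#   performance = K * age ** EXPONENT
EXPONENT = -1.67   # exponent reported in the abstract (95% CI [-1.90, -1.43])
K = 100.0          # hypothetical scale constant; not reported in the abstract

def predicted_performance(age_years: float) -> float:
    """Predicted search time per target (arbitrary units) under the model."""
    return K * age_years ** EXPONENT

# On log-log axes the model is linear:
#   log(performance) = log(K) + EXPONENT * log(age)
# which is why the study correlates log-transformed age with
# log-transformed performance.
for age in (3, 5, 7, 11):
    print(f"age {age:2d}: {predicted_performance(age):.2f} per target")
```

Because only the exponent is constrained by the abstract, ratios of predictions are meaningful here (doubling age divides search time per target by 2^1.67, roughly a factor of 3), while absolute values depend entirely on the placeholder `K`.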


2015 ◽  
Vol 122 (2) ◽  
pp. 250-261 ◽  
Author(s):  
Edward F. Chang ◽  
Kunal P. Raygor ◽  
Mitchel S. Berger

Classic models of language organization posited that separate motor and sensory language foci existed in the inferior frontal gyrus (Broca's area) and superior temporal gyrus (Wernicke's area), respectively, and that connections between these sites (arcuate fasciculus) allowed for auditory-motor interaction. These theories have predominated for more than a century, but advances in neuroimaging and stimulation mapping have provided a more detailed description of the functional neuroanatomy of language. New insights have shaped modern network-based models of speech processing composed of parallel and interconnected streams involving both cortical and subcortical areas. Recent models emphasize processing in “dorsal” and “ventral” pathways, mediating phonological and semantic processing, respectively. Phonological processing occurs along a dorsal pathway, from the posterosuperior temporal to the inferior frontal cortices. On the other hand, semantic information is carried in a ventral pathway that runs from the temporal pole to the basal occipitotemporal cortex, with anterior connections. Functional MRI has poor positive predictive value in determining critical language sites and should only be used as an adjunct for preoperative planning. Cortical and subcortical mapping should be used to define functional resection boundaries in eloquent areas and remains the clinical gold standard. In tracing the historical advancements in our understanding of speech processing, the authors hope to not only provide practicing neurosurgeons with additional information that will aid in surgical planning and prevent postoperative morbidity, but also underscore the fact that neurosurgeons are in a unique position to further advance our understanding of the anatomy and functional organization of language.


2019 ◽  
Author(s):  
Jonathan Waddington ◽  
Jade Pickering ◽  
Timothy Hodgson

Abstract
Five table-top tasks were developed to test the visual search ability of children and young people in a real-world context, and to assess the transfer of training-related improvements in visual search on computerised tasks to real-world activities. Each task involved searching for a set of target objects among distracting objects on a table-top. Performance on the Table-top Visual Search Ability Test for Children (TVSAT-C) was measured as the time spent searching for targets divided by the number of targets found. A total of 108 typically developing children (3–11 years old) and eight children with vision impairment (7–12 years old) participated in the study. A significant correlation was found between log-transformed age and log-transformed performance (R2 = 0.65, p = 4 × 10−26) in our normative sample, indicating a monomial power law relationship between age and performance with an exponent of −1.67, 95% CI [−1.90, −1.43]. We calculated age-dependent percentiles and receiver operating characteristic curve analysis indicated the 3rd percentile as the optimal cut-off for detecting a visual search deficit, giving a specificity of 97.2%, 95% CI [92.2%, 99.1%] and sensitivity of 87.5%, 95% CI [52.9%, 97.8%] for the test. Further studies are required to calculate measures of reliability and external validity, to confirm sensitivity for visual search deficits, and to investigate the most appropriate response modes for participants with conditions that affect manual dexterity. Additionally, more work is needed to assess construct validity where semantic knowledge is required that younger children may not have experience with. We have made the protocol and age-dependent normative data available for those interested in using the test in research or practice, and to illustrate the smooth developmental trajectory of visual search ability during childhood.


2012 ◽  
Vol 40 (1) ◽  
pp. 221-243 ◽  
Author(s):  
SILVANA E. MENGONI ◽  
HANNAH NASH ◽  
CHARLES HULME

ABSTRACT
Children with Down syndrome typically have weaknesses in oral language, but it has been suggested that this domain may benefit from learning to read. Amongst oral language skills, vocabulary is a relative strength, although there is some evidence of difficulties in learning the phonological form of spoken words. This study investigated the effect of orthographic support on spoken word learning with seventeen children with Down syndrome aged seven to sixteen years and twenty-seven typically developing children aged five to seven years matched for reading ability. Ten spoken nonwords were paired with novel pictures; for half the nonwords the written form was also present. The spoken word learning of both groups did not differ and benefited to the same extent from the presence of the written word. This suggests that compared to reading-matched typically developing children, children with Down syndrome are not specifically impaired in phonological learning and benefit equally from orthographic support.


2020 ◽  
Vol 10 (5) ◽  
pp. 212-223
Author(s):  
Avantika Mathur ◽  
Douglas Schultz ◽  
Yingying Wang

2013 ◽  
Vol 41 (6) ◽  
pp. 1224-1248 ◽  
Author(s):  
CRISTINA MCKEAN ◽  
CAROLYN LETTS ◽  
DAVID HOWARD

ABSTRACT
The effect of phonotactic probability (PP) and neighbourhood density (ND) on triggering word learning was examined in children with Language Impairment (3;04–6;09) and compared to Typically Developing children. Nonwords, varying PP and ND orthogonally, were presented in a story context and their learning tested using a referent identification task. Group comparisons with receptive vocabulary as a covariate found no group differences in overall scores or in the influence of PP or ND. Therefore, there was no evidence of atypical lexical or phonological processing. ‘Convergent’ PP/ND (High PP/High ND; Low PP/Low ND) was optimal for word learning in both groups. This bias interacted with vocabulary knowledge. ‘Divergent’ PP/ND word scores (High PP/Low ND; Low PP/High ND) were positively correlated with vocabulary so the ‘divergence disadvantage’ reduced as vocabulary knowledge grew; an interaction hypothesized to represent developmental changes in lexical–phonological processing linked to the emergence of phonological representations.


2012 ◽  
Vol 24 (1) ◽  
pp. 133-147 ◽  
Author(s):  
Carin Whitney ◽  
Marie Kirk ◽  
Jamie O'Sullivan ◽  
Matthew A. Lambon Ralph ◽  
Elizabeth Jefferies

To understand the meanings of words and objects, we need to have knowledge about these items themselves plus executive mechanisms that compute and manipulate semantic information in a task-appropriate way. The neural basis for semantic control remains controversial. Neuroimaging studies have focused on the role of the left inferior frontal gyrus (LIFG), whereas neuropsychological research suggests that damage to a widely distributed network elicits impairments of semantic control. There is also debate about the relationship between semantic and executive control more widely. We used transcranial magnetic stimulation (TMS) in healthy human volunteers to create “virtual lesions” in structures typically damaged in patients with semantic control deficits: LIFG, left posterior middle temporal gyrus (pMTG), and intraparietal sulcus (IPS). The influence of TMS on tasks varying in semantic and nonsemantic control demands was examined for each region within this hypothesized network to gain insights into (i) their functional specialization (i.e., involvement in semantic representation, controlled retrieval, or selection) and (ii) their domain dependence (i.e., semantic or cognitive control). The results revealed that LIFG and pMTG jointly support both the controlled retrieval and selection of semantic knowledge. IPS specifically participates in semantic selection and responds to manipulations of nonsemantic control demands. These observations are consistent with a large-scale semantic control network, as predicted by lesion data, that draws on semantic-specific (LIFG and pMTG) and domain-independent executive components (IPS).


2015 ◽  
Vol 112 (28) ◽  
pp. E3719-E3728 ◽  
Author(s):  
Paul Hoffman ◽  
Matthew A. Lambon Ralph ◽  
Anna M. Woollams

The goal of cognitive neuroscience is to integrate cognitive models with knowledge about underlying neural machinery. This significant challenge was explored in relation to word reading, where sophisticated computational-cognitive models exist but have made limited contact with neural data. Using distortion-corrected functional MRI and dynamic causal modeling, we investigated the interactions between brain regions dedicated to orthographic, semantic, and phonological processing while participants read words aloud. We found that the lateral anterior temporal lobe (ATL) exhibited increased activation when participants read words with irregular spellings. This area is implicated in semantic processing but has not previously been considered part of the reading network. We also found meaningful individual differences in the activation of this region: Activity was predicted by an independent measure of the degree to which participants use semantic knowledge to read. These characteristics are predicted by the connectionist Triangle Model of reading and indicate a key role for semantic knowledge in reading aloud. Premotor regions associated with phonological processing displayed the reverse characteristics. Changes in the functional connectivity of the reading network during irregular word reading also were consistent with semantic recruitment. These data support the view that reading aloud is underpinned by the joint operation of two neural pathways. They reveal that (i) the ATL is an important element of the ventral semantic pathway and (ii) the division of labor between the two routes varies according to both the properties of the words being read and individual differences in the degree to which participants rely on each route.

