Semantic and phonological processing in illiteracy

2004 ◽  
Vol 10 (6) ◽  
pp. 818-827 ◽  
Author(s):  
MARY H. KOSMIDIS ◽  
KYRANA TSAPKINI ◽  
VASILIKI FOLIA ◽  
CHRISTINA H. VLAHOU ◽  
GRIGORIS KIOSSEOGLOU

Researchers of cognitive processing in illiteracy have proposed that the acquisition of literacy modifies the functional organization of the brain. They have suggested that, while illiterate individuals have access only to innate semantic processing skills, those who have learned the correspondence between graphemes and phonemes have several mechanisms available to them through which to process oral language. We conducted two experiments to verify that suggestion with respect to language processing, and to elucidate further the differences between literate and illiterate individuals in the cognitive strategies used to process oral language, as well as hemispheric specialization for these processes. Our findings suggest that semantic processing strategies are qualitatively the same in literates and illiterates, although overall performance is augmented by increased education. In contrast, explicit processing of oral information based on phonological characteristics appears to be qualitatively different between literates and illiterates: effective strategies in the processing of phonological information depend upon having had a formal education, regardless of the level of education. We also confirmed the differential abilities needed for the processing of semantic and phonological information and related them to hemisphere-specific processing. (JINS, 2004, 10, 818–827.)

2015 ◽  
Vol 122 (2) ◽  
pp. 250-261 ◽  
Author(s):  
Edward F. Chang ◽  
Kunal P. Raygor ◽  
Mitchel S. Berger

Classic models of language organization posited that separate motor and sensory language foci existed in the inferior frontal gyrus (Broca's area) and superior temporal gyrus (Wernicke's area), respectively, and that connections between these sites (arcuate fasciculus) allowed for auditory-motor interaction. These theories have predominated for more than a century, but advances in neuroimaging and stimulation mapping have provided a more detailed description of the functional neuroanatomy of language. New insights have shaped modern network-based models of speech processing composed of parallel and interconnected streams involving both cortical and subcortical areas. Recent models emphasize processing in “dorsal” and “ventral” pathways, mediating phonological and semantic processing, respectively. Phonological processing occurs along a dorsal pathway, from the posterosuperior temporal to the inferior frontal cortices. On the other hand, semantic information is carried in a ventral pathway that runs from the temporal pole to the basal occipitotemporal cortex, with anterior connections. Functional MRI has poor positive predictive value in determining critical language sites and should only be used as an adjunct for preoperative planning. Cortical and subcortical mapping should be used to define functional resection boundaries in eloquent areas and remains the clinical gold standard. In tracing the historical advancements in our understanding of speech processing, the authors hope to not only provide practicing neurosurgeons with additional information that will aid in surgical planning and prevent postoperative morbidity, but also underscore the fact that neurosurgeons are in a unique position to further advance our understanding of the anatomy and functional organization of language.


2019 ◽  
Vol 5 (1) ◽  
pp. 131-150 ◽  
Author(s):  
Alan C.L. Yu ◽  
Georgia Zellou

Individual variation is ubiquitous and empirically observable in most phonological behaviors, yet relatively few studies aim to capture the heterogeneity of language processing among individuals, as opposed to those focusing primarily on group-level patterns. The study of individual differences can shed light on the nature of the cognitive representations and mechanisms involved in phonological processing. To guide our review of individual variation in the processing of phonological information, we consider studies that can illuminate broader issues in the field, such as the nature of linguistic representations and processes. We also consider how the study of individual differences can provide insight into long-standing issues in linguistic variation and change. Since linguistic communities are made up of individuals, the questions raised by examining individual differences in linguistic processing are relevant to those who study all aspects of language.


2003 ◽  
Vol 15 (5) ◽  
pp. 718-730 ◽  
Author(s):  
David P. Corina ◽  
Lucila San Jose-Robertson ◽  
Andre Guillemin ◽  
Julia High ◽  
Allen R. Braun

Unlike spoken languages, sign languages of the deaf make use of two primary articulators, the right and left hands, to produce signs. This situation has no obvious parallel in spoken languages, in which speech articulation is carried out by symmetrical unitary midline vocal structures. This arrangement affords a unique opportunity to examine the robustness of linguistic systems that underlie language production in the face of contrasting articulatory demands and to chart the differential effects of handedness for highly skilled movements. Positron emission tomography (PET) was used to examine brain activation in 16 deaf users of American Sign Language (ASL) while subjects generated verb signs independently with their right (dominant) and left (nondominant) hands (compared to the repetition of noun signs). Nearly identical patterns of left inferior frontal and right cerebellum activity were observed. This pattern of activation during signing is consistent with patterns that have been reported for spoken languages, including evidence for specializations of inferior frontal regions related to lexical–semantic processing, search and retrieval, and phonological encoding. These results indicate that lexical–semantic processing in production relies upon left-hemisphere regions regardless of the modality in which a language is realized, and that this left-hemisphere activation is stable, even in the face of conflicting articulatory demands. In addition, these data provide evidence for the role of the right posterolateral cerebellum in linguistic–cognitive processing and evidence of a left ventral fusiform contribution to sign language processing.


Author(s):  
Hans-Jörg Schmid

The chapter discusses the cognitive activities which are performed in usage events and entrenched, if repeated. The key cognitive activity is association in the associative network, with four types of associations (symbolic, paradigmatic, syntagmatic, pragmatic) being activated in predictive and probabilistic lexical and syntactic processing. Processing and representation take place in the form of entrenched patterns of associations. Language processing is explained in terms of the activation of associations. This activation is probabilistic and follows the principle of predictive coding. Lexical–semantic processing is understood in terms of dynamic and transient multidimensional activation patterns in the associative network targeting attractors in the network. A highly dynamic and flexible associative model of syntactic processing is proposed. It is first developed with reference to two examples and then described in general form. The model is central to the understanding of entrenchment discussed in Part III.


2019 ◽  
Author(s):  
Guangting Mai ◽  
William S-Y. Wang

Neural entrainment to acoustic envelopes is important for speech intelligibility in spoken language processing. However, it is unclear how it contributes to processing at different linguistic hierarchical levels. The present EEG study investigated this issue when participants responded to stimuli that dissociated phonological and semantic processing (real-word, pseudo-word, and backward utterances). A multivariate temporal response function (mTRF) model was adopted to map speech envelopes from multiple spectral bands onto EEG signals, providing a direct approach to measuring neural entrainment. We tested the hypothesis that entrainment at the delta (supra-syllabic) and theta (syllabic and sub-syllabic) bands plays distinct roles at different hierarchical levels. Results showed that both types of entrainment involve speech-specific processing, but their underlying mechanisms differ. Theta-band entrainment was modulated by phonological but not semantic content, reflecting a possible mechanism of tracking syllabic and sub-syllabic patterns during phonological processing. Delta-band entrainment, on the other hand, was modulated by semantic information, indexing more attention-demanding, effortful phonological encoding when higher-level (semantic) information is deficient. Interestingly, we further demonstrated that the statistical capacity of mTRFs at the delta and theta bands to classify utterances is affected by their semantic (real-word vs. pseudo-word) and phonological (real-word and pseudo-word vs. backward) contents, respectively. Moreover, analyses of the response weightings of the mTRFs showed that delta-band entrainment was sustained across neural processing stages up to higher-order timescales (~300 ms), while theta-band entrainment occurred mainly at early, perceptual processing stages (<160 ms).
This indicates that, compared to theta-band entrainment, delta-band entrainment may reflect greater involvement of higher-order cognitive functions during interactions between phonological and semantic processing. As such, we conclude that neural entrainment is associated not only with speech intelligibility but also with the hierarchy of linguistic (phonological and semantic) content. The present study thus provides new insight into the cognitive mechanisms of neural entrainment in spoken language processing.

Highlights:
- Low-frequency neural entrainment was examined via mTRF models in EEG during phonological and semantic processing.
- Delta-band entrainment plays a role in effortful listening for phonological recognition.
- Theta-band entrainment plays a role in tracking syllabic and sub-syllabic patterns for phonological processing.
- Delta- and theta-band entrainment are sustained at different timescales of neural processing.
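The mTRF approach described in this abstract can be sketched as time-lagged regularized regression from a stimulus envelope onto a neural signal. The signals, lag count, and regularization constant below are toy assumptions for illustration, not the study's data or parameters.

```python
import numpy as np

def lagged_design(stimulus, n_lags):
    """Build a design matrix whose columns are time-lagged copies of the stimulus."""
    n = len(stimulus)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:n - lag]
    return X

def fit_trf(stimulus, response, n_lags, lam=1.0):
    """Estimate TRF weights by ridge regression: w = (X'X + lam*I)^-1 X'y."""
    X = lagged_design(stimulus, n_lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ response)

rng = np.random.default_rng(0)
env = rng.standard_normal(500)            # toy "speech envelope"
true_w = np.array([0.0, 0.8, 0.4, 0.1])   # assumed impulse response over 4 lags
eeg = lagged_design(env, 4) @ true_w + 0.05 * rng.standard_normal(500)

w_hat = fit_trf(env, eeg, n_lags=4)       # recovers weights close to true_w
```

The recovered weight vector over lags plays the role of the "response weighting" analyzed in the study; real mTRF analyses fit many EEG channels and spectral bands jointly.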


Author(s):  
Jennifer M. Roche ◽  
Arkady Zgonnikov ◽  
Laura M. Morett

Purpose: The purpose of the current study was to evaluate the social and cognitive underpinnings of miscommunication during an interactive listening task.
Method: An eye and computer mouse–tracking visual-world paradigm was used to investigate how a listener's cognitive effort (local and global) and decision-making processes were affected by a speaker's use of ambiguity that led to a miscommunication.
Results: Experiments 1 and 2 found that an environmental cue that made a miscommunication more or less salient impacted listener language processing effort (eye-tracking). Experiment 2 also indicated that listeners may develop different processing heuristics dependent upon the speaker's use of ambiguity that led to a miscommunication, exerting a significant impact on cognition and decision making. We also found that perspective-taking effort and decision-making complexity metrics (computer mouse tracking) predict language processing effort, indicating that instances of miscommunication produced cognitive consequences of indecision, thinking, and cognitive pull.
Conclusion: Together, these results indicate that listeners behave both reciprocally and adaptively when miscommunications occur, but the way they respond is largely dependent upon the type of ambiguity and how often it is produced by the speaker.


2020 ◽  
Author(s):  
Kun Sun

Expectations or predictions about upcoming content play an important role during language comprehension and processing. One important aspect of recent studies of language comprehension and processing concerns the estimation of the upcoming words in a sentence or discourse. Many studies have used eye-tracking data to explore computational and cognitive models for contextual word prediction and word processing. Eye-tracking data has previously been widely explored with a view to investigating the factors that influence word prediction. However, these studies are problematic on several levels, including the stimuli, corpora, and statistical tools they applied. Although various computational models have been proposed for simulating contextual word prediction, past studies usually preferred to use a single computational model. The disadvantage of this is that it often cannot give an adequate account of cognitive processing in language comprehension. To avoid these problems, this study draws upon a massive, natural, and coherent discourse as stimuli in collecting reading-time data. This study trains two state-of-the-art computational models (surprisal, and semantic (dis)similarity from word vectors obtained by linear discriminative learning (LDL)), measuring knowledge of both the syntagmatic and paradigmatic structure of language. We develop a `dynamic approach' to compute semantic (dis)similarity. It is the first time that these two computational models have been merged. Models are evaluated using advanced statistical methods. Meanwhile, in order to test the efficiency of our approach, a recently developed cosine method of computing semantic (dis)similarity based on word-vector data is used for comparison with our `dynamic' approach. The two computational and fixed-effect statistical models can be used to cross-verify the findings, thus ensuring that the result is reliable.
All results indicate that surprisal and semantic similarity make opposing predictions of word reading times, although both predict them well. Additionally, our `dynamic' approach performs better than the popular cosine method. The findings of this study are therefore of significance with regard to acquiring a better understanding of how humans process words in real-world contexts and how they make predictions in language cognition and processing.
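The cosine method of computing semantic (dis)similarity between word vectors, used here as a baseline, can be sketched as follows. The three-dimensional vectors are toy assumptions, standing in for vectors trained on a corpus (e.g., by LDL or word2vec).

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors (1.0 = same direction)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def cosine_dissimilarity(u, v):
    """Dissimilarity as 1 minus cosine similarity."""
    return 1.0 - cosine_similarity(u, v)

# Toy word vectors for illustration only.
cat = np.array([0.9, 0.1, 0.2])
dog = np.array([0.8, 0.2, 0.3])
car = np.array([0.1, 0.9, 0.7])

# Semantically related words lie closer, i.e. smaller dissimilarity.
assert cosine_dissimilarity(cat, dog) < cosine_dissimilarity(cat, car)
```

Such a (dis)similarity score for each word given its context is what gets entered, alongside surprisal, as a predictor of reading time in a regression model.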


2021 ◽  
Vol 11 (3) ◽  
pp. 359
Author(s):  
Katharina Hogrefe ◽  
Georg Goldenberg ◽  
Ralf Glindemann ◽  
Madleen Klonowski ◽  
Wolfram Ziegler

Assessment of semantic processing capacities often relies on verbal tasks which are, however, sensitive to impairments at several language processing levels. Especially for persons with aphasia there is a strong need for a tool that measures semantic processing skills independent of verbal abilities. Furthermore, in order to assess a patient’s potential for using alternative means of communication in cases of severe aphasia, semantic processing should be assessed in different nonverbal conditions. The Nonverbal Semantics Test (NVST) is a tool that captures semantic processing capacities through three tasks—Semantic Sorting, Drawing, and Pantomime. The main aim of the current study was to investigate the relationship between the NVST and measures of standard neurolinguistic assessment. Fifty-one persons with aphasia caused by left hemisphere brain damage were administered the NVST as well as the Aachen Aphasia Test (AAT). A principal component analysis (PCA) was conducted across all AAT and NVST subtests. The analysis resulted in a two-factor model that captured 69% of the variance of the original data, with all linguistic tasks loading high on one factor and the NVST subtests loading high on the other. These findings suggest that nonverbal tasks assessing semantic processing capacities should be administered alongside standard neurolinguistic aphasia tests.
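The PCA step described above can be sketched as extracting components from a participants-by-subtests score matrix and reporting the variance they capture. The random score matrix below is a hypothetical stand-in for the actual AAT and NVST data.

```python
import numpy as np

def pca_variance_explained(scores, n_components=2):
    """Proportion of total variance captured by the first n_components."""
    centered = scores - scores.mean(axis=0)
    # Singular values relate to component variances: var_i = s_i^2 / (n - 1)
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    variances = s**2 / (len(scores) - 1)
    return variances[:n_components].sum() / variances.sum()

rng = np.random.default_rng(1)
scores = rng.standard_normal((51, 9))  # 51 participants, 9 hypothetical subtests
ratio = pca_variance_explained(scores, n_components=2)  # cf. the reported 69%
```

With real data, the loadings (right singular vectors) would show the linguistic subtests clustering on one factor and the NVST subtests on the other, as the study reports.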


2021 ◽  
pp. 1-7
Author(s):  
Vasudha Hande ◽  
Shantala Hegde

BACKGROUND: A specific learning disability comes with a cluster of deficits in the neurocognitive domain. Phonological processing deficits have been at the core of different types of specific learning disabilities. In addition to difficulties in phonological processing and cognitive deficits, children with specific learning disability (SLD) are also known to have deficits in more innate, non-language-based skills like musical rhythm processing.
OBJECTIVES: This paper reviews studies in the area of musical rhythm perception in children with SLD. An attempt was made to shed light on the beneficial effects of music and rhythm-based interventions and their underlying mechanisms.
METHODS: A hypothesis-driven review of research in the domain of rhythm deficits and rhythm-based intervention in children with SLD was carried out.
RESULTS: A summary of the reviewed literature highlights that music and language processing have shared neural underpinnings. Children with SLD, in addition to difficulties in language processing and other neurocognitive deficits, are known to have deficits in music and rhythm perception. This is explained against the background of deficits in auditory skills, perceptuo-motor skills, and timing skills. Attempts have been made in the field to understand the effect of music training on children's auditory processing and language development. Music and rhythm-based intervention emerges as a powerful method to target language processing and other neurocognitive functions. The need for future studies in this direction is strongly underscored.
CONCLUSIONS: Suggestions for future research on music-based interventions are discussed.


Author(s):  
Emme O’Rourke ◽  
Emily L. Coderre

While many individuals with autism spectrum disorder (ASD) experience difficulties with language processing, non-linguistic semantic processing may be intact. We examined neural responses to an implicit semantic priming task by comparing N400 responses—an event-related potential related to semantic processing—in response to semantically related or unrelated pairs of words or pictures. Adults with ASD showed larger N400 responses than typically developing adults for pictures, but no group differences occurred for words. However, we also observed complex modulations of N400 amplitude by age and by level of autistic traits. These results offer important implications for how groups are delineated and compared in autism research.

