Lexical Processing
Recently Published Documents

TOTAL DOCUMENTS: 634 (FIVE YEARS: 160)
H-INDEX: 49 (FIVE YEARS: 5)
2022 ◽  
pp. 002383092110684
Author(s):  
Julio González-Alvarez ◽  
Teresa Cervera-Crespo

The relationship between the age of acquisition (AoA) of words and their cerebral hemispheric representation is controversial because the experimental results have been contradictory. However, most of the lexical processing experiments were performed with stimuli consisting of written words. If we want to compare the processing of words learned very early in infancy—when children cannot read—with words learned later, it seems more logical to employ spoken words as experimental stimuli. This study, based on the auditory lexical decision task, used spoken words that were classified according to an objective criterion of AoA with extremely distant means (2.88 vs. 9.28 years old). As revealed by the reaction times, both early and late words were processed more efficiently in the left hemisphere, with no AoA × Hemisphere interaction. The results are discussed from a theoretical point of view, considering that all the experiments were conducted using adult participants.


2021 ◽  
Vol 9 (22) ◽  

Age of acquisition refers to the age at which a specific word is learned for the first time. Research shows that age of acquisition is a significant variable in lexical processing tasks. The age of acquisition effect emerges from how information is stored and accessed in the brain and is used to seek answers to theoretical and practical research questions about the mind. For example, studies in clinical psychology show that the age of acquisition effect differs in participants with brain damage or neurological disorders (e.g., Alzheimer's disease, aphasia, semantic dementia, and dyslexia) compared to participants without these disorders. Research in neuroscience shows that early- and late-acquired words are associated with different patterns of brain activation. Although the age of acquisition effect has a long and rich history in the international literature, there are very few empirical studies on it in the Turkish literature. In this review, basic findings and theories are discussed, and the subjective and objective procedures for collecting age of acquisition norms are presented comparatively. After examining the theoretical and methodological issues in the field, the application areas of the age of acquisition effect, including clinical psychology, neuroscience, and second language acquisition, are discussed, and suggestions for future studies are presented. Keywords: Age of acquisition, norm studies, dyslexia, neuroscience, second language acquisition


2021 ◽  
pp. 1-12
Author(s):  
William Matchin ◽  
Deniz İlkbaşaran ◽  
Marla Hatrak ◽  
Austin Roth ◽  
Agnes Villwock ◽  
...  

Abstract Areas within the left-lateralized neural network for language have been found to be sensitive to syntactic complexity in spoken and written language. Previous research has revealed that these areas are active for sign language as well, but whether they are specifically responsive to syntactic complexity in sign language, independent of lexical processing, has yet to be established. To investigate this question, we used fMRI to neuroimage deaf native signers' comprehension of 180 sign strings in American Sign Language (ASL) with a picture-probe recognition task. The ASL strings were all six signs in length but varied across three levels of syntactic complexity: sign lists, two-word sentences, and complex sentences. Syntactic complexity significantly affected comprehension and memory, both behaviorally and neurally, by facilitating accuracy and response time on the picture-probe recognition task and eliciting a left-lateralized activation pattern in the anterior and posterior superior temporal sulcus (aSTS and pSTS). Minimal or absent syntactic structure reduced picture-probe recognition and elicited activation in bilateral pSTS and occipital-temporal cortex. These results provide evidence from a sign language, ASL, that the combinatorial processing of anterior STS and pSTS is supramodal in nature. The results further suggest that the neurolinguistic processing of ASL is characterized by overlapping and separable neural systems for syntactic and lexical processing.


2021 ◽  
pp. 171-184
Author(s):  
David Quinto-Pozos

In recent years, deaf and/or hard of hearing (D/HH) children with atypical signed language abilities have become a focus of attention for researchers and educators, especially clinicians in programs that focus on bilingual (signed-written/spoken) education. Studies have shown that Deaf children with a language disorder present with a myriad of linguistic challenges, including difficulties with fingerspelling comprehension, complex morphology, and lexical processing. This chapter highlights methods commonly used in assessing children suspected of having a developmental signed language disorder. In addition, it outlines issues that are critical for working with D/HH children, such as considering the possible role of co-occurring disabilities (such as attention deficits and autism) and obtaining information and support from parents and educators/clinicians. Finally, the chapter outlines suggestions for researchers and clinicians working together to identify and provide intervention for children suspected of having a developmental signed language disorder.


Author(s):  
Yu-Ying Chuang ◽  
R. Harald Baayen

Naive discriminative learning (NDL) and linear discriminative learning (LDL) are simple computational algorithms for lexical learning and lexical processing. Both NDL and LDL assume that learning is discriminative, driven by prediction error, and that it is this error that calibrates the association strength between input and output representations. Both words’ forms and their meanings are represented by numeric vectors, and mappings between forms and meanings are set up. For comprehension, form vectors predict meaning vectors. For production, meaning vectors map onto form vectors. These mappings can be learned incrementally, approximating how children learn the words of their language. Alternatively, optimal mappings representing the end state of learning can be estimated. The NDL and LDL algorithms are incorporated in a computational theory of the mental lexicon, the ‘discriminative lexicon’. The model shows good performance both with respect to production and comprehension accuracy, and for predicting aspects of lexical processing, including morphological processing, across a wide range of experiments. Since, mathematically, NDL and LDL implement multivariate multiple regression, the ‘discriminative lexicon’ provides a cognitively motivated statistical modeling approach to lexical processing.
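As the abstract notes, the endstate-of-learning mappings amount to multivariate multiple regression between form and meaning matrices. The sketch below illustrates that idea only: the toy matrices, dimensions, and nearest-neighbour accuracy check are hypothetical and are not the authors' implementation (dedicated packages such as pyndl exist for the actual models). For incremental learning, NDL instead updates the mapping trial by trial with Rescorla-Wagner error-driven adjustments rather than solving it in closed form.

```python
# Minimal sketch of LDL-style "endstate of learning" mappings, assuming toy data:
# C holds form (cue) vectors, S holds meaning (semantic) vectors.
# All names and values are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)

n_words, n_form_dims, n_sem_dims = 6, 8, 5
C = rng.integers(0, 2, size=(n_words, n_form_dims)).astype(float)  # form vectors
S = rng.normal(size=(n_words, n_sem_dims))                          # meaning vectors

# Comprehension: estimate F so that C @ F approximates S
# (multivariate multiple regression, solved by least squares).
F, *_ = np.linalg.lstsq(C, S, rcond=None)
S_hat = C @ F

# Production: estimate G so that S @ G approximates C.
G, *_ = np.linalg.lstsq(S, C, rcond=None)
C_hat = S @ G

def nearest(target_rows, predicted_rows):
    # For each predicted vector, return the index of the closest target vector.
    d = np.linalg.norm(predicted_rows[:, None, :] - target_rows[None, :, :], axis=2)
    return d.argmin(axis=1)

# Accuracy proxy: is each predicted vector closest to its own target?
print("comprehension accuracy:", np.mean(nearest(S, S_hat) == np.arange(n_words)))
print("production accuracy:   ", np.mean(nearest(C, C_hat) == np.arange(n_words)))
```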


2021 ◽  
Author(s):  
Fang Wang ◽  
Quynh Trang H. Nguyen ◽  
Blair Kaneshiro ◽  
Lindsey Hasak ◽  
Angie M. Wang ◽  
...  

There are multiple levels of processing relevant to reading that vary in their visual, sublexical, and lexical orthographic processing demands. Segregating distinct cortical sources for each of these levels has been challenging in EEG studies of early readers. To address this challenge, we applied recent advances in analyzing high-density EEG using Steady-State Visual Evoked Potentials (SSVEPs) via data-driven Reliable Components Analysis (RCA) in a group of early readers spanning kindergarten to second grade. Three controlled stimulus contrasts (familiar words versus unfamiliar pseudofonts, familiar words versus orthographically legal pseudowords, and orthographically legal pseudowords versus orthographically illegal nonwords) were used to isolate visual print/letter selectivity, sublexical processing, and lexical processing, respectively. We found robust responses specific to each of these processing levels, even in kindergarteners, who have limited knowledge of print. Moreover, comparing amplitudes of these three stimulus contrasts across three reading fluency-based groups and three grade-based groups revealed fluency group and grade group main effects only for the lexical contrast (i.e., words versus orthographically legal pseudowords). Furthermore, we found that sublexical orthography-related responses shifted their topographic distribution from the right to the left hemisphere from kindergarten to first and second grade. The results suggest that, with more sensitive measures, the sublexical and lexical fine-tuning for words, as a biomarker of reading ability, can be detected at a much earlier stage than previously assumed.
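The abstract does not spell out RCA itself; as a rough illustration only, the core idea of a reliable-components-style spatial filter (weights that maximize the covariance of responses across repeated trials relative to within-trial covariance) can be written as a generalized eigenvalue problem. The sketch below is a simplified toy under that assumption; the function name and simulated data are hypothetical, and this is not the authors' analysis pipeline.

```python
# Simplified sketch of a reliable-components-style spatial filter:
# find weights w maximizing between-trial covariance of the projected response
# relative to within-trial covariance (generalized eigenvalue problem).
# Data and names are hypothetical; this is not the authors' RCA pipeline.
import numpy as np
from scipy.linalg import eigh

def reliable_components(trials):
    """trials: array of shape (n_trials, n_channels, n_samples)."""
    n_trials, n_channels, _ = trials.shape
    trials = trials - trials.mean(axis=2, keepdims=True)  # remove per-trial channel means

    r_within = np.zeros((n_channels, n_channels))
    r_between = np.zeros((n_channels, n_channels))
    for i in range(n_trials):
        r_within += trials[i] @ trials[i].T
        for j in range(n_trials):
            if i != j:
                r_between += trials[i] @ trials[j].T

    # Symmetrize and solve r_between w = lambda * r_within w.
    r_between = (r_between + r_between.T) / 2
    eigvals, eigvecs = eigh(r_between, r_within + 1e-9 * np.eye(n_channels))
    order = np.argsort(eigvals)[::-1]               # most reliable component first
    return eigvecs[:, order], eigvals[order]

# Hypothetical usage: 20 trials, 32 channels, 500 samples of simulated EEG.
rng = np.random.default_rng(1)
fake_eeg = rng.normal(size=(20, 32, 500))
weights, reliability = reliable_components(fake_eeg)
component_1 = np.einsum("c,tcs->ts", weights[:, 0], fake_eeg)  # projected trials
```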


2021 ◽  
pp. 1-26
Author(s):  
Jan-Louis Kruger ◽  
Natalia Wisniewska ◽  
Sixin Liao

Abstract High subtitle speed undoubtedly impacts the viewer experience. However, little is known about how high subtitle speeds affect the reading of individual words. This article presents new findings on the effect of subtitle speed on viewers' reading behavior using word-based eye-tracking measures, with specific attention to word skipping and rereading. In multimodal reading situations such as reading subtitles in video, rereading allows people to correct for oculomotor error or comprehension failure during linguistic processing, or to integrate words with elements of the image to build a situation model of the video. However, the opportunity to reread words, to read the majority of the words in a subtitle, and to read subtitles to completion is likely to be compromised when subtitles are too fast. Participants watched videos with subtitles at 12, 20, and 28 characters per second (cps) while their eye movements were recorded. Comprehension declined as speed increased. Eye movement records also showed that faster subtitles resulted in more incomplete reading of subtitles. Furthermore, increased speed caused fewer words to be reread following both horizontal eye movements (likely resulting in reduced lexical processing) and vertical eye movements (which would likely reduce higher-level comprehension and integration).


2021 ◽  
pp. 1-33
Author(s):  
Sixin Liao ◽  
Lili Yu ◽  
Jan-Louis Kruger ◽  
Erik D. Reichle

Abstract This study investigated how semantically relevant auditory information might affect the reading of subtitles, and whether such effects are modulated by concurrent video content. Thirty-four native Chinese speakers with English as their second language watched videos with English subtitles in six conditions defined by manipulating the nature of the audio (Chinese/L1 audio vs. English/L2 audio vs. no audio) and the presence versus absence of video content. Global eye-movement analyses showed that participants tended to rely less on subtitles with Chinese or English audio than without audio, and the effects of audio were more pronounced when video content was present. Lexical processing of the subtitles was not modulated by the audio. However, Chinese audio, which presumably obviated the need to read the subtitles, resulted in more superficial post-lexical processing of the subtitles relative to either English or no audio. In contrast, English audio accentuated post-lexical processing of the subtitles compared with Chinese audio or no audio, indicating that participants might use the English audio to support subtitle reading (or vice versa) and thus engaged in deeper processing of the subtitles. These findings suggest that, in multimodal reading situations, eye movements are not only controlled by processing difficulties associated with properties of words (e.g., their frequency and length) but are also guided by metacognitive strategies involved in monitoring comprehension and its online modulation by different information sources.


2021 ◽  
Author(s):  
Fabian Klostermann ◽  
Moritz Boll ◽  
Felicitas Ehlen ◽  
Hannes Ole Tiedt

Abstract Embodied cognition theories posit direct interactions between sensorimotor and mental processing. Various clinical observations have been interpreted within this controversial framework, among them the low rate of verb generation in word production tasks performed by persons with Parkinson's disease (PD). If this were a consequence of reduced motor simulation of the action semantics prevalent in this word class, reduced PD pathophysiology should result in increased verb production and a general shift of lexical content towards particular movement-related meanings. Seventeen persons with PD and bilateral Deep Brain Stimulation (DBS) of the subthalamic nucleus (STN) and 17 healthy control persons engaged in a semantically unconstrained, phonemic verbal fluency task, the former in both DBS-off and DBS-on states. The analysis examined the number of words produced, verb use, and the occurrence of different dimensions of movement-related semantics in the lexical output. Persons with PD produced fewer words than controls. In the DBS-off, but not the DBS-on condition, the proportion of verbs within this reduced output was lower than in controls. Lowered verb production went in parallel with a semantic shift: in persons with PD in the DBS-off, but not the DBS-on condition, the relatedness of the produced words to one's own body movement was lower than in controls. In persons with PD, DBS-induced changes in the motor condition thus appear to go along with formal and semantic shifts in word production. The results support the idea of a direct connection between the motor system and lexical processing.

