Neurocognitive Mechanisms Supporting the Generalization of Concepts Across Languages

2019 ◽  
Author(s):  
Usman Ayub Sheikh ◽  
Manuel Carreiras ◽  
David Soto

The neurocognitive mechanisms that support the generalization of semantic representations across different languages remain to be determined. Current psycholinguistic models propose that semantic representations are likely to overlap across languages, although there is also evidence to the contrary. Neuroimaging studies have observed that brain activity patterns associated with the meaning of words may be similar across languages. However, the factors that mediate cross-language generalization of semantic representations are not known. Here we identify a key factor: the depth of processing. Human participants were asked to process visual words as they underwent functional MRI. We found that, during shallow processing, multivariate pattern classifiers could decode the word's semantic category within each language in putative substrates of the semantic network, but there was no evidence of cross-language generalization in the shallow processing context. By contrast, when the depth of processing was higher, significant cross-language generalization was observed in several regions, including inferior parietal, ventromedial, lateral temporal, and inferior frontal cortex. These results support the distributed-only view of semantic processing and favour models based on multiple semantic hubs. They also have ramifications for psycholinguistic models of word processing such as BIA+, which by default assumes non-selective access to both native and second languages.
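The cross-language generalization analysis described above amounts to cross-decoding: a classifier trained on response patterns evoked in one language is tested on patterns from the other. A minimal nearest-centroid sketch in plain Python, with synthetic two-voxel patterns standing in for fMRI data (all names and numbers are illustrative, not the study's actual pipeline):

```python
# Cross-decoding sketch: train a nearest-centroid classifier on
# word-evoked patterns from language A, test it on language B.
# All data here are synthetic; real analyses use fMRI voxel patterns.

def centroid(patterns):
    """Mean pattern across trials (list of equal-length vectors)."""
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def nearest_centroid(pattern, centroids):
    """Return the label of the closest class centroid (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(pattern, centroids[label]))

def cross_decode(train, test):
    """train/test: dict label -> list of patterns. Returns accuracy on test."""
    cents = {label: centroid(pats) for label, pats in train.items()}
    hits = total = 0
    for label, pats in test.items():
        for p in pats:
            hits += nearest_centroid(p, cents) == label
            total += 1
    return hits / total

# Toy example: category structure that is shared across the two languages.
lang_a = {"animal": [[1.0, 0.1], [0.9, 0.2]], "tool": [[0.1, 1.0], [0.2, 0.9]]}
lang_b = {"animal": [[0.8, 0.3], [1.1, 0.0]], "tool": [[0.0, 1.1], [0.3, 0.8]]}
print(cross_decode(lang_a, lang_b))  # 1.0 -> patterns generalize across languages
```

Above-chance accuracy on the held-out language is what "cross-language generalization" means operationally; chance here would be 0.5.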

2019 ◽  
Author(s):  
David Soto ◽  
Usman Ayub Sheikh ◽  
Ning Mei ◽  
Roberto Santana

Abstract: How the brain representation of conceptual knowledge varies as a function of processing goals, strategies, and task factors remains a key unresolved question in cognitive neuroscience. Here we asked how the brain representation of semantic categories is shaped by the depth of processing during mental simulation. Participants were presented with visual words during functional magnetic resonance imaging (fMRI). During shallow processing, participants had to read the items; during deep processing, they had to mentally simulate the features associated with the words. Multivariate classification, informational connectivity, and encoding models were used to reveal how the depth of processing determines the brain representation of word meaning. Decoding accuracy in putative substrates of the semantic network was enhanced when the depth of processing was high, and the brain representations were more generalizable in semantic space relative to shallow processing contexts. This pattern was observed even in association areas in inferior frontal and parietal cortex. Deep information processing during mental simulation also increased the informational connectivity within key substrates of the semantic network. To further examine the properties of the words encoded in brain activity, we compared computer vision models, associated with the image referents of the words, with word-embedding models. The computer vision models explained more variance of the brain responses across multiple areas of the semantic network. These results indicate that the brain representation of word meaning is highly malleable by the depth of processing imposed by the task, relies on access to visual representations, and is highly distributed, including prefrontal areas previously implicated in semantic control.
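Informational connectivity, one of the methods named above, asks whether two regions carry stimulus information at the same moments, for example by correlating their trial-by-trial decoding evidence. A toy sketch in plain Python (the evidence series and region labels are invented for illustration, not the study's data):

```python
# Informational-connectivity sketch: correlate the trial-by-trial decoding
# evidence of two regions of interest. A high correlation suggests the
# regions carry shared stimulus information at the same moments.

from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Per-trial classifier evidence (e.g. distance from the decision boundary)
# for two regions; values are illustrative.
ifg_evidence = [0.9, 0.2, 0.7, 0.1, 0.8, 0.3]   # inferior frontal
ipl_evidence = [0.8, 0.3, 0.6, 0.2, 0.9, 0.2]   # inferior parietal
print(round(pearson(ifg_evidence, ipl_evidence), 2))
```

The finding above corresponds to this correlation being higher under deep than under shallow processing.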


2020 ◽  
Author(s):  
Andrea Nadalini ◽  
Roberto Bottini ◽  
Daniel Casasanto ◽  
Davide Crepaldi

Do supraliminal and subliminal priming capture different facets of words' semantic representations? We used metaphorical priming between space and time as a test bed for this question. While people conceptualize time along both the lateral and sagittal axes, only the latter mapping comes up in language (the future is in front of you, not to your right). We assessed facilitation on temporal target words by lateral (left, right) and sagittal (back, front) primes, in masked and overt conditions. Supraliminally, we observed similar sagittal and lateral priming, while the masked effect was stronger on the sagittal axis and weak to non-existent on the lateral one. These results were observed in an original study and a replication, and are strongly confirmed by a Bayesian meta-analysis of the two. We conclude that unconscious word processing is limited to linguistically encoded information, while consciousness may be needed to fully activate semantic representations.
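The Bayesian meta-analysis itself is beyond a short sketch, but the underlying idea of pooling two studies' effects can be illustrated with a simple fixed-effect analogue: weight each study's estimate by its inverse variance. The numbers below are invented, not the reported priming effects:

```python
# Fixed-effect pooling sketch (a simpler frequentist analogue of the
# Bayesian meta-analysis described above): combine two studies' effect
# estimates, weighting each by its inverse variance.

def pool(effects, variances):
    """Inverse-variance weighted pooled estimate and its variance."""
    weights = [1 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    var = 1 / sum(weights)
    return est, var

# Illustrative priming effects (ms) and variances from two studies.
est, var = pool([12.0, 8.0], [4.0, 4.0])
print(est, var)  # pooled estimate 10.0, pooled variance 2.0
```

With equal variances the pooled estimate is just the mean of the two effects, and its variance is halved, which is why combining an original study with a replication sharpens the conclusion.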


2020 ◽  
Vol 16 (1) ◽  
Author(s):  
Beatriz Bermúdez-Margaretto ◽  
Yury Shtyrov ◽  
David Beltrán ◽  
Fernando Cuetos ◽  
Alberto Domínguez

Abstract: Background: Novel word acquisition is generally believed to be a rapid process, essential for ensuring a flexible and efficient communication system; at least in spoken language, learners are able to construct memory traces for new linguistic stimuli after just a few exposures. However, such rapid word learning has not been systematically found in the visual domain, with different confounding factors obscuring the orthographic learning of novel words. This study explored the changes in human brain activity occurring online, during a brief training session with novel written word-forms, using a silent reading task. Results: Single-trial, cluster-based random permutation analysis revealed that training caused an extremely fast (after just one repetition) and stable facilitation in novel word processing, reflected in the modulation of the P200 and N400 components, possibly indicating rapid dynamics at early and late stages of lexical processing. Furthermore, neural source estimation of these effects revealed the recruitment of brain areas involved in orthographic and lexico-semantic processing, respectively. Conclusions: These results suggest the formation of neural memory traces for novel written word-forms after minimal exposure to them, even in the absence of a semantic reference, resembling the rapid learning processes known to occur in spoken language.
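The random permutation analysis mentioned above rests on a simple idea: repeatedly shuffle the condition labels to build a null distribution for the observed difference. A minimal single-comparison sketch with synthetic amplitudes (the full cluster-based method additionally groups neighbouring time points and electrodes):

```python
# Permutation-test sketch: is the amplitude difference between first and
# repeated exposures larger than expected by chance? Shuffle the labels
# many times and count how often the shuffled difference is as extreme.
# Amplitude values are synthetic.

import random

def perm_test(a, b, n_perm=2000, seed=0):
    """Two-sided permutation p-value for the difference of means."""
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        diff = sum(pa) / len(pa) - sum(pb) / len(pb)
        if abs(diff) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)

first = [2.1, 2.4, 2.0, 2.3, 2.2]   # e.g. N400 amplitude, first exposure
later = [1.2, 1.0, 1.3, 1.1, 1.4]   # after repetition
print(perm_test(first, later))
```

Because the test makes no distributional assumptions, it suits noisy single-trial EEG measures well.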


2018 ◽  
Author(s):  
Markus Ostarek ◽  
Jeroen van Paridon ◽  
Falk Huettig

Abstract: Processing words with referents that are typically observed up or down in space (up/down words) influences the subsequent identification of visual targets in congruent locations. Eye-tracking studies have shown that up/down word comprehension shortens launch times of subsequent saccades to congruent locations and modulates concurrent saccade trajectories. This can be explained by a task-dependent interaction of semantic processing and oculomotor programs, or by a direct recruitment of direction-specific processes in oculomotor and spatial systems as part of semantic processing. To test the latter possibility, we conducted a functional magnetic resonance imaging experiment and used multi-voxel pattern analysis to assess (1) whether the typical location of word referents can be decoded from the fronto-parietal spatial network and (2) whether activity patterns are shared between up/down words and up/down saccadic eye movements. In line with these hypotheses, significant decoding of up vs. down words and cross-decoding between up/down saccades and up/down words were observed in the frontal eye field region in the superior frontal sulcus and in the inferior parietal lobule. Beyond these spatial-attention areas, the typical location of word referents could be decoded from a set of occipital, temporal, and frontal areas, indicating that interactions between high-level regions typically implicated in lexical-semantic processing and spatial/oculomotor regions constitute the neural basis for access to spatial aspects of word meanings.


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Meir Meshulam ◽  
Liat Hasenfratz ◽  
Hanna Hillman ◽  
Yun-Fei Liu ◽  
Mai Nguyen ◽  
...  

Abstract: Despite major advances in measuring human brain activity during and after educational experiences, it is unclear how learners internalize new content, especially in real-life and online settings. In this work, we introduce a neural approach to predicting and assessing learning outcomes in a real-life setting. Our approach hinges on the idea that successful learning involves forming the right set of neural representations, which are captured in canonical activity patterns shared across individuals. Specifically, we hypothesized that learning is mirrored in neural alignment: the degree to which an individual learner's neural representations match those of experts, as well as those of other learners. We tested this hypothesis in a longitudinal functional MRI study that regularly scanned college students enrolled in an introductory computer science course. We additionally scanned graduate-student experts in computer science. We show that alignment among students successfully predicts overall performance on a final exam. Furthermore, within individual students, we find better learning outcomes for concepts that evoke better alignment with experts and with other students, revealing neural patterns associated with specific learned concepts in individuals.
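The neural-alignment measure can be pictured as correlating a learner's activity pattern for a concept with the expert group's mean pattern. A toy sketch with synthetic patterns (not the study's data or its exact metric):

```python
# Neural-alignment sketch: score each learner by correlating their
# activity pattern for a concept with the expert group's mean pattern.
# All patterns are synthetic stand-ins for fMRI responses.

from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def expert_template(expert_patterns):
    """Average pattern across experts, one value per voxel/feature."""
    n = len(expert_patterns)
    return [sum(p[i] for p in expert_patterns) / n
            for i in range(len(expert_patterns[0]))]

experts = [[1.0, 0.2, 0.8], [0.9, 0.3, 0.7], [1.1, 0.1, 0.9]]
template = expert_template(experts)

# A learner whose pattern tracks the experts vs. one who diverges.
aligned_student = [0.9, 0.25, 0.75]
diverging_student = [0.2, 0.9, 0.3]
print(pearson(aligned_student, template) > pearson(diverging_student, template))
```

The study's prediction is that the first kind of student, whose per-concept patterns correlate more strongly with the expert template, will also score higher on the exam.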


Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 226
Author(s):  
Lisa-Marie Vortmann ◽  
Leonid Schwenke ◽  
Felix Putze

Augmented reality is the fusion of virtual components and our real surroundings. The simultaneous visibility of generated and natural objects often requires users to direct their selective attention to a specific target that is either real or virtual. In this study, we investigated whether the attended target is real or virtual by using machine learning techniques to classify electroencephalographic (EEG) and eye-tracking data collected in augmented reality scenarios. A shallow convolutional neural network classified 3-second EEG data windows from 20 participants in a person-dependent manner with an average accuracy above 70% when the testing data and training data came from different trials. This accuracy could be significantly increased to 77% using a multimodal late-fusion approach that included the recorded eye-tracking data. Person-independent EEG classification was possible above chance level for 6 out of 20 participants. Thus, the reliability of such a brain-computer interface is high enough for it to be treated as a useful input mechanism for augmented reality applications.
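The multimodal late-fusion step can be sketched as combining the per-class probabilities of the EEG and eye-tracking classifiers with a weighted average, then picking the class with the highest fused probability. The weights and probabilities below are illustrative, not the study's values:

```python
# Late-fusion sketch: merge per-class probabilities from an EEG classifier
# and an eye-tracking classifier by weighted averaging, then choose the
# class with the highest fused probability.

def late_fusion(p_eeg, p_gaze, w_eeg=0.6, w_gaze=0.4):
    """Return (winning class, fused probability dict)."""
    fused = {c: w_eeg * p_eeg[c] + w_gaze * p_gaze[c] for c in p_eeg}
    return max(fused, key=fused.get), fused

# EEG alone is nearly undecided; the gaze model tips the decision.
p_eeg = {"real": 0.55, "virtual": 0.45}
p_gaze = {"real": 0.30, "virtual": 0.70}
label, fused = late_fusion(p_eeg, p_gaze)
print(label)  # virtual
```

Fusing at the probability level like this keeps the two models independent, so either modality can be dropped or retrained without touching the other.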


2000 ◽  
Vol 12 (4) ◽  
pp. 622-634 ◽  
Author(s):  
Matti Laine ◽  
Riitta Salmelin ◽  
Päivi Helenius ◽  
Reijo Marttila

Magnetoencephalographic (MEG) changes in cortical activity were studied in a chronic Finnish-speaking deep dyslexic patient during single-word and sentence reading. It has been hypothesized that in deep dyslexia, written word recognition and its lexical-semantic analysis are subserved by the intact right hemisphere. However, in our patient, as well as in most nonimpaired readers, lexical-semantic processing, as measured by sentence-final semantic-incongruency detection, was related to activation of the left superior temporal cortex. Activations around this same cortical area could be identified in single-word reading as well. Another factor relevant to deep dyslexic reading, the morphological complexity of the presented words, was also studied. The effect of morphology was observed only during the preparation for oral output. By performing repeated recordings 1 year apart, we were able to document significant variability in both the spontaneous activity and the evoked responses in the lesioned left hemisphere, even though at the behavioural level the patient's performance was stable. The observed variability emphasizes the importance of estimating the consistency of brain activity both within and between measurements in brain-damaged individuals.


2011 ◽  
Vol 228 (2) ◽  
pp. 200-205 ◽  
Author(s):  
Naim Haddad ◽  
Rathinaswamy B. Govindan ◽  
Srinivasan Vairavan ◽  
Eric Siegel ◽  
Jessica Temple ◽  
...  

Neuroreport ◽  
2021 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Yan Tong ◽  
Xin Huang ◽  
Chen-Xing Qi ◽  
Yin Shen
