Cortical Representations of Concrete and Abstract Concepts in Language Combine Visual and Linguistic Representations

2021 ◽  
Author(s):  
Jerry Tang ◽  
Amanda LeBel ◽  
Alexander G Huth

The human semantic system stores knowledge acquired through both perception and language. To study how semantic representations in cortex integrate perceptual and linguistic information, we created semantic word embedding spaces that combine models of visual and linguistic processing. We then used these visually-grounded semantic spaces to fit voxelwise encoding models to fMRI data collected while subjects listened to hours of narrative stories. We found that cortical regions near the visual system represent concepts by combining visual and linguistic information, while regions near the language system represent concepts using mostly linguistic information. Assessing individual representations near visual cortex, we found that more concrete concepts contain more visual information, while even abstract concepts contain some amount of visual information from associated concrete concepts. Finally, we found that these visual grounding effects are localized near visual cortex, suggesting that semantic representations specifically reflect the modality of adjacent perceptual systems. Our results provide a computational account of how visual and linguistic information are combined to represent concrete and abstract concepts across cortex.
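The voxelwise encoding approach described in this abstract is commonly implemented as regularized linear regression from stimulus embeddings to per-voxel responses. Below is a minimal sketch of that idea using ridge regression with synthetic data; the function name, dimensions, and regularization value are illustrative, not the authors' actual pipeline.

```python
import numpy as np

def fit_voxelwise_encoding(X, Y, alpha=1.0):
    """Ridge regression from stimulus features (time x features) to
    voxel responses (time x voxels). Returns the weight matrix."""
    n_feat = X.shape[1]
    # Closed-form ridge solution: W = (X^T X + alpha I)^{-1} X^T Y
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))        # stand-in for word-embedding features per timepoint
true_W = rng.standard_normal((10, 5))     # synthetic "ground-truth" voxel weights
Y = X @ true_W + 0.1 * rng.standard_normal((200, 5))

W_hat = fit_voxelwise_encoding(X, Y, alpha=0.1)
pred = X @ W_hat
# Per-voxel prediction accuracy: correlation between predicted and actual responses
r = [np.corrcoef(pred[:, v], Y[:, v])[0, 1] for v in range(5)]
```

In practice the model would be fit on held-out training stories and evaluated on a separate test story, with `alpha` chosen by cross-validation per voxel.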

2020 ◽  
Vol 2 (1) ◽  
pp. 119-137 ◽  
Author(s):  
Markus Kiefer ◽  
Marcel Harpaintner

Abstract: For a long time, theorizing in the cognitive sciences was dominated by the assumption that abstract concepts, which lack a perceivable referent, can only be handled by amodal or verbal linguistic representations. In recent years, however, refined grounded cognition theories emphasizing the importance of emotional and introspective information for abstract concepts, in addition to verbal associations and sensorimotor information, have received increasing support. Here, we review theoretical accounts of the structure and neural basis of conceptual memory and evaluate them in light of recent empirical evidence regarding the processing of concrete and abstract concepts. Based on this literature review, we argue that abstract concepts should not be treated as a homogeneous conceptual category whose meaning is established by one single type of representation. Instead, depending on their feature composition, there are different subgroups of abstract concepts, including those with strong relations to vision or action, which are represented in the visual and motor brain systems similarly to concrete concepts. The reviewed findings on concrete and abstract concepts are best accommodated by hybrid theories of conceptual representation that assume an interaction between modality-specific, multimodal, and amodal hub areas.


2021 ◽  
Vol 11 (17) ◽  
pp. 8241
Author(s):  
Erhan Sezerer ◽  
Selma Tekir

Over the last few years, there has been an increase in studies that consider experiential (visual) information by building multi-modal language models and representations. Several studies have shown that language acquisition in humans starts with learning concrete concepts through images and then continues with learning abstract ideas through text. In this work, the curriculum learning method is used to teach the model concrete/abstract concepts through images and their corresponding captions, to accomplish multi-modal language modeling/representation. We use the BERT and ResNet-152 models on each modality and combine them using attentive pooling to perform pre-training on a newly constructed dataset, collected from Wikimedia Commons based on concrete/abstract words. To show the performance of the proposed model, downstream tasks and ablation studies are performed. The contribution of this work is two-fold: a new dataset constructed from Wikimedia Commons based on concrete/abstract words, and a new multi-modal pre-training approach based on curriculum learning. The results show that the proposed multi-modal pre-training approach contributes to the success of the model.
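The attentive-pooling fusion mentioned here learns how much weight to give each modality's embedding when combining them. A minimal NumPy sketch of the idea follows; the scoring function, dimensions, and parameter shapes are illustrative assumptions, not the paper's exact architecture (which operates on BERT and ResNet-152 outputs).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_pool(text_emb, img_emb, w, b=0.0):
    """Fuse one text vector and one image vector with attention weights.
    w (dim,) and b are parameters of a simple scoring function; in a real
    model they would be learned during pre-training."""
    stacked = np.stack([text_emb, img_emb], axis=0)  # (2, dim)
    scores = stacked @ w + b                         # one score per modality
    weights = softmax(scores)                        # non-negative, sum to 1
    return weights @ stacked                         # convex combination, (dim,)

rng = np.random.default_rng(0)
dim = 8
text = rng.standard_normal(dim)   # stands in for a BERT sentence embedding
image = rng.standard_normal(dim)  # stands in for projected ResNet-152 features
fused = attentive_pool(text, image, w=rng.standard_normal(dim))
```

Because the output is a convex combination of the two modality vectors, the model can smoothly shift between text-dominated and image-dominated representations as training progresses through the curriculum.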


2016 ◽  
Vol 44 (7) ◽  
pp. 1191-1200 ◽  
Author(s):  
Liusheng Wang ◽  
Hongmei Qiu ◽  
Jianjun Yin

The abstractness effect describes the phenomenon of individuals processing abstract concepts faster and more accurately than they process concrete concepts. In this study, we explored the effects of context on how 43 college students processed words, controlling for the emotional valence of the words. The participants performed a lexical decision task in which they were shown individual abstract and concrete words, or abstract and concrete words embedded in sentences. The results showed that in the word-context condition the participants' processing of concrete concepts improved, whereas in the sentence-context condition their processing of abstract concepts improved. These findings support the embodied cognition theory of concept processing.


2015 ◽  
Vol 27 (7) ◽  
pp. 1344-1359 ◽  
Author(s):  
Sara Jahfari ◽  
Lourens Waldorp ◽  
K. Richard Ridderinkhof ◽  
H. Steven Scholte

Action selection often requires the transformation of visual information into motor plans. Preventing premature responses may entail the suppression of visual input and/or of prepared muscle activity. This study examined how the quality of visual information affects frontobasal ganglia (BG) routes associated with response selection and inhibition. Human fMRI data were collected from a stop task with visually degraded or intact face stimuli. During go trials, degraded spatial frequency information reduced the speed of information accumulation and response cautiousness. Effective connectivity analysis of the fMRI data showed action selection to emerge through the classic direct and indirect BG pathways, with inputs deriving from both prefrontal and visual regions. When stimuli were degraded, visual and prefrontal regions processing the stimulus information increased connectivity strengths toward BG, whereas regions evaluating visual scene content or response strategies reduced connectivity toward BG. Response inhibition during stop trials recruited the indirect and hyperdirect BG pathways, with input from visual and prefrontal regions. Importantly, when stimuli were nondegraded and processed fast, the optimal stop model contained additional connections from prefrontal to visual cortex. Individual differences analysis revealed that stronger prefrontal-to-visual connectivity covaried with faster inhibition times. Therefore, prefrontal-to-visual cortex connections appear to suppress the fast flow of visual input for the go task, such that the inhibition process can finish before the selection process. These results indicate that response selection and inhibition within the BG emerge through the interplay of top-down adjustments from prefrontal cortex and bottom-up input from sensory cortex.


1998 ◽  
Vol 78 (2) ◽  
pp. 467-485 ◽  
Author(s):  
CHARLES D. GILBERT

Gilbert, Charles D. Adult Cortical Dynamics. Physiol. Rev. 78: 467–485, 1998. — There are many influences on our perception of local features. What we see is not strictly a reflection of the physical characteristics of a scene but instead is highly dependent on the processes by which our brain attempts to interpret the scene. As a result, our percepts are shaped by the context within which local features are presented, by our previous visual experiences, operating over a wide range of time scales, and by our expectation of what is before us. The substrate for these influences is likely to be found in the lateral interactions operating within individual areas of the cerebral cortex and in the feedback from higher to lower order cortical areas. Even at early stages in the visual pathway, cells are far more flexible in their functional properties than previously thought. It had long been assumed that cells in primary visual cortex had fixed properties, passing along the product of a stereotyped operation to the next stage in the visual pathway. Any plasticity dependent on visual experience was thought to be restricted to a period early in the life of the animal, the critical period. Furthermore, the assembly of contours and surfaces into unified percepts was assumed to take place at high levels in the visual pathway, whereas the receptive fields of cells in primary visual cortex represented very small windows on the visual scene. These concepts of spatial integration and plasticity have been radically modified in the past few years. The emerging view is that even at the earliest stages in the cortical processing of visual information, cells are highly mutable in their functional properties and are capable of integrating information over a much larger part of visual space than originally believed.


2020 ◽  
Author(s):  
Armand Stefan Rotaru ◽  
Gabriella Vigliocco

A number of recent models of semantics combine linguistic information, derived from text corpora, with visual information, derived from image collections, demonstrating that the resulting multimodal models account for behavioural data better than either of their unimodal counterparts. Empirical work on semantic processing has shown that emotion also plays an important role, especially for abstract concepts; however, models integrating emotion along with linguistic and visual information are lacking. Here, we first improve on visual and affective representations derived from state-of-the-art existing models, by choosing the models that best fit available human semantic data and extending the number of concepts they cover. Crucially, we then assess whether adding affective representations (obtained from a neural network model designed to predict emojis from co-occurring text) improves the fit to semantic similarity/relatedness judgements over purely linguistic and linguistic-visual models. We find that, given specific weights assigned to the models, adding both visual and affective representations improves performance, with visual representations providing an improvement especially for more concrete words, and affective representations improving the fit especially for more abstract words.
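The weighted-combination scheme this abstract describes can be sketched as a weighted average of per-modality similarities between two words. The following is a minimal illustration with random vectors; the function names, weight values, and 16-dimensional representations are assumptions for demonstration, not the authors' fitted model.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def multimodal_similarity(w1, w2, reps, weights):
    """Word-pair similarity as a weighted average of per-modality cosines.
    reps: {modality: {word: vector}}; weights: {modality: float}."""
    total = sum(weights.values())
    return sum(weights[m] * cosine(reps[m][w1], reps[m][w2])
               for m in reps) / total

rng = np.random.default_rng(1)
words = ["cat", "freedom"]
# Toy stand-ins for linguistic, visual, and affective embedding spaces
reps = {m: {w: rng.standard_normal(16) for w in words}
        for m in ("linguistic", "visual", "affective")}
# Hypothetical weights; in the paper these are fit to human judgements
sim = multimodal_similarity("cat", "freedom", reps,
                            {"linguistic": 0.6, "visual": 0.2, "affective": 0.2})
```

Evaluating such a model amounts to correlating these combined similarities with human similarity/relatedness ratings, then tuning the modality weights to maximize that correlation.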


Author(s):  
Mark Edwards ◽  
Stephanie C. Goodhew ◽  
David R. Badcock

Abstract: The visual system uses parallel pathways to process information. However, an ongoing debate centers on the extent to which the pathways from the retina, via the lateral geniculate nucleus, to the visual cortex process distinct aspects of the visual scene and, if they do, whether laboratory stimuli can be used to selectively drive them. These questions are important for a number of reasons, including that some pathologies are thought to be associated with impaired functioning of one of these pathways, and certain cognitive functions have been preferentially linked to specific pathways. Here we examine the two main pathways that have been the focus of this debate: the magnocellular and parvocellular pathways. Specifically, we review the results of electrophysiological and lesion studies that have investigated their properties and conclude that, while there is substantial overlap in the type of information they process, it is possible to identify aspects of visual information that are predominantly processed by either the magnocellular or parvocellular pathway. We then discuss the types of visual stimuli that can be used to preferentially drive these pathways.


2018 ◽  
Author(s):  
Maria Montefinese ◽  
Erin Michelle Buchanan ◽  
David Vinson

Models of semantic representation predict that automatic priming is determined by associative and co-occurrence relations (i.e., spreading-activation accounts) or by similarity in words' semantic features (i.e., featural models). Although these three factors are correlated in characterizing semantic representation, they seem to tap different aspects of meaning. We designed two lexical decision experiments to dissociate these three different types of meaning similarity. For unmasked primes, we observed priming due only to association strength, not the other two measures, and no evidence for differences in priming between concrete and abstract concepts. For masked primes there was no priming regardless of the semantic relation. These results challenge theoretical accounts of automatic priming. Rather, they are in line with the idea that priming may be due to participants' controlled strategic processes. These results provide important insight about the nature of priming and how association strength, as determined from word-association norms, relates to the nature of semantic representation.


2020 ◽  
Author(s):  
Nicolò Meneghetti ◽  
Chiara Cerri ◽  
Elena Tantillo ◽  
Eleonora Vannini ◽  
Matteo Caleo ◽  
...  

Abstract: The gamma band is known to be involved in the encoding of visual features in the primary visual cortex (V1). Recent results in rodent V1 highlighted the presence, within a broad gamma band (BB) increasing with contrast, of a narrow gamma band (NB) peaking at ~60 Hz that is suppressed by contrast and enhanced by luminance. However, the processing of visual information by the two channels still lacks a proper characterization. Here, by combining experimental analysis and modeling, we show that the two bands are sensitive to specific thalamic inputs associated with complementary contrast ranges. We recorded local field potentials from V1 of awake mice during the presentation of gratings and observed that NB power progressively decreased from low to intermediate levels of contrast. Conversely, BB power was insensitive to low levels of contrast but progressively increased going from intermediate to high levels of contrast. Moreover, the BB response was stronger immediately after contrast reversal, while the opposite held for NB. All the aforementioned dynamics were accurately reproduced by a recurrent excitatory-inhibitory leaky integrate-and-fire network mimicking layer IV of mouse V1, provided that the sustained and periodic components of the thalamic input were modulated over complementary contrast ranges. These results shed new light on the origin and function of the two V1 gamma bands. In addition, we propose a simple and effective model of response to visual contrast that might help in reconstructing the network dysfunction underlying pathological alterations of visual information processing.

Significance Statement: The gamma band is a ubiquitous hallmark of cortical processing of sensory stimuli. Experimental evidence shows that in the mouse visual cortex two types of gamma activity are differentially modulated by contrast: a narrow band (NB), which seems to be rodent specific, and a standard broad band (BB), observed also in other animal models. We found that the narrow band correlates, and the broad band anticorrelates, with visual contrast in two complementary contrast ranges (low and high, respectively). Moreover, BB displayed an earlier response than NB. A thalamocortical spiking neuron network model reproduced the aforementioned results, suggesting they might be due to the presence of two complementary but distinct components of the thalamic input into visual cortical circuitry.
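The leaky integrate-and-fire (LIF) dynamics underlying the network model above can be illustrated with a single neuron. This is a minimal Euler-integration sketch with illustrative parameter values (membrane time constant, threshold, resistance), not the paper's layer-IV network, which couples many excitatory and inhibitory units to a structured thalamic drive.

```python
import numpy as np

def simulate_lif(I, dt=1e-4, tau=0.02, v_rest=-0.07, v_th=-0.05,
                 v_reset=-0.07, r_m=1e8):
    """Single leaky integrate-and-fire neuron driven by an input current
    trace I (amperes, one sample per dt). Returns spike time indices.
    Parameter values are illustrative, not fitted to mouse V1."""
    v = v_rest
    spikes = []
    for t, i_t in enumerate(I):
        # Membrane equation (Euler step): tau dv/dt = -(v - v_rest) + R*I
        v += dt / tau * (-(v - v_rest) + r_m * i_t)
        if v >= v_th:            # threshold crossing: emit spike, reset
            spikes.append(t)
            v = v_reset
    return spikes

# Constant suprathreshold drive (0.3 nA for 0.5 s) yields regular firing
I = np.full(5000, 3e-10)
spike_times = simulate_lif(I)
```

In the full model, making the sustained and the periodic components of `I` depend on complementary contrast ranges is what reproduces the opposing NB and BB contrast responses reported in the abstract.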


2018 ◽  
Author(s):  
Theo Marins ◽  
Maite Russo ◽  
Erika Rodrigues ◽  
Jorge Moll ◽  
Daniel Felix ◽  
...  

ABSTRACT: Evidence of cross-modal plasticity in blind individuals has been reported over the past decades, showing that non-visual information is carried and processed by classically "visual" brain structures. This feature of the blind brain makes it a pivotal model for exploring the limits and mechanisms of brain plasticity. However, despite recent efforts, the structural underpinnings that could explain cross-modal plasticity in congenitally blind individuals remain unclear. Using advanced neuroimaging techniques, we mapped thalamocortical connectivity and assessed cortical thickness and white matter integrity in congenitally blind individuals and sighted controls, to test the hypothesis that an aberrant thalamocortical pattern of connectivity can pave the way for cross-modal plasticity. We describe a direct occipital takeover by temporal projections from the thalamus, which would carry non-visual information (e.g. auditory) to the visual cortex in congenitally blind individuals. In addition, the amount of thalamo-occipital connectivity correlated with the cortical thickness of primary visual cortex (V1), supporting a probably common (or related) reorganization phenomenon. Our results suggest aberrant thalamocortical connectivity as one possible mechanism of cross-modal plasticity in blind individuals, with a potential impact on the cortical thickness of V1.

SIGNIFICANCE STATEMENT: Congenitally blind individuals often develop greater abilities in spared sensory modalities, such as increased acuity in auditory discrimination and voice recognition, compared to sighted controls. These functional gains have been shown to rely on 'visual' cortical areas of the blind brain, characterizing the phenomenon of cross-modal plasticity. However, its anatomical underpinnings in humans have been unsuccessfully pursued for decades. Recent advances in non-invasive neuroimaging techniques allowed us to test the hypothesis of abnormal thalamocortical connectivity in congenitally blind individuals. Our results showed an expansion of the thalamic connections to the temporal cortex over those that project to the occipital cortex, which may explain the cross-talk between the visual and auditory systems in congenitally blind individuals.

