Modelling brain representations of abstract concepts

2021
Author(s):
Daniel Kaiser
Arthur Jacobs
Radoslaw Martin Cichy

Abstract conceptual representations are critical for human cognition. Despite their importance, key properties of these representations remain poorly understood. Here, we used computational models of distributional semantics to predict multivariate fMRI activity patterns during the activation and contextualization of abstract concepts. We devised a task in which participants had to embed abstract nouns into a story that they developed around a given background context. We found that representations in inferior parietal cortex were predicted by concept similarities emerging in models of distributional semantics. By constructing different model families, we revealed the models' learning trajectories and delineated how abstract and concrete training materials contribute to the formation of brain-like representations. These results inform theories about the format and emergence of abstract conceptual representations in the human brain.
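The general analysis style described here, testing whether similarities among model vectors predict similarities among fMRI activity patterns, can be sketched as a minimal representational similarity analysis. The data below are random placeholders and the helper names are illustrative; this is not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_concepts = 20
embeddings = rng.normal(size=(n_concepts, 300))     # model vector per concept
fmri_patterns = rng.normal(size=(n_concepts, 500))  # voxel pattern per concept

def rdm(M):
    """Condensed representational dissimilarity matrix:
    1 - Pearson r for every pair of rows."""
    C = np.corrcoef(M)
    iu = np.triu_indices(len(M), k=1)
    return 1.0 - C[iu]

def spearman(a, b):
    """Spearman rank correlation between two vectors."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

model_rdm = rdm(embeddings)
brain_rdm = rdm(fmri_patterns)

# A high rank correlation means the model's concept geometry predicts the
# geometry of the neural activity patterns (near zero here: random data).
rho = spearman(model_rdm, brain_rdm)
print(f"model-brain RSA correlation: rho={rho:.3f}")
```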

Author(s):  
Andrew J. Anderson
Douwe Kiela
Stephen Clark
Massimo Poesio

Important advances have recently been made using computational semantic models to decode brain activity patterns associated with concepts; however, this work has almost exclusively focused on concrete nouns. How well these models extend to decoding abstract nouns is largely unknown. We address this question by applying state-of-the-art computational models to decode functional Magnetic Resonance Imaging (fMRI) activity patterns, elicited by participants reading and imagining a diverse set of both concrete and abstract nouns. One of the models we use is linguistic, exploiting the recent word2vec skip-gram approach trained on Wikipedia. The second is visually grounded, using deep convolutional neural networks trained on Google Images. Dual coding theory considers concrete concepts to be encoded in the brain both linguistically and visually, and abstract concepts only linguistically. Splitting the fMRI data according to human concreteness ratings, we indeed observe that both models significantly decode the most concrete nouns; however, accuracy is significantly greater using the text-based model for the most abstract nouns. More generally, this confirms that current computational models are sufficiently advanced to assist in investigating the representational structure of abstract concepts in the brain.
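Decoding in this literature is often evaluated with a leave-two-out ("2 vs. 2") test: learn a linear map from model vectors to brain patterns, then check whether predictions for two held-out words match the correct observed patterns better than the swapped pairing. The sketch below uses synthetic data; the dimensions and ridge penalty are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, d_model, d_brain = 10, 5, 80
X = rng.normal(size=(n, d_model))                      # e.g. word2vec vectors
W_true = rng.normal(size=(d_model, d_brain))
B = X @ W_true + 0.1 * rng.normal(size=(n, d_brain))   # simulated fMRI patterns

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

correct = 0
pairs = list(combinations(range(n), 2))
for i, j in pairs:
    train = [k for k in range(n) if k not in (i, j)]
    Xtr, Btr = X[train], B[train]
    # Ridge regression from model space to voxel space.
    W = np.linalg.solve(Xtr.T @ Xtr + 1.0 * np.eye(d_model), Xtr.T @ Btr)
    pi, pj = X[i] @ W, X[j] @ W
    # Correct pairing vs. swapped pairing of predictions to observations.
    if corr(pi, B[i]) + corr(pj, B[j]) > corr(pi, B[j]) + corr(pj, B[i]):
        correct += 1

accuracy = correct / len(pairs)  # chance level is 0.5
print(f"2-vs-2 decoding accuracy: {accuracy:.2f}")
```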


Author(s):  
Kim Uittenhove
Patrick Lemaire

In two experiments, we tested the hypothesis that strategy performance on a given trial is influenced by the difficulty of the strategy executed on the immediately preceding trial, an effect that we call strategy sequential difficulty effect. Participants’ task was to provide approximate sums to two-digit addition problems by using cued rounding strategies. Results showed that performance was poorer after a difficult strategy than after an easy strategy. Our results have important theoretical and empirical implications for computational models of strategy choices and for furthering our understanding of strategic variations in arithmetic as well as in human cognition in general.
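The cued rounding strategies can be made concrete with a small sketch: rounding both operands down to the nearest ten is typically the easier strategy, and rounding both up the harder one. This is an illustration of the task format, not the authors' exact materials.

```python
def round_down_strategy(a, b):
    """Easier strategy: round both operands down to the nearest ten."""
    return (a // 10) * 10 + (b // 10) * 10

def round_up_strategy(a, b):
    """Harder strategy: round both operands up to the nearest ten."""
    up = lambda x: ((x + 9) // 10) * 10
    return up(a) + up(b)

# 43 + 58 = 101 exactly; the two strategies bracket the true sum.
print(round_down_strategy(43, 58))  # 40 + 50 = 90
print(round_up_strategy(43, 58))    # 50 + 60 = 110
```

A sequential difficulty effect would show up as slower or less accurate estimates on trials immediately following a round-up trial than following a round-down trial.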


2019
Author(s):  
Jeffrey N. Chiang
Yujia Peng
Hongjing Lu
Keith J. Holyoak
Martin M. Monti

The ability to generate and process semantic relations is central to many aspects of human cognition. Theorists have long debated whether such relations are coded as atomistic links in a semantic network, or as distributed patterns over some core set of abstract relations. The form and content of the conceptual and neural representations of semantic relations remain to be empirically established. The present study combined computational modeling and neuroimaging to investigate the representation and comparison of abstract semantic relations in the brain. By using sequential presentation of verbal analogies, we decoupled the neural activity associated with encoding the representation of the first-order semantic relation between words in a pair from that associated with the second-order comparison of two relations. We tested alternative computational models of relational similarity in order to distinguish between rival accounts of how semantic relations are coded and compared in the brain. Analyses of neural similarity patterns supported the hypothesis that semantic relations are coded, in the parietal cortex, as distributed representations over a pool of abstract relations specified in a theory-based taxonomy. These representations, in turn, provide the immediate inputs to the process of analogical comparison, which draws on a broad frontoparietal network. This study sheds light not only on the form of relation representations but also on their specific content.

Significance: Relations provide basic building blocks for language and thought. For the past half century, cognitive scientists exploring human semantic memory have sought to identify the code for relations. In a neuroimaging paradigm, we tested alternative computational models of relation processing that predict patterns of neural similarity during distinct phases of analogical reasoning. The findings allowed us to draw inferences not only about the form of relation representations, but also about their specific content. The core of these distributed representations is based on a relatively small number of abstract relation types specified in a theory-based taxonomy. This study helps to resolve a longstanding debate concerning the nature of the conceptual and neural code for semantic relations in the mind and brain.


2019
Author(s):  
M. Alex Kelly
David Reitter
Robert West
Moojan Ghafurian

Computational models of distributional semantics (a.k.a. word embeddings) represent a word's meaning in terms of its relationships with all other words. We examine what grammatical information is encoded in distributional models and investigate the role of indirect associations. Distributional models are sensitive to associations between words at one degree of separation, such as 'tiger' and 'stripes', or two degrees of separation, such as 'soar' and 'fly'. By recursively adding higher levels of representations to a computational, holographic model of semantic memory, we construct a distributional model sensitive to associations between words at arbitrary degrees of separation. We find that word associations at four degrees of separation increase the similarity assigned by the model to English words that share part-of-speech or syntactic type. Word associations at four degrees of separation also improve the ability of the model to construct grammatical English sentences. Our model proposes that human memory uses indirect associations to learn part-of-speech and that the basic associative mechanisms of memory and learning support knowledge of both semantics and grammatical structure.
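The role of indirect associations can be illustrated with a toy co-occurrence model: words that never co-occur directly can still become similar through shared neighbours, and recursively re-representing each word by its similarities to all other words folds in associations at ever higher degrees of separation. This is an illustrative analogue of the recursion, not the holographic model the authors actually use, and the counts are invented.

```python
import numpy as np

def normalize(M):
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    return M / np.where(norms == 0.0, 1.0, norms)

def representations(C, level):
    """Level-1 vectors are raw co-occurrence rows; each further level
    re-represents every word by its cosine similarities to all words,
    incorporating increasingly indirect associations."""
    V = normalize(C.astype(float))
    for _ in range(level - 1):
        V = normalize(V @ V.T)
    return V

# Toy counts: 'soar' and 'fly' never co-occur directly, but both co-occur
# with 'wings' and 'bird'.
words = ["soar", "fly", "wings", "bird"]
C = np.array([[0, 0, 3, 2],
              [0, 0, 4, 3],
              [3, 4, 0, 1],
              [2, 3, 1, 0]])

V = representations(C, 2)
soar_fly = float(V[0] @ V[1])  # high despite zero direct co-occurrence
print(f"soar/fly similarity: {soar_fly:.2f}")
```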


Author(s):  
Elizabeth Jefferies
Xiuyi Wang

Semantic processing is a defining feature of human cognition, central not only to language, but also to object recognition, the generation of appropriate actions, and the capacity to use knowledge in reasoning, planning, and problem-solving. Semantic memory refers to our repository of conceptual or factual knowledge about the world. This semantic knowledge base is typically viewed as including “general knowledge” as well as schematic representations of objects and events distilled from multiple experiences and retrieved independently from their original spatial or temporal context. Semantic cognition refers to our ability to flexibly use this knowledge to produce appropriate thoughts and behaviors. Semantic cognition includes at least two interactive components: a long-term store of semantic knowledge and semantic control processes, each supported by a different network. Conceptual representations are organized according to the semantic relationships between items, with different theories proposing different key organizational principles, including sensory versus functional features, domain-specific theory, embodied distributed concepts, and hub-and-spoke theory, in which distributed features are integrated within a heteromodal hub in the anterior temporal lobes. The activity within the network for semantic representation must often be controlled to ensure that the system generates representations and inferences that are suited to the immediate task or context. Semantic control is thought to include both controlled retrieval processes, in which knowledge relevant to the goal or context is accessed in a top-down manner when automatic retrieval is insufficient for the task, and post-retrieval selection to resolve competition between simultaneously active representations. 
Control of semantic retrieval is supported by a strongly left-lateralized brain network, which partially overlaps with the bilateral network that supports domain-general control, but extends beyond these sites to include regions not typically associated with executive control, including anterior inferior frontal gyrus and posterior middle temporal gyrus. The interaction of semantic control processes with conceptual representations allows meaningful thoughts and behavior to emerge, even when the context requires non-dominant features of the concept to be brought to the fore.


2014
Vol 116 (11)
pp. 1-26
Author(s):  
Joanna Gorin

Background/Context: Principles of evidential reasoning have often been discussed in the context of educational and psychological measurement with respect to construct validity and validity arguments. More recently, Mislevy proposed the metaphor of assessment as an evidentiary argument about students' learning and abilities given their behavior in particular circumstances. An assessment argument consists of a claim one wants to make, typically about student learning, and evidence that supports that claim. From this perspective, the quality of our assessments is a function of both whether we have built our arguments around the right types of claims and whether we have collected sufficient, persuasive evidence to support those claims.

Purpose: This paper examines limitations of the dominant practice in educational assessment of the 20th century, which focused on relatively simple claims and often relied on a single piece of evidence. It considers future educational assessment in terms of principles of evidential reasoning, focusing the discussion on the changes to the claims our assessments must support, the types of evidence needed to support these claims, and the statistical tools available to evaluate our evidence vis-à-vis the claims. An expanded view of assessment is advanced in which assessments based on multiple evidence sources from contextually rich, situated learning environments, including unconventional data regarding human competencies, improve our ability to make valid inferences and decisions about all education stakeholders.

Conclusions: For educational assessment to have the positive impact we intend on educational outcomes, future assessments must leverage technological and computational developments, as well as more contemporary models of human cognition, to build robust, complex evidential arguments about the critical competencies that are likely to determine individuals' success in 21st-century society.


2020
pp. 1-13
Author(s):  
Jeffrey N. Chiang
Yujia Peng
Hongjing Lu
Keith J. Holyoak
Martin M. Monti

The ability to generate and process semantic relations is central to many aspects of human cognition. Theorists have long debated whether such relations are coarsely coded as links in a semantic network or finely coded as distributed patterns over some core set of abstract relations. The form and content of the conceptual and neural representations of semantic relations are yet to be empirically established. Using sequential presentation of verbal analogies, we compared neural activity during analogy judgments with predictions derived from alternative computational models of relational dissimilarity to adjudicate among rival accounts of how semantic relations are coded and compared in the brain. We found that a frontoparietal network encodes the three relation types included in the design. A computational model based on semantic relations coded as distributed representations over a pool of abstract relations predicted neural activity for individual relations within the left superior parietal cortex and for second-order comparisons of relations within a broader left-lateralized network.


2012
Vol 367 (1585)
pp. 103-117
Author(s):  
Katerina Pastra
Yiannis Aloimonos

Language and action have been found to share a common neural basis and, in particular, a common 'syntax': an analogous hierarchical and compositional organization. While language structure analysis has led to the formulation of different grammatical formalisms and associated discriminative or generative computational models, the structure of action is still elusive, and so are the related computational models. However, structuring action has important implications for action learning and generalization, in both human cognition research and computation. In this study, we present a biologically inspired generative grammar of action, which employs the structure-building operations and principles of Chomsky's Minimalist Programme as a reference model. In this grammar, action terminals combine hierarchically into temporal sequences of actions of increasing complexity; the actions are bound with the involved tools and affected objects and are governed by certain goals. We show how the tool role and the affected-object role of an entity within an action drive the derivation of the action syntax in this grammar and control recursion, merge and move, the latter being mechanisms that manifest themselves not only in human language, but in human action too.
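The core structure-building operation can be sketched as a Minimalist-style binary Merge applied to action constituents: terminals (motor acts, tools, affected objects) combine pairwise into hierarchical, goal-labelled trees. The labels and example action below are invented for illustration; this is not the paper's formal grammar.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Terminal:
    label: str

@dataclass(frozen=True)
class Node:
    label: str
    left: "Tree"
    right: "Tree"

Tree = Union[Terminal, Node]

def merge(a: Tree, b: Tree, label: str) -> Node:
    """Binary, recursive structure building: Merge(a, b) -> {label, a, b}."""
    return Node(label, a, b)

# A tool-use constituent [grasp knife] merges with the affected object
# 'bread' into a goal-governed action [cut-bread [grasp knife] bread].
grasp = merge(Terminal("grasp"), Terminal("knife"), "tool-use")
cut = merge(grasp, Terminal("bread"), "cut-bread")

def depth(t: Tree) -> int:
    if isinstance(t, Terminal):
        return 1
    return 1 + max(depth(t.left), depth(t.right))

print(depth(cut))  # 3: the structure is hierarchical, not a flat sequence
```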


Author(s):  
Michael N. Jones
Jon Willits
Simon Dennis

Meaning is a fundamental component of nearly all aspects of human cognition, but formal models of semantic memory have classically lagged behind many other areas of cognition. However, computational models of semantic memory have seen a surge of progress in the last two decades, advancing our knowledge of how meaning is constructed from experience, how knowledge is represented and used, and which processes are likely culprits in disorders characterized by semantic impairment. This chapter provides an overview of several recent clusters of models and trends in the literature, including modern connectionist and distributional models of semantic memory, and contemporary advances in grounding semantic models with perceptual information and models of compositional semantics. Several common lessons have emerged from both the connectionist and distributional literatures, and we attempt to synthesize these themes to better focus future developments in semantic modeling.


2016
Vol 116 (2)
pp. 812-824
Author(s):  
Samuel Andrew Hires
Adam Schuyler
Jonathan Sy
Vincent Huang
Isis Wyche
...  

The sense of touch is represented by neural activity patterns evoked by mechanosensory input forces. The rodent whisker system is exceptional for studying the neurophysiology of touch in part because these forces can be precisely computed from video of whisker deformation. We evaluate the accuracy of a standard model of whisker bending, which assumes quasi-static dynamics and a linearly tapered conical profile, using controlled whisker deflections. We find significant discrepancies between model and experiment: real whiskers bend more than predicted upon contact at locations in the middle of the whisker and less at distal locations. Thus whiskers behave as if their stiffness near the base and near the tip is larger than expected for a homogeneous cone. We assess whether contact direction, friction, inhomogeneous elasticity, whisker orientation, or nonconical shape could explain these deviations. We show that a thin-middle taper of mouse whisker shape accounts for the majority of this behavior. This taper is conserved across rows and columns of the whisker array. The taper has a large effect on the touch-evoked forces and the ease with which whiskers slip past objects, which are key drivers of neural activity in tactile object localization and identification. This holds for orientations with intrinsic whisker curvature pointed toward, away from, or down from objects, validating two-dimensional models of simple whisker-object interactions. The precision of computational models relating sensory input forces to neural activity patterns can be quantitatively enhanced by taking thin-middle taper into account with a simple corrective function that we provide.
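The standard model the authors evaluate, a whisker as a quasi-static, linearly tapered (conical) Euler-Bernoulli cantilever, can be sketched numerically. All material and shape values below are illustrative order-of-magnitude choices, not the paper's measured mouse-whisker parameters.

```python
import numpy as np

E = 3.0e9                     # Young's modulus, Pa (illustrative)
L = 0.02                      # whisker length, m
r_base, r_tip = 50e-6, 2e-6   # base and tip radii of the cone, m

def deflection_at_contact(a, F=1e-6, n=20000):
    """Deflection at arc position a of a base-fixed cantilever loaded by a
    point force F at a:  y(a) = integral_0^a M(x) (a - x) / (E I(x)) dx,
    with bending moment M(x) = F (a - x) and I(x) = pi r(x)^4 / 4."""
    x = np.linspace(0.0, a, n, endpoint=False)
    r = r_base + (r_tip - r_base) * x / L   # linear taper along the whisker
    I = np.pi * r ** 4 / 4.0                # second moment of area
    integrand = F * (a - x) ** 2 / (E * I)
    return float(np.sum(integrand) * (a / n))

# Compliance rises steeply toward the tip: the same force bends the whisker
# far more for distal contacts than for proximal ones, which is why deviations
# from the conical profile matter for touch-evoked forces.
for frac in (0.3, 0.6, 0.9):
    print(f"contact at {frac:.0%} of length: "
          f"{deflection_at_contact(frac * L):.2e} m")
```

A thin-middle taper, as the paper reports, would replace the linear `r(x)` with a profile that dips below the cone in the middle, stiffening the whisker near the base and tip relative to this model.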

