Finding Concepts in Brain Patterns

Author(s):  
Elizabeth Musz ◽  
Sharon L. Thompson-Schill

Semantic memory is composed of one’s accumulated world knowledge. This includes one’s stored factual information about real-world objects and animals, which enables one to recognize and interact with the things in one’s environment. How is this semantic information organized, and where is it stored in the brain? Newly developed functional magnetic resonance imaging (fMRI) methods have provided exciting and innovative approaches to studying these questions. In particular, several recent fMRI investigations have examined the neural bases of semantic knowledge using similarity-based approaches. In similarity models, data from direct (i.e., neural) and indirect (i.e., subjective, psychological) measurements are interpreted as proximity data that provide information about the relationships among object concepts in an abstract, high-dimensional space. Concepts are encoded as points in this conceptual space, such that the semantic relatedness between two concepts is determined by their distance from one another. Using this approach, neuroimaging studies have offered compelling insights into several open-ended questions about how object concepts are represented in the brain. This chapter briefly describes how similarity spaces are computed from both behavioral data and spatially distributed fMRI activity patterns. Then, it reviews empirical reports that relate observed neural similarity spaces to various models of semantic similarity. The chapter examines how these methods have both shaped and informed our current understanding of the neural representation of conceptual information about real-world objects.
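
To make the similarity-space approach concrete, the following minimal Python sketch builds a neural dissimilarity matrix from voxel activity patterns and compares it with a behavioral dissimilarity model. The array names, sizes, and random placeholder data are illustrative assumptions, not the analyses of any specific study reviewed in the chapter.

```python
# Minimal sketch of a similarity-based (RSA-style) comparison between neural and
# behavioral similarity spaces. All data below are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

n_concepts, n_voxels = 20, 500
rng = np.random.default_rng(0)
neural_patterns = rng.standard_normal((n_concepts, n_voxels))   # hypothetical fMRI patterns, one row per concept
behavioral_dissim = squareform(rng.random(n_concepts * (n_concepts - 1) // 2))  # hypothetical relatedness model

# Neural similarity space: 1 - Pearson correlation between each pair of activity patterns.
neural_dissim = squareform(pdist(neural_patterns, metric="correlation"))

# Compare the two spaces over the unique (off-diagonal) pairs with a rank correlation.
iu = np.triu_indices(n_concepts, k=1)
rho, p = spearmanr(neural_dissim[iu], behavioral_dissim[iu])
print(f"neural-behavioral similarity match: rho = {rho:.2f}, p = {p:.3f}")
```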

2016 ◽  
Author(s):  
Waitsang Keung ◽  
Daniel Osherson ◽  
Jonathan D. Cohen

Abstract The neural representation of an object can change depending on its context. For instance, a horse may be more similar to a bear than to a dog in terms of size, but more similar to a dog in terms of domesticity. We used behavioral measures of similarity together with representational similarity analysis and functional connectivity of fMRI data in humans to reveal how the neural representation of semantic knowledge can change to match the current goal demands. Here we present evidence that objects similar to each other in a given context are also represented more similarly in the brain and that these similarity relationships are modulated by context-specific activations in frontal areas. Significance statement: The judgment of similarity between two objects can differ in different contexts. Here we report a study that tested the hypothesis that brain areas associated with task context and cognitive control modulate semantic representations of objects in a task-specific way. We first demonstrate that task instructions impact how objects are represented in the brain. We then show that the expression of these representations is correlated with activity in regions of frontal cortex widely thought to represent context, attention, and control. In addition, we introduce spatial variance as a novel index of representational expression and attentional modulation. This promises to lay the groundwork for more exacting studies of the neural basis of semantics, as well as the dynamics of attentional modulation.
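
The context-dependence claim lends itself to a simple illustration: under each task context, neural pattern dissimilarities should correlate best with the behavioral similarity ratings collected under that same context. The sketch below assumes placeholder data and two example contexts ("size" and "domesticity"); it is a sketch of the idea, not the authors' analysis pipeline.

```python
# Hedged sketch: does neural similarity in a given task context track the
# behavioral similarity model for that same context? All data are placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_objects, n_voxels = 12, 300
contexts = ["size", "domesticity"]

# One set of voxel patterns and one condensed behavioral dissimilarity vector per context.
neural = {c: rng.standard_normal((n_objects, n_voxels)) for c in contexts}
behavior = {c: rng.random(n_objects * (n_objects - 1) // 2) for c in contexts}

for measured in contexts:
    neural_dissim = pdist(neural[measured], metric="correlation")
    for model in contexts:
        rho, _ = spearmanr(neural_dissim, behavior[model])
        print(f"neural ({measured}) vs behavioral ({model}): rho = {rho:.2f}")

# The prediction is that the matched-context correlations exceed the mismatched
# ones, i.e., the representation shifts to match the current goal.
```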


2018 ◽  
Author(s):  
Mark Allen Thornton ◽  
Miriam E. Weaverdyck ◽  
Diana Tamir

Social life requires us to treat each person according to their unique disposition: habitually enthusiastic friends need occasional grounding, whereas pessimistic colleagues require cheering-up. To tailor our behavior to specific individuals, we must represent their idiosyncrasies. Here we advance a hypothesis about how the brain achieves this goal: our representations of other people reflect the mental states we perceive those people to habitually experience. That is, rather than representing other people via traits, our brains represent people as the sums of their states. For example, if a perceiver observes that another person is frequently cheerful, sometimes thoughtful, and rarely grumpy, the perceiver’s representation of that person will be composed of their representations of the mental states cheerfulness, thoughtfulness, and grumpiness, combined in a corresponding ratio. We tested this hypothesis by measuring whether neural representations of people could be accurately reconstructed by summing state representations. Separate participants underwent functional neuroimaging while considering famous individuals and individual mental states. Online participants rated how often each famous person experiences each state. Results supported the summed state hypothesis: frequency-weighted sums of state-specific brain activity patterns accurately reconstructed person-specific patterns. Moreover, the summed state account outperformed the established alternative – that people represent others using trait dimensions – in explaining interpersonal similarity, as measured through neural patterns, explicit ratings, binary choices, reaction times, and the semantics of biographical text. Together these findings demonstrate that the brain represents other people as the sums of the mental states they are perceived to experience.
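
The summed-state reconstruction logic can be sketched in a few lines: a person-specific pattern is predicted as the frequency-weighted sum of state-specific patterns and then compared with the observed pattern. Everything below (array names, sizes, random values) is an illustrative assumption rather than the study's actual data or code.

```python
# Toy illustration of summed-state reconstruction. All data are random placeholders.
import numpy as np

rng = np.random.default_rng(2)
n_states, n_voxels = 60, 400
state_patterns = rng.standard_normal((n_states, n_voxels))   # one activity pattern per mental state
state_frequencies = rng.random(n_states)                     # rated frequency of each state for one person

# Frequency-weighted sum of state patterns = predicted person-specific pattern.
weights = state_frequencies / state_frequencies.sum()
predicted_person = weights @ state_patterns

# Compare the prediction with the pattern evoked while considering that person.
person_pattern = rng.standard_normal(n_voxels)               # placeholder observed pattern
fit = np.corrcoef(predicted_person, person_pattern)[0, 1]
print(f"reconstruction accuracy (correlation): {fit:.2f}")
```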


Psychology ◽  
2019 ◽  
Author(s):  
Michael N. Jones ◽  
Johnathan Avery

Semantic memory refers to our general world knowledge that encompasses memory for concepts, facts, and the meanings of words and other symbolic units that constitute formal communication systems such as language or math. In the classic hierarchical view of memory, declarative memory was subdivided into two independent modules: episodic memory, which is our autobiographical store of individual events, and semantic memory, which is our general store of abstracted knowledge. However, more recent theoretical accounts have greatly reduced the independence of these two memory systems, and episodic memory is typically viewed as a gateway to semantic memory accessed through the process of abstraction. Modern accounts view semantic memory as deeply rooted in sensorimotor experience, abstracted across many episodic memories to highlight the stable characteristics and mute the idiosyncratic ones. A great deal of research in neuroscience has focused on both how the brain creates semantic memories and what brain regions share the responsibility for storage and retrieval of semantic knowledge. These include many classic experiments that studied the behavior of individuals with brain damage and various types of semantic disorders but also more modern studies that employ neuroimaging techniques to study how the brain creates and stores semantic memories. Classically, semantic memory had been treated as a miscellaneous area of study for anything in declarative memory that was not clearly within the realm of episodic memory, and formal models of meaning in memory did not advance at the pace of models of episodic memory. However, recent developments in neural networks and corpus-based tools for modeling text have greatly increased the sophistication of models of semantic memory. There now exist several good computational accounts to explain how humans transform first-order experience with the world into deep semantic representations and how these representations are retrieved and used in meaning-based behavioral tasks. The purpose of this article is to provide the reader with the most salient publications, reviews, and themes of major advances in the various subfields of semantic memory over the past forty-five years. For more in-depth coverage, we refer the reader to the manuscripts in the General Overviews section.
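
As a toy illustration of the corpus-based modeling tradition mentioned above, the sketch below abstracts word meanings from co-occurrence statistics across many small "episodes" (here, sentences) and compares them by cosine similarity. The four-sentence corpus and the specific counting scheme are assumptions made only for the example, not any particular published model.

```python
# Toy distributional model: word meaning as a vector of within-sentence co-occurrence counts.
import numpy as np
from itertools import combinations

corpus = [
    "the dog chased the cat",
    "the cat chased the mouse",
    "the horse ate the hay",
    "the dog ate the bone",
]
vocab = sorted({w for s in corpus for w in s.split()})
index = {w: i for i, w in enumerate(vocab)}

cooc = np.zeros((len(vocab), len(vocab)))
for s in corpus:
    for a, b in combinations(s.split(), 2):
        cooc[index[a], index[b]] += 1
        cooc[index[b], index[a]] += 1

def similarity(w1, w2):
    v1, v2 = cooc[index[w1]], cooc[index[w2]]
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

print(similarity("dog", "cat"))   # words that share contexts score higher...
print(similarity("dog", "hay"))   # ...than words drawn from different episodes
```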


2009 ◽  
Vol 34 ◽  
pp. 443-498 ◽  
Author(s):  
E. Gabrilovich ◽  
S. Markovitch

Adequate representation of natural language semantics requires access to vast amounts of common sense and domain-specific world knowledge. Prior work in the field was based on purely statistical techniques that did not make use of background knowledge, on limited lexicographic knowledge bases such as WordNet, or on huge manual efforts such as the CYC project. Here we propose a novel method, called Explicit Semantic Analysis (ESA), for fine-grained semantic interpretation of unrestricted natural language texts. Our method represents meaning in a high-dimensional space of concepts derived from Wikipedia, the largest encyclopedia in existence. We explicitly represent the meaning of any text in terms of Wikipedia-based concepts. We evaluate the effectiveness of our method on text categorization and on computing the degree of semantic relatedness between fragments of natural language text. Using ESA results in significant improvements over the previous state of the art in both tasks. Importantly, due to the use of natural concepts, the ESA model is easy to explain to human users.
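
The core of ESA is easy to sketch: each text is mapped to a weighted vector over encyclopedia concepts, and relatedness is the cosine between those vectors. The toy version below substitutes three hand-written "articles" for a real Wikipedia index and uses TF-IDF bag-of-words weighting purely for illustration; it is a sketch of the idea, not the authors' implementation.

```python
# Toy Explicit-Semantic-Analysis-style relatedness over stand-in "concept articles".
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

concept_articles = {   # placeholders for Wikipedia articles
    "Dog":      "dog domestic animal pet bark breed loyal companion",
    "Horse":    "horse large animal riding gallop farm mammal",
    "Mortgage": "mortgage loan bank interest housing payment credit",
}

vectorizer = TfidfVectorizer()
article_matrix = vectorizer.fit_transform(concept_articles.values())  # concepts x words

def esa_vector(text: str) -> np.ndarray:
    """Weight of the text on each concept = overlap between its words and that concept's article."""
    text_vec = vectorizer.transform([text])
    return (article_matrix @ text_vec.T).toarray().ravel()

def relatedness(a: str, b: str) -> float:
    va, vb = esa_vector(a), esa_vector(b)
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return float(va @ vb / denom) if denom else 0.0

print(relatedness("a loyal pet", "riding a farm animal"))   # related texts
print(relatedness("a loyal pet", "bank loan interest"))     # unrelated texts
```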


1998 ◽  
Vol 34 (2) ◽  
pp. 387-414 ◽  
Author(s):  
ALEX LASCARIDES ◽  
ANN COPESTAKE

In this paper, we explore the interaction between lexical semantics and pragmatics. We argue that linguistic processing is informationally encapsulated and utilizes relatively simple ‘taxonomic’ lexical semantic knowledge. On this basis, defeasible lexical generalisations deliver defeasible parts of logical form. In contrast, pragmatic inference is open-ended and involves arbitrary real-world knowledge. Two axioms specify when pragmatic defaults override lexical ones. We demonstrate that modelling this interaction allows us to achieve a more refined interpretation of words in a discourse context than either the lexicon or pragmatics could do on their own.


2016 ◽  
Author(s):  
Ghootae Kim ◽  
Kenneth A. Norman ◽  
Nicholas B. Turk-Browne

Abstract When an item is predicted in a particular context but the prediction is violated, memory for that item is weakened (Kim et al., 2014). Here we explore what happens when such previously mispredicted items are later re-encountered. According to prior neural network simulations, this sequence of events - misprediction and subsequent restudy - should lead to differentiation of the item's neural representation from the previous context (on which the misprediction was based). Specifically, misprediction weakens connections in the representation to features shared with the previous context, and restudy allows new features to be incorporated into the representation that are not shared with the previous context. This cycle of misprediction and restudy should have the net effect of moving the item's neural representation away from the neural representation of the previous context. We tested this hypothesis using fMRI, by tracking changes in item-specific BOLD activity patterns in the hippocampus, a key structure for representing memories and generating predictions. In left CA2/3/DG, we found greater neural differentiation for items that were repeatedly mispredicted and restudied compared to items from a control condition that was identical except without misprediction. We also measured prediction strength in a trial-by-trial fashion and found that greater misprediction for an item led to more differentiation, further supporting our hypothesis. Thus, the consequences of prediction error go beyond memory weakening: If the mispredicted item is restudied, the brain adaptively differentiates its memory representation to improve the accuracy of subsequent predictions and to shield it from further weakening. Significance: Competition between overlapping memories leads to weakening of non-target memories over time, making it easier to access target memories. However, a non-target memory in one context might become a target memory in another context. How do such memories get re-strengthened without increasing competition again? Computational models suggest that the brain handles this by reducing neural connections to the previous context and adding connections to new features that were not part of the previous context. The result is neural differentiation away from the previous context. Here we provide support for this theory, using fMRI to track neural representations of individual memories in the hippocampus and how they change based on learning.
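
The differentiation measure implied here can be illustrated with a simple before/after pattern-similarity comparison: if misprediction plus restudy pushes an item away from its previous context, the item-to-context pattern correlation should drop across scans. The patterns and the 0.5 mixing weight below are illustrative assumptions only.

```python
# Sketch of a differentiation score: change in item-to-context pattern similarity.
import numpy as np

rng = np.random.default_rng(3)
n_voxels = 200
context_pattern = rng.standard_normal(n_voxels)
item_pre = context_pattern + 0.5 * rng.standard_normal(n_voxels)   # item initially overlaps its context
item_post = rng.standard_normal(n_voxels)                          # after misprediction + restudy

def pattern_similarity(a, b):
    return np.corrcoef(a, b)[0, 1]

differentiation = (pattern_similarity(item_pre, context_pattern)
                   - pattern_similarity(item_post, context_pattern))
print(f"differentiation score: {differentiation:.2f}")   # positive = moved away from the previous context
```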


2021 ◽  
pp. 1-15
Author(s):  
Konstantinos Bromis ◽  
Petar P. Raykov ◽  
Leah Wickens ◽  
Warrick Roseboom ◽  
Chris M. Bird

Abstract An episodic memory is specific to an event that occurred at a particular time and place. However, the elements that comprise the event—the location, the people present, and their actions and goals—might be shared with numerous other similar events. Does the brain preferentially represent certain elements of a remembered event? If so, which elements dominate its neural representation: those that are shared across similar events, or the novel elements that define a specific event? We addressed these questions by using a novel experimental paradigm combined with fMRI. Multiple events were created involving conversations between two individuals using the format of a television chat show. Chat show “hosts” occurred repeatedly across multiple events, whereas the “guests” were unique to only one event. Before learning the conversations, participants were scanned while viewing images or names of the (famous) individuals to be used in the study to obtain person-specific activity patterns. After learning all the conversations over a week, participants were scanned for a second time while they recalled each event multiple times. We found that during recall, person-specific activity patterns within the posterior midline network were reinstated for the hosts of the shows but not the guests, and that reinstatement of the hosts was significantly stronger than the reinstatement of the guests. These findings demonstrate that it is the more generic, familiar, and predictable elements of an event that dominate its neural representation compared with the more idiosyncratic, event-defining elements.
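
Reinstatement in this design amounts to correlating a person-specific pattern from the pre-learning scan with the pattern evoked during recall of an event involving that person. The sketch below hard-codes the expected asymmetry (recall resembling the host more than the guest) purely to illustrate the measure; the weights and arrays are assumptions, not the reported data.

```python
# Sketch of a reinstatement index: template-to-recall pattern correlation.
import numpy as np

rng = np.random.default_rng(4)
n_voxels = 250
host_template = rng.standard_normal(n_voxels)    # person-specific pattern from the pre-learning scan
guest_template = rng.standard_normal(n_voxels)

# Placeholder recall pattern, built to resemble the host more than the guest.
recall_pattern = 0.6 * host_template + 0.1 * guest_template + rng.standard_normal(n_voxels)

def reinstatement(template, recall):
    return np.corrcoef(template, recall)[0, 1]

print(f"host reinstatement:  {reinstatement(host_template, recall_pattern):.2f}")
print(f"guest reinstatement: {reinstatement(guest_template, recall_pattern):.2f}")
```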


2015 ◽  
Vol 27 (11) ◽  
pp. 2215-2228 ◽  
Author(s):  
Mante S. Nieuwland

How does knowledge of real-world events shape our understanding of incoming language? Do temporal terms like “before” and “after” impact the online recruitment of real-world event knowledge? These questions were addressed in two ERP experiments, wherein participants read sentences that started with “before” or “after” and contained a critical word that rendered each sentence true or false (e.g., “Before/After the global economic crisis, securing a mortgage was easy/harder”). The critical words were matched on predictability, rated truth value, and semantic relatedness to the words in the sentence. Regardless of whether participants explicitly verified the sentences or not, false-after-sentences elicited larger N400s than true-after-sentences, consistent with the well-established finding that semantic retrieval of concepts is facilitated when they are consistent with real-world knowledge. However, although truth judgments did not differ between before- and after-sentences, no such N400 truth value effect occurred in before-sentences; instead, false-before-sentences elicited enhanced subsequent positive ERPs. The temporal term “before” itself elicited more negative ERPs at central electrode channels than “after.” These patterns of results show that, irrespective of ultimate sentence truth value judgments, semantic retrieval of concepts is momentarily facilitated when they are consistent with the known event outcome compared to when they are not. However, this inappropriate facilitation incurs later processing costs, as reflected in the subsequent positive ERP deflections. The results suggest that automatic activation of event knowledge can impede the incremental semantic processes required to establish sentence truth value.


Author(s):  
Hugues Duffau

Investigating the neural and physiological basis of language is one of the most important challenges in neurosciences. Direct electrical stimulation (DES), usually performed in awake patients during surgery for cerebral lesions, is a reliable tool for detecting both cortical and subcortical (white matter and deep grey nuclei) regions crucial for cognitive functions, especially language. DES transiently interacts locally with a small cortical or axonal site, but also nonlocally, as the focal perturbation will disrupt the entire subnetwork sustaining a given function. Thus, in contrast to functional neuroimaging, DES represents a unique opportunity to identify with great accuracy and reproducibility, in vivo in humans, the structures that are actually indispensable to the function, by inducing a transient virtual lesion based on the inhibition of a subcircuit lasting a few seconds. Currently, this is the sole technique that is able to directly investigate the functional role of white matter tracts in humans. Thus, combining transient disturbances elicited by DES with the anatomical data provided by pre- and postoperative MRI makes it possible to achieve reliable anatomo-functional correlations, supporting a network organization of the brain, and leading to the reappraisal of models of language representation. Finally, combining serial peri-operative functional neuroimaging and online intraoperative DES allows the study of mechanisms underlying neuroplasticity. This chapter critically reviews the basic principles of DES, its advantages and limitations, and what DES can reveal about the neural foundations of language, that is, the large-scale distribution of language areas in the brain, their connectivity, and their ability to reorganize.

