Neural dynamics of semantic composition

2019 ◽  
Vol 116 (42) ◽  
pp. 21318-21327 ◽  
Author(s):  
Bingjiang Lyu ◽  
Hun S. Choi ◽  
William D. Marslen-Wilson ◽  
Alex Clarke ◽  
Billi Randall ◽  
...  

Human speech comprehension is remarkable for its immediacy and rapidity. The listener interprets an incrementally delivered auditory input, millisecond by millisecond as it is heard, in terms of complex multilevel representations of relevant linguistic and nonlinguistic knowledge. Central to this process are the neural computations involved in semantic combination, whereby the meanings of words are combined into more complex representations, as in the combination of a verb and its following direct object (DO) noun (e.g., “eat the apple”). These combinatorial processes form the backbone for incremental interpretation, enabling listeners to integrate the meaning of each word as it is heard into their dynamic interpretation of the current utterance. Focusing on the verb-DO noun relationship in simple spoken sentences, we applied multivariate pattern analysis and computational semantic modeling to source-localized electro/magnetoencephalographic data to map out the specific representational constraints that are constructed as each word is heard, and to determine how these constraints guide the interpretation of subsequent words in the utterance. Comparing context-independent semantic models of the DO noun with contextually constrained noun models reflecting the semantic properties of the preceding verb, we found that only the contextually constrained model showed a significant fit to the brain data. Pattern-based measures of directed connectivity across the left hemisphere language network revealed a continuous information flow among temporal, inferior frontal, and inferior parietal regions, underpinning the verb’s modification of the DO noun’s activated semantics. These results provide a plausible neural substrate for seamless real-time incremental interpretation on the observed millisecond time scales.
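As a rough illustration of the model-brain comparison described above, the sketch below (simulated arrays with hypothetical names such as patterns and word_vectors, not the authors' pipeline) correlates a semantic-model RDM with time-resolved neural RDMs built from source-localized response patterns.

# Minimal sketch of time-resolved RSA between a semantic model and neural patterns.
# `patterns` stands in for source-localized responses to the direct-object nouns
# (n_words x n_sources x n_times); `word_vectors` stands in for context-independent
# or contextually constrained semantic vectors. All data here are simulated.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_words, n_sources, n_times, n_dims = 20, 50, 100, 300
patterns = rng.standard_normal((n_words, n_sources, n_times))
word_vectors = rng.standard_normal((n_words, n_dims))

model_rdm = pdist(word_vectors, metric="correlation")      # condensed model RDM

fits = np.empty(n_times)
for t in range(n_times):
    neural_rdm = pdist(patterns[:, :, t], metric="correlation")
    fits[t], _ = spearmanr(model_rdm, neural_rdm)           # model-brain fit at time t

print("peak model fit at sample", fits.argmax())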

Open Mind ◽  
2019 ◽  
Vol 3 ◽  
pp. 1-12 ◽  
Author(s):  
Sarah L. Dziura ◽  
James C. Thompson

Social functioning involves learning about the social networks in which we live and interact; knowing not just our friends, but also who is friends with our friends. This study utilized an incidental learning paradigm and representational similarity analysis (RSA), a functional MRI multivariate pattern analysis technique, to examine the relationship between learning social networks and the brain’s response to the faces within the networks. We found that the accuracy of learning face-pair relationships through observation is correlated with neural similarity patterns for those pairs in the left temporoparietal junction (TPJ), the left fusiform gyrus, and the subcallosal ventromedial prefrontal cortex (vmPFC), all areas previously implicated in social cognition. This model was also significant in portions of the cerebellum and thalamus. These results show that the similarity of neural patterns represents how accurately we understand the closeness of any two faces within a network. Our findings indicate that these areas of the brain not only process knowledge and understanding of others, but also support learning relations between individuals in groups.
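The logic of the behavior-brain relationship reported above can be sketched as follows (simulated inputs; roi_patterns and pair_accuracy are hypothetical placeholders, not the study's data): pairwise neural similarity of face-evoked patterns in a region of interest is rank-correlated with how accurately each face pair's relationship was learned.

# Sketch: relate learning accuracy for face pairs to neural similarity of the
# corresponding face-evoked patterns in one region of interest (simulated data).
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_faces, n_voxels = 10, 200
roi_patterns = rng.standard_normal((n_faces, n_voxels))
pairs = list(combinations(range(n_faces), 2))
pair_accuracy = rng.uniform(0.4, 1.0, size=len(pairs))     # proportion correct per pair

neural_similarity = np.array(
    [np.corrcoef(roi_patterns[i], roi_patterns[j])[0, 1] for i, j in pairs]
)
rho, p = spearmanr(pair_accuracy, neural_similarity)
print(f"behavior-brain correlation: rho={rho:.2f}, p={p:.3f}")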


2014 ◽  
Vol 26 (1) ◽  
pp. 132-142 ◽  
Author(s):  
Thomas A. Carlson ◽  
J. Brendan Ritchie ◽  
Nikolaus Kriegeskorte ◽  
Samir Durvasula ◽  
Junsheng Ma

How does the brain translate an internal representation of an object into a decision about the object's category? Recent studies have uncovered the structure of object representations in inferior temporal cortex (IT) using multivariate pattern analysis methods. These studies have shown that representations of individual object exemplars in IT occupy distinct locations in a high-dimensional activation space, with object exemplar representations clustering into distinguishable regions based on category (e.g., animate vs. inanimate objects). In this study, we hypothesized that a representational boundary between category representations in this activation space also constitutes a decision boundary for categorization. We show that behavioral response times (RTs) for categorizing objects are well described by our activation space hypothesis. Interpreted in terms of classical and contemporary models of decision-making, our results suggest that the process of settling on an internal representation of a stimulus is itself partially constitutive of decision-making for object categorization.
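A hedged sketch of the activation-space idea, using simulated data rather than the authors' IT measurements: fit a linear category boundary to exemplar patterns, take each exemplar's distance from that boundary, and test whether that distance predicts categorization RT (the hypothesis predicts faster responses for exemplars farther from the boundary).

# Sketch: distance from a linear decision boundary in activation space vs. RT.
# `it_patterns`, `labels`, and `rts` are simulated stand-ins for measured quantities.
import numpy as np
from sklearn.linear_model import LogisticRegression
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_exemplars, n_features = 48, 100
labels = np.repeat([0, 1], n_exemplars // 2)                # e.g. animate vs. inanimate
it_patterns = rng.standard_normal((n_exemplars, n_features)) + labels[:, None] * 0.8
rts = rng.uniform(400, 900, size=n_exemplars)               # categorization RTs in ms

clf = LogisticRegression(max_iter=1000).fit(it_patterns, labels)
distance = np.abs(clf.decision_function(it_patterns))       # distance to the boundary

# The hypothesis predicts a negative correlation: larger distance, faster response.
rho, p = spearmanr(distance, rts)
print(f"distance-RT correlation: rho={rho:.2f}, p={p:.3f}")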


2017 ◽  
Author(s):  
Fernando M. Ramírez

The use of multivariate pattern analysis (MVPA) methods has enjoyed a rapid increase in popularity among neuroscientists over the past decade. More recently, similarity-based multivariate methods that aim not only to extract information regarding the class membership of stimuli from their associated brain patterns (say, decode a face from a potato), but also to understand the form of the underlying representational structure associated with stimulus dimensions of interest (say, 2D grating or 3D face orientation), have flourished under the name of Representational Similarity Analysis (RSA). However, data-preprocessing steps implemented prior to RSA can significantly change the covariance (and correlation) structure of the data, possibly leading to representational confusion, i.e., a researcher inferring that brain area A encodes information according to representational scheme X, and not Y, when the opposite is true. Here, I demonstrate with simulations that time-series demeaning (including z-scoring) can plausibly lead to representational confusion. Further, I expose potential interactions between the effects of data demeaning and how the brain happens to encode information. Finally, I emphasize the importance, in the context of similarity analyses, of at least occasionally explicitly considering the direction of pattern vectors in multivariate space, rather than focusing exclusively on the relative location of their endpoints. Overall, I expect this article will promote awareness of the impact of data demeaning on inferences regarding representational structure and neural coding.
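The core point is easy to demonstrate with a toy example (not the paper's simulations): demeaning each voxel across conditions, as happens implicitly with run-wise time-series demeaning, changes the pairwise correlations among condition patterns and hence the apparent similarity structure.

# Toy demonstration: pattern demeaning changes the correlation (similarity) structure.
import numpy as np

patterns = np.array([            # 3 conditions x 4 voxels, sharing a common baseline
    [5.0, 5.2, 4.8, 5.1],
    [5.1, 5.3, 4.9, 5.2],
    [4.9, 5.4, 5.0, 4.8],
])

raw_corr = np.corrcoef(patterns)                  # similarity of the raw patterns
demeaned = patterns - patterns.mean(axis=0)       # subtract the mean pattern per voxel
dem_corr = np.corrcoef(demeaned)                  # similarity after demeaning

print("raw pattern correlations:\n", np.round(raw_corr, 2))
print("demeaned pattern correlations:\n", np.round(dem_corr, 2))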


2014 ◽  
Vol 26 (3) ◽  
pp. 658-681 ◽  
Author(s):  
Andrew J. Anderson ◽  
Brian Murphy ◽  
Massimo Poesio

Most studies of conceptual knowledge in the brain focus on a narrow range of concrete conceptual categories, rely on the researchers' intuitions about which objects belong to these categories, and assume a broadly taxonomic organization of knowledge. In this fMRI study, we focus on concepts with a variety of concreteness levels; we use a state-of-the-art lexical resource (WordNet 3.1) as the source for a relatively large number of category distinctions and compare a taxonomic style of organization with a domain-based model (an example domain is Law). Participants mentally simulated situations associated with concepts when cued by text stimuli. Using multivariate pattern analysis, we find evidence that all Taxonomic categories and Domains can be distinguished from fMRI data and also observe a clear concreteness effect: Tools and Locations can be reliably predicted for unseen participants, but less concrete categories (e.g., Attributes, Communications, Events, Social Roles) can only be reliably discriminated within participants. A second concreteness effect relates to the interaction of Domain and Taxonomic category membership: Domain (e.g., relation to Law vs. Music) can be better predicted for less concrete categories. We repeated the analysis within anatomical regions, observing discrimination between all/most categories in the left middle occipital and left middle temporal gyri, and more specialized discrimination for the concrete categories Tools and Locations in the left precentral and fusiform gyri, respectively. Highly concrete/abstract Taxonomic categories and Domain were segregated in frontal regions. We conclude that both Taxonomic and Domain class distinctions are relevant for interpreting the neural structuring of concrete and abstract concepts.
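The two decoding regimes contrasted above, discrimination within participants versus prediction for unseen participants, can be sketched as follows with simulated data (hypothetical arrays, not the study's fMRI data): the same classifier is evaluated with within-participant cross-validation and with leave-one-participant-out cross-validation.

# Sketch of within-participant vs. unseen-participant decoding (simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

rng = np.random.default_rng(3)
n_subjects, n_trials, n_voxels = 8, 40, 120
X = rng.standard_normal((n_subjects * n_trials, n_voxels))
y = np.tile(np.repeat([0, 1], n_trials // 2), n_subjects)   # e.g. Tools vs. Locations
groups = np.repeat(np.arange(n_subjects), n_trials)         # participant labels
X += y[:, None] * 0.3                                        # injected category signal

clf = LogisticRegression(max_iter=1000)
within = np.mean([
    cross_val_score(clf, X[groups == s], y[groups == s], cv=5).mean()
    for s in range(n_subjects)
])                                                            # within-participant CV
across = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=groups).mean()
print(f"within-participant: {within:.2f}   unseen participants: {across:.2f}")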


2020 ◽  
Author(s):  
Zitong Lu ◽  
Yixuan Ku

In studies of cognitive neuroscience, multivariate pattern analysis (MVPA) is widely used because it offers richer information than traditional univariate analysis. Representational similarity analysis (RSA), one family of MVPA methods, has become an effective method for analyzing neural data by calculating the similarity between representations in the brain under different conditions. Moreover, RSA allows researchers to compare data from different modalities, and even to bridge data from different species. However, previous toolboxes have been designed for specific datasets. Here, we develop NeuroRA, a novel and easy-to-use Python-based toolbox for representational analysis. Our toolbox supports cross-modal analysis of multimodal neural data (e.g., EEG, MEG, fNIRS, ECoG, sEEG, neuroelectrophysiology, fMRI), behavioral data, and computer-simulated data. Compared with previous software packages, our toolbox is more comprehensive and powerful. Using NeuroRA, users can not only calculate the representational dissimilarity matrix (RDM), which reflects the representational similarity between different conditions, but also conduct a representational analysis among different RDMs to achieve cross-modal comparisons. In addition, users can calculate neural pattern similarity, spatiotemporal pattern similarity (STPS), and inter-subject correlation (ISC) with this toolbox. NeuroRA also provides functions for statistical analysis, storage, and visualization of results. We introduce the structure, modules, features, and algorithms of NeuroRA in this paper, as well as examples applying the toolbox to published datasets.
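A generic sketch of the core computation such a toolbox automates, written in plain NumPy/SciPy rather than NeuroRA's own API (the variable and function names below are not NeuroRA's): build an RDM per modality from condition-wise patterns, then rank-correlate the two RDMs for a cross-modal comparison.

# Generic RSA sketch: one RDM per modality, then a cross-modal RDM comparison.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_conditions = 12
eeg_patterns = rng.standard_normal((n_conditions, 64))      # e.g. channel patterns
fmri_patterns = rng.standard_normal((n_conditions, 500))    # e.g. ROI voxel patterns

eeg_rdm = squareform(pdist(eeg_patterns, metric="correlation"))
fmri_rdm = squareform(pdist(fmri_patterns, metric="correlation"))

# Cross-modal comparison: rank-correlate the upper triangles of the two RDMs.
tri = np.triu_indices(n_conditions, k=1)
rho, p = spearmanr(eeg_rdm[tri], fmri_rdm[tri])
print(f"EEG-fMRI representational similarity: rho={rho:.2f}")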


2017 ◽  
Author(s):  
J. Brendan Ritchie ◽  
David Michael Kaplan ◽  
Colin Klein

Since its introduction, multivariate pattern analysis (MVPA), or “neural decoding”, has transformed the field of cognitive neuroscience. Underlying its influence is a crucial inference, which we call the Decoder’s Dictum: if information can be decoded from patterns of neural activity, then this provides strong evidence about what information those patterns represent. Although the Dictum is a widely held and well-motivated principle in decoding research, it has received scant philosophical attention. We critically evaluate the Dictum, arguing that it is false: decodability is a poor guide for revealing the content of neural representations. However, we also suggest how the Dictum can be improved upon in order to better justify inferences about neural representation using MVPA.


2019 ◽  
Author(s):  
Zhiai Li ◽  
Hongbo Yu ◽  
Yongdi Zhou ◽  
Tobias Kalenscher ◽  
Xiaolin Zhou

People feel guilty not only for transgressions of social norms or expectations that they are causally responsible for, but also for transgressions committed by those they identify as in-group members (i.e., collective or group-based guilt). However, the neurocognitive basis of group-based guilt and its relation to personal guilt are unknown. To address these questions, we combined functional MRI with an interaction-based minimal group paradigm in which participants either directly caused harm to victims (i.e., personal guilt) or observed in-group members cause harm to the victims (i.e., group-based guilt). In three experiments (N = 90), we demonstrated that perceived shared responsibility with in-group members for the transgression predicted behavioral and neural manifestations of group-based guilt. Multivariate pattern analysis of the functional MRI data showed that group-based guilt recruited a brain representation in the anterior middle cingulate cortex similar to that of personal guilt. These results broaden our understanding of how group membership is integrated into social emotions.


2012 ◽  
Vol 107 (2) ◽  
pp. 628-639 ◽  
Author(s):  
David E. J. Linden ◽  
Nikolaas N. Oosterhof ◽  
Christoph Klein ◽  
Paul E. Downing

How is working memory for different visual categories supported in the brain? Do the same principles of cortical specialization that govern the initial processing and encoding of visual stimuli also apply to their short-term maintenance? We investigated these questions with a delayed discrimination paradigm for faces, bodies, flowers, and scenes and applied both univariate and multivariate analyses to functional magnetic resonance imaging (fMRI) data. Activity during encoding followed the well-known specialization in posterior areas. During the delay interval, activity shifted to frontal and parietal regions but was not specialized for category. Conversely, activity in visual areas returned to baseline during that interval but showed some evidence of category specialization on multivariate pattern analysis (MVPA). We conclude that principles of cortical activation differ between encoding and maintenance of visual material. Whereas perceptual processes rely on specialized regions in occipitotemporal cortex, maintenance involves the activation of a frontoparietal network that seems to require little specialization at the category level. We also confirm previous findings that MVPA can extract information from fMRI signals in the absence of suprathreshold activation and that such signals from visual areas can reflect the material stored in memory.


2019 ◽  
Author(s):  
Andrew A. Chen ◽  
Joanne C. Beer ◽  
Nicholas J. Tustison ◽  
Philip A. Cook ◽  
Russell T. Shinohara ◽  
...  

To acquire larger samples for answering complex questions in neuroscience, researchers have increasingly turned to multi-site neuroimaging studies. However, these studies are hindered by differences in images acquired across multiple scanners. These effects have been shown to bias comparisons between scanners, mask biologically meaningful associations, and even introduce spurious associations. To address this, the field has focused on harmonizing data by removing scanner-related effects in the mean and variance of measurements. Contemporaneously with the increase in popularity of multi-center imaging, the use of multivariate pattern analysis (MVPA) has also become commonplace. These approaches have been shown to provide improved sensitivity, specificity, and power because they model the joint relationship across measurements in the brain. In this work, we demonstrate that methods for removing scanner effects in mean and variance may not be sufficient for MVPA. This stems from the fact that such methods fail to address how correlations between measurements can vary across scanners. Data from the Alzheimer’s Disease Neuroimaging Initiative are used to show that considerable differences in covariance exist across scanners and that popular harmonization techniques do not address this issue. We also propose a novel methodology that harmonizes the covariance of multivariate image measurements across scanners and demonstrate its improved performance in data harmonization.
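A toy illustration of the central claim (simulated two-scanner data, not ADNI): per-feature location/scale standardization removes scanner differences in mean and variance but leaves differences in between-feature correlation untouched.

# Toy illustration: mean/variance harmonization does not equalize covariance structure.
import numpy as np

rng = np.random.default_rng(5)
n = 500
cov_a = np.array([[1.0, 0.8], [0.8, 1.0]])     # scanner A: strongly correlated features
cov_b = np.array([[1.0, 0.1], [0.1, 1.0]])     # scanner B: weakly correlated features
site_a = rng.multivariate_normal([0.0, 0.0], cov_a, size=n)
site_b = rng.multivariate_normal([2.0, -1.0], cov_b, size=n)   # shifted mean as well

def standardize(x):
    # Remove per-feature mean and variance differences (location/scale harmonization).
    return (x - x.mean(axis=0)) / x.std(axis=0)

a_h, b_h = standardize(site_a), standardize(site_b)
corr_a = np.corrcoef(a_h, rowvar=False)[0, 1]
corr_b = np.corrcoef(b_h, rowvar=False)[0, 1]
# Between-feature correlations still differ (~0.8 vs ~0.1); this residual scanner
# effect in covariance is what the proposed harmonization targets.
print(f"scanner A corr: {corr_a:.2f}   scanner B corr: {corr_b:.2f}")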

