Building semantic memory from embodied and distributional language experience

2021 ◽  
Author(s):  
Charles P. Davis ◽  
Eiling Yee

Humans seamlessly make sense of a rapidly changing environment, using a seemingly limitless knowledge base to recognize and adapt to most situations we encounter. This knowledge base is called semantic memory. Embodied cognition theories suggest that we represent this knowledge through simulation: understanding the meaning of coffee entails re-instantiating the neural states involved in touching, smelling, seeing, and drinking coffee. Distributional semantic theories suggest that we are sensitive to statistical regularities in natural language, and that a cognitive mechanism picks up on these regularities and transforms them into usable semantic representations reflecting the contextual usage of language. These appear to present contrasting views on semantic memory, but do they? Recent years have seen a push toward combining these approaches under a common framework. These hybrid approaches augment our understanding of semantic memory in important ways, but current versions remain unsatisfactory in part because they treat sensory-perceptual and distributional-linguistic data as interacting but distinct types of data that must be combined. We synthesize several approaches which, taken together, suggest that linguistic and embodied experience should instead be considered as inseparably entangled: just as sensory and perceptual systems are reactivated to understand meaning, so are experience-based representations endemic to linguistic processing; further, sensory-perceptual experience is susceptible to the same distributional principles as language experience. This conclusion produces a characterization of semantic memory that accounts for the interdependencies between linguistic and embodied data that arise across multiple timescales, giving rise to concept representations that reflect our shared and unique experiences.
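A toy sketch of the distributional principle this abstract describes: word meanings estimated purely from co-occurrence statistics, so that words appearing in similar contexts (here, "coffee" and "tea") end up with similar vectors. The corpus and all words are invented for illustration; this is not the authors' model.

```python
from collections import defaultdict
import math

# A tiny invented corpus; real distributional models use billions of words.
corpus = [
    "i drink hot coffee".split(),
    "i drink hot tea".split(),
    "i drive a fast car".split(),
    "i drive a red car".split(),
]

# Count co-occurrences within a +/-1 word window.
cooc = defaultdict(lambda: defaultdict(int))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in (i - 1, i + 1):
            if 0 <= j < len(sent):
                cooc[w][sent[j]] += 1

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "coffee" and "tea" share contexts; "coffee" and "car" do not.
sim_drinks = cosine(cooc["coffee"], cooc["tea"])
sim_mixed = cosine(cooc["coffee"], cooc["car"])
```

Words sharing contexts come out similar even though the model never sees touch, smell, or taste; the abstract's point is that this contextual statistic and embodied experience are entangled, not separate data streams.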

2020 ◽  
Author(s):  
Stephen Charles Van Hedger ◽  
Ingrid Johnsrude ◽  
Laura Batterink

Listeners are adept at extracting regularities from the environment, a process known as statistical learning (SL). SL has been generally assumed to be a form of “context-free” learning that occurs independently of prior knowledge, and SL experiments typically involve exposing participants to presumed novel regularities, such as repeating nonsense words. However, recent work has called this assumption into question, demonstrating that learners’ previous language experience can considerably influence SL performance. In the present experiment, we tested whether previous knowledge also shapes SL in a non-linguistic domain, using a paradigm that involves extracting regularities over tone sequences. Participants learned novel tone sequences, which consisted of pitch intervals not typically found in Western music. For one group of participants, the tone sequences used artificial, computerized instrument sounds. For the other group, the same tone sequences used familiar instrument sounds (piano or violin). Knowledge of the statistical regularities was assessed using both trained sounds (measuring specific learning) and sounds that differed in pitch range and/or instrument (measuring transfer learning). In a follow-up experiment, two additional testing sessions were administered to gauge retention of learning (one day and approximately one week post-training). Compared to artificial instruments, training on sequences played by familiar instruments resulted in reduced correlations among test items, reflecting more idiosyncratic performance. Across all three testing sessions, learning of novel regularities presented with familiar instruments was worse compared to unfamiliar instruments, suggesting that prior exposure to music produced by familiar instruments interfered with new sequence learning. Overall, these results demonstrate that real-world experience influences SL in a non-linguistic domain, supporting the view that SL involves the continuous updating of existing representations, rather than the establishment of entirely novel ones.
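The regularity that SL paradigms like this one exploit is usually a transitional probability: within a fixed tone "word" the next element is certain, while across word boundaries it is not. A minimal sketch with invented tone labels (the triplets, stream length, and seed are assumptions, not the study's materials):

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical tone "words": fixed triplets, as in classic SL paradigms.
words = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I")]

# Build a continuous stream by concatenating randomly ordered triplets.
stream = []
for _ in range(200):
    stream.extend(random.choice(words))

# Transitional probability P(y | x) = count(x -> y) / count(x).
pair_counts = Counter(zip(stream, stream[1:]))
tone_counts = Counter(stream[:-1])

def tp(x, y):
    return pair_counts[(x, y)] / tone_counts[x]

within = tp("A", "B")   # inside a triplet: A is always followed by B
across = tp("C", "D")   # across a word boundary: roughly chance level
```

A learner sensitive to these statistics can segment the stream from the within/across contrast alone; the study's finding is that familiar timbres bias how readily such new statistics are acquired.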


2021 ◽  
Author(s):  
Jerry Tang ◽  
Amanda LeBel ◽  
Alexander G Huth

The human semantic system stores knowledge acquired through both perception and language. To study how semantic representations in cortex integrate perceptual and linguistic information, we created semantic word embedding spaces that combine models of visual and linguistic processing. We then used these visually-grounded semantic spaces to fit voxelwise encoding models to fMRI data collected while subjects listened to hours of narrative stories. We found that cortical regions near the visual system represent concepts by combining visual and linguistic information, while regions near the language system represent concepts using mostly linguistic information. Assessing individual representations near visual cortex, we found that more concrete concepts contain more visual information, while even abstract concepts contain some amount of visual information from associated concrete concepts. Finally, we found that these visual grounding effects are localized near visual cortex, suggesting that semantic representations specifically reflect the modality of adjacent perceptual systems. Our results provide a computational account of how visual and linguistic information are combined to represent concrete and abstract concepts across cortex.
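The general recipe behind a voxelwise encoding model of this kind can be sketched with synthetic data: concatenate linguistic and visual feature vectors into one "grounded" stimulus matrix, then fit a separate ridge regression per voxel. All dimensions, the noise level, and the regularizer below are invented placeholders, not the study's actual features or fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-stimulus linguistic and visual feature vectors.
n_stim, d_ling, d_vis, n_voxels = 500, 20, 10, 5
X_ling = rng.standard_normal((n_stim, d_ling))
X_vis = rng.standard_normal((n_stim, d_vis))

# A "visually grounded" semantic space: concatenate the two feature types.
X = np.hstack([X_ling, X_vis])

# Simulated voxel responses driven by both feature types, plus noise.
W_true = rng.standard_normal((d_ling + d_vis, n_voxels))
Y = X @ W_true + 0.5 * rng.standard_normal((n_stim, n_voxels))

# Voxelwise ridge regression: W = (X'X + alpha*I)^-1 X'Y.
alpha = 1.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

# Per-voxel fit: correlation between predicted and actual responses.
pred = X @ W
r = [np.corrcoef(pred[:, v], Y[:, v])[0, 1] for v in range(n_voxels)]
```

Comparing the fitted weights on the visual versus linguistic columns, voxel by voxel, is one way such a model can localize where visual information contributes, which is the kind of contrast the study draws between regions near visual versus language cortex.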


2008 ◽  
Vol 2008 ◽  
pp. 1-16 ◽  
Author(s):  
I. E. Antoniou ◽  
E. T. Tsompa

The purpose of this paper is to assess the statistical characterization of weighted networks in terms of the generalization of the relevant parameters, namely, average path length, degree distribution, and clustering coefficient. Although the degree distribution and the average path length admit straightforward generalizations, for the clustering coefficient several different definitions have been proposed in the literature. We examined the different definitions and identified the similarities and differences between them. In order to elucidate the significance of different definitions of the weighted clustering coefficient, we studied their dependence on the weights of the connections. For this purpose, we introduced the relative perturbation norm of the weights as an index to assess the weight distribution. This study revealed interesting new statistical regularities in terms of the relative perturbation norm, useful for the statistical characterization of weighted graphs.
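One of the weighted clustering definitions compared in this literature is that of Barrat et al., which weights each triangle around node i by the average weight of the two edges incident to i. A minimal sketch on an invented four-node graph (the weight matrix below is purely illustrative):

```python
import numpy as np

# Small weighted, undirected example graph as a weight matrix (0 = no edge).
W = np.array([
    [0.0, 1.0, 2.0, 0.0],
    [1.0, 0.0, 0.5, 1.0],
    [2.0, 0.5, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
])
A = (W > 0).astype(float)  # binary adjacency matrix

def barrat_clustering(W, A, i):
    """Weighted clustering of node i, Barrat et al. definition:
    C_i = 1 / (s_i * (k_i - 1)) * sum_{j,h} (w_ij + w_ih)/2 * a_ij a_ih a_jh
    """
    k = A[i].sum()   # degree of i
    s = W[i].sum()   # strength of i (sum of incident weights)
    if k < 2:
        return 0.0
    total = 0.0
    n = len(W)
    for j in range(n):
        for h in range(n):
            if A[i, j] and A[i, h] and A[j, h]:
                total += (W[i, j] + W[i, h]) / 2.0
    return total / (s * (k - 1))

c0 = barrat_clustering(W, A, 0)  # node 0 sits in one closed triangle
c3 = barrat_clustering(W, A, 3)  # node 3 has a single neighbor
```

Perturbing the off-diagonal weights of `W` while holding `A` fixed, and tracking how such coefficients respond, is the kind of weight-dependence the paper's relative perturbation norm is designed to index.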


2021 ◽  
Author(s):  
Meighen Roes ◽  
Abhijit Mahesh Chinchani ◽  
Todd Woodward

Patients with schizophrenia exhibit deficits in associative learning and semantic memory. The current functional magnetic resonance imaging (fMRI) study investigated the neural correlates of successful versus unsuccessful semantic associative encoding in schizophrenia compared to healthy controls. Publicly shared fMRI data from the UCLA Consortium for Neuropsychiatric Phenomics LA5C study were analyzed. Forty-four patients with schizophrenia and 78 healthy controls performed a paired-associates encoding task. Constrained principal component analysis for fMRI (fMRI-CPCA) revealed three distinct functional networks recruited during encoding: a responding (RESP) network, a linguistic processing/attention network (LANG/ATTN), and the default mode network (DMN). Relative to healthy controls, patients showed aberrant activity in all three networks; namely, hypo-activation in the LANG/ATTN network during successful encoding, lower peak activation and weaker post-activation suppression of the RESP network, and weaker suppression in the DMN during successful encoding. Independent of group effects, a pattern of stronger anticorrelating LANG/ATTN-DMN activity during successful encoding significantly predicted subsequent retrieval of paired associates. Together with previous observations of language network hypoactivation during controlled semantic associative memory processes, these results suggest that reduced activity in linguistic processing areas is a reliable biological marker associated with impaired semantic memory in schizophrenia.


Languages ◽  
2021 ◽  
Vol 6 (4) ◽  
pp. 168 ◽
Author(s):  
Anne L. Beatty-Martínez ◽  
Debra A. Titone

Increasing evidence suggests that bilingualism does not, in itself, result in a particular pattern of response, revealing instead a complex and multidimensional construct that is shaped by evolutionary and ecological sources of variability. Despite growing recognition of the need for a richer characterization of bilingual speakers and of the different contexts of language use, we understand relatively little about the boundary conditions of putative “bilingualism” effects. Here, we review recent findings that demonstrate how variability in the language experiences of bilingual speakers, and also in the ability of bilingual speakers to adapt to the distinct demands of different interactional contexts, impact interactions between language use, language processing, and cognitive control processes generally. Given these findings, our position is that systematic variation in bilingual language experience gives rise to a variety of phenotypes that have different patterns of associations across language processing and cognitive outcomes. The goal of this paper is thus to illustrate how focusing on systematic variation through the identification of bilingual phenotypes can provide crucial insights into a variety of performance patterns, in a manner that has implications for previous and future research.


2008 ◽  
Vol 16 (3) ◽  
pp. 568-585 ◽  
Author(s):  
Izchak M. Schlesinger ◽  
Sharon Hurvitz

In this paper we introduce a detailed and multi-faceted characterization of misunderstandings. The proposal attempts to capture the structure of misunderstandings in terms of several constructs: the message as intended by the speaker, the message as construed by the hearer, and the message as understood by an ‘objective’ judge. In addition, we suggest that the message the speaker intends the hearer to retrieve and the hearer’s perception of the speaker’s intentions should also be taken into account. Misunderstandings can also be classified according to the phase of the comprehension process at which they occur (the perception of the speaker’s message, its linguistic processing, discovering the implicatures, and so on).


2017 ◽  
Vol 114 (30) ◽  
pp. 8083-8088 ◽  
Author(s):  
Jan-Mathijs Schoffelen ◽  
Annika Hultén ◽  
Nietzsche Lam ◽  
André F. Marquand ◽  
Julia Uddén ◽  
...  

The brain’s remarkable capacity for language requires bidirectional interactions between functionally specialized brain regions. We used magnetoencephalography to investigate interregional interactions in the brain network for language while 102 participants were reading sentences. Using Granger causality analysis, we found that inferior frontal cortex and anterior temporal regions receive widespread input, and that middle temporal regions send widespread output. This fits well with the notion that these regions play a central role in language processing. Characterization of the functional topology of this network, using data-driven matrix factorization, which allowed for partitioning into a set of subnetworks, revealed directed connections at distinct frequencies of interaction. Connections originating from temporal regions peaked at alpha frequency, whereas connections originating from frontal and parietal regions peaked at beta frequency. These findings indicate that the information flow between language-relevant brain areas, which is required for linguistic processing, may depend on the contributions of distinct brain rhythms.
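The core idea of Granger causality can be sketched in a few lines: signal x "Granger-causes" y if x's past improves prediction of y beyond y's own past. The synthetic signals, single lag, and log-variance-ratio measure below are simplifying assumptions; the study's spectral, multivariate analysis is far richer.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    # y is driven by x's past: x should Granger-cause y, not vice versa.
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

def residual_var(target, predictors):
    """Least-squares residual variance of target regressed on predictors."""
    X = np.column_stack(predictors)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.var(target - X @ beta)

# Restricted model: y's own past only. Full model: y's past plus x's past.
restricted = residual_var(y[1:], [y[:-1]])
full = residual_var(y[1:], [y[:-1], x[:-1]])

# Granger influence: log variance ratio; > 0 means x's past helps predict y.
gc_x_to_y = np.log(restricted / full)

# Reverse direction for comparison: y's past should not help predict x.
gc_y_to_x = np.log(residual_var(x[1:], [x[:-1]]) /
                   residual_var(x[1:], [x[:-1], y[:-1]]))
```

The asymmetry between the two directed influence values is what lets this family of methods assign directions to the frontal-temporal connections described in the abstract.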

