Using a high-dimensional graph of semantic space to model relationships among words

2014 ◽  
Vol 5 ◽  
Author(s):  
Alice F. Jackson ◽  
Donald J. Bolger
Mathematics ◽  
2021 ◽  
Vol 9 (12) ◽  
pp. 1330
Author(s):  
Raeyong Kim

The conjugacy problem for a group G is one of the fundamental algorithmic problems: deciding whether or not two elements of G are conjugate to each other. In this paper, we analyze the graph-of-groups structure of the fundamental group of a high-dimensional graph manifold and study the conjugacy problem. We also provide a new proof of the solvability of the word problem.
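The decision problem the abstract refers to can be stated precisely: two elements of a group are conjugate when one can be carried to the other by an inner automorphism,

```latex
x \sim y \iff \exists\, g \in G :\; g x g^{-1} = y .
```

The word problem (deciding whether a given word represents the identity) is the special case of testing conjugacy against the identity element, since $x \sim e$ holds exactly when $x = e$; this is why a solution to the conjugacy problem also yields one for the word problem.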


Author(s):  
Cyrus Shaoul ◽  
Chris Westbury

HAL (Hyperspace Analog to Language) is a high-dimensional model of semantic space that uses the global co-occurrence frequency of words in a large corpus of text as the basis for a representation of semantic memory. In the original HAL model, many parameters were set without any a priori rationale. In this chapter we describe a new computer application called the High Dimensional Explorer (HiDEx) that makes it possible to systematically alter the values of the model’s parameters and thereby to examine their effect on the co-occurrence matrix that instantiates the model. New parameter sets give us measures of semantic density that improve the model’s ability to predict behavioral measures. Implications for such models are discussed.
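A core parameter HiDEx exposes is the co-occurrence window. The following is a minimal sketch of a HAL-style weighted co-occurrence count, assuming (as in the original HAL model) a linear weighting scheme in which closer words contribute more; the function name and the toy corpus are illustrative, not part of HiDEx:

```python
from collections import defaultdict

def hal_cooccurrence(tokens, window=5):
    """Build a HAL-style co-occurrence matrix: each target word
    accumulates weighted counts of the words preceding it, with the
    weight decreasing linearly with distance (window - distance + 1)."""
    matrix = defaultdict(lambda: defaultdict(int))
    for i, target in enumerate(tokens):
        for d in range(1, window + 1):
            j = i - d
            if j < 0:
                break
            matrix[target][tokens[j]] += window - d + 1
    return matrix

counts = hal_cooccurrence("the cat sat on the mat".split(), window=2)
# "sat" is preceded by "cat" (distance 1, weight 2) and "the" (distance 2, weight 1)
print(counts["sat"]["cat"], counts["sat"]["the"])
```

Varying `window` (and the weighting function) is exactly the kind of parameter manipulation whose behavioral consequences HiDEx was built to explore.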


2006 ◽  
Vol 55 (4) ◽  
pp. 534-552 ◽  
Author(s):  
Michael N. Jones ◽  
Walter Kintsch ◽  
Douglas J.K. Mewhort

2017 ◽  
Author(s):  
Morteza Dehghani ◽  
Reihane Boghrati ◽  
Kingson Man ◽  
Joseph Hoover ◽  
Sarah Gimbel ◽  
...  

Drawing from a common lexicon of semantic units, humans fashion narratives whose meaning transcends that of their individual utterances. However, while brain regions that represent lower-level semantic units, such as words and sentences, have been identified, questions remain about the neural representation of narrative comprehension, which involves inferring cumulative meaning. To address these questions, we exposed English, Mandarin, and Farsi native speakers to native-language translations of the same stories during fMRI scanning. Using a new technique in natural language processing, we calculated distributed representations of these stories (capturing their meaning in a high-dimensional semantic space) and demonstrated that these representations allow us to identify, from the neural data, the specific story a participant was reading. Notably, this was possible even when the distributed representations were calculated from stories in a language different from the one the participant was reading. Relying on over 44 billion classifications, our results reveal that identification depended on a collection of brain regions most prominently located in the default mode network. These results demonstrate that neuro-semantic encoding of narratives happens at levels higher than individual semantic units and that this encoding is systematic across both individuals and languages.
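The identification step described above can be sketched as nearest-neighbor matching in the shared semantic space: a vector decoded from the neural data is compared against each story's distributed representation by cosine similarity. This is a hedged illustration of the general technique, not the authors' pipeline; the function names, dimensions, and noise level are assumptions:

```python
import numpy as np

def identify_story(neural_pred, story_embeddings):
    """Return the index of the story whose distributed representation is
    most similar (cosine similarity) to a vector decoded from neural data."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    sims = [cos(neural_pred, s) for s in story_embeddings]
    return int(np.argmax(sims))

# Toy example: three hypothetical story vectors; the "decoded" vector is
# story 1 plus a small amount of noise.
rng = np.random.default_rng(0)
stories = rng.normal(size=(3, 8))
decoded = stories[1] + 0.1 * rng.normal(size=8)
print(identify_story(decoded, stories))
```

Because the story representations live in one shared space, the same matching works even when the candidate embeddings were computed from translations in another language.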


2006 ◽  
Author(s):  
Allison B. Kaufman ◽  
Curt Burgess ◽  
Arunava Chakravartty ◽  
Brenda McCowan ◽  
Catherine H. Decker

2019 ◽  
Vol 25 (1) ◽  
pp. 33-49 ◽  
Author(s):  
Mark A. Bedau ◽  
Nicholas Gigliotti ◽  
Tobias Janssen ◽  
Alec Kosik ◽  
Ananthan Nambiar ◽  
...  

We detect ongoing innovation in empirical data about human technological innovations. Ongoing technological innovation is a form of open-ended evolution, but it occurs in a nonbiological, cultural population that consists of actual technological innovations that exist in the real world. The change over time of this population of innovations seems to be quite open-ended. We take patented inventions as a proxy for technological innovations and mine public patent records for evidence of the ongoing emergence of technological innovations, and we compare two ways to detect it. One way detects the first instances of predefined patent pigeonholes, specifically the technology classes listed in the United States Patent Classification (USPC). The second way embeds patents in a high-dimensional semantic space and detects the emergence of new patent clusters. After analyzing hundreds of years of patent records, both methods detect the emergence of new kinds of technologies, but clusters are much better at detecting innovations that are unanticipated and undetected by USPC pigeonholes. Our clustering methods generalize to detect unanticipated innovations in other evolving populations that generate ongoing streams of digital data.


2021 ◽  
pp. 1-12
Author(s):  
Haoyue Bai ◽  
Haofeng Zhang ◽  
Qiong Wang

Zero-shot learning (ZSL) aims to use information about seen classes to recognize unseen classes, which is achieved by transferring knowledge of the seen classes through semantic embeddings. Since the domains of the seen and unseen classes do not overlap, most ZSL algorithms suffer from the domain shift problem. In this paper, we propose a Dual Discriminative Auto-encoder Network (DDANet), in which visual features and semantic attributes are self-encoded using a high-dimensional latent space instead of the feature space or the low-dimensional semantic space. In the embedded latent space, features are projected so that they both preserve their original semantic meaning and acquire discriminative characteristics, which is realized by applying a dual semantic auto-encoder and a discriminative feature embedding strategy. Moreover, cross-modal reconstruction is applied to obtain interactive information. Extensive experiments conducted on four popular datasets demonstrate the superiority of this method.
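The encode-into-a-shared-latent-space idea can be sketched with purely linear maps: visual features and class attributes are each projected into one latent space, and a decoder maps latent codes back to attributes, enabling attribute-based classification of unseen classes. This is a toy linear sketch of the general auto-encoding scheme, not the DDANet architecture; all shapes, names, and the use of least squares are assumptions:

```python
import numpy as np

# Hypothetical dimensions: latent space larger than the attribute space,
# mirroring the paper's use of a high-dimensional latent space.
rng = np.random.default_rng(1)
n, d_vis, d_attr, d_lat = 50, 20, 10, 32

X = rng.normal(size=(n, d_vis))    # visual features (one row per image)
S = rng.normal(size=(n, d_attr))   # semantic attributes per image
Z = rng.normal(size=(n, d_lat))    # shared latent codes (fixed targets here)

# Encoders: least-squares linear maps from each modality into the latent space.
Ev, *_ = np.linalg.lstsq(X, Z, rcond=None)
Es, *_ = np.linalg.lstsq(S, Z, rcond=None)
# Decoder: latent space back to attributes, for nearest-attribute matching.
Ds, *_ = np.linalg.lstsq(Z, S, rcond=None)

recon = X @ Ev @ Ds                # visual -> latent -> attribute
print(recon.shape)
```

In a real model the encoders and decoders would be trained jointly with reconstruction and discriminative losses; the sketch only shows the flow of data through the dual encode/decode pipeline.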


2006 ◽  
Author(s):  
Allison B. Kaufman ◽  
Curt Burgess ◽  
Brenda McCowan

2007 ◽  
Author(s):  
Curt Burgess ◽  
Chad Murphy ◽  
Martin Johnson ◽  
Shaun Bowler ◽  
Catherine H. Decker
