A quaternion-group knowledge graph embedding model

2021 ◽  
pp. 1-10
Author(s):  
Heng Chen ◽  
Guanyu Li ◽  
Yunhao Sun ◽  
Wei Jiang

Capturing the composite embedding representation of a multi-hop relation path is a vital task in knowledge graph completion. Recently, rotation-based relation embedding models have been widely studied as a way to embed composite relations into complex vector space. However, these models make over-simplified assumptions about composite relations, forcing the relations to be commutative. To tackle this problem, this paper proposes a novel knowledge graph embedding model, named QuatGE, which provides sufficient modeling capacity for complex composite relations. In particular, our method models each relation as a rotation operator in a quaternion group-based space. The advantages of our model are twofold: (1) since the quaternion group is a non-commutative group (i.e., a non-Abelian group), the rotation matrices corresponding to composite relations can be non-commutative; (2) the model has a more expressive setting with stronger modeling capacity, and can flexibly model and infer the complete set of relation patterns, including symmetry/anti-symmetry, inversion, and commutative/non-commutative composition. Experimental results on four benchmark datasets show that the proposed method outperforms existing state-of-the-art models on link prediction, especially for composite relations.
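
The non-commutativity claim is easy to verify with plain quaternion arithmetic. Below is a minimal sketch (NumPy, all names hypothetical, not the paper's code) showing that composing two quaternion rotations in opposite orders gives different results:

```python
import numpy as np

def hamilton_product(q, p):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

# Two unit quaternions standing in for relation rotations.
r1 = np.array([np.cos(0.3), np.sin(0.3), 0.0, 0.0])  # rotation about the x-axis
r2 = np.array([np.cos(0.5), 0.0, np.sin(0.5), 0.0])  # rotation about the y-axis

# Order matters: r1∘r2 != r2∘r1, so composite relations need not commute.
print(hamilton_product(r1, r2))
print(hamilton_product(r2, r1))
```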

Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1407
Author(s):  
Peng Wang ◽  
Jing Zhou ◽  
Yuzhang Liu ◽  
Xingchen Zhou

Knowledge graph embedding aims to embed entities and relations into low-dimensional vector spaces. Most existing methods focus only on triple facts in knowledge graphs, and models based on translation or distance measurement cannot fully represent complex relations. As well-constructed prior knowledge, entity types can be employed to learn the representations of entities and relations. In this paper, we propose a novel knowledge graph embedding model named TransET, which takes advantage of entity types to learn richer semantic features. More specifically, circular convolution over the embeddings of an entity and its types is used to map the head and tail entities to type-specific representations, and a translation-based score function is then used to learn representations of triples. We evaluated our model on real-world datasets with two benchmark tasks, link prediction and triple classification. Experimental results demonstrate that it outperforms state-of-the-art models in most cases.
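
The two ingredients the abstract names, circular convolution over entity/type embeddings and a translation-based score, can be sketched as follows. The FFT-based convolution and the TransE-style distance are plausible readings, not the paper's exact specification:

```python
import numpy as np

def circular_convolution(a, b):
    """Circular convolution: (a * b)[k] = sum_i a[i] * b[(k - i) mod d]."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

d = 8
rng = np.random.default_rng(0)
h, h_type = rng.normal(size=d), rng.normal(size=d)  # head entity and its type
t, t_type = rng.normal(size=d), rng.normal(size=d)  # tail entity and its type
r = rng.normal(size=d)                              # relation embedding

# Type-specific projections, then a TransE-style translation score.
h_proj = circular_convolution(h, h_type)
t_proj = circular_convolution(t, t_type)
print(-np.linalg.norm(h_proj + r - t_proj))
```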


2020 ◽  
Vol 34 (03) ◽  
pp. 3065-3072 ◽  
Author(s):  
Zhanqiu Zhang ◽  
Jianyu Cai ◽  
Yongdong Zhang ◽  
Jie Wang

Knowledge graph embedding, which aims to represent entities and relations as low-dimensional vectors (or matrices, tensors, etc.), has been shown to be a powerful technique for predicting missing links in knowledge graphs. Existing knowledge graph embedding models mainly focus on modeling relation patterns such as symmetry/antisymmetry, inversion, and composition; however, many fail to model semantic hierarchies, which are common in real-world applications. To address this challenge, we propose a novel knowledge graph embedding model, Hierarchy-Aware Knowledge Graph Embedding (HAKE), which maps entities into the polar coordinate system. HAKE is inspired by the fact that concentric circles in the polar coordinate system naturally reflect hierarchy. Specifically, the radial coordinate models entities at different levels of the hierarchy, with entities at higher levels expected to have smaller radii; the angular coordinate distinguishes entities at the same level of the hierarchy, which are expected to have roughly the same radii but different angles. Experiments demonstrate that HAKE effectively models the semantic hierarchies in knowledge graphs and significantly outperforms existing state-of-the-art methods on benchmark datasets for the link prediction task.
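
A minimal sketch of a polar-coordinate score in the spirit of HAKE: a radial term compares hierarchy levels through an element-wise modulus product, and an angular term separates same-level entities through phase differences. The mixing weight `lam` and all variable names are illustrative assumptions:

```python
import numpy as np

def hake_distance(h_m, h_p, r_m, r_p, t_m, t_p, lam=0.5):
    """Polar-coordinate distance in the spirit of HAKE.

    *_m are radial (modulus) parts; *_p are angular (phase) parts in [0, 2*pi).
    The radial term compares hierarchy levels (smaller radius = higher level);
    the angular term separates entities at the same level. lam is a free
    mixing weight chosen here for illustration.
    """
    radial = np.linalg.norm(h_m * r_m - t_m, ord=2)
    angular = np.linalg.norm(np.sin((h_p + r_p - t_p) / 2.0), ord=1)
    return radial + lam * angular

d = 16
rng = np.random.default_rng(0)
h_m, r_m, t_m = rng.random(d), rng.random(d), rng.random(d)
h_p = rng.random(d) * 2 * np.pi
r_p = rng.random(d) * 2 * np.pi
t_p = rng.random(d) * 2 * np.pi
print(hake_distance(h_m, h_p, r_m, r_p, t_m, t_p))
```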


2020 ◽  
Vol 34 (03) ◽  
pp. 2774-2781
Author(s):  
Feihu Che ◽  
Dawei Zhang ◽  
Jianhua Tao ◽  
Mingyue Niu ◽  
Bocheng Zhao

We study the task of learning entity and relation embeddings in knowledge graphs for predicting missing links. Previous translational models for link prediction make use of translational properties but lack expressiveness, while the convolutional neural network-based model ConvE exploits the strong nonlinear fitting ability of neural networks but overlooks translational properties. In this paper, we propose a new knowledge graph embedding model called ParamE that combines the two advantages. In ParamE, head entity embeddings, relation embeddings, and tail entity embeddings are regarded as the input, parameters, and output of a neural network, respectively. Since network parameters are effective at converting input to output, taking neural network parameters as relation embeddings makes ParamE both more expressive and translational. In addition, the entity and relation embeddings in ParamE come from feature space and parameter space, respectively, in line with the view that entities and relations should be mapped into two different spaces. We evaluate ParamE on the standard FB15k-237 and WN18RR datasets, and experiments show that it significantly outperforms existing state-of-the-art models such as ConvE, SACN, RotatE, and D4-STE/Gumbel.
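
The core idea, treating the relation embedding as the parameters of a network that maps head to tail, can be sketched as follows. The single hidden layer, ReLU activation, and all sizes are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def parame_score(h, rel, t, d=16, hidden=16):
    """ParamE-style score: the relation vector is reshaped into the weights
    of a small feed-forward net that maps the head embedding to a predicted
    tail embedding. Layer count, activation, and sizes are assumptions."""
    i = 0
    w1 = rel[i:i + hidden * d].reshape(hidden, d); i += hidden * d
    b1 = rel[i:i + hidden];                        i += hidden
    w2 = rel[i:i + d * hidden].reshape(d, hidden); i += d * hidden
    b2 = rel[i:i + d]
    pred_t = w2 @ relu(w1 @ h + b1) + b2
    return -np.linalg.norm(pred_t - t)  # less negative = more plausible

d, hidden = 16, 16
rng = np.random.default_rng(0)
h, t = rng.normal(size=d), rng.normal(size=d)
rel = rng.normal(size=hidden * d + hidden + d * hidden + d)
print(parame_score(h, rel, t))
```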


Author(s):  
Wanhua Cao ◽  
Yi Zhang ◽  
Juntao Liu ◽  
Ziyun Rao

Knowledge graph embedding improves the performance of relation extraction and knowledge reasoning by encoding entities and relations in a low-dimensional semantic space. During training, negative samples are usually constructed by replacing the head or tail entity, and different replacement strategies lead to different prediction accuracy. This paper develops a negative-triple construction framework based on the frequency of the entities associated with each relation. The framework takes the numbers of relations and entities in the dataset into account to set the proportion of relation versus entity replacement, and uses the frequency of the entities associated with each relation to set a reasonable replacement proportion per relation. To verify its validity, the framework is integrated into state-of-the-art knowledge graph embedding models such as TransE, TransH, DistMult, ComplEx, and Analogy, and both relation prediction and entity prediction criteria are used to evaluate link prediction more comprehensively. Experimental results on two commonly used datasets, WN18 and FB15K, show that the proposed method improves entity link prediction and triple classification accuracy, and especially the accuracy of relational link prediction.
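
One concrete instantiation of frequency-aware replacement proportions is the Bernoulli sampling heuristic from the TransH line of work, which sets the head-replacement probability per relation from entity frequencies. The paper's framework goes further (it also weights relation replacement), so treat this as a baseline sketch with hypothetical names:

```python
import numpy as np
from collections import defaultdict

def bernoulli_head_probs(triples):
    """Per-relation probability of corrupting the head, from entity frequencies.

    For each relation, estimate tails-per-head (tph) and heads-per-tail (hpt)
    and replace the head with probability tph / (tph + hpt), so relations
    linking many tails to one head are corrupted mostly on the head side.
    """
    heads, tails, count = defaultdict(set), defaultdict(set), defaultdict(int)
    for h, r, t in triples:
        heads[r].add(h)
        tails[r].add(t)
        count[r] += 1
    probs = {}
    for r in count:
        tph = count[r] / len(heads[r])  # average tails per head
        hpt = count[r] / len(tails[r])  # average heads per tail
        probs[r] = tph / (tph + hpt)
    return probs

def corrupt(triple, probs, entities, rng):
    """Draw one negative triple by replacing the head or the tail."""
    h, r, t = triple
    if rng.random() < probs[r]:
        return (rng.choice(entities), r, t)
    return (h, r, rng.choice(entities))

triples = [("a", "likes", "x"), ("a", "likes", "y"), ("b", "likes", "x")]
probs = bernoulli_head_probs(triples)
print(corrupt(triples[0], probs, ["a", "b", "x", "y"], np.random.default_rng(0)))
```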


2021 ◽  
Author(s):  
Shensi Wang ◽  
Kun Fu ◽  
Xian Sun ◽  
Zequn Zhang ◽  
Shuchao Li ◽  
...  

2020 ◽  
Author(s):  
Quan Do ◽  
Pierre Larmande

Candidate gene prioritization ranks, among a large number of genes, those that are strongly associated with a phenotype or a disease. Due to the large amount of data that must be integrated and analysed, gene-to-phenotype association remains a challenging task. In this paper, we evaluated a knowledge graph approach combined with embedding methods to overcome these challenges. We first introduced a dataset of rice genes created from several open-access databases. Then, we used the Translating Embedding (TransE) model and the Convolutional Knowledge Base (ConvKB) model to vectorize gene information. Finally, we evaluated the results using link prediction performance and examined the vector representations with several unsupervised learning techniques.
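
The TransE scoring used here reduces to a translation distance; a minimal sketch with hypothetical gene/phenotype embeddings:

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility: smaller ||h + r - t|| means a more likely triple."""
    return -np.linalg.norm(h + r - t, ord=1)

d = 32
rng = np.random.default_rng(1)
# Hypothetical embeddings for a (gene, associated_with, phenotype) triple.
gene, rel, phenotype = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
print(transe_score(gene, rel, phenotype))
```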


Algorithms ◽  
2019 ◽  
Vol 12 (12) ◽  
pp. 265 ◽  
Author(s):  
Jindou Zhang ◽  
Jing Li

Combining first-order logic rules with a Knowledge Graph (KG) embedding model has recently gained increasing attention, as rules introduce rich background information. Among such studies, models equipped with soft rules, which are extracted with certain confidences, achieve state-of-the-art performance. However, existing methods either cannot support transitivity and composition rules, or take soft rules as regularization terms to constrain derived facts, which cannot encode the logical background knowledge about facts contained in soft rules. In addition, previous works performed one-time logical inference over rules to generate valid groundings, ignoring forward chaining inference, which can generate further valid groundings to better model the rules. To these ends, this paper proposes Soft Logical rules enhanced Embedding (SoLE), a novel KG embedding model equipped with a joint training algorithm over soft rules and KG facts to inject the logical background knowledge of rules into embeddings, together with forward chaining inference over the rules. Evaluations on Freebase and DBpedia show that SoLE not only achieves improvements of 11.6%/5.9% in Mean Reciprocal Rank (MRR) and 18.4%/15.9% in HITS@1 compared to the model on which it is based, but also significantly and consistently outperforms state-of-the-art baselines on the link prediction task.
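
Forward chaining over composition rules, the inference step the abstract contrasts with one-time grounding, can be sketched as a fixpoint loop. Rule confidences are omitted for brevity, and the rule format is a simplifying assumption:

```python
def forward_chain(facts, rules, max_iters=10):
    """Naive forward chaining over composition rules.

    facts: set of (h, r, t) triples.
    rules: list of ((r1, r2), r_head) meaning r1(x,y) and r2(y,z) => r_head(x,z).
    Repeated application generates new groundings until a fixpoint.
    """
    derived = set(facts)
    for _ in range(max_iters):
        new = set()
        for (r1, r2), r_head in rules:
            ys = {}  # y -> list of heads x with r1(x, y)
            for (x, r, y) in derived:
                if r == r1:
                    ys.setdefault(y, []).append(x)
            for (y, r, z) in derived:
                if r == r2:
                    for x in ys.get(y, []):
                        if (x, r_head, z) not in derived:
                            new.add((x, r_head, z))
        if not new:
            break  # fixpoint reached
        derived |= new
    return derived

facts = {("a", "born_in", "cityX"), ("cityX", "located_in", "countryY")}
rules = [(("born_in", "located_in"), "nationality")]
print(forward_chain(facts, rules))  # derives ("a", "nationality", "countryY")
```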


Author(s):  
Neil Veira ◽  
Brian Keng ◽  
Kanchana Padmanabhan ◽  
Andreas Veneris

Knowledge graph embeddings are instrumental for representing and learning from multi-relational data, with recent embedding models showing high effectiveness for inferring new facts from existing databases. However, such precisely structured data is usually limited in quantity and in scope. Therefore, to fully optimize the embeddings it is important to also consider more widely available sources of information such as text. This paper describes an unsupervised approach to incorporate textual information by augmenting entity embeddings with embeddings of associated words. The approach does not modify the optimization objective for the knowledge graph embedding, which allows it to be integrated with existing embedding models. Two distinct forms of textual data are considered, with different embedding enhancements proposed for each case. In the first case, each entity has an associated text document that describes it. In the second case, a text document is not available, and instead entities occur as words or phrases in an unstructured corpus of text fragments. Experiments show that both methods can offer improvement on the link prediction task when applied to many different knowledge graph embedding models.
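
One simple unsupervised augmentation consistent with the abstract: combine an entity's embedding with the mean of the word embeddings associated with it (from its description or the text fragments it occurs in). The convex combination with weight `alpha` is an assumed scheme, not necessarily the paper's exact rule:

```python
import numpy as np

def augment_entity_embedding(entity_vec, word_vecs, alpha=0.5):
    """Blend an entity embedding with the mean of its associated word
    embeddings. alpha controls how much textual signal is mixed in."""
    text_vec = np.mean(word_vecs, axis=0)
    return (1 - alpha) * entity_vec + alpha * text_vec

d = 50
rng = np.random.default_rng(2)
entity = rng.normal(size=d)
description_words = rng.normal(size=(12, d))  # e.g., 12 word vectors
print(augment_entity_embedding(entity, description_words).shape)
```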

