Relational Graph Neural Network with Hierarchical Attention for Knowledge Graph Completion

2020 ◽  
Vol 34 (05) ◽  
pp. 9612-9619
Author(s):  
Zhao Zhang ◽  
Fuzhen Zhuang ◽  
Hengshu Zhu ◽  
Zhiping Shi ◽  
Hui Xiong ◽  
...  

The rapid proliferation of knowledge graphs (KGs) has changed the paradigm for various AI-related applications. Despite their large sizes, modern KGs are far from complete and comprehensive. This has motivated research in knowledge graph completion (KGC), which aims to infer missing values in incomplete knowledge triples. However, most existing KGC models treat the triples in KGs independently, without leveraging the inherent and valuable information from the local neighborhood surrounding an entity. To this end, we propose a Relational Graph neural network with Hierarchical ATtention (RGHAT) for the KGC task. The proposed model is equipped with a two-level attention mechanism: (i) the first level is relation-level attention, inspired by the intuition that different relations have different weights for indicating an entity; (ii) the second level is entity-level attention, which enables our model to highlight the importance of different neighboring entities under the same relation. The hierarchical attention mechanism makes our model more effective at utilizing the neighborhood information of an entity. Finally, we extensively validate the superiority of RGHAT against various state-of-the-art baselines.
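
A minimal sketch of the two-level neighborhood attention idea (relation-level, then entity-level), written in PyTorch with toy tensors; the scoring functions and tensor shapes are illustrative assumptions, not the authors' exact formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalNeighborAttention(nn.Module):
    """Toy two-level attention over an entity's neighborhood:
    (i) attend over relations, (ii) attend over neighbors under each relation."""
    def __init__(self, dim):
        super().__init__()
        self.rel_score = nn.Linear(2 * dim, 1)   # scores (entity, relation) pairs
        self.ent_score = nn.Linear(3 * dim, 1)   # scores (entity, relation, neighbor) triples

    def forward(self, h, rels, neighbors):
        # h: (dim,) target entity; rels: (R, dim); neighbors: (R, N, dim)
        R, N, dim = neighbors.shape
        # relation-level attention weights
        rel_logits = self.rel_score(torch.cat([h.expand(R, -1), rels], dim=-1)).squeeze(-1)
        alpha = F.softmax(rel_logits, dim=0)                      # (R,)
        # entity-level attention weights within each relation
        h_exp = h.expand(R, N, -1)
        rel_exp = rels.unsqueeze(1).expand(-1, N, -1)
        ent_logits = self.ent_score(torch.cat([h_exp, rel_exp, neighbors], dim=-1)).squeeze(-1)
        beta = F.softmax(ent_logits, dim=1)                       # (R, N)
        # aggregate: weight neighbors by beta, then relations by alpha
        per_rel = (beta.unsqueeze(-1) * neighbors).sum(dim=1)     # (R, dim)
        return (alpha.unsqueeze(-1) * per_rel).sum(dim=0)         # (dim,)

# usage with random toy embeddings: 3 relations, 5 neighbors each, dimension 8
att = HierarchicalNeighborAttention(dim=8)
out = att(torch.randn(8), torch.randn(3, 8), torch.randn(3, 5, 8))
```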

2020 ◽  
Vol 34 (01) ◽  
pp. 222-229
Author(s):  
Zequn Sun ◽  
Chengming Wang ◽  
Wei Hu ◽  
Muhao Chen ◽  
Jian Dai ◽  
...  

Graph neural networks (GNNs) have emerged as a powerful paradigm for embedding-based entity alignment due to their capability of identifying isomorphic subgraphs. However, in real knowledge graphs (KGs), the counterpart entities usually have non-isomorphic neighborhood structures, which easily causes GNNs to yield different representations for them. To tackle this problem, we propose a new KG alignment network, namely AliNet, aiming at mitigating the non-isomorphism of neighborhood structures in an end-to-end manner. As the direct neighbors of counterpart entities are usually dissimilar due to schema heterogeneity, AliNet introduces distant neighbors to expand the overlap between their neighborhood structures. It employs an attention mechanism to highlight helpful distant neighbors and reduce noise. Then, it controls the aggregation of both direct and distant neighborhood information using a gating mechanism. We further propose a relation loss to refine entity representations. We perform thorough experiments with detailed ablation studies and analyses on five entity alignment datasets, demonstrating the effectiveness of AliNet.
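
A hedged sketch of the gating idea: mean-aggregate direct (one-hop) neighbors, attention-aggregate distant (two-hop) neighbors, and fuse the two with a learned gate. The layer shapes and the specific gate form are assumptions for illustration, not AliNet's published layer:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedTwoHopAggregation(nn.Module):
    """Toy layer: mean-aggregate one-hop neighbors, attention-aggregate two-hop
    neighbors, then fuse the two with a learned gate (illustrative only)."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.Linear(2 * dim, 1)   # scores (entity, distant-neighbor) pairs
        self.gate = nn.Linear(dim, dim)

    def forward(self, h, one_hop, two_hop):
        # h: (dim,); one_hop: (N1, dim); two_hop: (N2, dim)
        direct = one_hop.mean(dim=0)
        logits = self.attn(torch.cat([h.expand(len(two_hop), -1), two_hop], dim=-1)).squeeze(-1)
        distant = (F.softmax(logits, dim=0).unsqueeze(-1) * two_hop).sum(dim=0)
        g = torch.sigmoid(self.gate(distant))     # gate decides how much distant info to mix in
        return g * distant + (1 - g) * direct

layer = GatedTwoHopAggregation(dim=8)
h_new = layer(torch.randn(8), torch.randn(4, 8), torch.randn(6, 8))
```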


Author(s):  
Jiachen Du ◽  
Ruifeng Xu ◽  
Yulan He ◽  
Lin Gui

Stance classification, which aims at detecting the stance expressed in text towards a specific target, is an emerging problem in sentiment analysis. A major difference between stance classification and traditional aspect-level sentiment classification is that the identification of stance depends on the target, which might not be explicitly mentioned in the text. This indicates that, apart from text content, the target information is important to stance detection. To this end, we propose a neural network-based model that incorporates target-specific information into stance classification via a novel attention mechanism. Specifically, the attention mechanism is expected to locate the critical parts of the text that are related to the target. Our evaluations on both the English and Chinese Stance Detection datasets show that the proposed model achieves state-of-the-art performance.
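
As an illustration of target-conditioned attention, here is a minimal Python sketch that weights contextual word vectors by their relevance to a target vector before pooling; the dot-product scoring is an assumption, not the paper's exact attention function:

```python
import torch
import torch.nn.functional as F

def target_conditioned_attention(word_vecs, target_vec):
    """Weight each word by its relevance to the target, then pool.
    word_vecs: (T, dim) contextual word vectors; target_vec: (dim,) target representation."""
    scores = word_vecs @ target_vec                          # (T,) dot-product relevance
    weights = F.softmax(scores, dim=0)                       # attention over words
    return (weights.unsqueeze(-1) * word_vecs).sum(dim=0)    # (dim,) text vector for classification

# toy usage: 12 words, 16-dimensional vectors
text_repr = target_conditioned_attention(torch.randn(12, 16), torch.randn(16))
```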


Author(s):  
Qiannan Zhu ◽  
Xiaofei Zhou ◽  
Jia Wu ◽  
Jianlong Tan ◽  
Li Guo

Multilingual knowledge graphs constructed by entity alignment are indispensable resources for numerous AI-related applications. Most existing entity alignment methods only use triplet-based knowledge to find aligned entities across multilingual knowledge graphs; they usually ignore the neighborhood subgraph knowledge of entities, which carries richer alignment information. In this paper, we incorporate neighborhood subgraph-level information of entities and propose a neighborhood-aware attentional representation method, NAEA, for multilingual knowledge graphs. NAEA devises an attention mechanism to learn neighbor-level representations by aggregating neighbors' representations with a weighted combination. The attention mechanism enables entities not only to capture the different impacts of their neighbors on themselves, but also to attend over their neighbors' feature representations with different importance. We evaluate our model on two real-world datasets, DBP15K and DWY100K, and the experimental results show that the proposed model NAEA significantly and consistently outperforms state-of-the-art entity alignment models.
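
A hedged sketch of how such neighborhood-aware representations could feed an alignment score: each entity is summarized by an attention-weighted combination of its neighbors, and a candidate cross-KG pair is scored by comparing entity and neighborhood views. The helper names and the cosine-based score are illustrative assumptions, not NAEA's exact model:

```python
import torch
import torch.nn.functional as F

def neighborhood_repr(entity_vec, neighbor_vecs):
    """Attention-weighted combination of neighbor embeddings (toy version)."""
    weights = F.softmax(neighbor_vecs @ entity_vec, dim=0)
    return (weights.unsqueeze(-1) * neighbor_vecs).sum(dim=0)

def alignment_score(e1, nb1, e2, nb2):
    """Score a candidate cross-lingual entity pair by comparing the entity-level
    and neighborhood-level views of each entity."""
    n1, n2 = neighborhood_repr(e1, nb1), neighborhood_repr(e2, nb2)
    return F.cosine_similarity(torch.cat([e1, n1]), torch.cat([e2, n2]), dim=0)

# toy usage: two entities from different KGs with 5 and 7 neighbors
score = alignment_score(torch.randn(8), torch.randn(5, 8), torch.randn(8), torch.randn(7, 8))
```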


2021 ◽  
Vol 15 (3) ◽  
pp. 1-17
Author(s):  
Luyi Bai ◽  
Xiangnan Ma ◽  
Mingcheng Zhang ◽  
Wenting Yu

Temporal knowledge graphs (TKGs) have become useful resources for numerous Artificial Intelligence applications, but they are far from complete. Inferring missing events in temporal knowledge graphs is a fundamental and challenging task. However, most existing methods solely focus on entity features or consider entities and relations in a disjoint manner; they do not integrate the features of entities and relations in their modeling process. In this paper, we propose TPmod, a tendency-guided prediction model, to predict missing events for TKGs (extrapolation). Differing from existing works, we propose two definitions for TKGs: the Goodness of relations and the Closeness of entity pairs. More importantly, inspired by the attention mechanism, we propose a novel tendency strategy to guide our aggregation process. It integrates the features of entities and relations and assigns varying weights to different past events. Moreover, we select the Gated Recurrent Unit (GRU) as our sequential encoder to model the temporal dependency in TKGs. Finally, the Softmax function is employed to rank the candidate entities in decreasing order. We evaluate our model on two TKG datasets, GDELT-5 and ICEWS-250. Experimental results show that our method achieves a significant and consistent improvement over state-of-the-art baselines.
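
A toy Python sketch of the pipeline the abstract describes: past events are weighted by a tendency score built from Goodness and Closeness, the weighted sequence is encoded with a GRU, and candidate tail entities are scored with a Softmax. The way Goodness and Closeness are combined here is an assumption for illustration, not TPmod's exact formula:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, num_candidates, num_events = 16, 50, 7

event_vecs = torch.randn(num_events, dim)   # embeddings of past (subject, relation, object, time) events
goodness   = torch.rand(num_events)         # per-relation "Goodness" (assumed precomputed)
closeness  = torch.rand(num_events)         # per-entity-pair "Closeness" (assumed precomputed)

# tendency strategy: attention-like weights over past events (combination rule assumed)
tendency = F.softmax(goodness + closeness, dim=0)
weighted = tendency.unsqueeze(-1) * event_vecs               # (num_events, dim)

# GRU as the sequential encoder over the weighted event sequence
gru = nn.GRU(input_size=dim, hidden_size=dim, batch_first=True)
_, h_last = gru(weighted.unsqueeze(0))                       # h_last: (1, 1, dim)

# Softmax over candidate tail entities, giving a ranking in decreasing probability
candidate_embs = torch.randn(num_candidates, dim)
probs = F.softmax(candidate_embs @ h_last.squeeze(), dim=0)
```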


Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1407
Author(s):  
Peng Wang ◽  
Jing Zhou ◽  
Yuzhang Liu ◽  
Xingchen Zhou

Knowledge graph embedding aims to embed entities and relations into low-dimensional vector spaces. Most existing methods focus only on the triple facts in knowledge graphs. In addition, models based on translation or distance measurement cannot fully represent complex relations. As well-constructed prior knowledge, entity types can be employed to learn the representations of entities and relations. In this paper, we propose a novel knowledge graph embedding model named TransET, which takes advantage of entity types to learn more semantic features. More specifically, circle convolution over the embeddings of entities and entity types is used to map the head entity and tail entity to type-specific representations, and then a translation-based score function is used to learn the representations of triples. We evaluate our model on real-world datasets with two benchmark tasks: link prediction and triple classification. Experimental results demonstrate that it outperforms state-of-the-art models in most cases.
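
Assuming "circle convolution" refers to circular convolution, the sketch below fuses an entity embedding with its type embedding via FFT-based circular convolution and then applies a TransE-style translational score; the L1 norm and the use of a single type per entity are illustrative assumptions, not necessarily TransET's exact design:

```python
import torch

def circular_convolution(a, b):
    """Circular convolution of two vectors via FFT, used here to fuse an entity
    embedding with its type embedding (a sketch of the idea, not TransET itself)."""
    return torch.fft.irfft(torch.fft.rfft(a) * torch.fft.rfft(b), n=a.shape[-1])

def score(h, h_type, r, t, t_type):
    """Translation-based score on type-specific entity representations (lower is better)."""
    h_proj = circular_convolution(h, h_type)
    t_proj = circular_convolution(t, t_type)
    return torch.norm(h_proj + r - t_proj, p=1)

# toy usage with random 8-dimensional embeddings
dim = 8
s = score(torch.randn(dim), torch.randn(dim), torch.randn(dim), torch.randn(dim), torch.randn(dim))
```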


2021 ◽  
Vol 11 (15) ◽  
pp. 7104
Author(s):  
Xu Yang ◽  
Ziyi Huan ◽  
Yisong Zhai ◽  
Ting Lin

Nowadays, personalized recommendation based on knowledge graphs has become a hot spot for researchers due to its good recommendation performance. In this paper, we research personalized recommendation based on knowledge graphs. First, we study knowledge graph construction methods and build a movie knowledge graph, using the Neo4j graph database to store and visualize the movie data. Then, we study TransE, the classical translation model in knowledge graph representation learning, and improve the algorithm through a cross-training method that uses information from the neighboring feature structures of the entities in the knowledge graph. We also improve the negative sampling process of the TransE algorithm. The experimental results show that the improved TransE model can vectorize entities and relations more accurately. Finally, we construct a recommendation model by combining knowledge graphs with learning-to-rank and neural networks. We propose a Bayesian personalized recommendation model based on knowledge graphs (KG-BPR) and a neural network recommendation model based on knowledge graphs (KG-NN). The semantic information of entities and relations in the knowledge graph is embedded into vector space using the improved TransE method, and we compare the results. The item entity vectors containing external knowledge information are integrated into the BPR model and the neural network, respectively, which compensates for the lack of knowledge information about the items themselves. Finally, experimental analysis is carried out on the MovieLens-1M dataset. The results show that the two recommendation models proposed in this paper can effectively improve the accuracy, recall, F1, and MAP of recommendation.
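
As an illustration of how knowledge-graph entity vectors can enter a BPR-style recommender, here is a minimal Python sketch: item vectors are formed by concatenating collaborative embeddings with (hypothetical) TransE entity embeddings, and the standard BPR pairwise loss is applied. The fusion by concatenation is an assumption, not necessarily the paper's KG-BPR design:

```python
import torch
import torch.nn.functional as F

def bpr_loss(user_vec, pos_item_vec, neg_item_vec):
    """Bayesian Personalized Ranking loss: the observed (positive) item should
    score higher than the unobserved (negative) item for the same user."""
    pos_score = (user_vec * pos_item_vec).sum(-1)
    neg_score = (user_vec * neg_item_vec).sum(-1)
    return -F.logsigmoid(pos_score - neg_score).mean()

# hypothetical fusion: concatenate collaborative item vectors with TransE entity vectors
dim_cf, dim_kg = 16, 8
user      = torch.randn(4, dim_cf + dim_kg)
pos_items = torch.cat([torch.randn(4, dim_cf), torch.randn(4, dim_kg)], dim=-1)
neg_items = torch.cat([torch.randn(4, dim_cf), torch.randn(4, dim_kg)], dim=-1)
loss = bpr_loss(user, pos_items, neg_items)
```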


Author(s):  
Navin Tatyaba Gopal ◽  
Anish Raj Khobragade

Knowledge graphs (KGs) capture structured data and relationships among a collection of entities and items, and thus constitute an attractive source of information that can improve recommender systems. However, existing methods in this area depend on manual feature engineering and therefore do not permit end-to-end training. This article proposes Knowledge Graph with Label Smoothness (KG-LS) to offer better suggestions for recommender systems. Our method computes user-specific entity representations by first applying a trainable function that identifies important KG relationships for a specific user. In this manner, we transform the KG into a user-specific weighted graph and then apply a graph neural network to compute personalized entity embeddings. To provide a better inductive bias, label smoothness is introduced, which assumes that adjacent items in the KG are likely to have similar user-relevance labels/scores. Label smoothness provides regularization over the edge weights, and we show that it is equivalent to a label propagation scheme on the graph. We also develop an efficient implementation that shows strong scalability with respect to the size of the knowledge graph. Experiments on four datasets show that our method outperforms state-of-the-art baselines. The method also achieves strong performance in cold-start scenarios where user-item interactions are sparse.
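
A small Python sketch of the label-propagation view of label smoothness: unobserved nodes repeatedly take the weighted average of their neighbors' labels while observed labels stay clamped. It illustrates the general regularization idea on assumed toy data, not the paper's exact algorithm:

```python
import numpy as np

def propagate_labels(adj_weights, labels, observed_mask, steps=10):
    """Replace each unobserved node's label with the weighted average of its
    neighbors' labels, keeping observed labels fixed (toy label propagation)."""
    labels = labels.astype(float).copy()
    row_sums = adj_weights.sum(axis=1, keepdims=True) + 1e-8
    transition = adj_weights / row_sums                        # row-normalized edge weights
    for _ in range(steps):
        propagated = transition @ labels
        labels = np.where(observed_mask, labels, propagated)   # clamp observed labels
    return labels

# toy usage: 5 nodes, symmetric random edge weights, only node 0's label observed
A = np.random.rand(5, 5); A = (A + A.T) / 2
y = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
mask = np.array([True, False, False, False, False])
smoothed = propagate_labels(A, y, mask)
```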


Author(s):  
Noha Ali ◽  
Ahmed H. AbuEl-Atta ◽  
Hala H. Zayed

Deep learning (DL) algorithms have achieved state-of-the-art performance in computer vision, speech recognition, and natural language processing (NLP). In this paper, we enhance a convolutional neural network (CNN) algorithm to classify cancer articles according to cancer hallmarks. The model uses a recent word embedding technique in the embedding layer, based on distributed phrase representation and multi-word phrase embedding. The proposed model improves on the existing model used for biomedical text classification, achieving an F-score of 83.87% with an unsupervised embedding trained on PubMed abstracts called PMC vectors (PMCVec). We also run another experiment on the same dataset using a recurrent neural network (RNN) with two different word embeddings, Google News and PMCVec, achieving F-scores of 74.9% and 76.26%, respectively.
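
A generic Python sketch of a CNN text classifier of the kind described: an embedding layer (which in practice would be initialized from the pretrained PMCVec vectors), convolutions of several widths, max-pooling, and a linear output layer. Dimensions and hyperparameters are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    """Minimal CNN text classifier over word/phrase embeddings (generic sketch)."""
    def __init__(self, vocab_size, emb_dim=200, num_classes=10, widths=(3, 4, 5), channels=100):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)   # would load PMCVec weights in practice
        self.convs = nn.ModuleList(nn.Conv1d(emb_dim, channels, w) for w in widths)
        self.fc = nn.Linear(channels * len(widths), num_classes)

    def forward(self, token_ids):
        x = self.emb(token_ids).transpose(1, 2)                        # (B, emb_dim, T)
        pooled = [F.relu(c(x)).max(dim=2).values for c in self.convs]  # max-pool each width
        return self.fc(torch.cat(pooled, dim=1))                       # class logits

# toy usage: two documents of 60 tokens each over a 5000-word vocabulary
model = TextCNN(vocab_size=5000)
logits = model(torch.randint(0, 5000, (2, 60)))
```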


Author(s):  
Anastasia Dimou

In this chapter, an overview of the state of the art on knowledge graph generation is provided, with a focus on the two prevalent mapping languages: the W3C-recommended R2RML and its generalisation RML. We look in detail at their differences and explain how knowledge graphs, in the form of RDF graphs, can be generated with each of the two mapping languages. Then we assess whether the vocabulary terms are properly applied to the data and whether any violations occur in their use, using either R2RML or RML to generate the desired knowledge graph.


2019 ◽  
Vol 9 (18) ◽  
pp. 3908 ◽  
Author(s):  
Jintae Kim ◽  
Shinhyeok Oh ◽  
Oh-Woog Kwon ◽  
Harksoo Kim

To generate proper responses to user queries, multi-turn chatbot models should selectively consider dialogue histories. However, previous chatbot models have simply concatenated or averaged vector representations of all previous utterances without considering contextual importance. To mitigate this problem, we propose a multi-turn chatbot model in which previous utterances participate in response generation using different weights. The proposed model calculates the contextual importance of previous utterances by using an attention mechanism. In addition, we propose a training method that uses two types of Wasserstein generative adversarial networks to improve the quality of responses. In experiments with the DailyDialog dataset, the proposed model outperformed the previous state-of-the-art models based on various performance measures.
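
As a small illustration of the adversarial training ingredient, the sketch below gives generic Wasserstein critic and generator objectives (gradient penalty and the chatbot-specific pairing of the two GANs omitted); it is a hedged sketch of WGAN losses in general, not the paper's exact training procedure:

```python
import torch

def wgan_critic_loss(real_scores, fake_scores):
    """Wasserstein critic objective: raise scores of real (human) responses,
    lower scores of generated responses (gradient penalty omitted for brevity)."""
    return fake_scores.mean() - real_scores.mean()

def wgan_generator_loss(fake_scores):
    """Generator tries to make the critic score its generated responses highly."""
    return -fake_scores.mean()

# toy usage with random critic scores for a batch of 4 responses
d_loss = wgan_critic_loss(torch.randn(4), torch.randn(4))
g_loss = wgan_generator_loss(torch.randn(4))
```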

