Soft-self and Hard-cross Graph Attention Network for Knowledge Graph Entity Alignment

2021 ◽ Vol 231 ◽ pp. 107415
Author(s): Zhihuan Yan, Rong Peng, Yaqian Wang, Weidong Li
IEEE Access ◽ 2021 ◽ Vol 9 ◽ pp. 20840-20849
Author(s): Xiyang Liu, Huobin Tan, Qinghong Chen, Guangyan Lin

Author(s): Xingwei Zhu, Pengpeng Zhao, Jiajie Xu, Junhua Fang, Lei Zhao, ...

2021
Author(s): Linyi Ding, Weijie Yuan, Kui Meng, Gongshen Liu

2021 ◽ pp. 108038
Author(s): Zhenghao Zhang, Jianbin Huang, Qinglin Tan

2020 ◽ Vol 32 (18) ◽ pp. 14963-14973
Author(s): Meina Song, Wen Zhao, E. HaiHong

Abstract: Natural language inference (NLI) is a basic task underlying many applications, such as question answering and paraphrase recognition. Existing methods have addressed the key issue of how an NLI model can benefit from external knowledge. Inspired by this, we further explore two problems: (1) how to make better use of external knowledge when the total amount of such knowledge is fixed, and (2) how to bring external knowledge into the NLI model more conveniently in application scenarios. In this paper, we propose a novel joint training framework consisting of a modified graph attention network, called the knowledge graph attention network, and an NLI model. We demonstrate that the proposed method outperforms existing methods that introduce external knowledge, and that it improves the performance of multiple NLI models without requiring additional external knowledge.
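To make the joint training idea concrete, below is a minimal PyTorch sketch of a GAT-style knowledge attention layer trained end to end with an NLI classifier. All module names, tensor shapes, and the fusion strategy are illustrative assumptions; the paper's exact architecture may differ.

```python
# Minimal sketch (assumed PyTorch design, not the paper's exact model):
# a graph-attention layer over external-knowledge neighbors, jointly
# trained with an NLI classifier so the NLI loss shapes how knowledge
# is aggregated.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KnowledgeGraphAttention(nn.Module):
    """One graph-attention layer over KG neighbors of the input pair."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim, bias=False)
        self.attn = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, node, neighbors):
        # node: (batch, dim); neighbors: (batch, k, dim) retrieved from a KG.
        h = self.proj(neighbors)                           # (batch, k, dim)
        q = self.proj(node).unsqueeze(1).expand_as(h)      # (batch, k, dim)
        scores = self.attn(torch.cat([q, h], dim=-1))      # (batch, k, 1)
        alpha = F.softmax(scores, dim=1)                   # attention over neighbors
        return (alpha * h).sum(dim=1)                      # (batch, dim) knowledge summary

class JointNLI(nn.Module):
    """Fuses sentence encodings with attended KG knowledge, end to end."""
    def __init__(self, dim, n_labels=3):
        super().__init__()
        self.kga = KnowledgeGraphAttention(dim)
        self.classifier = nn.Linear(3 * dim, n_labels)

    def forward(self, premise, hypothesis, kg_neighbors):
        # premise/hypothesis: (batch, dim) pooled sentence vectors.
        pair = premise * hypothesis
        knowledge = self.kga(pair, kg_neighbors)
        feats = torch.cat([premise, hypothesis, knowledge], dim=-1)
        return self.classifier(feats)  # entailment / neutral / contradiction
```

Because the attention layer and the classifier share one loss, gradients from the NLI objective decide which pieces of external knowledge get weight, which is the point of joint rather than pipelined training.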


2021
Author(s): Shengchen Jiang, Hongbin Wang, Xiang Hou

Abstract: Existing methods ignore the adverse effect of knowledge graph incompleteness on knowledge graph embedding. In addition, the complexity and large scale of knowledge information hinder the embedding performance of classic graph convolutional networks. In this paper, we analyze the structural characteristics of knowledge graphs and the imbalance of knowledge information. Complex knowledge information demands a model with better learnability rather than linearly weighted qualitative constraints, so we propose an end-to-end relation-enhanced learnable graph self-attention network for knowledge graph embedding. First, we construct a relation-enhanced adjacency matrix to account for the incompleteness of the knowledge graph. Second, a graph self-attention network is employed to obtain the global encoding and relevance ranking of entity node information. Third, we propose the concept of a convolutional knowledge subgraph, which is constructed according to the entity relevance ranking. Finally, we improve the training of the ConvKB model by changing the construction of negative samples, obtaining a more reliable score in the decoder. Experimental results on the FB15k-237 and WN18RR datasets show that the proposed method yields a more comprehensive representation of knowledge information than existing methods, in terms of Hits@10 and MRR.
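A minimal sketch of the first three steps, assuming PyTorch: a relation-enhanced adjacency matrix built from (head, relation, tail) triples, graph self-attention restricted to those edges, and a top-k relevance ranking used to select the convolutional knowledge subgraph. The edge-weighting scheme and the top-k rule are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative sketch: relation-enhanced adjacency + self-attention
# relevance ranking for subgraph selection. Names and rules are assumed.
import torch
import torch.nn.functional as F

def relation_enhanced_adjacency(triples, n_entities, rel_emb):
    # Weight each edge by a learnable per-relation scalar instead of a
    # plain 0/1 adjacency, so sparse (incomplete) regions still get signal.
    A = torch.eye(n_entities)                  # self-loops for every entity
    for h, r, t in triples:
        w = torch.sigmoid(rel_emb[r])          # relation-specific edge weight
        A[h, t] = A[t, h] = w
    return A

def relevance_ranking(entity_emb, A, k):
    # Scaled dot-product self-attention restricted to relation-enhanced
    # edges; entities are then ranked by total attention received.
    d = entity_emb.size(-1)
    scores = entity_emb @ entity_emb.t() / d ** 0.5
    scores = scores.masked_fill(A == 0, float("-inf"))
    alpha = F.softmax(scores, dim=-1)          # (n, n) attention weights
    relevance = alpha.sum(dim=0)               # attention mass per entity
    return relevance.topk(k).indices           # entities kept in the subgraph

# Toy usage: 4 entities, 2 relations, keep a 3-entity subgraph.
triples = [(0, 0, 1), (1, 1, 2), (2, 0, 3)]
rel_emb = torch.randn(2)
A = relation_enhanced_adjacency(triples, 4, rel_emb)
emb = torch.randn(4, 8)
print(relevance_ranking(emb, A, k=3))
```

The selected subgraph would then feed a ConvKB-style decoder; the abstract's final step changes only how that decoder's negative samples are built, not the ranking above.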


2021 ◽ pp. 364-375
Author(s): Yang Ren, Xiaoming Wang, Guangyao Pang, Yaguang Lin, Pengfei Wan
