Is Visual Context Really Helpful for Knowledge Graph? A Representation Learning Perspective

2021
Author(s):  
Meng Wang ◽  
Sen Wang ◽  
Han Yang ◽  
Zheng Zhang ◽  
Xi Chen ◽  
...

Author(s):  
Bo Wang ◽  
Tao Shen ◽  
Guodong Long ◽  
Tianyi Zhou ◽  
Ying Wang ◽  
...  

IEEE Access ◽ 2020 ◽ Vol 8 ◽ pp. 32816-32825

Author(s):  
Seungmin Seo ◽  
Byungkook Oh ◽  
Kyong-Ho Lee

2021 ◽ Vol 2021 ◽ pp. 1-13
Author(s):  
Luogeng Tian ◽  
Bailong Yang ◽  
Xinli Yin ◽  
Kai Kang ◽  
Jing Wu

Most previous embedding-based entity prediction methods do not train on local core relationships, which leaves a gap in end-to-end training. To address this problem, we propose an end-to-end knowledge graph embedding method that combines local graph convolution with global cross learning, called the TransC graph convolutional network (TransC-GCN). First, multiple local semantic spaces are partitioned according to the largest neighborhood. Second, a translation model maps the local entities and relations into a cross vector, which serves as the input to the GCN. Third, by training on the local semantic relations, the best entities and strongest relations are found, and the optimal ranking of entity-relation combinations is obtained by evaluating a posterior loss function based on mutual information entropy. Experiments show that the method extracts local entity features more accurately through the convolution operations of a lightweight convolutional neural network, and that max pooling captures the strong signals in local features while avoiding globally redundant ones. Compared with mainstream triple prediction baseline models, the proposed algorithm reduces computational complexity while remaining robust, and it improves the inference accuracy of entities and relations by 8.1% and 4.4%, respectively. In short, the method not only effectively extracts local node and relation features from a knowledge graph but also satisfies the requirements of multilayer traversal and relation derivation.
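The pipeline described in this abstract can be made concrete with a short sketch: a translation-style (TransE-like) embedding produces entity and relation vectors, the concatenated "cross vector" of each local triple feeds a one-layer graph convolution over the local neighborhood, and max pooling keeps the strongest local signal. This is a minimal illustration under assumed shapes; the class name, dimensions, adjacency handling, and scoring head are placeholders, not the authors' implementation.

```python
# Minimal sketch of the TransC-GCN idea (illustrative assumptions only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransCGCNSketch(nn.Module):
    def __init__(self, n_entities, n_relations, dim=64):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)   # translation-model entity vectors
        self.rel = nn.Embedding(n_relations, dim)  # translation-model relation vectors
        self.gcn = nn.Linear(3 * dim, dim)         # GCN weight on the cross vector [h; r; t]
        self.score = nn.Linear(dim, 1)             # per-triple plausibility score

    def forward(self, heads, rels, tails, adj):
        # Translation step: embed each element of the local triples.
        h, r, t = self.ent(heads), self.rel(rels), self.ent(tails)
        cross = torch.cat([h, r, t], dim=-1)       # (n_triples, 3*dim) cross vectors
        # Local graph convolution: mix each triple's features with its neighbors'
        # (adj is a row-normalized n_triples x n_triples adjacency matrix).
        hidden = F.relu(self.gcn(adj @ cross))     # (n_triples, dim)
        # Max pooling over the local neighborhood keeps the strongest feature signal.
        pooled, _ = hidden.max(dim=0)              # (dim,)
        return self.score(hidden).squeeze(-1), pooled

# Toy usage: three candidate triples in one fully connected local semantic space.
model = TransCGCNSketch(n_entities=10, n_relations=4)
heads, rels, tails = torch.tensor([0, 1, 2]), torch.tensor([0, 1, 0]), torch.tensor([3, 4, 5])
adj = torch.full((3, 3), 1.0 / 3)
scores, pooled = model(heads, rels, tails, adj)
print(scores.shape, pooled.shape)                  # torch.Size([3]) torch.Size([64])
```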


Author(s):  
Ruobing Xie ◽  
Zhiyuan Liu ◽  
Huanbo Luan ◽  
Maosong Sun

Entity images could provide significant visual information for knowledge representation learning. Most conventional methods learn knowledge representations merely from structured triples, ignoring rich visual information extracted from entity images. In this paper, we propose a novel Image-embodied Knowledge Representation Learning model (IKRL), where knowledge representations are learned with both triple facts and images. More specifically, we first construct representations for all images of an entity with a neural image encoder. These image representations are then integrated into an aggregated image-based representation via an attention-based method. We evaluate our IKRL models on knowledge graph completion and triple classification. Experimental results demonstrate that our models outperform all baselines on both tasks, which indicates the significance of visual information for knowledge representations and the capability of our models in learning knowledge representations with images.
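The attention-based aggregation step can be sketched compactly: each image of an entity is first encoded to a feature vector, and the per-image vectors are combined into one image-based entity representation by weighting each image against the entity's structure-based (triple-learned) embedding. The function name, shapes, and the simple dot-product attention below are illustrative assumptions; the paper's exact encoder and attention form are not reproduced here.

```python
# Hypothetical sketch of attention-based image aggregation in the spirit of IKRL.
import torch
import torch.nn.functional as F

def aggregate_images(image_feats: torch.Tensor, ent_struct: torch.Tensor) -> torch.Tensor:
    """image_feats: (n_images, dim) encoded images of one entity;
    ent_struct: (dim,) the entity's structure-based embedding.
    Returns the attention-weighted image-based representation, shape (dim,)."""
    # Each image's attention score = its compatibility with the structure embedding.
    scores = image_feats @ ent_struct      # (n_images,)
    weights = F.softmax(scores, dim=0)     # normalize over the entity's images
    return weights @ image_feats           # weighted sum over images, (dim,)

# Toy usage: an entity with five images encoded as 64-d features.
feats, struct = torch.randn(5, 64), torch.randn(64)
img_repr = aggregate_images(feats, struct)
print(img_repr.shape)                      # torch.Size([64])
```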

