Feature Similarity Learning Enhanced Knowledge Graph-based Convolutional Networks for Recommendation

Author(s):  
Boya Shi ◽  
Zhong Zheng ◽  
Tao Tian ◽  
Wanru Du

Semantic Web ◽  
2021 ◽  
pp. 1-20
Author(s):  
Pierre Monnin ◽  
Chedy Raïssi ◽  
Amedeo Napoli ◽  
Adrien Coulet

Knowledge graphs are freely aggregated, published, and edited in the Web of data, and thus may overlap. Hence, a key task resides in aligning (or matching) their content. This task encompasses the identification, within an aggregated knowledge graph, of nodes that are equivalent, more specific, or weakly related. In this article, we propose to match nodes within a knowledge graph by (i) learning node embeddings with Graph Convolutional Networks (GCNs) such that similar nodes have low distances in the embedding space, and (ii) clustering nodes based on their embeddings, in order to suggest alignment relations between nodes of the same cluster. We conducted experiments with this approach on the real-world application of aligning knowledge in the field of pharmacogenomics, which motivated our study. We particularly investigated the interplay between domain knowledge and GCN models with the following two focuses. First, we applied inference rules associated with domain knowledge, independently or combined, before learning node embeddings, and we measured the improvements in matching results. Second, while our GCN model is agnostic to the exact alignment relations (e.g., equivalence, weak similarity), we observed that distances in the embedding space are coherent with the “strength” of these different relations (e.g., smaller distances for equivalences), letting us consider clustering and distances in the embedding space as a means to suggest alignment relations in our case study.
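
A minimal sketch of the two-step pipeline the abstract describes: learn node embeddings with a GCN so that nodes known to be related end up close in the embedding space, then cluster the embeddings to suggest alignment candidates. The graph, features, training pairs, and hyperparameters below are hypothetical placeholders, not the authors' implementation.

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import AgglomerativeClustering

n_nodes, in_dim, emb_dim = 100, 32, 16
adj = torch.eye(n_nodes)                            # placeholder adjacency (self-loops only)
feats = torch.randn(n_nodes, in_dim)                # placeholder node features
pos_pairs = torch.randint(0, n_nodes, (200, 2))     # pairs assumed to be related
neg_pairs = torch.randint(0, n_nodes, (200, 2))     # pairs assumed to be unrelated

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, emb_dim)
        self.w2 = torch.nn.Linear(emb_dim, emb_dim)

    def forward(self, a, x):
        a_norm = a / a.sum(dim=1, keepdim=True).clamp(min=1)  # row-normalised propagation
        h = F.relu(self.w1(a_norm @ x))
        return self.w2(a_norm @ h)

model = GCN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    emb = model(adj, feats)
    d_pos = F.pairwise_distance(emb[pos_pairs[:, 0]], emb[pos_pairs[:, 1]])
    d_neg = F.pairwise_distance(emb[neg_pairs[:, 0]], emb[neg_pairs[:, 1]])
    loss = d_pos.mean() + F.relu(1.0 - d_neg).mean()  # pull related pairs together, push others apart
    opt.zero_grad()
    loss.backward()
    opt.step()

# Cluster the learned embeddings; nodes in the same cluster become alignment candidates.
emb = model(adj, feats).detach().numpy()
labels = AgglomerativeClustering(n_clusters=None, distance_threshold=0.5).fit_predict(emb)
```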


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Xuefei Wu ◽  
Mingjiang Liu ◽  
Bo Xin ◽  
Zhangqing Zhu ◽  
Gang Wang

Zero-shot learning (ZSL) is a powerful and promising learning paradigm for classifying instances that have not been seen in training. Although graph convolutional networks (GCNs) have recently shown great potential for ZSL tasks, these models use constant connection weights between the nodes in the knowledge graph, so all neighbor nodes contribute equally when classifying the central node. In this study, we apply an attention mechanism to adjust the connection weights adaptively and learn more important information for classifying unseen target nodes. First, we propose an attention graph convolutional network for zero-shot learning (AGCNZ) by integrating the attention mechanism and GCN directly. Then, in order to prevent the dilution of knowledge from distant nodes, we apply the dense graph propagation (DGP) model to the ZSL tasks and propose an attention dense graph propagation model for zero-shot learning (ADGPZ). Finally, we propose a modified loss function with a relaxation factor to further improve the performance of the learned classifier. Experimental results under different pre-training settings verify the effectiveness of the proposed attention-based models for ZSL.
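
A minimal sketch of the core idea described above: replacing constant GCN connection weights with attention weights so that neighbors contribute unequally when classifying the central node. This is a generic GAT-style layer with assumed shapes and names, not the exact AGCNZ/ADGPZ formulation.

```python
import torch
import torch.nn.functional as F

class AttentionGraphLayer(torch.nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = torch.nn.Linear(in_dim, out_dim, bias=False)
        self.attn = torch.nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        h = self.proj(x)                                        # (N, out_dim)
        n = h.size(0)
        # attention score for every (center, neighbor) pair
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = F.leaky_relu(self.attn(pairs)).squeeze(-1)     # (N, N)
        scores = scores.masked_fill(adj == 0, float('-inf'))    # keep existing edges only
        alpha = torch.softmax(scores, dim=-1)                   # adaptive connection weights
        return F.elu(alpha @ h)

# Usage with a hypothetical knowledge graph of 5 concept nodes:
adj = torch.ones(5, 5)          # placeholder adjacency (self-loops included)
x = torch.randn(5, 300)         # e.g., word embeddings of the class names
out = AttentionGraphLayer(300, 64)(x, adj)   # (5, 64) attention-aggregated representations
```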


Author(s):  
Max Berrendorf ◽  
Evgeniy Faerman ◽  
Valentyn Melnychuk ◽  
Volker Tresp ◽  
Thomas Seidl

2021 ◽  
Vol 427 ◽  
pp. 118-130
Author(s):  
Zhifei Li ◽  
Hai Liu ◽  
Zhaoli Zhang ◽  
Tingting Liu ◽  
Jiangbo Shu

Author(s):  
Junyu Gao ◽  
Tianzhu Zhang ◽  
Changsheng Xu

Recently, with the ever-growing number of action categories, zero-shot action recognition (ZSAR) has been achieved by automatically mining the underlying concepts (e.g., actions, attributes) in videos. However, most existing methods only exploit the visual cues of these concepts and ignore external knowledge information for modeling explicit relationships between them. In fact, humans have a remarkable ability to transfer knowledge learned from familiar classes to recognize unfamiliar classes. To narrow the knowledge gap between existing methods and humans, we propose an end-to-end ZSAR framework based on a structured knowledge graph, which can jointly model the action-attribute, action-action, and attribute-attribute relationships. To effectively leverage the knowledge graph, we design a novel Two-Stream Graph Convolutional Network (TS-GCN) consisting of a classifier branch and an instance branch. Specifically, the classifier branch takes the semantic-embedding vectors of all the concepts as input and generates the classifiers for action categories. The instance branch maps the attribute embeddings and scores of each video instance into an attribute-feature space. Finally, the generated classifiers are evaluated on the attribute features of each video, and a classification loss is adopted for optimizing the whole network. In addition, a self-attention module is utilized to model the temporal information of videos. Extensive experimental results on three realistic action benchmarks, Olympic Sports, HMDB51, and UCF101, demonstrate the favorable performance of our proposed framework.
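
A rough sketch, under assumed dimensions and a placeholder graph, of the two-branch design described above: a classifier branch that turns concept word embeddings into per-action classifiers via graph convolution, and an instance branch that maps a video's attribute scores into the same attribute-feature space. All names here are illustrative, not the published TS-GCN code.

```python
import torch
import torch.nn.functional as F

n_actions, n_attrs, word_dim, feat_dim = 10, 30, 300, 256
n_concepts = n_actions + n_attrs
adj = torch.eye(n_concepts)                       # placeholder action/attribute graph
concept_emb = torch.randn(n_concepts, word_dim)   # semantic embeddings of all concepts

class ClassifierBranch(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.w1 = torch.nn.Linear(word_dim, feat_dim)
        self.w2 = torch.nn.Linear(feat_dim, feat_dim)

    def forward(self, a, x):
        a = a / a.sum(dim=1, keepdim=True).clamp(min=1)
        h = F.relu(self.w1(a @ x))
        return self.w2(a @ h)[:n_actions]          # one classifier vector per action category

class InstanceBranch(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.map = torch.nn.Linear(n_attrs, feat_dim)

    def forward(self, attr_scores):                # (batch, n_attrs) per-video attribute scores
        return self.map(attr_scores)               # features in the attribute-feature space

classifiers = ClassifierBranch()(adj, concept_emb)        # (n_actions, feat_dim)
video_feats = InstanceBranch()(torch.rand(4, n_attrs))    # (4, feat_dim)
logits = video_feats @ classifiers.t()                     # evaluate classifiers on each video
loss = F.cross_entropy(logits, torch.randint(0, n_actions, (4,)))
```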


2021 ◽  
pp. 1-12
Author(s):  
Xiaojun Chen ◽  
Ling Ding ◽  
Yang Xiang

Knowledge graph reasoning, or completion, aims at inferring missing facts based on existing ones in a knowledge graph. In this work, we focus on the problem of open-world knowledge graph reasoning, a task that reasons about entities absent from the KG at training time (unseen entities). Unfortunately, the performance of most existing reasoning methods on this problem turns out to be unsatisfactory. Recently, some works use graph convolutional networks to obtain the embeddings of unseen entities for prediction tasks. Graph convolutional networks gather information from the entity’s neighborhood; however, they neglect the unequal natures of neighboring nodes. To resolve this issue, we present an attention-based method named NAKGR, which leverages neighborhood information to generate entity and relation representations. The proposed model is an encoder-decoder architecture. Specifically, the encoder devises a graph attention mechanism to aggregate neighboring nodes’ information with a weighted combination. The decoder employs an energy function to predict the plausibility of each triple. Benchmark experiments show that NAKGR achieves significant improvements on open-world reasoning tasks. In addition, our model also performs well on closed-world reasoning tasks.
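
A compact sketch of the encoder-decoder idea outlined in the abstract: an encoder that aggregates a node's neighbors with attention weights rather than equally, and a decoder that scores triples with an energy function. The DistMult-style scoring used here is only an assumed stand-in for the paper's energy function, and all tensors are placeholders.

```python
import torch

emb_dim = 64

def encode(center, neighbors):
    """Attention-weighted combination of neighbor embeddings (weighted, not uniform)."""
    scores = neighbors @ center                    # (n_neighbors,) relevance scores
    alpha = torch.softmax(scores, dim=0)           # unequal neighbor importance
    return center + alpha @ neighbors              # aggregated entity representation

def decode(head, rel, tail):
    """DistMult-style plausibility score for a triple (higher = more plausible)."""
    return (head * rel * tail).sum()

# Hypothetical unseen entity described only by its neighborhood:
neighbors = torch.randn(5, emb_dim)
center_init = neighbors.mean(dim=0)                # crude initialization for the unseen node
entity = encode(center_init, neighbors)
score = decode(entity, torch.randn(emb_dim), torch.randn(emb_dim))
```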


2021 ◽  
Vol 11 (16) ◽  
pp. 7734
Author(s):  
Ningyi Mao ◽  
Wenti Huang ◽  
Hai Zhong

Distantly supervised relation extraction is the most popular technique for identifying the semantic relation between two entities. Most prior models focus only on the supervision information present in training sentences. In addition to training sentences, external lexical resources and knowledge graphs often contain other relevant prior knowledge; however, relation extraction models usually ignore such readily available information. Moreover, previous works only utilize a selective attention mechanism over sentences to alleviate the impact of noise; they do not consider the implicit interaction between sentences and relation facts. In this paper, (1) a knowledge-guided graph convolutional network is proposed based on a word-level attention mechanism to encode the sentences. It captures the key words and cue phrases to generate expressive sentence-level features by attending to the relation indicators obtained from the external lexical resource. (2) A knowledge-guided sentence selector is proposed, which explores the semantic and structural information of triples from the knowledge graph as sentence-level knowledge attention to distinguish the importance of each individual sentence. Experimental results on two widely used datasets, NYT-FB and GDS, show that our approach is able to efficiently use prior knowledge from the external lexical resource and knowledge graph to enhance the performance of distantly supervised relation extraction.
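
A schematic sketch, with assumed shapes and names, of the two attention levels the abstract describes: word-level attention guided by a relation-indicator embedding from an external lexical resource, and sentence-level attention guided by a knowledge-graph triple embedding to weight the sentences in a bag. It is not the authors' model, only an illustration of the guidance mechanism.

```python
import torch

def word_attention(word_states, indicator):
    """Weight words by their similarity to a relation-indicator embedding."""
    alpha = torch.softmax(word_states @ indicator, dim=0)    # (n_words,)
    return alpha @ word_states                               # sentence-level feature

def sentence_attention(sent_feats, triple_emb):
    """Weight sentences in a bag by relevance to the KG triple they may express."""
    beta = torch.softmax(sent_feats @ triple_emb, dim=0)     # (n_sentences,)
    return beta @ sent_feats                                 # bag-level feature

dim = 128
bag = [torch.randn(20, dim), torch.randn(15, dim)]           # two sentences (word states)
indicator = torch.randn(dim)                                 # relation-indicator embedding
triple_emb = torch.randn(dim)                                # embedding of (head, rel, tail)

sent_feats = torch.stack([word_attention(s, indicator) for s in bag])
bag_feat = sentence_attention(sent_feats, triple_emb)        # fed to the relation classifier
```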


Author(s):  
Yongguo Ling ◽  
Zhiming Luo ◽  
Yaojin Lin ◽  
Shaozi Li

The challenges of visible-thermal person re-identification (VT-ReID) lie in the inter-modality discrepancy and the intra-modality variations. Appropriate metric learning plays a crucial role in optimizing the feature similarity between the two modalities. However, most existing metric learning-based methods mainly constrain the similarity between individual instances or class centers, which is inadequate for exploring the rich data relationships in cross-modality data. Besides, most of these methods fail to consider the importance of different pairs, making the optimization inefficient and ineffective. To address these issues, we propose a Multi-Constraint (MC) similarity learning method that jointly considers the cross-modality relationships from three different aspects, i.e., Instance-to-Instance (I2I), Center-to-Instance (C2I), and Center-to-Center (C2C). Moreover, we devise an Adaptive Weighting Loss (AWL) function to implement the MC constraints efficiently. In the AWL, we first use adaptive margin pair mining to select informative pairs and then adaptively adjust the weights of the mined pairs based on their similarity. Finally, the mined and weighted pairs are used for metric learning. Extensive experiments on two benchmark datasets demonstrate the superior performance of the proposed method over state-of-the-art methods.
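
A minimal sketch of the three constraint types named above, Instance-to-Instance, Center-to-Instance, and Center-to-Center, combined with a simple similarity-based weighting of pairs. The weighting rule, margin, and negative sampling here are illustrative assumptions, not the paper's Adaptive Weighting Loss or mining strategy.

```python
import torch
import torch.nn.functional as F

def class_centers(feats, labels, n_classes):
    return torch.stack([feats[labels == c].mean(dim=0) for c in range(n_classes)])

def weighted_pair_loss(anchor, positive, negative, margin=0.3):
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    # harder pairs (large positive distance, small negative distance) receive larger weight
    w = torch.softmax(d_pos - d_neg, dim=0).detach()
    return (w * F.relu(d_pos - d_neg + margin)).sum()

# Hypothetical mini-batch: visible and thermal features sharing identity labels.
n_ids, dim = 4, 128
vis, thr = torch.randn(16, dim), torch.randn(16, dim)
labels = torch.arange(n_ids).repeat_interleave(4)

c_vis = class_centers(vis, labels, n_ids)
c_thr = class_centers(thr, labels, n_ids)

i2i = weighted_pair_loss(vis, thr, thr.flip(0))              # Instance-to-Instance
c2i = weighted_pair_loss(c_vis[labels], thr, thr.flip(0))    # Center-to-Instance
c2c = weighted_pair_loss(c_vis, c_thr, c_thr.flip(0))        # Center-to-Center
loss = i2i + c2i + c2c
```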

