An Adversarial Transfer Network for Knowledge Representation Learning

Author(s):  
Huijuan Wang ◽  
Shuangyin Li ◽  
Rong Pan
2020 ◽  
Author(s):  
Jing Qian ◽  
Gangmin Li ◽  
Katie Atkinson ◽  
Yong Yue

Knowledge representation learning (KRL) aims at encoding the components of a knowledge graph (KG) into a low-dimensional continuous space, and has brought considerable success in applying deep learning to graph embedding. Most well-known KGs contain only positive instances, for space efficiency. Typical KRL techniques, especially translational distance-based models, are trained by discriminating between positive and negative samples. Negative sampling is therefore a non-trivial step in KG embedding: the quality of the generated negative samples directly influences the performance of the final knowledge representations in downstream tasks such as link prediction and triple classification. This review summarizes current negative sampling methods in KRL and categorizes them into three sorts: fixed distribution-based, generative adversarial net (GAN)-based, and cluster sampling. Based on this categorization, we discuss the most prevalent existing approaches and their characteristics.
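The simplest of the three sorts, fixed distribution-based sampling, can be illustrated concretely. The sketch below shows uniform negative sampling as used when training translational models: corrupt the head or tail of a positive triple with a uniformly chosen entity, rejecting corruptions that happen to be known positives. The entity names and triples are invented toy data, not from any benchmark KG.

```python
import random

def corrupt_triple(triple, entities, true_triples, rng=random):
    """Uniform (fixed-distribution) negative sampling: replace the head
    or tail of a positive triple with a uniformly sampled entity,
    rejecting candidates that are themselves known positive triples."""
    h, r, t = triple
    while True:
        e = rng.choice(entities)
        # Corrupt head or tail with equal probability
        candidate = (e, r, t) if rng.random() < 0.5 else (h, r, e)
        if candidate not in true_triples:
            return candidate

# Toy knowledge graph: entities plus positive triples only
entities = ["alice", "bob", "carol", "paris", "london"]
positives = {("alice", "lives_in", "paris"),
             ("bob", "lives_in", "london")}

neg = corrupt_triple(("alice", "lives_in", "paris"), entities, positives)
```

GAN-based methods replace the uniform `rng.choice` with a learned generator that proposes harder negatives, at the cost of adversarial training overhead.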


Author(s):  
Yu Zhao ◽  
Han Zhou ◽  
Ruobing Xie ◽  
Fuzhen Zhuang ◽  
Qing Li ◽  
...  

Author(s):  
Bo Ouyang ◽  
Wenbing Huang ◽  
Runfa Chen ◽  
Zhixing Tan ◽  
Yang Liu ◽  
...  

Author(s):  
Fulian Yin ◽  
Yanyan Wang ◽  
Jianbo Liu ◽  
Marco Tosato

The word similarity task calculates the similarity of any pair of words and is a basic technology of natural language processing (NLP). Existing methods are based on word embeddings, which fail to capture polysemy and are greatly influenced by the quality of the corpus. In this paper, we propose a multi-prototype Chinese word representation model (MP-CWR) for word similarity based on a synonym knowledge base, comprising a knowledge representation module and a word similarity module. For the first module, we propose a dual attention mechanism that combines semantic information for jointly learning word knowledge representations. The MP-CWR model uses synonyms as prior knowledge to supplement the relationships between words, which helps address the challenge of semantic expression under insufficient data. For the word similarity module, we propose a multi-prototype representation for each word; we then calculate and fuse the conceptual similarities of two words to obtain the final result. Finally, we verify the effectiveness of our model against baseline models on three public data sets. The experiments also demonstrate the stability and scalability of our MP-CWR model across different corpora.
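The multi-prototype idea in the similarity module can be sketched as follows. Each word carries several prototype vectors (one per sense or concept); similarity is scored over all concept pairs and then fused. The max-pooling fusion and the two-dimensional toy vectors below are illustrative assumptions, not the paper's exact fusion rule.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def multi_prototype_similarity(protos_a, protos_b):
    """Score every concept pair between the two words' prototype
    vectors, then fuse by taking the maximum pairwise similarity
    (one common fusion choice; MP-CWR's may differ)."""
    return max(cosine(u, v) for u in protos_a for v in protos_b)

# "bank" with two sense prototypes vs. "river" with one
bank = [[0.9, 0.1], [0.1, 0.9]]   # finance sense, riverside sense
river = [[0.0, 1.0]]
score = multi_prototype_similarity(bank, river)
```

A single-prototype embedding would average the two senses of "bank" and underestimate its similarity to "river"; keeping prototypes separate lets the matching sense dominate.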


Author(s):  
Ruobing Xie ◽  
Zhiyuan Liu ◽  
Huanbo Luan ◽  
Maosong Sun

Entity images could provide significant visual information for knowledge representation learning. Most conventional methods learn knowledge representations merely from structured triples, ignoring rich visual information extracted from entity images. In this paper, we propose a novel Image-embodied Knowledge Representation Learning model (IKRL), where knowledge representations are learned with both triple facts and images. More specifically, we first construct representations for all images of an entity with a neural image encoder. These image representations are then integrated into an aggregated image-based representation via an attention-based method. We evaluate our IKRL models on knowledge graph completion and triple classification. Experimental results demonstrate that our models outperform all baselines on both tasks, which indicates the significance of visual information for knowledge representations and the capability of our models in learning knowledge representations with images.
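The attention-based aggregation step described above can be sketched in a few lines: score each image embedding by its compatibility with the entity's structure-based embedding, normalize the scores with a softmax, and take the weighted sum. Using a plain dot product as the compatibility score is an assumption for illustration; IKRL's actual attention and encoder details are in the paper.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def aggregate_images(image_vecs, entity_vec):
    """Attention-based aggregation (sketch): weight each image
    embedding by its dot-product compatibility with the entity's
    structure-based embedding, then return the weighted sum."""
    scores = [sum(a * b for a, b in zip(v, entity_vec)) for v in image_vecs]
    weights = softmax(scores)
    dim = len(image_vecs[0])
    return [sum(w * v[i] for w, v in zip(weights, image_vecs))
            for i in range(dim)]

# Three toy image embeddings for one entity; the first aligns
# best with the entity's structural embedding
images = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
entity = [1.0, 0.0]
agg = aggregate_images(images, entity)
```

Because the weights are entity-conditioned, noisy or irrelevant images receive low attention instead of corrupting the aggregated representation.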


2021 ◽  
Author(s):  
Wenxing Hong ◽  
Shuyan Li ◽  
Zhiqiang Hu ◽  
Abdur Rasool ◽  
Qingshan Jiang ◽  
...  
