Improving Relation Extraction by Knowledge Representation Learning

Author(s): Wenxing Hong, Shuyan Li, Zhiqiang Hu, Abdur Rasool, Qingshan Jiang, ...
2020

Author(s): Jing Qian, Gangmin Li, Katie Atkinson, Yong Yue

Knowledge representation learning (KRL) aims to encode the components of a knowledge graph (KG) into a low-dimensional continuous space, and has brought considerable success to applying deep learning to graph embedding. Most well-known KGs contain only positive instances, for space efficiency. Typical KRL techniques, especially translational distance-based models, are trained by discriminating between positive and negative samples; negative sampling is thus a non-trivial step in KG embedding. The quality of the generated negative samples directly influences the performance of the final knowledge representations in downstream tasks such as link prediction and triple classification. This review summarizes current negative sampling methods in KRL and categorizes them into three groups: fixed distribution-based, generative adversarial net (GAN)-based, and cluster sampling. Based on this categorization, we discuss the most prevalent existing approaches and their characteristics.
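
As a concrete illustration of the fixed distribution-based family, the sketch below trains a TransE-style translational distance model with uniform negative sampling, discriminating a positive triple from its corrupted counterpart via a margin ranking loss. All names, sizes, and the example triple are illustrative, not taken from any surveyed system.

```python
import random
import torch

num_entities, num_relations, dim = 1000, 50, 100
entity_emb = torch.nn.Embedding(num_entities, dim)
relation_emb = torch.nn.Embedding(num_relations, dim)

def corrupt(h, r, t):
    """Uniform negative sampling: replace the head or the tail at random."""
    if random.random() < 0.5:
        return random.randrange(num_entities), r, t
    return h, r, random.randrange(num_entities)

def score(h, r, t):
    """TransE plausibility score ||h + r - t||_1 (lower is more plausible)."""
    return torch.norm(entity_emb.weight[h] + relation_emb.weight[r]
                      - entity_emb.weight[t], p=1)

def margin_loss(pos, neg, margin=1.0):
    """Margin ranking loss separating a positive triple from its corruption."""
    return torch.clamp(margin + score(*pos) - score(*neg), min=0.0)

positive = (3, 7, 42)            # (head, relation, tail) taken from the KG
negative = corrupt(*positive)    # its corrupted counterpart
loss = margin_loss(positive, negative)
loss.backward()                  # gradients flow into both embedding tables
```

GAN-based and cluster-sampling methods differ only in how `corrupt` picks the replacement entity: a generator network or a cluster of similar entities replaces the uniform draw.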


2021, Vol 41 (2), pp. 3603-3613
Author(s): Jin Dong, Jian Wang, Sen Chen

The manufacturing industry is the foundation of a country's economic development and prosperity. At present, data in manufacturing enterprises suffer from weak correlation and high redundancy, problems that a knowledge graph can solve effectively. This paper proposes a method for knowledge graph construction in the manufacturing domain based on a knowledge-enhanced word embedding model. The main contributions are as follows: (1) At the algorithmic level, the paper proposes KEWE-BERT, an end-to-end model for joint entity and relation extraction that superimposes the token embeddings and knowledge embeddings output by BERT and TransR to improve knowledge extraction. (2) At the application level, the knowledge representation model ManuOnto and the dataset ManuDT are constructed from real manufacturing scenarios, and KEWE-BERT is used to build a knowledge graph from them. The resulting knowledge graph has rich semantic relations and can be applied in a real production environment. Moreover, KEWE-BERT can extract effective knowledge and patterns from redundant enterprise texts, providing a solution for enterprise data management.
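
The abstract does not give implementation details for KEWE-BERT, but the superimposition step it describes can be sketched as follows. The module name, the dimensions, and the up-projection of TransR vectors to BERT's hidden size are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class TokenKnowledgeFusion(nn.Module):
    """Superimpose BERT token embeddings with TransR knowledge embeddings."""
    def __init__(self, bert_dim=768, kg_dim=100):
        super().__init__()
        # TransR vectors are typically smaller than BERT's hidden size,
        # so project them up before adding (assumed, not from the paper).
        self.proj = nn.Linear(kg_dim, bert_dim)

    def forward(self, token_emb, knowledge_emb):
        # token_emb:     (batch, seq_len, bert_dim), from BERT
        # knowledge_emb: (batch, seq_len, kg_dim), TransR vectors aligned
        #                to tokens; zeros for tokens with no linked entity
        return token_emb + self.proj(knowledge_emb)

fusion = TokenKnowledgeFusion()
tokens = torch.randn(2, 16, 768)
knowledge = torch.randn(2, 16, 100)
fused = fusion(tokens, knowledge)   # (2, 16, 768), fed to the extraction head
```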


Author(s): Yu Zhao, Han Zhou, Ruobing Xie, Fuzhen Zhuang, Qing Li, ...

Author(s): Bo Ouyang, Wenbing Huang, Runfa Chen, Zhixing Tan, Yang Liu, ...

Author(s): Fulian Yin, Yanyan Wang, Jianbo Liu, Marco Tosato

The word similarity task computes the similarity of any pair of words and is a basic technique in natural language processing (NLP). Existing methods are based on word embeddings, which fail to capture polysemy and are heavily influenced by the quality of the corpus. In this paper, we propose a multi-prototype Chinese word representation model (MP-CWR) for word similarity based on a synonym knowledge base, consisting of a knowledge representation module and a word similarity module. For the first module, we propose a dual attention mechanism that combines semantic information for jointly learning word knowledge representations. The MP-CWR model uses synonyms as prior knowledge to supplement the relationships between words, which helps address the challenge of semantic expression when data are insufficient. For the word similarity module, we propose a multi-prototype representation for each word; we then calculate and fuse the conceptual similarities of two words to obtain the final result. Finally, we verify the effectiveness of our model against baseline models on three public datasets. The experiments also demonstrate the stability and scalability of MP-CWR under different corpora.
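
MP-CWR's exact fusion rule is not given in this abstract; the sketch below only illustrates the general multi-prototype idea, assuming max-pooling over pairwise cosine similarities between the two words' prototype vectors as the fusion step.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two prototype vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def multi_prototype_similarity(protos_a, protos_b):
    """Fuse pairwise conceptual similarities; max-pooling is assumed here."""
    return max(cosine(a, b) for a in protos_a for b in protos_b)

# Toy usage: word A has two sense prototypes, word B has one.
word_a = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
word_b = [np.array([0.9, 0.1])]
print(multi_prototype_similarity(word_a, word_b))  # ~0.994
```

Max-pooling rewards a match on any one sense, which is what lets a multi-prototype model score polysemous words correctly where a single-vector model averages the senses away.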

