Exploring Encoder-Decoder Model for Distant Supervised Relation Extraction

Author(s):  
Sen Su ◽  
Ningning Jia ◽  
Xiang Cheng ◽  
Shuguang Zhu ◽  
Ruiping Li

In this paper, we present an encoder-decoder model for distant supervised relation extraction. Given an entity pair and its sentence bag as input, in the encoder component, we employ a convolutional neural network to extract the features of the sentences in the sentence bag and merge them into a bag representation. In the decoder component, we utilize a long short-term memory network to model relation dependencies and predict the target relations in a sequential manner. In particular, to enable the sequential prediction of relations, we introduce a measure to quantify the amount of information each relation carries in its sentence bag, and use this measure to determine the order of the relations of a sentence bag during model training. Moreover, we incorporate the attention mechanism into our model to dynamically adjust the bag representation, reducing the impact of sentences whose corresponding relations have already been predicted. Extensive experiments on a popular dataset show that our model achieves significant improvement over state-of-the-art methods.
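Two ingredients this abstract names can be sketched concretely: the attention-weighted merge of sentence features into a bag representation, and the information-based ordering of relations for sequential decoding. The following is a minimal pure-Python sketch; the function names (`bag_representation`, `relation_order`) are hypothetical, and the toy vectors stand in for CNN-extracted sentence features.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def bag_representation(sentence_vecs, scores):
    """Merge per-sentence feature vectors into one bag vector using
    attention weights derived from the raw scores."""
    weights = softmax(scores)
    dim = len(sentence_vecs[0])
    return [sum(w * v[i] for w, v in zip(weights, sentence_vecs))
            for i in range(dim)]

def relation_order(info_by_relation):
    """Order a bag's relations for sequential decoding,
    most informative relation first."""
    return sorted(info_by_relation, key=info_by_relation.get, reverse=True)

# Equal scores reduce the attention merge to a plain average.
bag = bag_representation([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])  # -> [0.5, 0.5]
order = relation_order({"founder_of": 2.3, "born_in": 0.7})
```

Lowering a predicted relation's sentence scores before the next decoding step is what "dynamically adjusting the bag representation" amounts to in this sketch.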

2017 ◽  
Vol 29 (7) ◽  
pp. 1964-1985 ◽  
Author(s):  
Dengchao He ◽  
Hongjun Zhang ◽  
Wenning Hao ◽  
Rui Zhang ◽  
Kai Cheng

Distant supervision, a widely applied approach in the field of relation extraction, can automatically generate large amounts of labeled training corpus with minimal manual effort. However, the labeled training corpus may contain many false-positive instances, which hurt the performance of relation extraction. Moreover, traditional feature-based distant supervised approaches rely on hand-designed features produced by natural language processing tools, which may also cause poor performance. To address these two shortcomings, we propose a customized attention-based long short-term memory network. Our approach adopts word-level attention to learn better data representations for distant supervised relation extraction without manually designed features, and utilizes instance-level attention to tackle the problem of false-positive data. Experimental results demonstrate that our proposed approach is effective and achieves better performance than traditional methods.
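The instance-level attention idea can be illustrated as a softmax over affinity scores between each sentence (instance) and the target relation, so likely false positives receive small weights in the pooled representation. A toy pure-Python sketch, assuming precomputed instance vectors and a relation query vector (in the actual model both come from the learned network):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def instance_attention(instance_vecs, relation_query):
    """Weight each instance by its affinity to the target relation,
    then pool the instances with those weights."""
    scores = [dot(v, relation_query) for v in instance_vecs]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(instance_vecs[0])
    pooled = [sum(w * v[i] for w, v in zip(weights, instance_vecs))
              for i in range(dim)]
    return weights, pooled

# The instance aligned with the relation query dominates the pooled vector.
weights, pooled = instance_attention([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0])
```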


2021 ◽  
Vol 12 (2) ◽  
pp. 1-18
Author(s):  
Mingfei Teng ◽  
Hengshu Zhu ◽  
Chuanren Liu ◽  
Hui Xiong

As an emerging measure of proactive talent management, talent turnover prediction is critically important for companies to attract, engage, and retain talents in order to prevent the loss of intellectual capital. While tremendous efforts have been made in this direction, it is not clear how to model the influence of employees’ turnover within multiple organizational social networks. In this article, we study how to exploit turnover contagion by developing a Turnover Influence-based Neural Network (TINN) for enhancing organizational turnover prediction. Specifically, TINN constructs a turnover similarity network which is then fused with multiple organizational social networks. The fusion is achieved either through learning a hidden turnover influence network or through integrating the turnover influence on multiple networks. Taking advantage of the Graph Convolutional Network and the Long Short-Term Memory network, TINN can dynamically model the impact of social influence on talent turnover. Meanwhile, the utilization of the attention mechanism improves interpretability, providing insights into the impact of different networks over time on future turnovers. Finally, we conduct extensive experiments in real-world settings to evaluate TINN. The results validate the effectiveness of our approach to enhancing organizational turnover prediction. Also, our case studies reveal some interpretable findings, such as the importance of each network or hidden state that potentially impacts future organizational turnovers.
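The graph-convolutional building block that TINN draws on can be sketched as the standard GCN propagation rule, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W). Below is a minimal pure-Python version over list-of-list matrices; it is illustrative only and not the paper's multi-network architecture.

```python
def gcn_layer(adj, feats, weight):
    """One graph-convolution step: add self-loops, symmetrically
    normalize the adjacency, propagate features, apply a linear map
    and a ReLU."""
    n = len(adj)
    # A + I: add self-loops
    a = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    deg = [sum(row) for row in a]
    # D^{-1/2} (A + I) D^{-1/2}
    norm = [[a[i][j] / (deg[i] ** 0.5 * deg[j] ** 0.5) for j in range(n)]
            for i in range(n)]
    f_dim = len(feats[0])
    # propagate: H_agg = A_norm @ H
    agg = [[sum(norm[i][k] * feats[k][j] for k in range(n))
            for j in range(f_dim)] for i in range(n)]
    out_dim = len(weight[0])
    # transform: H' = ReLU(H_agg @ W)
    return [[max(0.0, sum(agg[i][k] * weight[k][j] for k in range(f_dim)))
             for j in range(out_dim)] for i in range(n)]
```

On a two-node graph with identity weights, both nodes converge to the mean of their neighborhood features, which is the smoothing behavior that lets turnover influence spread across connected employees.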


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yesol Park ◽  
Joohong Lee ◽  
Heesang Moon ◽  
Yong Suk Choi ◽  
Mina Rho

With recent advances in biotechnology and sequencing technology, the microbial community has been intensively studied and discovered to be associated with many chronic as well as acute diseases. Even though a tremendous number of studies describing the association between microbes and diseases have been published, text mining methods that focus on such associations have rarely been studied. We propose a framework that combines machine learning and natural language processing methods to analyze the association between microbes and diseases. A hierarchical long short-term memory network was used to detect sentences that describe the association. For the detected sentences, two different parse tree-based search methods were combined to find the relation-describing word. The ensemble model of constituency parsing for structural pattern matching and dependency-based relation extraction improved the prediction accuracy. By combining deep learning and parse tree-based extractions, our proposed framework could extract microbe-disease associations with higher accuracy. The evaluation results showed that our system achieved F-scores of 0.8764 and 0.8524 in binary decisions and extracting relation words, respectively. As a case study, we performed a large-scale analysis of the association between microbes and diseases. Additionally, a set of common microbes shared by multiple diseases was identified in this study. This study could provide valuable information about the major microbes that were studied for a specific disease. The code and data are available at https://github.com/DMnBI/mdi_predictor.
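The ensemble of constituency pattern matching and dependency-based extraction can be illustrated with two stand-in heuristics over a tokenized sentence: a global match against known relation patterns, and an entity-anchored search between the microbe and disease mentions, with the anchored result preferred. The function names and the tiny relation-verb list are hypothetical simplifications; the actual system searches full parse trees.

```python
RELATION_VERBS = {"causes", "increases", "decreases", "associated"}

def pattern_extract(tokens, patterns=RELATION_VERBS):
    """Stand-in for constituency pattern matching: first token that
    matches a known relation pattern anywhere in the sentence."""
    for tok in tokens:
        if tok in patterns:
            return tok
    return None

def between_entities_extract(tokens, microbe, disease):
    """Stand-in for dependency-based extraction: first relation verb
    lying between the two entity mentions."""
    i, j = tokens.index(microbe), tokens.index(disease)
    lo, hi = min(i, j), max(i, j)
    for tok in tokens[lo + 1:hi]:
        if tok in RELATION_VERBS:
            return tok
    return None

def ensemble_extract(tokens, microbe, disease):
    """Prefer the entity-anchored result; fall back to the global match."""
    return (between_entities_extract(tokens, microbe, disease)
            or pattern_extract(tokens))
```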


2020 ◽  
Vol 17 (169) ◽  
pp. 20200494 ◽  
Author(s):  
A. S. Fokas ◽  
N. Dikaios ◽  
G. A. Kastis

We introduce a novel methodology for predicting the time evolution of the number of individuals in a given country reported to be infected with SARS-CoV-2. This methodology, which is based on the synergy of explicit mathematical formulae and deep learning networks, yields algorithms whose only input is the existing data in the given country on the cumulative number of individuals reported to be infected. The analytical formulae involve several constant parameters that were determined from the available data using an error-minimizing algorithm. The same data were also used for the training of a bidirectional long short-term memory network. We applied the above methodology to the epidemics in Italy, Spain, France, Germany, USA and Sweden. The significance of these results for evaluating the impact of easing the lockdown measures is discussed.
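The "explicit formula with error-minimized constants" half of the methodology can be illustrated by fitting a logistic cumulative-case curve to data with a simple grid search. The logistic form and the grid are illustrative assumptions here, not the paper's exact formulae or optimizer.

```python
import math

def logistic(t, cap, rate, midpoint):
    """Cumulative-case curve N(t) = cap / (1 + exp(-rate * (t - midpoint)))."""
    return cap / (1.0 + math.exp(-rate * (t - midpoint)))

def fit_logistic(times, cases, caps, rates, midpoints):
    """Grid-search the (cap, rate, midpoint) triple that minimizes
    the squared error against the observed cumulative counts."""
    best, best_err = None, float("inf")
    for cap in caps:
        for rate in rates:
            for mid in midpoints:
                err = sum((logistic(t, cap, rate, mid) - c) ** 2
                          for t, c in zip(times, cases))
                if err < best_err:
                    best, best_err = (cap, rate, mid), err
    return best

# Synthetic data generated from known parameters is recovered exactly
# when those parameters lie on the grid.
times = list(range(41))
cases = [logistic(t, 1000.0, 0.3, 20.0) for t in times]
fit = fit_logistic(times, cases,
                   [800.0, 1000.0, 1200.0], [0.1, 0.3, 0.5],
                   [15.0, 20.0, 25.0])
```

In practice one would use a proper optimizer (e.g. least squares) rather than a grid, but the structure is the same: fix the analytic form, then determine its constants from the reported data.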


2020 ◽  
Vol 34 (01) ◽  
pp. 67-74
Author(s):  
Guibing Guo ◽  
Bowei Chen ◽  
Xiaoyan Zhang ◽  
Zhirong Liu ◽  
Zhenhua Dong ◽  
...  

Paper recommendation is a research topic to provide users with personalized papers of interest. However, most existing approaches treat the title and abstract equally as the input to learn the representation of a paper, ignoring their semantic relationship. In this paper, we regard the abstract as a sequence of sentences, and propose a two-level attentive neural network to capture: (1) the extent to which each word within a sentence is semantically close to the words of the title; and (2) the importance of each sentence in the abstract relative to the title, which is often a good summarization of the abstract. Specifically, we propose a Long Short-Term Memory (LSTM) network with attention to learn the representation of sentences, and integrate a Gated Recurrent Unit (GRU) network with a memory network to learn the long-term sequential sentence patterns of interacted papers for both user and item (paper) modeling. We conduct extensive experiments on two real datasets, and show that our approach outperforms other state-of-the-art approaches in terms of accuracy.
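The two-level attention can be sketched with title-word overlap standing in for learned semantic similarity: score each word against the title, then apply a sentence-level softmax over the per-sentence averages. All names here are hypothetical simplifications of the learned attention described above.

```python
import math

def word_scores(sentence, title_words):
    """Word-level attention stand-in: score each word by whether it
    appears in the title (a proxy for semantic closeness)."""
    return [1.0 if w in title_words else 0.0 for w in sentence]

def sentence_weights(abstract_sentences, title):
    """Sentence-level attention: softmax over each sentence's mean
    word score relative to the title."""
    title_words = set(title)
    raw = [sum(word_scores(s, title_words)) / len(s)
           for s in abstract_sentences]
    m = max(raw)
    exps = [math.exp(r - m) for r in raw]
    total = sum(exps)
    return [e / total for e in exps]

# The sentence sharing words with the title receives the larger weight.
w = sentence_weights([["graph", "attention", "works"],
                      ["we", "ran", "tests"]],
                     ["graph", "attention"])
```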

