Enhancing Joint Entity and Relation Extraction with Language Modeling and Hierarchical Attention

Author(s): Renjun Chi, Bin Wu, Linmei Hu, Yunlei Zhang

Author(s): Devendra Singh Sachan, Manzil Zaheer, Ruslan Salakhutdinov

In this paper, we study bidirectional LSTM networks for the task of text classification using both supervised and semi-supervised approaches. Several prior works have suggested that either complex pretraining schemes using unsupervised methods such as language modeling (Dai and Le 2015; Miyato, Dai, and Goodfellow 2016) or complicated models (Johnson and Zhang 2017) are necessary to achieve high classification accuracy. However, we develop a training strategy that allows even a simple BiLSTM model, when trained with cross-entropy loss, to achieve competitive results compared with more complex approaches. Furthermore, in addition to cross-entropy loss, by using a combination of entropy minimization, adversarial, and virtual adversarial losses for both labeled and unlabeled data, we report state-of-the-art results for the text classification task on several benchmark datasets. In particular, on the ACL-IMDB sentiment analysis and AG-News topic classification datasets, our method outperforms current approaches by a substantial margin. We also show the generality of the mixed objective function by improving the performance on the relation extraction task.
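A minimal PyTorch sketch of such a mixed objective is given below. It is illustrative only, not the authors' code: the BiLSTM classifier architecture, the perturbation norms `eps` and `xi`, and the equal weighting of the four loss terms are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BiLSTMClassifier(nn.Module):
    """Simple BiLSTM text classifier; accepts token ids or precomputed embeddings."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids=None, embeds=None):
        # Accepting embeddings directly lets us add adversarial noise to them.
        if embeds is None:
            embeds = self.embedding(token_ids)
        hidden, _ = self.lstm(embeds)
        return self.classifier(hidden.mean(dim=1)), embeds


def entropy_loss(logits):
    # Entropy minimization: push predictions on unlabeled data toward confidence.
    log_p = F.log_softmax(logits, dim=-1)
    return -(log_p.exp() * log_p).sum(dim=-1).mean()


def adversarial_loss(model, embeds, labels, eps=1.0):
    # Perturb embeddings in the direction that most increases the supervised loss.
    embeds = embeds.detach().requires_grad_(True)
    logits, _ = model(embeds=embeds)
    grad = torch.autograd.grad(F.cross_entropy(logits, labels), embeds)[0]
    adv_logits, _ = model(embeds=embeds.detach() + eps * F.normalize(grad, dim=-1))
    return F.cross_entropy(adv_logits, labels)


def virtual_adversarial_loss(model, embeds, eps=1.0, xi=1e-3):
    # Virtual adversarial training: penalize divergence between predictions on clean
    # and perturbed inputs. No labels are needed, so it also applies to unlabeled data.
    clean_logits, _ = model(embeds=embeds.detach())
    d = torch.randn_like(embeds).requires_grad_(True)
    pert_logits, _ = model(embeds=embeds.detach() + xi * F.normalize(d, dim=-1))
    kl = F.kl_div(F.log_softmax(pert_logits, dim=-1),
                  F.softmax(clean_logits.detach(), dim=-1), reduction="batchmean")
    grad = torch.autograd.grad(kl, d)[0]
    vadv_logits, _ = model(embeds=embeds.detach() + eps * F.normalize(grad.detach(), dim=-1))
    return F.kl_div(F.log_softmax(vadv_logits, dim=-1),
                    F.softmax(clean_logits.detach(), dim=-1), reduction="batchmean")


def mixed_objective(model, labeled_ids, labels, unlabeled_ids):
    # Cross-entropy + adversarial loss on labeled data,
    # entropy minimization + virtual adversarial loss on unlabeled data.
    logits, lab_embeds = model(labeled_ids)
    unlab_logits, unlab_embeds = model(unlabeled_ids)
    return (F.cross_entropy(logits, labels)
            + adversarial_loss(model, lab_embeds, labels)
            + entropy_loss(unlab_logits)
            + virtual_adversarial_loss(model, unlab_embeds))
```

In this sketch the unlabeled terms require only model predictions, which is what allows a single objective to mix labeled and unlabeled batches; in practice the four terms would typically carry tunable weights.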


Author(s): Prachi Jain, Shikhar Murty, Mausam, Soumen Chakrabarti

This paper analyzes the varied performance of Matrix Factorization (MF) on the related tasks of relation extraction and knowledge-base completion, which have recently been unified into a single framework of knowledge-base inference (KBI) [Toutanova et al., 2015]. We first propose a new evaluation protocol that makes comparisons between MF and Tensor Factorization (TF) models fair. We find that this results in a steep drop in MF performance. Our analysis attributes this to the high out-of-vocabulary (OOV) rate of entity pairs in the test folds of commonly used datasets. To alleviate this issue, we propose three extensions to MF. Our best model is a TF-augmented MF model. This hybrid model is robust and obtains strong results across various KBI datasets.
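A minimal PyTorch sketch contrasting the two factorization styles is given below. The embedding dimension, the DistMult-style TF scorer, and the additive way the hybrid combines MF and TF scores are illustrative assumptions, not the paper's exact formulation.

```python
import torch.nn as nn


class MFScorer(nn.Module):
    """MF: each (subject, object) entity pair is an atomic row; score = <pair, relation>."""

    def __init__(self, num_pairs, num_relations, dim=100):
        super().__init__()
        self.pair = nn.Embedding(num_pairs, dim)
        self.rel = nn.Embedding(num_relations, dim)

    def forward(self, pair_ids, rel_ids):
        # An entity pair never seen in training has no learned row, which is the
        # out-of-vocabulary problem the fair evaluation protocol exposes.
        return (self.pair(pair_ids) * self.rel(rel_ids)).sum(-1)


class TFScorer(nn.Module):
    """DistMult-style TF: score = <subject, relation, object> from per-entity embeddings."""

    def __init__(self, num_entities, num_relations, dim=100):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)

    def forward(self, subj_ids, rel_ids, obj_ids):
        # Any pair of known entities can be scored, even if the pair itself is new.
        return (self.ent(subj_ids) * self.rel(rel_ids) * self.ent(obj_ids)).sum(-1)


class TFAugmentedMF(nn.Module):
    """Hybrid scorer (assumed combination): add the MF term when the pair is in
    vocabulary, otherwise fall back to the TF term alone."""

    def __init__(self, num_entities, num_pairs, num_relations, dim=100):
        super().__init__()
        self.mf = MFScorer(num_pairs, num_relations, dim)
        self.tf = TFScorer(num_entities, num_relations, dim)

    def forward(self, subj_ids, obj_ids, rel_ids, pair_ids, pair_in_vocab):
        # pair_ids may point to a reserved placeholder row for OOV pairs;
        # the mask then zeroes out the MF contribution.
        return (self.tf(subj_ids, rel_ids, obj_ids)
                + pair_in_vocab.float() * self.mf(pair_ids, rel_ids))
```

For example, a test triple whose entity pair never occurs in the training matrix gets `pair_in_vocab = 0`, so only the compositional TF term scores it, which is why the hybrid stays robust where plain MF degrades.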


2015
Author(s): Thilo Michael, Alan Akbik

2014
Author(s): Miao Fan, Deli Zhao, Qiang Zhou, Zhiyuan Liu, Thomas Fang Zheng, ...

2009, Vol 19 (11), pp. 2843-2852
Author(s): Jin-Xiu Chen, Dong-Hong Ji

2012, Vol 23 (10), pp. 2572-2585
Author(s): Yu Chen, De-Quan Zheng, Tie-Jun Zhao

1994
Author(s): R. Schwartz, L. Nguyen, F. Kubala, G. Chou, G. Zavaliagkos, ...
