Learning word hierarchical representations with neural networks for document modeling

Author(s):  
Longhui Wang ◽  
Yong Wang ◽  
Yudong Xie
Author(s):  
Yang Liu ◽  
Mirella Lapata

In this paper, we focus on learning structure-aware document representations from data without recourse to a discourse parser or additional annotations. Drawing inspiration from recent efforts to empower neural networks with a structural bias (Cheng et al., 2016; Kim et al., 2017), we propose a model that can encode a document while automatically inducing rich structural dependencies. Specifically, we embed a differentiable non-projective parsing algorithm into a neural model and use attention mechanisms to incorporate the structural biases. Experimental evaluations across different tasks and datasets show that the proposed model achieves state-of-the-art results on document modeling tasks while inducing intermediate structures which are both interpretable and meaningful.
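The "differentiable non-projective parsing algorithm" the abstract refers to is, in this line of work, typically realized through Kirchhoff's matrix-tree theorem: arc marginals of a distribution over non-projective dependency trees can be computed in closed form from the inverse of a modified graph Laplacian, and those marginals serve as soft attention weights. Below is a minimal NumPy sketch of that marginal computation (function name and tensor layout are illustrative assumptions, not the authors' code):

```python
import numpy as np

def tree_marginals(arc_scores, root_scores):
    """Marginal head probabilities under a distribution over non-projective
    dependency trees, via the matrix-tree theorem (illustrative sketch).

    arc_scores  : (n, n) array, arc_scores[i, j] scores head i -> modifier j
    root_scores : (n,) array, root_scores[j] scores j being the root
    Returns (marg, root_marg): marg[i, j] = P(i is the head of j),
    root_marg[j] = P(j is the root).
    """
    n = arc_scores.shape[0]
    A = np.exp(arc_scores)
    np.fill_diagonal(A, 0.0)              # no self-loops
    r = np.exp(root_scores)
    L = np.diag(A.sum(axis=0)) - A        # Laplacian with column-sum degrees
    L_hat = L.copy()
    L_hat[0, :] = r                       # replace first row with root potentials
    Linv = np.linalg.inv(L_hat)           # det(L_hat) is the partition function
    # Arc marginals from the inverse Laplacian (Koo et al.-style identities).
    marg = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            t1 = A[i, j] * Linv[j, j] if j != 0 else 0.0
            t2 = A[i, j] * Linv[j, i] if i != 0 else 0.0
            marg[i, j] = t1 - t2
    root_marg = r * Linv[:, 0]
    return marg, root_marg
```

Because every node has exactly one head (or is the root) in a tree, the marginals in each column, plus the root marginal, sum to one; in the structured-attention setting these column distributions are the "attention" each word pays to its candidate heads, and the whole computation is differentiable end to end.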


2019 ◽  
Vol 8 (2S11) ◽  
pp. 4082-4084

Deep learning methods are used to learn hierarchical representations of data. Natural Language Processing (NLP) is a group of computational methodologies for analyzing and representing natural language (NL). Natural language is used to collect and present information in numerous fields, and NLP can be used to extract and process information expressed in human language automatically. This paper highlights vital research contributions in text analysis, classification, and information extraction using NLP.


2018 ◽  
Vol 18 (10) ◽  
pp. 353
Author(s):  
Matthew Hill ◽  
Connor Parde ◽  
Jun-Cheng Chen ◽  
Carlos Castillo ◽  
Volker Blanz ◽  
...  
