Sequential Models for Text Classification Using Recurrent Neural Network

Author(s): Winda Kurnia Sari, Dian Palupi Rini, Reza Firsandaya Malik, Iman Saladin B. Azhar
2021, Vol 3 (4), pp. 922-945
Author(s): Shaw-Hwa Lo, Yiqiao Yin

Text classification is a fundamental task in Natural Language Processing. A variety of sequential models can make good predictions, yet there is a lack of connection between language semantics and prediction results. This paper proposes a novel influence score (I-score), a greedy search algorithm called the Backward Dropping Algorithm (BDA), and a novel feature engineering technique called the "dagger technique". First, the paper proposes to use the I-score to detect and search for the important language semantics in text documents that are useful for making good predictions in text classification tasks. Next, the Backward Dropping Algorithm, a greedy search procedure, is proposed to handle long-term dependencies in the dataset. Moreover, the paper proposes the "dagger technique", a feature engineering method that fully preserves the relationship between the explanatory variable and the response variable. The proposed techniques generalize to feed-forward Artificial Neural Networks (ANNs), Convolutional Neural Networks (CNNs), and other neural network architectures. In a real-world application on the Internet Movie Database (IMDB), the proposed methods improve prediction performance, achieving an 81% error reduction relative to popular peer models that implement neither the I-score nor the "dagger technique".
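The abstract does not reproduce the paper's exact definitions, so the following is a minimal NumPy sketch of one common formulation of the I-score, with the Backward Dropping Algorithm read as a greedy search over feature subsets; the 1/n^2 normalization and the stopping rule are assumptions, not the authors' specification.

```python
import numpy as np

def i_score(X, y):
    """One common form of the influence score (I-score).

    Rows of X (discrete features) are partitioned into cells by their
    joint values; large cells whose mean response deviates from the
    global mean contribute most. The 1/n^2 normalization is an
    assumption made here for comparability across subset sizes.
    """
    X, y = np.asarray(X), np.asarray(y, dtype=float)
    y_bar = y.mean()
    # Assign each row to a cell defined by its joint feature values.
    _, cell_ids = np.unique(X, axis=0, return_inverse=True)
    score = sum(
        (cell_ids == j).sum() ** 2 * (y[cell_ids == j].mean() - y_bar) ** 2
        for j in np.unique(cell_ids)
    )
    return score / len(y) ** 2

def backward_dropping(X, y, features):
    """Illustrative reading of the Backward Dropping Algorithm:
    repeatedly drop the feature whose removal yields the highest
    I-score, and return the best-scoring subset seen along the way."""
    current = list(features)
    best_subset, best_score = list(current), i_score(X[:, current], y)
    while len(current) > 1:
        score, drop = max(
            (i_score(X[:, [f for f in current if f != d]], y), d)
            for d in current
        )
        current.remove(drop)
        if score > best_score:
            best_score, best_subset = score, list(current)
    return best_subset, best_score
```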


2021, Vol 32 (4), pp. 65-82
Author(s): Shengfei Lyu, Jiaqi Liu

Recurrent neural network (RNN) and convolutional neural network (CNN) are two prevailing architectures used in text classification. Traditional approaches combine the strengths of these two networks by directly cascading them or concatenating the features extracted from each. In this article, a novel approach is proposed to retain the strengths of both RNN and CNN to a great extent. In the proposed approach, a bi-directional RNN encodes each word into forward and backward hidden states. A neural tensor layer then fuses the bi-directional hidden states to obtain word representations. Meanwhile, a convolutional neural network learns the importance of each word for text classification. Empirical experiments on several text classification datasets confirm the effectiveness of the proposed approach through its superior performance.
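Since the abstract omits architectural details, the sketch below is a hypothetical PyTorch rendering of the described pipeline: a bi-directional LSTM produces forward and backward states, a bilinear (neural tensor) layer fuses them into word representations, and a small CNN scores per-word importance. All layer sizes and the specific choices of `nn.Bilinear` and `nn.Conv1d` are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TensorFusionTextClassifier(nn.Module):
    """Sketch: BiLSTM states fused by a neural tensor layer, with a CNN
    scoring per-word importance (hyperparameters are illustrative)."""

    def __init__(self, vocab_size, emb_dim=128, hidden=64, num_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden, batch_first=True,
                           bidirectional=True)
        # Neural tensor layer: bilinear interaction between each word's
        # forward and backward hidden state, plus a linear term.
        self.bilinear = nn.Bilinear(hidden, hidden, hidden)
        self.linear = nn.Linear(2 * hidden, hidden)
        # CNN that maps each word's context window to an importance score.
        self.conv = nn.Conv1d(emb_dim, 1, kernel_size=3, padding=1)
        self.out = nn.Linear(hidden, num_classes)

    def forward(self, tokens):                      # tokens: (B, T)
        e = self.emb(tokens)                        # (B, T, E)
        h, _ = self.rnn(e)                          # (B, T, 2H)
        fwd, bwd = h.chunk(2, dim=-1)               # (B, T, H) each
        words = torch.tanh(
            self.bilinear(fwd.contiguous(), bwd.contiguous())
            + self.linear(h)
        )                                           # fused word reps
        # Per-word importance from the CNN, normalized over the sentence.
        alpha = torch.softmax(self.conv(e.transpose(1, 2)).squeeze(1), dim=-1)
        doc = (alpha.unsqueeze(-1) * words).sum(dim=1)   # (B, H)
        return self.out(doc)
```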


2021, Vol 9 (34), pp. 89-102
Author(s): Subhranil Som, Nidhi Chandra, Laxmi Ahuja, Sunil Kumar Khatri, ...

Author(s): Yongqing Wang, Huawei Shen, Shenghua Liu, Jinhua Gao, Xueqi Cheng

The ability to model and predict resharing cascades is crucial to understanding information propagation and to launching viral marketing campaigns. Conventional methods for cascade prediction depend heavily on the hypotheses of diffusion models, e.g., the independent cascade model and the linear threshold model. Recently, researchers have attempted to circumvent this problem using sequential models (e.g., recurrent neural networks, RNNs) that do not require knowledge of the underlying diffusion model. Existing sequential models employ a chain structure to capture the memory effect. However, for cascade prediction, each cascade generally corresponds to a diffusion tree, causing cross-dependence in the cascade: one sharing behavior could be triggered by a non-immediate predecessor in the memory chain. In this paper, we propose an attention-based RNN to capture this cross-dependence. Furthermore, we introduce a "coverage" strategy to combat the misallocation of attention caused by the memorylessness of the traditional attention mechanism. Extensive experiments on both synthetic and real-world datasets demonstrate that the proposed models outperform state-of-the-art models at both cascade prediction and diffusion tree inference.
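The abstract leaves the attention and coverage equations unspecified, so the sketch below follows the widely used additive-attention-with-coverage pattern: a running sum of past attention weights is fed back into the scoring function. This is an assumption about how the paper's "coverage" strategy might look in PyTorch, not the authors' stated formulation.

```python
import torch
import torch.nn as nn

class CoverageAttention(nn.Module):
    """Sketch of attention with a coverage vector: accumulated past
    attention over earlier shares in the cascade enters the score,
    discouraging repeated attention to already-covered positions.
    Dimensions and the additive scoring form are illustrative."""

    def __init__(self, hidden=64):
        super().__init__()
        self.w_h = nn.Linear(hidden, hidden, bias=False)   # encoder states
        self.w_s = nn.Linear(hidden, hidden, bias=False)   # current state
        self.w_c = nn.Linear(1, hidden, bias=False)        # coverage
        self.v = nn.Linear(hidden, 1, bias=False)

    def forward(self, states, query, coverage):
        # states:   (B, T, H) hidden states of earlier sharing behaviors
        # query:    (B, H)    current RNN state
        # coverage: (B, T)    sum of attention weights from earlier steps
        scores = self.v(torch.tanh(
            self.w_h(states)
            + self.w_s(query).unsqueeze(1)
            + self.w_c(coverage.unsqueeze(-1))
        )).squeeze(-1)                                     # (B, T)
        alpha = torch.softmax(scores, dim=-1)
        context = (alpha.unsqueeze(-1) * states).sum(dim=1)  # (B, H)
        return context, alpha, coverage + alpha            # updated coverage
```

Feeding the accumulated weights back into the score lets the model down-weight positions it has already attended to, which matches the stated intuition of combating attention misallocation.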


2019, Vol 119, pp. 299-312
Author(s): Hoon-Keng Poon, Wun-She Yap, Yee-Kai Tee, Wai-Kong Lee, Bok-Min Goi
