Design and Investigation of Capsule Networks for Sentence Classification

2019 ◽  
Vol 9 (11) ◽  
pp. 2200 ◽  
Author(s):  
Haftu Wedajo Fentaw ◽  
Tae-Hyong Kim

In recent years, convolutional neural networks (CNNs) have been used as an alternative to recurrent neural networks (RNNs) in text processing, with promising results. In this paper, we investigated the newly introduced capsule networks (CapsNets), which have attracted much attention due to their performance gains over CNNs in image analysis, for sentence classification and, in some cases, sentiment analysis. The results of our experiments show that the proposed well-tuned CapsNet model can be a good, sometimes better and cheaper, substitute for the CNN- and RNN-based models used in sentence classification. To investigate whether CapsNets can learn the sequential order of words, we performed a number of experiments in which the test data were reshuffled. Our CapsNet model shows overall better classification performance and better resistance to adversarial attacks than CNN and RNN models.
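The word-order probe described above can be sketched as a simple preprocessing step: build a copy of the test set in which each sentence's words are reshuffled while labels stay fixed, then compare a trained model's accuracy on the two versions. This is a minimal illustration of the experimental setup, not the authors' code; the sentences are invented examples.

```python
import random

def shuffle_words(sentence, seed=None):
    """Reshuffle the word order of a sentence while keeping its vocabulary.

    If a classifier's accuracy drops sharply on reshuffled test data, it has
    learned sequential word order; if accuracy is unchanged, the model is
    effectively order-insensitive (bag-of-words-like).
    """
    rng = random.Random(seed)
    words = sentence.split()
    rng.shuffle(words)
    return " ".join(words)

# Build a reshuffled copy of the test set (labels are left unchanged).
test_sentences = ["the movie was not good", "a truly great performance"]
shuffled_test = [shuffle_words(s, seed=i) for i, s in enumerate(test_sentences)]
```

Evaluating the same trained CapsNet, CNN, and RNN models on `shuffled_test` then isolates how much each architecture relies on word order.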

2021 ◽  
Author(s):  
Guilherme Zanini Moreira ◽  
Marcelo Romero ◽  
Manassés Ribeiro

Since the advent of the Web, the number of people who have abandoned traditional media channels and receive news only through social media has increased. However, this has also increased the spread of fake news, owing to the ease of sharing information. The consequences are varied; one of the main ones is the attempted manipulation of public opinion in elections, or the promotion of movements that can damage the rule of law or the institutions that represent it. The objective of this work is to perform fake news detection using distributed representations and recurrent neural networks (RNNs). Although fake news detection using RNNs has already been explored in the literature, there is little research on the processing of texts in the Portuguese language, which is the focus of this work. For this purpose, distributed representations of texts are generated with three different algorithms (fastText, GloVe and word2vec) and used as input features for a long short-term memory network (LSTM). The approach is evaluated on a publicly available labelled news dataset. It shows promising results for all three distributed-representation methods, with the combination word2vec+LSTM providing the best results. The proposed approach achieves better classification performance than simple architectures, and results similar to those of deeper architectures or more complex methods.
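The word2vec+LSTM pipeline can be sketched end to end in a few lines: look up each token in an embedding table, run an LSTM cell over the sequence, and map the final hidden state to a fake/real probability. This is a didactic NumPy sketch, not the authors' implementation; the vocabulary, dimensions, and randomly initialized weights are illustrative stand-ins for a trained word2vec table and trained LSTM parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"governo": 0, "anuncia": 1, "nova": 2, "lei": 3}  # toy Portuguese tokens
emb_dim, hid_dim = 8, 16
E = rng.standard_normal((len(vocab), emb_dim)) * 0.1  # stand-in word2vec table

# One stacked weight matrix for the four LSTM gates (input, forget, cell, output).
W = rng.standard_normal((4 * hid_dim, emb_dim + hid_dim)) * 0.1
b = np.zeros(4 * hid_dim)
w_out, b_out = rng.standard_normal(hid_dim) * 0.1, 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_classify(token_ids):
    """Run an LSTM over embedded tokens; return P(fake) from the last state."""
    h, c = np.zeros(hid_dim), np.zeros(hid_dim)
    for t in token_ids:
        z = W @ np.concatenate([E[t], h]) + b
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # update cell state
        h = sigmoid(o) * np.tanh(c)                   # update hidden state
    return sigmoid(w_out @ h + b_out)

p_fake = lstm_classify([vocab[w] for w in ["governo", "anuncia", "nova", "lei"]])
```

In practice the embedding table would come from a word2vec model trained on a Portuguese corpus, and the LSTM weights would be learned on the labelled news dataset.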


Author(s):  
Xin Li ◽  
Lidong Bing ◽  
Piji Li ◽  
Wai Lam

Target-based sentiment analysis involves opinion target extraction and target sentiment classification. However, most existing works study only one of these two sub-tasks, which hinders their practical use. This paper aims to solve the complete task of target-based sentiment analysis in an end-to-end fashion, and presents a novel unified model which applies a unified tagging scheme. Our framework involves two stacked recurrent neural networks: the upper one predicts the unified tags to produce the final output of the primary target-based sentiment analysis task; the lower one performs auxiliary target boundary prediction, aiming to guide the upper network and improve the performance of the primary task. To explore the inter-task dependency, we propose to explicitly model the constrained transitions from target boundaries to target sentiment polarities. We also propose to maintain sentiment consistency within an opinion target via a gate mechanism which models the relation between the features of the current word and the previous word. We conduct extensive experiments on three benchmark datasets, and our framework achieves consistently superior results.
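A unified tagging scheme of this kind typically fuses the target boundary (B/I/O) with the sentiment polarity into a single label per token, e.g. `B-POS`, `I-POS`, `O`. The decoding step can be sketched as follows; this is a generic illustration of such a scheme, not the paper's code, and the tag names and example sentence are assumptions.

```python
def decode_unified_tags(tokens, tags):
    """Decode a unified tag sequence (e.g. B-POS, I-POS, O) into
    (target phrase, sentiment polarity) pairs."""
    results, span, polarity = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):          # a new target starts here
            if span:
                results.append((" ".join(span), polarity))
            span, polarity = [tok], tag[2:]
        elif tag.startswith("I-") and span:  # target continues
            span.append(tok)
        else:                              # O tag: close any open target
            if span:
                results.append((" ".join(span), polarity))
            span, polarity = [], None
    if span:
        results.append((" ".join(span), polarity))
    return results

tokens = ["the", "battery", "life", "is", "great"]
tags   = ["O", "B-POS", "I-POS", "O", "O"]
pairs = decode_unified_tags(tokens, tags)  # → [("battery life", "POS")]
```

Because boundary and polarity live in one tag, a single sequence labeller solves both sub-tasks jointly, which is what makes the end-to-end formulation possible.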


2020 ◽  
Vol 1 (2) ◽  
Author(s):  
Sharat Sachin ◽  
Abha Tripathi ◽  
Navya Mahajan ◽  
Shivani Aggarwal ◽  
Preeti Nagrath

Author(s):  
Xiaopeng Li ◽  
Zhourong Chen ◽  
Nevin L. Zhang

Sparse connectivity is an important factor behind the success of convolutional neural networks and recurrent neural networks. In this paper, we consider the problem of learning sparse connectivity for feedforward neural networks (FNNs). The key idea is that a unit should be connected to a small number of strongly correlated units at the level below. We use the Chow-Liu algorithm to learn a tree-structured probabilistic model for the units at the current level, use the tree to identify subsets of units that are strongly correlated, and introduce a new unit with a receptive field over each subset. The procedure is repeated on the new units to build multiple layers of hidden units. The resulting model is called a TRF-net. Empirical results show that, compared to dense FNNs, TRF-nets achieve better or comparable classification performance with far fewer parameters and sparser structures. They are also more interpretable.
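The core of the Chow-Liu step is a maximum-weight spanning tree over pairwise dependence scores between units, so that strongly dependent units end up adjacent in the tree. A minimal sketch follows, using |Pearson correlation| as a stand-in for the mutual information the Chow-Liu algorithm properly uses, and Kruskal's algorithm for the spanning tree; this is a simplified illustration, not the TRF-net implementation.

```python
import numpy as np

def dependence_tree(activations):
    """Maximum-weight spanning tree (Kruskal) over units, with
    |Pearson correlation| as a simplified stand-in for mutual information."""
    n = activations.shape[1]
    corr = np.abs(np.corrcoef(activations, rowvar=False))
    edges = sorted(((corr[i, j], i, j) for i in range(n) for j in range(i + 1, n)),
                   reverse=True)
    parent = list(range(n))  # union-find forest
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:           # heaviest edge that does not form a cycle
            parent[ri] = rj
            tree.append((i, j))
    return tree  # n - 1 edges; correlated units become tree neighbours

# Toy activations: units 0/1 are nearly identical, as are units 2/3.
rng = np.random.default_rng(0)
a = rng.standard_normal((200, 1))
b = rng.standard_normal((200, 1))
X = np.hstack([a, a + 0.01 * rng.standard_normal((200, 1)),
               b, b + 0.01 * rng.standard_normal((200, 1))])
tree = dependence_tree(X)
```

Subtrees of this tree then give the strongly correlated subsets, and each new hidden unit is wired only to one such subset, yielding the sparse receptive fields the abstract describes.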

