Universal Rules for Fooling Deep Neural Networks based Text Classification

Author(s):  
Di Li ◽  
Danilo Vasconcellos Vargas ◽  
Sakurai Kouichi
2021 ◽  
Vol 16 (1) ◽  
pp. 1-23
Author(s):  
Keyu Yang ◽  
Yunjun Gao ◽  
Lei Liang ◽  
Song Bian ◽  
Lu Chen ◽  
...  

Text classification is a fundamental task in content analysis. Deep learning now demonstrates promising performance in text classification compared with shallow models. However, almost none of the existing models take advantage of human knowledge to aid text classification, even though humans remain better than machine learning models at understanding and capturing implicit semantic information in text. In this article, we take guidance from human beings to classify text. We propose Crowd-powered learning for Text Classification (CrowdTC for short). We design and post questions on a crowdsourcing platform to extract keywords from text, and use sampling and clustering techniques to reduce the cost of crowdsourcing. We also present an attention-based neural network and a hybrid neural network that incorporate the extracted keywords as human guidance into deep neural networks. Extensive experiments on public datasets confirm that CrowdTC improves the text classification accuracy of neural networks by using crowd-powered keyword guidance.
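The keyword-guided attention the abstract describes might be sketched as follows; this is a minimal NumPy illustration, and the function name, the fixed `boost` term, and the toy per-token relevance score are assumptions for illustration, not the paper's actual architecture:

```python
import numpy as np

def keyword_attention(token_vecs, is_keyword, boost=2.0):
    """Pool token vectors into one document vector with softmax attention,
    adding a fixed boost to the scores of crowd-identified keyword tokens."""
    scores = token_vecs.mean(axis=1)                       # toy per-token relevance
    scores = scores + boost * np.asarray(is_keyword, float)  # keyword guidance
    w = np.exp(scores - scores.max())
    w = w / w.sum()                                        # attention weights sum to 1
    return w, w @ token_vecs                               # weights, pooled vector

# Toy document of 4 tokens (3-dim embeddings); token 1 is a crowd keyword.
weights, doc_vec = keyword_attention(np.eye(4, 3), [0, 1, 0, 0])
```

The crowd-extracted keyword receives the largest attention weight, so it dominates the pooled document representation fed to the classifier.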


2017 ◽  
Vol 23 (5) ◽  
pp. 322-327
Author(s):  
Hwiyeol Jo ◽  
Jin-Hwa Kim ◽  
Kyung-Min Kim ◽  
Jeong-Ho Chang ◽  
Jae-Hong Eom ◽  
...  

Author(s):  
Jinjing Shi ◽  
Zhenhuan Li ◽  
Wei Lai ◽  
Fangfang Li ◽  
Ronghua Shi ◽  
...  

2021 ◽  
Vol 11 (20) ◽  
pp. 9703
Author(s):  
Han-joon Kim ◽  
Pureum Lim

Most text classification systems use machine learning algorithms; among these, naïve Bayes and support vector machine algorithms adapted to handle text data afford reasonable performance. Recently, given developments in deep learning technology, several scholars have used deep neural networks (recurrent and convolutional neural networks) to improve text classification. However, deep learning-based text classification has not greatly improved performance compared with that of conventional algorithms. This is because a textual document is essentially expressed as a single vector over word dimensions, which discards inherent semantic information even if the vector is transformed to add conceptual information. To solve this 'loss of term senses' problem, we develop a concept-driven deep neural network based upon our semantic tensor space model. The semantic tensor used for text representation captures the dependency between terms and concepts; we use it to develop three deep neural networks for text classification. We perform experiments on three standard document corpora and show that our proposed methods are superior to both traditional and more recent learning methods.
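The term-concept dependency at the heart of the semantic tensor can be illustrated with a toy document representation; the index maps and the simple counting scheme below are assumptions for illustration, not the authors' exact construction:

```python
import numpy as np

def term_concept_tensor(doc_terms, term_index, concept_of):
    """Build a term x concept matrix for one document: entry [t, c] counts
    occurrences of term t under concept c, preserving the term-concept
    dependency instead of flattening the document to a term-count vector."""
    n_terms = len(term_index)
    n_concepts = max(concept_of.values()) + 1
    T = np.zeros((n_terms, n_concepts))
    for w in doc_terms:
        if w in term_index:                 # skip out-of-vocabulary terms
            T[term_index[w], concept_of[w]] += 1
    return T

term_index = {"bank": 0, "river": 1}
concept_of = {"bank": 0, "river": 1}        # toy concept ids
T = term_concept_tensor(["bank", "bank", "river"], term_index, concept_of)
```

Stacking such matrices over a corpus yields a third-order document-term-concept tensor, which is the kind of structured input a flat term vector cannot provide.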


2019 ◽  
Vol 292 ◽  
pp. 03009
Author(s):  
Maciej Jankowski

Recent advances in applying deep neural networks to Bayesian modelling sparked a resurgence of interest in variational methods. Notably, the main contribution in this area is the reparametrization trick introduced in [1] and [2]. The VAE model [1] is unsupervised, so its direct application to classification is suboptimal. In this work, we investigate extending the model to the supervised case. We start with the model known as the Supervised Variational Autoencoder, studied in the literature in various forms [3], [4]. We then modify the objective function so that the latent space can be better fitted to the multiclass problem. Finally, we introduce a new method that uses class information to modify the latent space so that it reflects differences between classes even better. All of this uses only two latent dimensions. We show that mainstream classifiers applied to this space achieve significantly better performance than when applied to the original datasets or to VAE-generated data. We also show how our novel approach can be used to compute a better classification score, and how it can be used to generate data for a given class.
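The supervised modification of the VAE objective might be sketched as an ELBO augmented with a class-prediction loss on the latent code; the exact form used in the paper is not given here, so the additive combination, the `alpha` weight, and the function name below are assumptions for illustration:

```python
import numpy as np

def supervised_vae_loss(x, x_rec, mu, log_var, logits, y, alpha=1.0):
    """Toy supervised-VAE objective: reconstruction + KL terms of the ELBO,
    plus a cross-entropy loss on class logits predicted from the latent code."""
    rec = np.mean((x - x_rec) ** 2)                              # reconstruction
    kl = -0.5 * np.mean(1 + log_var - mu**2 - np.exp(log_var))   # KL to N(0, I)
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p = p / p.sum(axis=1, keepdims=True)                         # softmax
    ce = -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))       # cross-entropy
    return rec + kl + alpha * ce
```

With a perfect reconstruction and a latent posterior equal to the prior, the objective reduces to the classification term alone, which is what pulls the two-dimensional latent space toward class separability.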


Author(s):  
Alex Hernández-García ◽  
Johannes Mehrer ◽  
Nikolaus Kriegeskorte ◽  
Peter König ◽  
Tim C. Kietzmann
