Deep Generative Model with Supervised Latent Space for Text Classification
Recent advances in applying deep neural networks to Bayesian modelling have sparked a resurgence of interest in variational methods. A notable contribution in this area is the reparametrization trick introduced in [1] and [2]. The VAE model [1] is unsupervised, and its application to classification is therefore not optimal. In this work, we investigate how to extend the model to the supervised case. We start with the model known as the Supervised Variational Autoencoder, which has been studied in the literature in various forms [3, 4]. We then modify the objective function so that the latent space can be better fitted to multiclass problems. Finally, we introduce a new method that uses class information to reshape the latent space so that it reflects the differences between classes even more closely. All of this uses only two latent dimensions. We show that mainstream classifiers applied to such a space achieve significantly better performance than when applied to the original datasets or to VAE-generated data. We also show how our novel approach can be used to obtain better classification scores, and how it can be used to generate data for a given class.
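For reference, the reparametrization trick cited above can be sketched in standard VAE notation (the symbols $\mu_\phi$, $\sigma_\phi$, and $q_\phi$ below follow the common convention from [1] and are not this paper's specific notation). A sample from a Gaussian approximate posterior $q_\phi(z \mid x) = \mathcal{N}\!\left(\mu_\phi(x), \operatorname{diag}(\sigma_\phi^2(x))\right)$ is rewritten as a deterministic, differentiable function of auxiliary noise:

\[
z = \mu_\phi(x) + \sigma_\phi(x) \odot \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, I),
\]

which moves the stochasticity into $\varepsilon$ and lets gradients of the variational objective flow through Monte Carlo samples of $z$ to the encoder parameters $\phi$.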