latent distribution
Recently Published Documents

TOTAL DOCUMENTS: 33 (five years: 19)
H-INDEX: 7 (five years: 3)

Author(s): Mei Li, Jiajun Zhang, Xiang Lu, Chengqing Zong

Emotional dialogue generation aims to produce responses whose content is relevant to the query and whose emotion is consistent with a given emotion tag. Previous work mainly focuses on incorporating emotion information into sequence-to-sequence or conditional variational auto-encoder (CVAE) models, usually by feeding the given emotion tag as a conditional feature that influences the response generation process. However, an emotion tag used as a feature cannot guarantee emotion consistency between the response and that tag. In this article, we propose a novel Dual-View CVAE model that explicitly and jointly models content relevance and emotion consistency. The two views gather emotional information and content-relevant information, respectively, from the latent distribution of responses. We model the two views jointly via the CVAE to obtain richer, complementary information. Extensive experiments on both English and Chinese emotional dialogue datasets demonstrate the effectiveness of the proposed Dual-View CVAE model, which significantly outperforms strong baseline models in both content relevance and emotion consistency.
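The core mechanics of a dual-view latent model can be illustrated in a few lines. The sketch below is a toy, framework-free illustration (not the authors' implementation): two separate Gaussian latent views, sampled with the reparameterization trick and concatenated, with the standard closed-form KL term accumulated per view; the example encoder outputs are made-up numbers.

```python
import math
import random

rng = random.Random(0)

def reparameterize(mu, logvar):
    """z = mu + sigma * eps  (the reparameterization trick)."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0, 1) for m, lv in zip(mu, logvar)]

def kl_std_normal(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), closed form."""
    return -0.5 * sum(1 + lv - m * m - math.exp(lv) for m, lv in zip(mu, logvar))

# Two latent views, one per aspect of the response (toy encoder outputs).
emo_mu, emo_logvar = [0.2, -0.1], [0.0, 0.0]        # "emotion" view
con_mu, con_logvar = [0.5, 0.3, -0.2], [0.1, 0.0, -0.1]  # "content" view

# Joint latent: concatenate samples from both views; KL terms add up.
z = reparameterize(emo_mu, emo_logvar) + reparameterize(con_mu, con_logvar)
kl_total = kl_std_normal(emo_mu, emo_logvar) + kl_std_normal(con_mu, con_logvar)
```

In a real CVAE, `z` would condition the decoder and `kl_total` would enter the ELBO alongside a reconstruction term.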


2021, Vol 7, pp. e757
Author(s): Artem Ryzhikov, Maxim Borisyak, Andrey Ustyuzhanin, Denis Derkach

Anomaly detection is a challenging task that arises in practically all areas of industry and science, from fraud detection and data-quality monitoring to finding rare cases of diseases and searching for new physics. Most conventional approaches to anomaly detection, such as the one-class SVM and the Robust Auto-Encoder, are one-class classification methods, i.e., they focus on separating normal data from the rest of the space. Such methods rest on the assumption that the normal and anomalous classes are separable, and consequently they ignore any available samples of anomalies. In practice, however, some anomalous samples are often available, though usually in amounts far smaller than a balanced classification task requires, and the separability assumption may not hold. This leads to an important task: incorporating known anomalous samples into the training procedures of anomaly detection models. In this work, we propose a novel model-agnostic training procedure to address this task. We reformulate one-class classification as a binary classification problem in which normal data are distinguished from pseudo-anomalous samples. The pseudo-anomalous samples are drawn from low-density regions of a normalizing flow model by feeding tails of the latent distribution into the model. This approach makes it easy to include known anomalies in the training process of an arbitrary classifier. We demonstrate that our approach shows comparable performance on one-class problems and, most importantly, achieves comparable or superior results on tasks with varying amounts of known anomalies.
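The tail-sampling idea can be sketched without any deep-learning machinery. Below, a hypothetical trivial affine flow stands in for a trained normalizing flow: latent vectors are rejection-sampled from the low-density tail of a standard Gaussian and mapped back to data space to serve as pseudo-anomalies. The class name, parameters, and threshold are illustrative, not from the paper.

```python
import math
import random

def sample_tail_latents(n, dim, tail=2.5, seed=0):
    """Rejection-sample latent vectors from the low-density Gaussian tail
    (norm of z greater than `tail`)."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        z = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        if math.sqrt(sum(v * v for v in z)) > tail:
            out.append(z)
    return out

class AffineFlow:
    """Toy invertible map x = a*z + b, a stand-in for a trained flow."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def to_data(self, z):
        # Map latent-space points back to data space.
        return [self.a * v + self.b for v in z]

flow = AffineFlow(a=2.0, b=1.0)
pseudo_anomalies = [flow.to_data(z) for z in sample_tail_latents(8, dim=2)]
```

The pseudo-anomalies can then be mixed with any known real anomalies and fed, together with normal data, to an ordinary binary classifier.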


2021, Vol 20 (4), pp. 463-480
Author(s): Takuma Ishihara, Kouji Yamamoto

Abstract In clinical trials, two or more binary responses obtained by dichotomizing continuous responses are often employed as multiple primary endpoints. Testing procedures for multiple binary variables with an underlying latent distribution have not yet been adequately discussed. Based on the association measure among the latent variables, we provide a statistic for testing the superiority of at least one binary endpoint. In addition, we propose a testing procedure in which trial efficacy is confirmed only when at least one endpoint shows superiority and the remaining endpoints show non-inferiority. The performance of the proposed procedure is evaluated through simulations.
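The setup can be made concrete with a small simulation. The sketch below is not the authors' procedure; it only illustrates the latent-variable construction: each subject's continuous responses are correlated bivariate normals, each endpoint is the dichotomization "latent > 0", and a plain two-sample z statistic checks superiority per endpoint. All effect sizes and sample sizes are made up.

```python
import math
import random

def simulate_trial(n, delta1, delta2, rho, seed=1):
    """Latent bivariate-normal responses with correlation rho; endpoint k is
    the indicator 'latent_k > 0'. delta_k is the latent treatment effect."""
    rng = random.Random(seed)
    def arm(d1, d2):
        h1 = h2 = 0
        for _ in range(n):
            u = rng.gauss(0, 1)
            v = rho * u + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
            h1 += (d1 + u) > 0
            h2 += (d2 + v) > 0
        return h1 / n, h2 / n
    return arm(delta1, delta2), arm(0.0, 0.0)  # (treatment, control)

def z_stat(p_t, p_c, n):
    """Two-sample z statistic for superiority (p_t > p_c)."""
    pbar = (p_t + p_c) / 2
    se = math.sqrt(2 * pbar * (1 - pbar) / n)
    return (p_t - p_c) / se if se > 0 else 0.0

(t1, t2), (c1, c2) = simulate_trial(n=2000, delta1=1.0, delta2=0.0, rho=0.5)
z_max = max(z_stat(t1, c1, 2000), z_stat(t2, c2, 2000))  # any endpoint superior?
```

The paper's contribution lies in the joint critical values that account for the association among latent variables; the naive `z_max` above ignores that correction.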


2021, pp. 1-27
Author(s): Tim Sainburg, Leland McInnes, Timothy Q. Gentner

Abstract UMAP is a nonparametric graph-based dimensionality reduction algorithm that uses applied Riemannian geometry and algebraic topology to find low-dimensional embeddings of structured data. The UMAP algorithm consists of two steps: (1) computing a graphical representation of a data set (a fuzzy simplicial complex) and (2) optimizing a low-dimensional embedding of that graph through stochastic gradient descent. Here, we extend the second step of UMAP to a parametric optimization over neural-network weights, learning a parametric relationship between data and embedding. We first demonstrate that Parametric UMAP performs comparably to its nonparametric counterpart while conferring the benefit of a learned parametric mapping (e.g., fast online embeddings for new data). We then explore UMAP as a regularizer: constraining the latent distribution of autoencoders, parametrically varying global structure preservation, and improving classifier accuracy in semisupervised learning by capturing structure in unlabeled data.
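The parametric twist, optimizing shared map parameters instead of free per-point coordinates, can be shown on a toy problem. The sketch below is a heavy simplification of UMAP, not the actual algorithm: a linear map y = w·x stands in for the neural network, and a squared-error pull/push objective over graph edges and sampled non-edges stands in for UMAP's cross-entropy loss. Data, edges, and targets are invented.

```python
# Toy data: two 2-D clusters, embedded into 1-D by a *parametric* map y = w.x
# (a one-layer stand-in for Parametric UMAP's neural-network embedder).
points = [(0.0, 0.0), (0.1, 0.2), (3.0, 3.0), (3.1, 2.9)]
pos = [(0, 1), (2, 3)]  # graph edges (neighbours): pulled together (target 0)
neg = [(0, 2), (1, 3)]  # sampled non-edges: pushed toward distance 4

w = [0.5, 0.5]  # learned parameters, shared by all points (old and new)
lr = 0.01

def embed(p):
    return w[0] * p[0] + w[1] * p[1]

for _ in range(300):
    grad = [0.0, 0.0]
    pairs = [(e, 0.0) for e in pos] + [(e, 4.0) for e in neg]
    for (i, j), target in pairs:
        d = embed(points[i]) - embed(points[j])  # signed 1-D difference
        for k in range(2):  # gradient of (d - target)^2 w.r.t. w_k
            grad[k] += 2 * (d - target) * (points[i][k] - points[j][k])
    w = [w[k] - lr * grad[k] for k in range(2)]

pos_d = [abs(embed(points[i]) - embed(points[j])) for i, j in pos]
neg_d = [abs(embed(points[i]) - embed(points[j])) for i, j in neg]
```

Because the optimization produces parameters rather than coordinates, new points get embeddings with a single forward pass through `embed`, which is the "fast online embedding" benefit the abstract refers to.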


Author(s): Xuwen Tang, Zhu Teng, Baopeng Zhang, Jianping Fan

Few-shot classification aims to recognize new classes by learning reliable models from very few available samples. It is especially challenging when there is no intersection between the already-known classes (the base set) and the novel classes (the novel set). To alleviate this problem, we propose to evolve the network trained on the base set via label propagation and self-supervision, shrinking the distribution gap between the base set and the novel set. Our network-evolution approach transfers the latent distribution from the already-known classes to the unknown (novel) classes by (a) label propagation over the novel classes (the novel set) and (b) a dual-task design that exploits a discriminative representation to reduce overfitting on the base set and enhance generalization to the novel set. We conduct comprehensive experiments comparing our network-evolution approach against numerous state-of-the-art methods, especially in higher-way setups and cross-dataset scenarios. Notably, our approach outperforms the second-best state-of-the-art method by a large margin of 3.25% in one-shot evaluation on miniImageNet.
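Label propagation, the mechanism in step (a), is easy to demonstrate on a toy graph. The sketch below is the generic clamped label-propagation algorithm, not the paper's network-evolution pipeline: labels of the few known nodes are held fixed while scores diffuse over edges until the unlabelled nodes are assigned.

```python
# Toy label propagation on a 5-node path graph 0-1-2-3-4.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
labels = {0: "A", 4: "B"}  # the few known labels (clamped)

scores = {n: {"A": 0.0, "B": 0.0} for n in graph}
for n, l in labels.items():
    scores[n][l] = 1.0

for _ in range(50):
    new = {}
    for n in graph:
        if n in labels:
            new[n] = dict(scores[n])  # clamp labelled nodes
        else:
            # Unlabelled nodes average their neighbours' scores.
            new[n] = {c: sum(scores[m][c] for m in graph[n]) / len(graph[n])
                      for c in ("A", "B")}
    scores = new

pred = {n: max(scores[n], key=scores[n].get) for n in graph}
```

Node 1, closer to the "A" seed, converges to "A", and node 3 to "B"; in the few-shot setting the graph would be built from feature similarities between base-set and novel-set samples.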


Entropy, 2021, Vol 23 (7), pp. 856
Author(s): Eleonora Grassucci, Danilo Comminiello, Aurelio Uncini

Variational autoencoders are deep generative models that have recently received a great deal of attention due to their ability to model the latent distribution of any kind of input, such as images and audio signals, among others. A novel variational autoencoder in the quaternion domain H, the QVAE, has recently been proposed, leveraging the augmented second-order statistics of H-proper signals. In this paper, we analyze the QVAE from an information-theoretic perspective, studying the ability of the H-proper model to approximate improper distributions as well as the built-in H-proper ones, and the loss of entropy due to the improperness of the input signal. We conduct experiments on a substantial set of quaternion signals; for each, the QVAE is able to model the input distribution while learning the improperness and increasing the entropy of the latent space. The proposed analysis shows that proper QVAEs can be employed with a good approximation even when the quaternion input data are improper.
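The algebra underlying quaternion-valued models like the QVAE is the Hamilton product, which, unlike real or complex multiplication, is non-commutative. A minimal standalone implementation (not tied to the QVAE's layers) is:

```python
def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw * qw - px * qx - py * qy - pz * qz,
            pw * qx + px * qw + py * qz - pz * qy,
            pw * qy - px * qz + py * qw + pz * qx,
            pw * qz + px * qy - py * qx + pz * qw)

# The defining identities of the basis elements i, j, k.
i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
```

Non-commutativity (i*j = k but j*i = -k) is what forces quaternion networks to fix an operand order in every layer, and H-properness is a statistical condition on quaternion-valued random signals built on this algebra.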


2021, Vol 2 (4)
Author(s): Andrea Asperti, Davide Evangelista, Elena Loli Piccolomini

Abstract Variational Autoencoders (VAEs) are powerful generative models that merge elements from statistics and information theory with the flexibility of deep neural networks to efficiently solve the generation problem for high-dimensional data. The key insight of VAEs is to learn the latent distribution of the data in such a way that new, meaningful samples can be generated from it. This approach has led to extensive research and many variations in the architectural design of VAEs, nourishing the recent field of research known as unsupervised representation learning. In this article, we provide a comparative evaluation of some of the most successful recent variations of VAEs. We particularly focus on the energy efficiency of the different models, in the spirit of so-called Green AI, aiming to reduce both the carbon footprint and the financial cost of generative techniques. For each architecture, we provide its mathematical formulation, the ideas underlying its design, a detailed model description, a running implementation, and quantitative results.
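Every VAE variant compared in such a study optimizes some form of the evidence lower bound (ELBO). A minimal, framework-free sketch of the standard ELBO for binary data, with the usual closed-form Gaussian KL term, is below; the example inputs are invented.

```python
import math

def kl_std_normal(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), closed form."""
    return -0.5 * sum(1 + lv - m * m - math.exp(lv) for m, lv in zip(mu, logvar))

def bernoulli_log_likelihood(x, x_hat, eps=1e-7):
    """Reconstruction term for binary data and decoder probabilities x_hat."""
    return sum(xi * math.log(max(xh, eps)) + (1 - xi) * math.log(max(1 - xh, eps))
               for xi, xh in zip(x, x_hat))

def elbo(x, x_hat, mu, logvar):
    """ELBO = reconstruction log-likelihood - KL regularizer."""
    return bernoulli_log_likelihood(x, x_hat) - kl_std_normal(mu, logvar)

# Toy example: 2-pixel binary input, decoder output, encoder statistics.
value = elbo([1, 0], [0.9, 0.1], [0.0, 0.0], [0.0, 0.0])
```

The architectural variations being compared mostly differ in how the KL term is weighted or replaced (e.g., by a hierarchy of latents or a learned prior), while the overall two-term structure stays the same.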


2021, Vol 437, pp. 218-226
Author(s): Shiwen Kou, Wei Xia, Xiangdong Zhang, Quanxue Gao, Xinbo Gao

2020
Author(s): Ravindra Yadav, Ashish Sardana, Vinay P. Namboodiri, Rajesh M. Hegde
