Dirichlet Process Prior for Student’s t Graph Variational Autoencoders

2021 · Vol 13 (3) · pp. 75
Author(s): Yuexuan Zhao, Jing Huang

The graph variational autoencoder (GVAE) is a model that combines neural networks with Bayesian methods and is capable of exploring more deeply the latent features that influence graph reconstruction. However, several works based on GVAE employ a plain prior distribution for the latent variables, for instance the standard normal distribution N(0, 1). Although such a simple distribution has the advantage of convenient computation, it also causes the latent variables to carry relatively little useful information. The resulting lack of adequate node representations inevitably affects the graph-generation process, so that only external relations are discovered while some complex internal correlations are neglected. In this paper, we present a novel prior distribution for GVAE, called the Dirichlet process (DP) construction for the Student’s t (St) distribution. The DP allows the latent variables to adapt their complexity during learning, and it then cooperates with the heavy-tailed St distribution to achieve sufficiently rich node representations. Experimental results show that this method achieves better performance than the baselines.
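The flavor of the proposed prior can be illustrated with a small sketch: a truncated stick-breaking Dirichlet process whose component noise is heavy-tailed Student's t. This is only a minimal stand-in for the paper's DP–St construction; the function name, the normal base measure, and all parameter values are illustrative assumptions, not the authors' specification.

```python
import numpy as np

def dp_student_t_prior(alpha=1.0, df=3.0, n_components=50, n_samples=1000, seed=0):
    """Draw samples from a truncated Dirichlet-process mixture whose
    components carry heavy-tailed Student's t noise (an illustrative
    stand-in for the DP-St prior; not the paper's exact construction)."""
    rng = np.random.default_rng(seed)
    # Stick-breaking: w_k = v_k * prod_{j<k} (1 - v_j), v_k ~ Beta(1, alpha)
    v = rng.beta(1.0, alpha, size=n_components)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    w /= w.sum()  # renormalize after truncating to n_components sticks
    # Atom locations drawn from a simple base measure (standard normal here)
    mu = rng.normal(0.0, 1.0, size=n_components)
    # Pick a component per sample, then add heavy-tailed t noise around its atom
    k = rng.choice(n_components, size=n_samples, p=w)
    return mu[k] + rng.standard_t(df, size=n_samples)
```

The stick-breaking weights let a few components dominate while leaving mass for new ones, which is what allows the latent complexity to adapt during learning.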

2021 · Vol 13 (2) · pp. 51
Author(s): Lili Sun, Xueyan Liu, Min Zhao, Bo Yang

The variational graph autoencoder, which can encode the structural and attribute information of a graph into low-dimensional representations, has become a powerful method for studying graph-structured data. However, most existing methods based on the variational (graph) autoencoder assume that the prior of the latent variables follows the standard normal distribution, which encourages all nodes to gather around 0 and thus prevents the latent space from being fully utilized. Choosing a suitable prior without incorporating additional expert knowledge therefore becomes a challenge. Given this, we propose a novel noninformative prior-based interpretable variational graph autoencoder (NPIVGAE). Specifically, we adopt a noninformative prior as the prior distribution of the latent variables, which allows the posterior parameters to be learned almost entirely from the sample data. Furthermore, we regard each dimension of a latent variable as the probability that the node belongs to each block, thereby improving the interpretability of the model; the correlation within and between blocks is described by a block–block correlation matrix. We compare our model with state-of-the-art methods on three real datasets, verifying its effectiveness and superiority.
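The block-membership reading of the latent space can be sketched as follows: each node's latent vector is mapped to a probability vector over blocks, and a block–block matrix then scores expected edge probabilities. This is a rough illustration of the idea described in the abstract, assuming a softmax mapping and a generic matrix B; the function names and the softmax choice are my assumptions, not the paper's.

```python
import numpy as np

def block_memberships(z):
    """Map each node's latent vector (rows of z) to a probability
    vector over blocks via a numerically stabilized softmax
    (an assumed mapping, illustrating the interpretability claim)."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def edge_scores(pi, B):
    """Expected node-pair affinity given memberships pi (n x K) and a
    block-block correlation matrix B (K x K): pi @ B @ pi.T."""
    return pi @ B @ pi.T
```

With B close to the identity, high scores indicate pairs of nodes concentrated in the same block, which is what makes each latent dimension readable as a block membership.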


2020
Author(s): Ahmad Sudi Pratikno

In statistics, there are many terms that may feel unfamiliar to researchers who are not accustomed to working with them. Nevertheless, the many statistical tools available allow researchers to process data, interpret it into conclusions, and thereby digest and understand their research findings. Continuous probability distributions describe the probabilities of outcomes measured on continuous scales, such as time, weather, and other data obtained in the field. The standard normal distribution is a stable curve with mean zero and standard deviation one, while the t distribution is used as a test statistic in hypothesis testing. The chi-square distribution is used for comparative tests on two variables with nominal data, while the F distribution is often used in ANOVA tests and regression analysis.
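The roles of these four distributions can be made concrete with a few standard critical values; a minimal sketch using SciPy (the degrees-of-freedom values are illustrative choices, not from the abstract):

```python
from scipy import stats

# Standard normal: mean 0 and standard deviation 1
assert abs(stats.norm.mean()) < 1e-12
assert abs(stats.norm.std() - 1.0) < 1e-12

# Student's t: two-sided 5% critical value for a hypothesis test with 10 df
t_crit = stats.t.ppf(0.975, df=10)

# Chi-square: 5% critical value for a 2x2 comparative (contingency) test, 1 df
chi2_crit = stats.chi2.ppf(0.95, df=1)

# F distribution: 5% critical value as used in ANOVA / regression
f_crit = stats.f.ppf(0.95, dfn=2, dfd=27)
```

A sample statistic exceeding the corresponding critical value leads to rejecting the null hypothesis at the 5% level.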


Symmetry · 2018 · Vol 10 (12) · pp. 724
Author(s): Jimmy Reyes, Inmaculada Barranco-Chamorro, Diego Gallardo, Héctor Gómez

In this paper, a generalization of the modified slash Birnbaum–Saunders (BS) distribution is introduced. The model is defined via the stochastic representation of the BS distribution, where the standard normal distribution is replaced by a symmetric distribution proposed by Reyes et al. It is proved that this new distribution is able to model more kurtosis than other extensions of the BS distribution previously proposed in the literature. Closed-form expressions are given for the probability density function (pdf), as well as for the moments and the skewness and kurtosis coefficients. Inference is carried out based on the modified moments method and maximum likelihood (ML). To obtain the ML estimates, two approaches are considered: Newton–Raphson and the EM algorithm. Applications reveal that the model has the potential to perform well in real problems.
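The stochastic representation mentioned above can be sketched directly: a BS(α, β) variable is T = β(αZ/2 + √((αZ/2)² + 1))², with Z standard normal, and extensions replace Z with a heavier-tailed symmetric variable. The slash-type replacement below is only a generic stand-in for the Reyes et al. distribution, which I do not reproduce here; parameter values are illustrative.

```python
import numpy as np

def bs_sample(alpha, beta, z):
    """Birnbaum-Saunders stochastic representation:
    T = beta * (alpha*z/2 + sqrt((alpha*z/2)**2 + 1))**2."""
    w = alpha * z / 2.0
    return beta * (w + np.sqrt(w**2 + 1.0)) ** 2

rng = np.random.default_rng(1)
n = 10_000
# Classical BS: z is standard normal
t_bs = bs_sample(0.5, 1.0, rng.normal(size=n))
# Slash-type extension (a stand-in for the symmetric law of Reyes et al.):
# dividing z by a uniform power inflates the tails, hence higher kurtosis
q = 5.0
z_slash = rng.normal(size=n) / rng.uniform(size=n) ** (1.0 / q)
t_slash = bs_sample(0.5, 1.0, z_slash)
```

Because w + √(w² + 1) is always positive, both samples stay on the positive half-line, as a lifetime distribution requires.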


2018 · Vol 11 (3) · pp. 52
Author(s): Mark Jensen, John Maheu

In this paper, we let the data speak for themselves about the existence of volatility feedback and the often-debated risk–return relationship. We do this by modeling the contemporaneous relationship between market excess returns and log-realized variances with a nonparametric, infinitely-ordered mixture representation of the observables’ joint distribution. Our nonparametric estimator allows for deviations from conditional Gaussianity through non-zero higher-order moments, such as asymmetric, fat-tailed behavior, along with smooth, nonlinear risk–return relationships. We use the parsimonious and relatively uninformative Bayesian Dirichlet process prior to overcome the problem of having too many unknowns and not enough observations. Applying our Bayesian nonparametric model to more than a century’s worth of monthly US stock market returns and realized variances, we find strong, robust evidence of volatility feedback. Once volatility feedback is accounted for, we find an unambiguously positive, nonlinear relationship between expected excess returns and expected log-realized variance. In addition to the conditional mean, volatility feedback impacts the entire joint distribution.
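The mixture representation of the joint distribution can be sketched as a truncated Dirichlet-process mixture of bivariate normals over (excess return, log-realized variance); each cluster has its own location and scale, so asymmetry and fat tails emerge from the mixing. All cluster parameters below are illustrative placeholders, not the paper's prior or posterior.

```python
import numpy as np

def dp_joint_sample(alpha=1.0, n_clusters=30, n_samples=2000, seed=0):
    """Sketch of a truncated DP mixture of bivariate normals for the
    joint law of (excess return, log-realized variance). Cluster
    parameters here are illustrative, not estimated from data."""
    rng = np.random.default_rng(seed)
    # Stick-breaking weights over a truncated set of clusters
    v = rng.beta(1.0, alpha, size=n_clusters)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    w /= w.sum()
    # Each cluster gets its own mean and scale for the two coordinates
    means = rng.normal(0.0, 1.0, size=(n_clusters, 2))
    scales = rng.uniform(0.5, 1.5, size=(n_clusters, 2))
    # Draw a cluster per observation, then Gaussian noise within it
    k = rng.choice(n_clusters, size=n_samples, p=w)
    return means[k] + scales[k] * rng.normal(size=(n_samples, 2))
```

Mixing over infinitely many (here, truncated) clusters is what lets the conditional mean of returns given log-variance be a smooth nonlinear function rather than a fixed parametric form.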

