Variational Bayesian
Recently Published Documents


TOTAL DOCUMENTS: 747 (five years: 263)
H-INDEX: 36 (five years: 8)

Entropy, 2022, Vol. 24 (1), pp. 117
Author(s): Xuyou Li, Yanda Guo, Qingwen Meng

The maximum correntropy Kalman filter (MCKF) is an effective algorithm proposed to solve the non-Gaussian filtering problem for linear systems. Compared with the original Kalman filter (KF), the MCKF is a sub-optimal filter with a Gaussian correntropy objective function and has been shown to be highly robust to non-Gaussian noise. However, the performance of the MCKF depends on its kernel bandwidth parameter, and a constant kernel bandwidth can cause severe accuracy degradation under non-stationary noise. To solve this problem, this work further explores the mixture correntropy method and proposes an improved maximum mixture correntropy Kalman filter (IMMCKF). Random variables obeying a Beta-Bernoulli distribution are introduced as intermediate parameters, and a new hierarchical Gaussian state-space model is established. The unknown mixing probability and the state estimate at each time step are then inferred via a variational Bayesian approach, which effectively improves the applicability of MCKFs under non-stationary noise. Performance evaluations demonstrate that the proposed filter significantly outperforms existing MCKFs under non-stationary noise.
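As a concrete illustration of the mechanism this abstract describes, below is a minimal NumPy sketch of a correntropy-weighted Kalman measurement update. It is not the authors' IMMCKF: the function name `mck_update`, the array shapes, and the fixed bandwidth `sigma` are assumptions for illustration, and `sigma` is exactly the kernel bandwidth parameter whose constancy causes the degradation that motivates the mixture approach.

```python
import numpy as np

def mck_update(x, P, z, H, R, sigma=2.0):
    """One measurement update of a simplified maximum-correntropy-style
    Kalman filter: the innovation is reweighted by a Gaussian kernel,
    so large non-Gaussian outliers are down-weighted.

    sigma is the (here constant) kernel bandwidth; a poorly chosen
    constant value degrades accuracy when the noise statistics drift.
    """
    innov = z - H @ x                       # innovation (residual)
    S = H @ P @ H.T + R                     # innovation covariance
    # Gaussian correntropy weight on the squared normalized residual
    m2 = float(innov @ np.linalg.solve(S, innov))
    w = max(np.exp(-m2 / (2.0 * sigma**2)), 1e-8)
    # Down-weighting the measurement is equivalent to inflating R
    S_w = H @ P @ H.T + R / w
    K = P @ H.T @ np.linalg.inv(S_w)        # correntropy-weighted gain
    x_new = x + K @ innov
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Example: a scalar state observed directly, hit by one outlier
x, P = np.array([0.0]), np.eye(1)
H, R = np.eye(1), 0.5 * np.eye(1)
x, P = mck_update(x, P, np.array([10.0]), H, R)  # outlier barely moves x
```

Because the kernel weight enters as an inflation of the measurement covariance R, a mismatched constant bandwidth either rejects valid measurements or admits outliers, which is the sensitivity the variational Bayesian mixture scheme is designed to remove.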


2021, pp. 1-13
Author(s): Dangguo Shao, Chengyao Li, Chusheng Huang, Qing An, Yan Xiang, ...

To address the low effectiveness of feature extraction from short texts, this paper proposes a short-text classification model based on an improved Wasserstein-Latent Dirichlet Allocation (W-LDA), a neural-network topic model built on the Wasserstein Auto-Encoder (WAE) framework. The improvements to W-LDA are as follows. First, the Bag-of-Words (BOW) input to W-LDA is preprocessed with Term Frequency-Inverse Document Frequency (TF-IDF) weighting. Second, the prior distribution over latent topics in W-LDA is changed from a Dirichlet distribution to a Gaussian mixture distribution, based on variational Bayesian inference. Third, a sparsemax function layer is introduced after the hidden layer inferred by the encoder network to generate a sparse document-topic distribution with better topic relevance; the improved W-LDA is named the Sparse Wasserstein-Variational Bayesian Gaussian Mixture Model (SW-VBGMM). Finally, the document-topic distribution generated by SW-VBGMM is fed into a Bidirectional Gated Recurrent Unit (BiGRU) for deep feature extraction and short-text classification. Experiments on three Chinese short-text datasets and one English dataset show that our model outperforms several common topic models and neural network models on all four evaluation metrics for text classification (accuracy, precision, recall, and F1 score).
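Of the improvements listed in this abstract, the sparsemax layer is the easiest to show in isolation. The NumPy sketch below implements the standard sparsemax projection of Martins and Astudillo (2016), which the model adopts; it is not code from the paper. Unlike softmax, sparsemax can assign exactly zero probability to weak topics, which is what yields the sparse document-topic distribution described above.

```python
import numpy as np

def sparsemax(z):
    """Sparsemax: Euclidean projection of the logits z onto the
    probability simplex. Outputs sum to 1 and may be exactly zero."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]             # logits in decreasing order
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    # support: the largest prefix with 1 + k * z_(k) > sum of top-k logits
    support = k * z_sorted > cumsum - 1.0
    k_max = k[support][-1]
    tau = (cumsum[support][-1] - 1.0) / k_max   # shared threshold
    return np.maximum(z - tau, 0.0)

# Softmax never outputs exact zeros; sparsemax drops the weakest topic
print(sparsemax([1.2, 0.8, -1.0]))  # -> [0.7, 0.3, 0.0]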


Author(s): Zhiyuan Wu, Ning Liu, Guodong Li, Xinyu Liu, Yue Wang, ...
