Poisson Mixture Model
Recently Published Documents


TOTAL DOCUMENTS: 27 (FIVE YEARS: 2)
H-INDEX: 8 (FIVE YEARS: 0)





2020 · Vol 2020 · pp. 1-17
Author(s): Jocelyn Mazarura, Alta de Waal, Pieter de Villiers

Most topic models are constructed under the assumption that documents follow a multinomial distribution. The Poisson distribution is an alternative for modelling count data; in topic modelling, it describes the number of occurrences of a word in documents of fixed length. The Poisson distribution has been applied successfully in text classification, but its use in topic modelling is not well documented, specifically in the context of a generative probabilistic model. Furthermore, the few Poisson topic models in the literature are admixture models, which assume that a document is generated from a mixture of topics. In this study, we focus on short text. Many studies have shown that the simpler assumption of a mixture model fits short text better: with mixture models, as opposed to admixture models, the generative assumption is that each document is generated from a single topic. One topic model that makes this one-topic-per-document assumption is the Dirichlet-multinomial mixture model. The main contributions of this work are a new Gamma-Poisson mixture model and a collapsed Gibbs sampler for it. A benefit of the collapsed Gibbs sampler derivation is that the model can automatically select the number of topics in the corpus. The results show that the Gamma-Poisson mixture model outperforms the Dirichlet-multinomial mixture model at selecting the number of topics in labelled corpora. Furthermore, the Gamma-Poisson mixture produces better topic coherence scores, making it a viable option for the challenging task of topic modelling of short text.
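The generative assumption described in this abstract is compact enough to sketch in code. The following is a minimal illustrative Python sketch, not the authors' implementation: each topic carries per-word Poisson rates drawn from a Gamma prior, each document is assigned a single topic, and posterior topic responsibilities are computed the way a Gibbs or EM step would use them. All dimensions, prior values, and variable names are assumptions made for illustration.

```python
# A minimal sketch of the one-topic-per-document Gamma-Poisson mixture
# (illustrative only, not the authors' implementation).
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)
K, V, D = 3, 50, 200          # topics, vocabulary size, documents (assumed)
alpha, beta = 2.0, 1.0        # Gamma prior on per-topic word rates (assumed)

# Generative process: a single topic per document, Poisson word counts.
lam = rng.gamma(alpha, 1.0 / beta, size=(K, V))  # topic-word Poisson rates
pi = rng.dirichlet(np.ones(K))                   # topic proportions
z = rng.choice(K, size=D, p=pi)                  # one topic per document
X = rng.poisson(lam[z])                          # D x V word-count matrix

# Posterior responsibilities given the rates, as a Gibbs/EM step would use:
# log p(x_d | z_d = k) = sum_v [ x_dv * log(lam_kv) - lam_kv - log(x_dv!) ]
log_lik = X @ np.log(lam).T - lam.sum(axis=1) - gammaln(X + 1).sum(axis=1, keepdims=True)
log_post = np.log(pi) + log_lik
resp = np.exp(log_post - log_post.max(axis=1, keepdims=True))
resp /= resp.sum(axis=1, keepdims=True)          # p(z_d = k | x_d)

print("documents assigned to their true topic:",
      (resp.argmax(axis=1) == z).mean())
```

A collapsed sampler, as in the paper, would instead integrate the Gamma-distributed rates out analytically (Gamma-Poisson conjugacy yields a negative binomial marginal) rather than conditioning on sampled rates as above.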



2020 · Vol 37 (4) · pp. 045007
Author(s): Shasvath J Kapadia, Sarah Caudill, Jolien D E Creighton, Will M Farr, Gregory Mendell, ...




Author(s): Xinmin Zhang, Manabu Kano, Masahiro Tani, Junichi Mori, Junji Ise, ...


2019 · Vol 174 · pp. 105-116
Author(s): Katharina Falkner, Hermine Mitter, Elena Moltchanova, Erwin Schmid


2018 · Vol 1097 · pp. 012083
Author(s): J Rizal, A Y Gunawan, S W Indratno, I Meilano


2018 · Vol 30 (8) · pp. 2113-2174
Author(s): Dennis Forster, Abdul-Saboor Sheikh, Jörg Lücke

We explore classifier training for data sets with very few labels. We investigate this task using a neural network for nonnegative data. The network is derived from a hierarchical normalized Poisson mixture model with one observed and two hidden layers. With likelihood optimization as the single objective, both labeled and unlabeled data are naturally incorporated into learning. The neural activation and learning equations resulting from our derivation are concise and local. As a consequence, the network can be scaled using standard deep learning tools for parallelized GPU implementation. Using standard benchmarks for nonnegative data, such as text document representations, MNIST, and NIST SD19, we study classification performance when very few labels are used for training. In different settings, the network's performance is compared to standard and recently suggested semisupervised classifiers. While other recent approaches are more competitive when many labels or fully labeled data sets are available, we find that the network studied here can be applied at label counts so small that no other system has been reported to operate there so far.
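The central idea of this abstract, a single likelihood objective that absorbs labeled and unlabeled examples alike, can be illustrated with a flat Poisson mixture rather than the hierarchical network itself. The EM sketch below is a deliberately simplified assumption, not the paper's implementation: labeled rows have their responsibilities clamped to the known class, unlabeled rows use posterior responsibilities, and both feed the same M-step. Names, dimensions, and the 5% label fraction are illustrative.

```python
# Illustrative EM for a semisupervised Poisson mixture (a flat simplification
# of the hierarchical network described above, not the paper's method).
import numpy as np
from scipy.special import gammaln, logsumexp

def poisson_log_lik(X, lam):
    """log p(x_d | lam_k) for independent Poisson features; returns D x K."""
    return X @ np.log(lam).T - lam.sum(axis=1) - gammaln(X + 1).sum(axis=1, keepdims=True)

def em_step(X, y, lam, pi):
    """One EM update; y[d] is a class index, or -1 when the label is missing."""
    ll = poisson_log_lik(X, lam) + np.log(pi)            # D x K joint log-prob
    resp = np.exp(ll - logsumexp(ll, axis=1, keepdims=True))
    labeled = y >= 0
    resp[labeled] = np.eye(pi.size)[y[labeled]]          # clamp labeled rows
    nk = resp.sum(axis=0)
    lam = (resp.T @ X + 1e-8) / (nk[:, None] + 1e-8)     # M-step: mean rates
    return lam, nk / nk.sum()

# Synthetic nonnegative data with ~5% of labels retained (an assumption).
rng = np.random.default_rng(1)
K, V, D = 2, 20, 300
lam_true = rng.gamma(2.0, 2.0, size=(K, V))
z = rng.integers(K, size=D)
X = rng.poisson(lam_true[z]).astype(float)
y = np.where(rng.random(D) < 0.05, z, -1)

lam = X.mean(axis=0, keepdims=True) * rng.uniform(0.5, 1.5, size=(K, V)) + 1e-3
pi = np.full(K, 1.0 / K)
for _ in range(50):
    lam, pi = em_step(X, y, lam, pi)

pred = (poisson_log_lik(X, lam) + np.log(pi)).argmax(axis=1)
print("accuracy on unlabeled documents:", (pred == z)[y < 0].mean())
```

Because the handful of clamped labeled rows anchor each mixture component to a class, the unlabeled data can then sharpen the component parameters through the shared likelihood, which is the mechanism the paper exploits at scale.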


