Improving maximum likelihood estimation using prior probabilities: A tutorial on maximum a posteriori estimation and an examination of the Weibull distribution

2013, Vol. 9 (2), pp. 61-71
Author(s): Denis Cousineau, Sébastien Hélie

Author(s): Shoaib Jameel, Zihao Fu, Bei Shi, Wai Lam, Steven Schockaert

The GloVe word embedding model relies on solving a global optimization problem, which can be reformulated as a maximum likelihood estimation problem. In this paper, we propose to generalize this approach to word embedding by considering parametrized variants of the GloVe model and incorporating priors on these parameters. To demonstrate the usefulness of this approach, we consider a word embedding model in which each context word is associated with a corresponding variance, intuitively encoding how informative it is. Using our framework, we can then learn these variances together with the resulting word vectors in a unified way. We experimentally show that the resulting word embedding models outperform GloVe, as well as many popular alternatives.
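Since the abstract only sketches the idea, the following is a minimal illustrative example (not the authors' exact formulation) of how a GloVe-style least-squares objective can be read as a Gaussian negative log-likelihood with one variance per context word, and how placing a prior on those variances turns maximum likelihood into maximum a posteriori estimation. The inverse-gamma prior, the toy co-occurrence counts, and all variable names below are assumptions made for the sake of the sketch.

```python
# Illustrative sketch only: GloVe-like objective read as a negative
# log-likelihood, plus a prior on per-context-word variances (MAP).
# The Gaussian noise model and the inverse-gamma prior are assumptions,
# not necessarily the formulation used in the paper.
import numpy as np

rng = np.random.default_rng(0)

V, d = 50, 10                                # vocabulary size, embedding dimension
W = rng.normal(scale=0.1, size=(V, d))       # target word vectors
C = rng.normal(scale=0.1, size=(V, d))       # context word vectors
b_w = np.zeros(V)                            # target word biases
b_c = np.zeros(V)                            # context word biases
log_sigma2 = np.zeros(V)                     # per-context-word log-variance (learned)

# Toy co-occurrence counts; in practice these come from a corpus.
X = rng.poisson(lam=3.0, size=(V, V))

def neg_log_posterior(W, C, b_w, b_c, log_sigma2, X, alpha=2.0, beta=1.0):
    """Negative log-posterior of a GloVe-like model.

    Likelihood (assumed): log X_ij ~ N(w_i . c_j + b_i + b_j, sigma_j^2),
    i.e. the squared GloVe residual is scaled by a context-specific variance.
    Prior (assumed):      sigma_j^2 ~ Inverse-Gamma(alpha, beta).
    """
    sigma2 = np.exp(log_sigma2)                       # (V,) context-word variances
    pred = W @ C.T + b_w[:, None] + b_c[None, :]      # (V, V) predicted log co-occurrences
    mask = X > 0                                      # fit only observed co-occurrences
    logX = np.log(np.where(mask, X, 1.0))             # placeholder 1.0 where unobserved
    resid2 = np.where(mask, (logX - pred) ** 2, 0.0)
    # Gaussian negative log-likelihood, one variance per context word (column).
    nll = 0.5 * np.sum(mask * np.log(sigma2)[None, :] + resid2 / sigma2[None, :])
    # Inverse-gamma negative log-prior on each sigma_j^2 (up to constants).
    nlp = np.sum((alpha + 1.0) * np.log(sigma2) + beta / sigma2)
    return nll + nlp

print("initial objective:", neg_log_posterior(W, C, b_w, b_c, log_sigma2, X))
```

In such a sketch the word vectors, biases, and log-variances would all be updated jointly, for example by gradient descent on this single objective, which corresponds to the unified learning of variances and word vectors described in the abstract.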

