Semantic and Syntactic Model of Natural Language Based on Tensor Factorization

Author(s): Anatoly Anisimov, Oleksandr Marchenko, Volodymyr Taranukha, Taras Vozniuk
2010, Vol. 16 (4), pp. 417-437
Author(s): Tim Van de Cruys
Abstract: Distributional similarity methods have proven to be a valuable tool for the induction of semantic similarity. Until now, most algorithms have used two-way co-occurrence data to compute the meaning of words. Co-occurrence frequencies, however, need not be pairwise: one can easily imagine situations where it is desirable to investigate co-occurrence frequencies of three modes and beyond. This paper investigates tensor factorization methods for building a model of three-way co-occurrences. The approach is applied to the problem of selectional preference induction and evaluated automatically in a pseudo-disambiguation task. The results show that tensor factorization, and non-negative tensor factorization in particular, is a promising tool for natural language processing (NLP).
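The three-way model described in the abstract can be illustrated compactly. Below is a minimal sketch, assuming a verb x subject x object co-occurrence count tensor, of a non-negative PARAFAC (CP) factorization computed with multiplicative updates; the function names, the chosen rank, and the toy data are illustrative assumptions, not code or data from the paper.

import numpy as np

def khatri_rao(U, V):
    # Column-wise Kronecker product: U (J x R), V (K x R) -> (J*K x R),
    # row j*K + k holds U[j, r] * V[k, r] for each component r.
    J, R = U.shape
    K, _ = V.shape
    return (U[:, None, :] * V[None, :, :]).reshape(J * K, R)

def nonneg_parafac(X, rank, n_iter=200, eps=1e-9, seed=0):
    # Non-negative CP decomposition of a 3-mode tensor X (I x J x K)
    # via multiplicative updates that keep all factors non-negative.
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.random((I, rank))
    B = rng.random((J, rank))
    C = rng.random((K, rank))
    X1 = X.reshape(I, J * K)                     # mode-1 unfolding
    X2 = X.transpose(1, 0, 2).reshape(J, I * K)  # mode-2 unfolding
    X3 = X.transpose(2, 0, 1).reshape(K, I * J)  # mode-3 unfolding
    for _ in range(n_iter):
        A *= (X1 @ khatri_rao(B, C)) / (A @ ((B.T @ B) * (C.T @ C)) + eps)
        B *= (X2 @ khatri_rao(A, C)) / (B @ ((A.T @ A) * (C.T @ C)) + eps)
        C *= (X3 @ khatri_rao(A, B)) / (C @ ((A.T @ A) * (B.T @ B)) + eps)
    return A, B, C

# Hypothetical toy verb x subject x object co-occurrence counts.
X = np.random.default_rng(1).poisson(0.5, size=(20, 30, 30)).astype(float)
A, B, C = nonneg_parafac(X, rank=5)
# Preference score for a (verb, subject, object) triple: sum_r A[v,r] * B[s,r] * C[o,r].
score = np.einsum('r,r,r->', A[3], B[7], C[11])

In a pseudo-disambiguation setting of the kind the abstract mentions, such a score for an attested triple would be compared against the score for a triple with a randomly substituted argument, with the factorization expected to rank the attested triple higher.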

