A Wikipedia-based semantic tensor space model for text analytics

Author(s):  
Jae Young Chang ◽  
Han-joon Kim
2016 ◽  
Vol 21 (4) ◽  
pp. 1-14
Most text classification systems use machine learning algorithms; among these, naïve Bayes and support vector machine algorithms adapted to handle text data achieve reasonable performance. Recently, with developments in deep learning technology, several scholars have used deep neural networks (recurrent and convolutional neural networks) to improve text classification. However, deep learning-based text classification has not greatly improved performance over conventional algorithms. This is because a textual document is essentially expressed as a single vector over word dimensions, which discards inherent semantic information, even if the vector is transformed to add conceptual information. To solve this 'loss of term senses' problem, we develop a concept-driven deep neural network based upon our semantic tensor space model. The semantic tensor used for text representation captures the dependency between terms and concepts; we use it to develop three deep neural networks for text classification. Experiments on three standard document corpora show that the proposed methods are superior to both traditional and more recent learning methods.
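The core idea of a term-concept tensor representation can be sketched as follows. This is a minimal illustration, not the paper's method: the vocabulary, the concept labels, and the term-concept weights below are invented stand-ins (the paper derives concepts from Wikipedia, which is not reproduced here). It shows how a document becomes a term × concept matrix (one slice of a document × term × concept tensor) rather than a flat term-frequency vector, so that an ambiguous term like "bank" retains its different senses.

```python
import numpy as np

# Hypothetical vocabulary and concept set (illustrative stand-ins;
# the actual concepts would come from Wikipedia).
terms = ["bank", "river", "loan"]
concepts = ["Finance", "Geography"]

# Assumed term-concept association weights: how strongly each term
# evokes each concept.
term_concept = np.array([
    [0.7, 0.3],   # "bank" relates to both Finance and Geography
    [0.0, 1.0],   # "river" is purely geographic
    [1.0, 0.0],   # "loan" is purely financial
])

def doc_to_tensor_slice(term_counts):
    """Represent one document as a term x concept matrix (a tensor
    slice) by spreading each term's frequency over its concepts,
    instead of collapsing the document to a term-frequency vector."""
    tf = np.asarray(term_counts, dtype=float).reshape(-1, 1)
    return tf * term_concept  # broadcasts counts across concept columns

# A document mentioning "bank" twice and "loan" once.
doc_slice = doc_to_tensor_slice([2, 0, 1])
print(doc_slice.shape)  # (3, 2): |terms| x |concepts|
```

Stacking such slices over a corpus yields a third-order tensor (documents × terms × concepts), which is the kind of structure a concept-driven neural network can consume directly.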

