FSPRM: A Feature Subsequence Based Probability Representation Model for Chinese Word Embedding

Author(s): Yun Zhang, Yongguo Liu, Jiajing Zhu, Xindong Wu
Author(s): Qinjuan Yang, Haoran Xie, Gary Cheng, Fu Lee Wang, Yanghui Rao

Abstract: Chinese word embeddings have recently garnered considerable attention. Chinese characters and their sub-character components, which carry rich semantic information, are commonly incorporated to learn Chinese word embeddings. A Chinese character encodes a combination of meaning, structure, and pronunciation; however, existing embedding learning methods exploit only the structure and meaning of characters. In this study, we aim to develop an embedding learning method that makes full use of the information a Chinese character carries, including its phonology, morphology, and semantics. Specifically, we propose a pronunciation-enhanced Chinese word embedding learning method in which the pronunciations of context characters and target characters are simultaneously encoded into the embeddings. Evaluations on word similarity, word analogy reasoning, text classification, and sentiment analysis validate the effectiveness of the proposed method.
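As an illustration of this idea (not the authors' released code), the sketch below shows one way pronunciation information could be fused into a CBOW-style Chinese word embedding model: each context character is looked up in both a character table and a pinyin (pronunciation) table, and the two views are combined before predicting the target character. The class name, vocabulary sizes, and the fusion-by-summation choice are assumptions made for illustration only.

```python
# Minimal sketch of a pronunciation-enhanced CBOW model (illustrative, not the paper's code).
import torch
import torch.nn as nn


class PronunciationEnhancedCBOW(nn.Module):
    def __init__(self, char_vocab: int, pinyin_vocab: int, emb_dim: int = 100):
        super().__init__()
        # Separate lookup tables for characters and their pinyin (pronunciation) units.
        self.char_emb = nn.Embedding(char_vocab, emb_dim)
        self.pinyin_emb = nn.Embedding(pinyin_vocab, emb_dim)
        # Output layer scores every character in the vocabulary as the target.
        self.out = nn.Linear(emb_dim, char_vocab)

    def forward(self, ctx_chars: torch.Tensor, ctx_pinyins: torch.Tensor) -> torch.Tensor:
        # ctx_chars, ctx_pinyins: (batch, window) index tensors for the context
        # characters and their pronunciations. Averaging each view over the window
        # and summing them lets pronunciation contribute to the prediction.
        ctx = self.char_emb(ctx_chars).mean(dim=1) + self.pinyin_emb(ctx_pinyins).mean(dim=1)
        return self.out(ctx)  # logits over the target-character vocabulary


# Toy usage: a batch of 2 windows, each with 4 context characters.
model = PronunciationEnhancedCBOW(char_vocab=5000, pinyin_vocab=400)
chars = torch.randint(0, 5000, (2, 4))
pinyins = torch.randint(0, 400, (2, 4))
logits = model(chars, pinyins)  # shape: (2, 5000)
```

Summation is only one possible fusion strategy; concatenation or gating between the character and pronunciation views would fit the same interface.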

