Pseudo-siamese networks with lexicon for Chinese short text matching

2021 ◽  
pp. 1-13
Author(s):  
Jiawen Shi ◽  
Hong Li ◽  
Chiyu Wang ◽  
Zhicheng Pang ◽  
Jiale Zhou

Short text matching is one of the fundamental technologies in natural language processing. Most text matching networks in previous studies were originally designed for English text. The common approach to applying them to Chinese is to segment each sentence into words and then take these words as input. However, this method often introduces word segmentation errors. Chinese short text matching faces the challenges of constructing effective features and understanding the semantic relationship between two sentences. In this work, we propose a novel lexicon-based pseudo-siamese model (CL2N), which can fully mine the information expressed in Chinese text. Instead of utilizing a character sequence or a single word sequence, CL2N augments the text representation with multi-granularity information from characters and lexicons. Additionally, it integrates sentence-level features through single-sentence features as well as interactive features. Experiments on two Chinese text matching datasets show that our model outperforms state-of-the-art short text matching models, and that the proposed method can solve the error propagation problem of Chinese word segmentation. In particular, the incorporation of single-sentence features and interactive features allows the network to capture contextual semantics and co-attentive lexical information, which contributes to our best result.
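
A minimal PyTorch sketch of the pseudo-siamese idea, assuming illustrative layer sizes and a simple co-attention fusion (not the paper's exact architecture): two unshared encoders consume the character-level and lexicon-word views of each sentence, and single-sentence features are combined with interactive features for classification.

```python
import torch
import torch.nn as nn

class PseudoSiameseMatcher(nn.Module):
    def __init__(self, char_vocab, word_vocab, dim=128):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab, dim)
        self.word_emb = nn.Embedding(word_vocab, dim)
        # Pseudo-siamese: the character and lexicon-word towers do NOT share weights.
        self.char_enc = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.word_enc = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.cls = nn.Linear(8 * dim, 2)  # match / no-match

    def encode(self, chars, words):
        c, _ = self.char_enc(self.char_emb(chars))          # (B, Tc, 2*dim)
        w, _ = self.word_enc(self.word_emb(words))          # (B, Tw, 2*dim)
        att = torch.softmax(c @ w.transpose(1, 2), dim=-1)  # chars attend to words
        fused = c + att @ w                                 # multi-granularity fusion
        return fused.max(dim=1).values                      # sentence vector

    def forward(self, a_chars, a_words, b_chars, b_words):
        a = self.encode(a_chars, a_words)
        b = self.encode(b_chars, b_words)
        # Single-sentence features plus interactive features (difference, product).
        return self.cls(torch.cat([a, b, (a - b).abs(), a * b], dim=-1))
```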


2013 ◽  
Vol 427-429 ◽  
pp. 2568-2571
Author(s):  
Shu Xian Liu ◽  
Xiao Hua Li

This article first provides a brief introduction to Natural Language Processing and the basic knowledge of Chinese Word Segmentation. Chinese Word Segmentation is the process of turning a sequence of Chinese characters into a sequence of Chinese words according to certain rules. As a fundamental component of Chinese information processing, it is widely used in related areas. Accordingly, research on Chinese Word Segmentation has important theoretical and practical significance. In this paper, we mainly introduce the challenges in Chinese Word Segmentation and present the categories of Chinese Word Segmentation methods.
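
As an illustration of the dictionary-based category of methods such surveys cover, here is a minimal forward-maximum-matching segmenter; the dictionary and sentence are toy examples, and the output also exhibits the ambiguity problem that greedy matching rules face.

```python
# Forward maximum matching: repeatedly take the longest dictionary word that
# prefixes the remaining text; back off to a single character if nothing matches.
def fmm_segment(text, dictionary, max_len=4):
    words, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in dictionary or j == i + 1:
                words.append(text[i:j])
                i = j
                break
    return words

print(fmm_segment("研究生命起源", {"研究", "研究生", "生命", "起源"}))
# -> ['研究生', '命', '起源']  (greedy matching misses 研究/生命/起源,
#    illustrating the segmentation-ambiguity challenge discussed above)
```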


2005 ◽  
Vol 31 (4) ◽  
pp. 531-574 ◽  
Author(s):  
Jianfeng Gao ◽  
Mu Li ◽  
Chang-Ning Huang ◽  
Andi Wu

This article presents a pragmatic approach to Chinese word segmentation. It differs from most previous approaches mainly in three respects. First, while theoretical linguists have defined Chinese words using various linguistic criteria, Chinese words in this study are defined pragmatically as segmentation units whose definition depends on how they are used and processed in realistic computer applications. Second, we propose a pragmatic mathematical framework in which segmenting known words and detecting unknown words of different types (i.e., morphologically derived words, factoids, named entities, and other unlisted words) can be performed simultaneously in a unified way. These tasks are usually conducted separately in other systems. Finally, we do not assume the existence of a universal word segmentation standard that is application-independent. Instead, we argue for the necessity of multiple segmentation standards due to the pragmatic fact that different natural language processing applications might require different granularities of Chinese words. These pragmatic approaches have been implemented in an adaptive Chinese word segmenter, called MSRSeg, which will be described in detail. It consists of two components: (1) a generic segmenter that is based on the framework of linear mixture models and provides a unified approach to the five fundamental features of word-level Chinese language processing: lexicon word processing, morphological analysis, factoid detection, named entity recognition, and new word identification; and (2) a set of output adaptors for adapting the output of (1) to different application-specific standards. Evaluation on five test sets with different standards shows that the adaptive system achieves state-of-the-art performance on all the test sets.
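
The "linear mixture models" framework mentioned above can be read as scoring each candidate segmentation by a weighted sum of component feature functions and keeping the highest-scoring candidate. The components, weights, and toy lexicon below are illustrative assumptions, not MSRSeg's actual models.

```python
def score(candidate, features, weights):
    # score(S) = sum_i lambda_i * f_i(S)
    return sum(weights[name] * f(candidate) for name, f in features.items())

def best_segmentation(candidates, features, weights):
    return max(candidates, key=lambda s: score(s, features, weights))

# Toy usage: a lexicon-coverage feature plus a word-count penalty.
lexicon = {"北京", "大学", "北京大学", "学生"}
features = {
    "lexicon": lambda s: sum(w in lexicon for w in s),  # in-lexicon words
    "brevity": lambda s: -len(s),                       # prefer fewer, longer words
}
weights = {"lexicon": 1.0, "brevity": 1.5}
print(best_segmentation([["北京大学", "学生"], ["北京", "大学", "学生"]],
                        features, weights))             # -> ['北京大学', '学生']
```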


Mathematics ◽  
2021 ◽  
Vol 9 (10) ◽  
pp. 1129
Author(s):  
Shihong Chen ◽  
Tianjiao Xu

QA matching is a very important task in natural language processing, but current research on text matching focuses more on short texts than on long texts. Compared with short text matching, long text matching is rich in information, but distracting information is also frequent. This paper extracts question-and-answer pairs about psychological counseling to study long text QA-matching technology based on deep learning. We adjusted DSSM (Deep Structured Semantic Model) to make it suitable for the QA-matching task. Moreover, for better extraction of long text features, we improved DSSM by enriching the text representation layer with a bidirectional neural network and an attention mechanism. The experimental results show that the resulting BiGRU–Dattention–DSSM performs better at matching questions and answers.
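
A minimal PyTorch sketch of this kind of two-tower matcher, assuming illustrative dimensions and additive self-attention pooling (the exact BiGRU–Dattention–DSSM layers are not reproduced here): each text is encoded by a BiGRU, pooled with attention, and matched by cosine similarity as in the original DSSM.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiGRUAttentionDSSM(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * hidden, 1)  # additive attention scorer

    def encode(self, tokens):
        h, _ = self.gru(self.emb(tokens))            # (B, T, 2*hidden)
        weights = torch.softmax(self.att(h), dim=1)  # (B, T, 1)
        return (weights * h).sum(dim=1)              # attention-pooled vector

    def forward(self, question, answer):
        q, a = self.encode(question), self.encode(answer)
        return F.cosine_similarity(q, a, dim=-1)     # matching score in [-1, 1]
```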


Author(s):  
Xinchi Chen ◽  
Xipeng Qiu ◽  
Xuanjing Huang

Recently, neural network models for natural language processing tasks have received increasing attention for their ability to alleviate the burden of manual feature engineering. However, previous neural models cannot extract the complicated feature compositions that traditional methods capture with discrete features. In this work, we propose a feature-enriched neural model for the joint Chinese word segmentation and part-of-speech tagging task. Specifically, to simulate the feature templates of traditional discrete-feature models, we use different filters to model complex compositional features with convolutional and pooling layers, and then capture long-distance dependency information with a recurrent layer. Experimental results on five different datasets show the effectiveness of our proposed model.
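
A simplified PyTorch sketch of this design, assuming illustrative sizes and a plain B/M/E/S segmentation tag set (the paper's joint model also predicts POS and uses pooling that is omitted here): convolutions of several widths emulate discrete feature templates over character windows, and a recurrent layer adds long-distance context before per-character tag scoring.

```python
import torch
import torch.nn as nn

class ConvRNNTagger(nn.Module):
    def __init__(self, vocab, emb=100, channels=50, widths=(1, 3, 5), tags=4):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        # One convolution per template width; 'same' padding keeps one
        # feature vector per character position.
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb, channels, w, padding=w // 2) for w in widths])
        self.rnn = nn.LSTM(channels * len(widths), 128,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(256, tags)             # B/M/E/S tag scores

    def forward(self, chars):                       # chars: (B, T)
        x = self.emb(chars).transpose(1, 2)         # (B, emb, T)
        feats = torch.cat([torch.relu(c(x)) for c in self.convs], dim=1)
        h, _ = self.rnn(feats.transpose(1, 2))      # (B, T, 256)
        return self.out(h)                          # per-character tag scores
```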


Author(s):  
Hang Yan ◽  
Xipeng Qiu ◽  
Xuanjing Huang

Chinese word segmentation and dependency parsing are two fundamental tasks for Chinese natural language processing. Dependency parsing is defined at the word level, so word segmentation is a precondition of dependency parsing; this makes dependency parsing suffer from error propagation and prevents it from directly using character-level pre-trained language models (such as BERT). In this paper, we propose a graph-based model to integrate Chinese word segmentation and dependency parsing. Different from previous transition-based joint models, our proposed model is more concise and requires less feature engineering effort. Our graph-based joint model achieves better performance than previous joint models and state-of-the-art results on both Chinese word segmentation and dependency parsing. Additionally, when combined with BERT, our model can substantially reduce the performance gap in dependency parsing between joint models and gold-segmented word-based models. Our code is publicly available at https://github.com/fastnlp/JointCwsParser
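
A generic biaffine arc scorer sketches the graph-based ingredient such a joint model builds on; the authors' exact architecture, including how intra-word arcs encode segmentation and inter-word arcs encode the parse tree, is simplified away here, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class BiaffineArcScorer(nn.Module):
    def __init__(self, enc_dim=256, arc_dim=128):
        super().__init__()
        self.head = nn.Linear(enc_dim, arc_dim)
        self.dep = nn.Linear(enc_dim, arc_dim)
        self.U = nn.Parameter(torch.randn(arc_dim, arc_dim) / arc_dim ** 0.5)
        self.bias = nn.Linear(arc_dim, 1)

    def forward(self, h):                 # h: (B, T, enc_dim) character encodings
        head = torch.relu(self.head(h))   # (B, T, arc_dim)
        dep = torch.relu(self.dep(h))
        # score[b, i, j] = dep_i^T U head_j + b^T head_j
        scores = dep @ self.U @ head.transpose(1, 2)
        return scores + self.bias(head).transpose(1, 2)  # (B, T, T) arc scores
```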


GEOMATICA ◽  
2018 ◽  
Vol 72 (1) ◽  
pp. 16-26 ◽  
Author(s):  
Qinjun Qiu ◽  
Zhong Xie ◽  
Liang Wu

Unlike English and other western languages, Chinese does not delimit words using white-spaces. Chinese Word Segmentation (CWS) is the crucial first step towards natural language processing. However, for the geoscience subject domain, the CWS problem remains unresolved and poses many challenges. Although traditional methods can be used to process geoscience documents, they lack the domain knowledge needed for massive geoscience documents. These challenges motivated us to build a segmenter specifically for the geoscience domain. Currently, most state-of-the-art methods for Chinese word segmentation are based on supervised learning, with features mostly extracted from a local context. In this paper, we propose a framework for sequence learning that incorporates cyclic self-learning corpus training. Following this framework, we build GeoSegmenter, based on the Bi-directional Long Short-Term Memory (Bi-LSTM) network model, to perform Chinese word segmentation. It gains a substantial advantage through iterations over the training data. Empirical results on geoscience documents and benchmark datasets show that the segmenter handles geological documents well and can also process generic documents.
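
The cyclic self-learning corpus training described above can be sketched as a self-training loop: train a tagger, label raw geoscience text with it, promote confidently labeled sentences into the training set, and repeat. `train` and `predict` below are hypothetical callables supplied by the caller (e.g. wrapping a Bi-LSTM tagger), not GeoSegmenter's API.

```python
def cyclic_self_learning(labeled, unlabeled, train, predict,
                         rounds=3, threshold=0.95):
    """Self-training sketch. Assumes `train(labeled) -> model` and
    `predict(model, sentence) -> (tags, confidence)` are provided."""
    for _ in range(rounds):
        model = train(labeled)
        remaining = []
        for sentence in unlabeled:
            tags, confidence = predict(model, sentence)
            if confidence >= threshold:
                labeled.append((sentence, tags))   # promote to training data
            else:
                remaining.append(sentence)
        unlabeled = remaining
    return train(labeled)                          # final model on enlarged corpus
```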


2014 ◽  
Vol 556-562 ◽  
pp. 4376-4379
Author(s):  
Kun Zhi Gui ◽  
Yong Ren ◽  
Zhao Meng Peng

Chinese word segmentation is a fundamental problem in natural language processing. A CRF (Conditional Random Field) is an undirected graphical model; it works well with a variety of features and makes full use of the text information. Thus, this article adopts CRF-based Chinese word segmentation. The paper first gives the definition of the CRF model, its parameter learning methods, and its inference algorithms. Then, it introduces the character-position tagging scheme widely used in Chinese word segmentation. The Bakeoff 2005 corpora are used in the Chinese word segmentation experiments, and we achieve excellent results on both the MSRA and PKU corpora: the F-measures are 0.964 and 0.943, and the out-of-vocabulary recall (R_OOV) values are 0.705 and 0.765, respectively.
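
An illustration of CRF-based segmentation with character B/M/E/S tagging, using the third-party sklearn-crfsuite package (an assumption; the paper does not name its toolkit). Each character becomes a feature dict, and labels mark word-Begin/Middle/End/Single positions; the one-sentence corpus is a toy example.

```python
import sklearn_crfsuite

def char_features(sent, i):
    return {
        "char": sent[i],
        "prev": sent[i - 1] if i > 0 else "<s>",
        "next": sent[i + 1] if i < len(sent) - 1 else "</s>",
    }

def featurize(sent):
    return [char_features(sent, i) for i in range(len(sent))]

# Toy training pair: "我爱北京" -> 我 / 爱 / 北京
X = [featurize("我爱北京")]
y = [["S", "S", "B", "E"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, y)
print(crf.predict([featurize("我爱北京")]))  # expected: [['S', 'S', 'B', 'E']]
```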


2011 ◽  
Vol 58-60 ◽  
pp. 1106-1112
Author(s):  
Xin Liu ◽  
Ren Ren Liu ◽  
Wen Jing He

To solve Chinese text categorization, a fast algorithm is proposed. The basic idea of the algorithm is as follows: first, construct a dictionary of weighted keywords organized as a key tree (trie); then, using a hash function and the principle of giving priority to the longest match, map the strings in documents to the dictionary. After that, sum the weights of the keywords that have been matched successfully. Finally, take the class with the maximum score as the classification result. The algorithm avoids the difficulty of Chinese word segmentation and its influence on the accuracy of the result. Theoretical analysis and experimental results indicate that both the accuracy and the time efficiency of the algorithm are high, with overall performance reaching the level of current mainstream techniques.
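
A minimal sketch of this pipeline, with a plain dict plus bounded lookup standing in for the hashed key tree; the keywords, weights, and classes are toy values for illustration.

```python
def classify(text, keyword_weights, max_len=4):
    scores = {}
    i = 0
    while i < len(text):
        # Longest-match-first: try the longest candidate substring first.
        for j in range(min(len(text), i + max_len), i, -1):
            weights = keyword_weights.get(text[i:j])
            if weights:
                for cls, w in weights.items():
                    scores[cls] = scores.get(cls, 0.0) + w
                i = j - 1          # skip past the matched keyword (i += 1 below)
                break
        i += 1
    # The class with the maximum summed weight wins.
    return max(scores, key=scores.get) if scores else None

keyword_weights = {
    "股票": {"finance": 2.0},
    "比赛": {"sports": 2.0},
    "球队": {"sports": 1.5},
}
print(classify("这支球队赢得了比赛", keyword_weights))  # -> 'sports'
```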


2017 ◽  
Vol 21 (1) ◽  
pp. 61-65
Author(s):  
Parshu Ram Dhungyel ◽  
Jānis Grundspeņķis

In both the Chinese and Dzongkha languages, the greatest challenge is to identify word boundaries, because there are no word delimiters as there are in English and other Western languages. Therefore, preprocessing and word segmentation are the first steps in Dzongkha language processing tasks such as translation, spell-checking, and information retrieval. Research on Chinese word segmentation began long ago and is therefore relatively mature, but Dzongkha word segmentation has been less studied. In this paper, we investigate this major problem in Dzongkha language processing using a probabilistic approach that selects valid segments, with probabilities computed on the basis of a corpus.
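
One way to realize such probabilistic selection is a dynamic program that, among all splits of the input into corpus-attested segments, picks the one maximizing the product of segment probabilities. The tiny ASCII probability table below is a stand-in for corpus-estimated values, not the paper's data.

```python
import math

def best_segmentation(text, prob):
    # best[i] = (log-probability, segmentation) of the best split of text[:i]
    best = [(0.0, [])] + [(-math.inf, None)] * len(text)
    for i in range(len(text)):
        if best[i][1] is None:
            continue
        for j in range(i + 1, len(text) + 1):
            seg = text[i:j]
            if seg in prob:
                score = best[i][0] + math.log(prob[seg])
                if score > best[j][0]:
                    best[j] = (score, best[i][1] + [seg])
    return best[len(text)][1]

prob = {"ab": 0.4, "a": 0.2, "b": 0.1, "c": 0.3, "abc": 0.05}
print(best_segmentation("abc", prob))
# -> ['ab', 'c'] since 0.4 * 0.3 beats both 0.05 and 0.2 * 0.1 * 0.3
```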

