Incorporating Lexical Semantic Similarity to Tree Kernel-Based Chinese Relation Extraction

Author(s):  
Dandan Liu ◽  
Zhiwei Zhao ◽  
Yanan Hu ◽  
Longhua Qian


Author(s):  
Martin Maiden

The historical morphology of the verb ‘snow’ in Francoprovençal presents a conundrum, in that it is clearly analogically influenced by the verb ‘rain’, for obvious reasons of lexical semantic similarity, but the locus of that influence is not the ‘root’ (the ostensible bearer of lexical meaning) but desinential inflexion-class members, which are in principle independent of any lexical meaning. Similar morphological changes are also identified for other Gallo-Romance verbs. It seems, in effect, that speakers can identify exponents of the lexical meaning of word-forms in linear sequences larger than the apparent ‘morphemic’ composition of those word-forms, even when such a composition may seem prima facie transparent and obvious. It is argued that these facts are inherently incompatible with ‘constructivist’, morpheme-based, models of morphology, and strongly compatible with what have been called ‘abstractivist’ (‘word-and-paradigm’) approaches, which generally take entire word-forms as the primary units of morphological analysis.


Author(s):  
Haiyun Jiang ◽  
Li Cui ◽  
Zhe Xu ◽  
Deqing Yang ◽  
Jindong Chen ◽  
...  

Explicitly exploring the semantics of a relation is important for high-accuracy relation extraction, yet it has not been fully studied in previous work. In this paper, we mine the topic knowledge of a relation to explicitly represent its semantics, and model relation extraction as a matching problem: for a given entity pair, we predict the matching score between a sentence and a candidate relation. To this end, we propose a deep matching network that precisely models the semantic similarity between a sentence and a relation. In addition, the topic knowledge allows us to derive the importance of training samples, as well as two knowledge-guided negative sampling strategies for the training process. Extensive experiments show that the proposed framework improves AUC by 11.5% and max F1 by 5.4% over state-of-the-art baselines.
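The matching formulation above can be illustrated with a toy sketch: each candidate relation is represented by topic keywords, and the score between a sentence and a relation is computed over those representations. The paper uses a learned deep matching network; here, plain cosine similarity over bag-of-words vectors stands in for it, and all relation names, keyword lists, and sentences are illustrative assumptions, not the authors' data.

```python
# Toy sketch: relation extraction framed as sentence-relation matching.
# A deep matching network is replaced by cosine similarity over
# bag-of-words vectors; relations and keywords below are assumptions.
from collections import Counter
from math import sqrt

def bow(text):
    """Bag-of-words vector as a Counter over lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two Counter vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Each candidate relation is represented by topic keywords (assumed examples).
relations = {
    "place_of_birth": "born birth native hometown",
    "employer": "works employed company job hired",
}

def best_relation(sentence, relations):
    """Score the sentence against every relation; return the best match."""
    scores = {r: cosine(bow(sentence), bow(kw)) for r, kw in relations.items()}
    return max(scores, key=scores.get), scores

rel, scores = best_relation("Alice was born in the hometown of Bob", relations)
```

In the real framework the bag-of-words encoder would be replaced by the deep matching network, but the interface is the same: one score per (sentence, relation) pair for a given entity pair.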


Author(s):  
Muhammad Asif Ali ◽  
Yifang Sun ◽  
Xiaoling Zhou ◽  
Wei Wang ◽  
Xiang Zhao

Distinguishing antonyms from synonyms is a key challenge for many NLP applications focused on lexical-semantic relation extraction. Existing solutions that rely on large-scale corpora yield low performance because of the large contextual overlap between antonym and synonym pairs. We propose a novel approach based entirely on pre-trained embeddings. We hypothesize that pre-trained embeddings encode a blend of lexical-semantic information, and that the task-specific information can be distilled from them using Distiller, a model proposed in this paper. A classifier is then trained on features constructed from the distilled sub-spaces, along with some word-level features, to distinguish antonyms from synonyms. Experimental results show that the proposed model outperforms existing approaches to antonym-synonym distinction in both speed and accuracy.
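The pipeline described above — build pair features from word embeddings, then train a classifier to separate antonyms from synonyms — can be sketched in miniature. The hand-made 3-dimensional vectors, the element-wise difference/product features, and the nearest-centroid classifier below are all illustrative assumptions standing in for real pre-trained embeddings, the Distiller sub-spaces, and the paper's classifier.

```python
# Minimal sketch of antonym-vs-synonym classification from embeddings.
# Toy vectors, pair features, and a nearest-centroid classifier stand in
# for pre-trained embeddings, Distiller, and the trained classifier.

def features(u, v):
    """Pair features: element-wise difference and product (a common choice)."""
    return [a - b for a, b in zip(u, v)] + [a * b for a, b in zip(u, v)]

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def sq_dist(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y))

# Toy embeddings (assumed): antonyms flip sign in one "polarity" dimension,
# synonyms stay close -- a caricature of the distilled sub-space idea.
emb = {
    "hot":   [1.0, 0.9, 0.1],  "cold":  [-1.0, 0.9, 0.1],
    "big":   [1.0, 0.2, 0.8],  "small": [-1.0, 0.2, 0.8],
    "happy": [1.0, 0.5, 0.5],  "glad":  [0.9, 0.5, 0.5],
    "fast":  [1.0, 0.1, 0.9],  "quick": [0.95, 0.1, 0.9],
}

train = {
    "antonym": [("hot", "cold"), ("big", "small")],
    "synonym": [("happy", "glad"), ("fast", "quick")],
}

# One centroid per class over the training pair features.
cents = {lab: centroid([features(emb[a], emb[b]) for a, b in pairs])
         for lab, pairs in train.items()}

def classify(w1, w2):
    """Label an unseen pair by its nearest class centroid in feature space."""
    f = features(emb[w1], emb[w2])
    return min(cents, key=lambda lab: sq_dist(f, cents[lab]))
```

For example, `classify("hot", "small")` lands near the antonym centroid because the sign-flipped polarity dimension dominates the difference features, while `classify("big", "quick")` lands near the synonym centroid. The real model replaces each hand-crafted piece with learned components, but the shape of the pipeline is the same.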

