Using Seme Based Graph to Estimate Chinese Lexical Semantic Relatedness

Author(s):  
Yitao Shen ◽  
Junzhong Gu ◽  
Lijuan Diao


2007 ◽  
Vol 19 (8) ◽  
pp. 1259-1274 ◽  
Author(s):  
Dietmar Roehm ◽  
Ina Bornkessel-Schlesewsky ◽  
Frank Rösler ◽  
Matthias Schlesewsky

We report a series of event-related potential experiments designed to dissociate the functionally distinct processes involved in the comprehension of highly restricted lexical-semantic relations (antonyms). We sought to differentiate between influences of semantic relatedness (which are independent of the experimental setting) and processes related to predictability (which differ as a function of the experimental environment). To this end, we conducted three ERP studies contrasting the processing of antonym relations (black-white) with that of related (black-yellow) and unrelated (black-nice) word pairs. Whereas the lexical-semantic manipulation was kept constant across experiments, the experimental environment and the task demands varied: Experiment 1 presented the word pairs in a sentence context of the form The opposite of X is Y and used a sensicality judgment. Experiment 2 used a word pair presentation mode and a lexical decision task. Experiment 3 also examined word pairs, but with an antonymy judgment task. All three experiments revealed a graded N400 response (unrelated > related > antonyms), thus supporting the assumption that semantic associations are processed automatically. In addition, the experiments revealed that, in highly constrained task environments, the N400 gradation occurs simultaneously with a P300 effect for the antonym condition, thus leading to the superficial impression of an extremely “reduced” N400 for antonym pairs. Comparisons across experiments and participant groups revealed that the P300 effect is not only a function of stimulus constraints (i.e., sentence context) and experimental task, but that it is also crucially influenced by individual processing strategies used to achieve successful task performance.


2012 ◽  
Vol 19 (4) ◽  
pp. 411-479 ◽  
Author(s):  
Ziqi Zhang ◽  
Anna Lisa Gentile ◽  
Fabio Ciravegna

Measuring lexical semantic relatedness is an important task in Natural Language Processing (NLP) and is often a prerequisite to many complex NLP tasks. Despite the extensive amount of work dedicated to this area of research, the field lacks an up-to-date survey. This paper aims to address this issue with a study focused on four perspectives: (i) a comparative analysis of the background information resources that are essential for measuring lexical semantic relatedness; (ii) a review of the literature with a focus on recent methods not covered in previous surveys; (iii) a discussion of studies in the biomedical domain, where novel methods have been introduced but inadequately communicated across domain boundaries; and (iv) an evaluation of lexical semantic relatedness methods and a discussion of useful lessons for the development and application of such methods. In addition, we discuss a number of issues in this field and suggest future research directions. We believe this work will be a valuable reference for researchers of lexical semantic relatedness and will substantially support research activities in this field.
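As background for the methods such surveys cover, measures broadly split into resource-based approaches (e.g., built on WordNet) and distributional approaches derived from a background corpus. The sketch below is a purely illustrative toy example, not taken from the survey: it treats relatedness as the cosine similarity of co-occurrence vectors, with the tiny corpus and window size chosen arbitrarily for demonstration.

# Illustrative toy only: relatedness as cosine similarity of co-occurrence
# vectors built from a (hypothetical) background corpus.
from collections import Counter, defaultdict
from math import sqrt

corpus = "the black cat sat on the mat while the white cat slept on the mat".split()
window = 2  # symmetric context window size (arbitrary choice)

# Count context words within the window around each occurrence of a word.
vectors = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            vectors[word][corpus[j]] += 1

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Words occurring in similar contexts ("black"/"white") score higher than
# words occurring in dissimilar contexts ("black"/"mat").
print(cosine(vectors["black"], vectors["white"]))
print(cosine(vectors["black"], vectors["mat"]))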


2006 ◽  
Vol 32 (1) ◽  
pp. 13-47 ◽  
Author(s):  
Alexander Budanitsky ◽  
Graeme Hirst

The quantification of lexical semantic relatedness has many applications in NLP, and many different measures have been proposed. We evaluate five of these measures, all of which use WordNet as their central resource, by comparing their performance in detecting and correcting real-word spelling errors. An information-content-based measure proposed by Jiang and Conrath is found superior to those proposed by Hirst and St-Onge, Leacock and Chodorow, Lin, and Resnik. In addition, we explain why distributional similarity is not an adequate proxy for lexical semantic relatedness.
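As a hands-on reference, several of the measures compared in this article are available in NLTK's WordNet interface; this is an assumption for illustration, not the evaluation setup used by the authors, and neither the Hirst-St-Onge measure nor the real-word spelling-correction evaluation is part of that interface. The snippet assumes the 'wordnet' and 'wordnet_ic' NLTK data packages have been installed.

# Hedged sketch: computing several WordNet-based measures with NLTK.
# Requires: nltk.download('wordnet'); nltk.download('wordnet_ic')
from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

brown_ic = wordnet_ic.ic('ic-brown.dat')  # information-content counts from the Brown corpus

dog = wn.synset('dog.n.01')
cat = wn.synset('cat.n.01')

print('Resnik           :', dog.res_similarity(cat, brown_ic))
# NLTK returns the inverse of the Jiang-Conrath distance as a similarity score.
print('Jiang-Conrath    :', dog.jcn_similarity(cat, brown_ic))
print('Lin              :', dog.lin_similarity(cat, brown_ic))
print('Leacock-Chodorow :', dog.lch_similarity(cat))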


2008 ◽  
Vol 02 (02) ◽  
pp. 253-272 ◽
Author(s):  
Christof Müller ◽  
Iryna Gurevych ◽  
Max Mühlhäuser

This paper studies the integration of lexical semantic knowledge into two related semantic computing tasks: ad-hoc information retrieval and computing text similarity. For this purpose, we compare the performance of two algorithms: (i) one using semantic relatedness, and (ii) a conventional extended Boolean model [13] with additional query expansion. For the evaluation, we use two German-language test collections that are especially suitable for studying the vocabulary gap problem: (i) GIRT [5] for the information retrieval task, and (ii) a collection of descriptions of professions, built to evaluate a system for electronic career guidance, for both the information retrieval and text similarity tasks. We find that integrating lexical semantic knowledge increases performance on both tasks. On the GIRT corpus, performance improves only for short queries. Performance on the collection of profession descriptions also improves, but depends crucially on accurate preprocessing of the natural-language essays employed as topics.
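To make the idea of bridging the vocabulary gap concrete, here is a small hypothetical sketch (not the authors' system and not the extended Boolean model of [13]): a text-similarity score in which each query term is matched to the most related document term, with relatedness() standing in for any lexical semantic relatedness measure. Plain keyword matching is the special case where relatedness is 1 for identical terms and 0 otherwise.

# Hypothetical illustration of relatedness-based text similarity (not the paper's system).
from typing import Callable, List

def semantic_text_score(query_terms: List[str],
                        doc_terms: List[str],
                        relatedness: Callable[[str, str], float]) -> float:
    """Average, over query terms, of the best relatedness to any document term."""
    if not query_terms or not doc_terms:
        return 0.0
    return sum(max(relatedness(q, d) for d in doc_terms)
               for q in query_terms) / len(query_terms)

# Exact matching as the degenerate "relatedness" measure: reduces to keyword overlap.
exact = lambda a, b: 1.0 if a == b else 0.0
print(semantic_text_score(["career", "guidance"],
                          ["job", "counselling", "guidance"], exact))  # 0.5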


2000 ◽  
Author(s):  
Jerwen Jou ◽  
James W. Aldridge ◽  
Mark H. Winkel ◽  
Ravishankar Vedantam ◽  
Lorena L. Gonzalez
