Unsupervised Predominant Sense Detection and Its Application to Text Classification

2020 ◽  
Vol 10 (17) ◽  
pp. 6052
Author(s):  
Attaporn Wangpoonsarp ◽  
Kazuya Shimura ◽  
Fumiyo Fukumoto

This paper focuses on the domain-specific senses of words and proposes a method for detecting the predominant sense of a word in each domain. Our Domain-Specific Senses (DSS) model works in an unsupervised manner and detects predominant senses in each domain. We apply a simple Markov Random Walk (MRW) model to rank senses for each domain: the importance of a sense within a graph is decided by the similarity between senses. Sense similarity is obtained from distributional representations of words in the gloss texts of the thesaurus. This captures a large semantic context and thus does not require manually annotated sense-tagged data. We used the Reuters corpus and WordNet in the experiments. We applied the resulting domain-specific senses to text classification and examined how DSS affects the overall performance of the text classification task. We compared our DSS model with Context2vec, a word sense disambiguation (WSD) technique, and the results demonstrate that our domain-specific sense approach gains a 0.053 F1 improvement on average over the WSD approach.
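
The sense-ranking step described above is essentially a PageRank-style random walk over a sense-similarity graph. The following is a minimal sketch of that idea; the gloss embeddings, damping factor, and convergence threshold are illustrative assumptions, not the authors' exact DSS configuration.

```python
import numpy as np

def rank_senses(gloss_vectors, damping=0.85, tol=1e-6, max_iter=100):
    """gloss_vectors: (n_senses, dim) array of gloss-text embeddings for one domain."""
    # Cosine similarity between every pair of senses.
    norms = np.linalg.norm(gloss_vectors, axis=1, keepdims=True)
    unit = gloss_vectors / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T
    np.fill_diagonal(sim, 0.0)                      # no self-loops

    # Row-normalise similarities into transition probabilities.
    row_sums = sim.sum(axis=1, keepdims=True)
    trans = np.divide(sim, row_sums,
                      out=np.full_like(sim, 1.0 / len(sim)),
                      where=row_sums > 0)

    # Power iteration: a sense is important if similar senses are important.
    n = len(gloss_vectors)
    scores = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_scores = (1 - damping) / n + damping * trans.T @ scores
        if np.abs(new_scores - scores).sum() < tol:
            break
        scores = new_scores
    return scores                                   # higher = more predominant in the domain
```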

Author(s):  
Pushpak Bhattacharyya ◽  
Mitesh Khapra

This chapter discusses the basic concepts of Word Sense Disambiguation (WSD) and the approaches to solving this problem. Both general-purpose WSD and domain-specific WSD are presented. The first part of the discussion focuses on existing approaches for WSD, including knowledge-based, supervised, semi-supervised, unsupervised, hybrid, and bilingual approaches. At the current state of the art, the accuracy of general-purpose WSD seems to be pegged at around 65%. This has motivated investigations into domain-specific WSD, which is the current trend in the field. In the latter part of the chapter, we present a greedy, neural network inspired algorithm for domain-specific WSD and compare its performance with other state-of-the-art algorithms for WSD. Our experiments suggest that for domain-specific WSD, simply selecting the most frequent sense of a word does as well as any state-of-the-art algorithm.
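
As a point of reference, the most-frequent-sense baseline mentioned above can be sketched in a few lines; access to WordNet through NLTK is an assumption here, not the chapter's own setup.

```python
from nltk.corpus import wordnet as wn

def most_frequent_sense(word, pos=None):
    """Return the first listed WordNet sense, which WordNet orders by tagged frequency."""
    senses = wn.synsets(word, pos=pos)
    return senses[0] if senses else None

# most_frequent_sense("bank", pos="n")   # the first listed noun sense of "bank"
```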


2015 ◽  
Vol 13 (3) ◽  
pp. 784-789
Author(s):  
Franco Rojas Lopez ◽  
Ivan Lopez Arevalo ◽  
David Pinto ◽  
Victor J. Sosa Sosa

2016 ◽  
Vol 8 (1) ◽  
pp. 165-172
Author(s):  
Eniafe Festus Ayetiran ◽  
Kehinde Agbele

Abstract Computational complexity is a characteristic of almost all Lesk-based algorithms for word sense disambiguation (WSD). In this paper, we address this issue by developing a simple and optimized variant of the algorithm using topic composition in documents, based on the theory underlying topic models. The knowledge resource adopted is the English WordNet, enriched with linguistic knowledge from Wikipedia and the SemCor corpus. Besides the algorithm's efficiency, we also evaluate its effectiveness using two datasets: a general-domain dataset and a domain-specific dataset. The algorithm achieves superior performance on the general-domain dataset and the best performance among knowledge-based techniques on the domain-specific dataset.
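
For context, the classic simplified-Lesk overlap that this family of algorithms optimises can be sketched as follows; the topic-composition weighting and the Wikipedia/SemCor enrichment from the paper are not reproduced, and WordNet access via NLTK is an assumption.

```python
from nltk.corpus import wordnet as wn

def simplified_lesk(target_word, context_words):
    """Pick the WordNet sense whose gloss (and examples) overlap most with the context."""
    context = {w.lower() for w in context_words}
    best_sense, best_overlap = None, -1
    for sense in wn.synsets(target_word):
        signature = set(sense.definition().lower().split())
        for example in sense.examples():
            signature.update(example.lower().split())
        overlap = len(signature & context)
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

# simplified_lesk("bank", "I deposited money into my savings account".split())
```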


2007 ◽  
Vol 33 (4) ◽  
pp. 553-590 ◽  
Author(s):  
Diana McCarthy ◽  
Rob Koeling ◽  
Julie Weeds ◽  
John Carroll

There has been a great deal of recent research into word sense disambiguation, particularly since the inception of the Senseval evaluation exercises. Because a word often has more than one meaning, resolving word sense ambiguity could benefit applications that need some level of semantic interpretation of language input. A major problem is that the accuracy of word sense disambiguation systems is strongly dependent on the quantity of manually sense-tagged data available, and even the best systems, when tagging every word token in a document, perform little better than a simple heuristic that guesses the first, or predominant, sense of a word in all contexts. The success of this heuristic is due to the skewed nature of word sense distributions. Data for the heuristic can come from either dictionaries or a sample of sense-tagged data. However, there is a limited supply of the latter, and the sense distributions and predominant sense of a word can depend on the domain or source of a document. (The first sense of “star,” for example, would be different in the popular press and in scientific journals.) In this article, we expand on a previously proposed method for determining the predominant sense of a word automatically from raw text. We look at a number of different data sources and parameterizations of the method, using evaluation results and error analyses to identify where the method performs well and also where it does not. In particular, we find that the method does not work as well for verbs and adverbs as it does for nouns and adjectives, but it produces more accurate predominant sense information than the widely used SemCor corpus for nouns with low coverage in that corpus. We further show that the method is able to adapt successfully to domains when using domain-specific corpora as input, where the input can be either hand-labelled for domain or automatically classified.
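
A hedged sketch of the prevalence-scoring idea expanded on in this article: each sense of a target word is credited with the distributional similarity of the word's nearest neighbours, weighted by how semantically close the sense is to those neighbours in WordNet. The neighbour list, the choice of Lin similarity, and the omission of the per-sense normalisation used in the paper are all simplifications.

```python
from nltk.corpus import wordnet as wn, wordnet_ic

ic = wordnet_ic.ic('ic-semcor.dat')   # information content for Lin similarity

def wordnet_affinity(sense, neighbour_word):
    """Maximum Lin similarity between `sense` and any same-POS sense of the neighbour."""
    scores = [sense.lin_similarity(ns, ic)
              for ns in wn.synsets(neighbour_word, pos=sense.pos())]
    return max(scores, default=0.0)

def predominant_sense(word, neighbours):
    """neighbours: list of (neighbour_word, distributional_similarity) pairs."""
    best_sense, best_score = None, -1.0
    for sense in wn.synsets(word, pos='n'):
        score = sum(dist_sim * wordnet_affinity(sense, nbr)
                    for nbr, dist_sim in neighbours)
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

# e.g. predominant_sense("star", [("celebrity", 0.29), ("galaxy", 0.21)]),
# with neighbour similarities taken from a distributional thesaurus (illustrative values).
```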


2017 ◽  
Vol 41 ◽  
pp. 128-145 ◽  
Author(s):  
Ivan Lopez-Arevalo ◽  
Victor J. Sosa-Sosa ◽  
Franco Rojas-Lopez ◽  
Edgar Tello-Leal

2015 ◽  
Vol 49 (2) ◽  
pp. 118-134 ◽  
Author(s):  
Andreas Vlachidis ◽  
Douglas Tudhope

Purpose – The purpose of this paper is to present the role and contribution of natural language processing techniques, in particular negation detection and word sense disambiguation, in the process of semantic annotation of archaeological grey literature. Archaeological reports contain a great deal of information that conveys facts and findings in different ways. This kind of information is highly relevant to the research and analysis of archaeological evidence, but at the same time it can be a hindrance to the accurate indexing of documents with respect to positive assertions.

Design/methodology/approach – The paper presents a method for adapting the biomedicine-oriented negation algorithm NegEx to the context of archaeology and discusses the evaluation results of the new, modified negation detection module. A particular form of polysemy, which arises from the definition of ontology classes and concerns the semantics of small finds in archaeology, is addressed by a domain-specific word sense disambiguation module.

Findings – The performance of the negation detection module is compared against a “Gold Standard” consisting of 300 manually annotated pages of archaeological excavation and evaluation reports. The evaluation results are encouraging, delivering overall 89 per cent precision, 80 per cent recall and 83 per cent F-measure. The paper addresses limitations and future improvements of the current work and highlights the need for ontological modelling to accommodate negative assertions.

Originality/value – The discussed NLP modules contribute to the aims of the OPTIMA pipeline, delivering an innovative application of such methods to archaeological reports for the semantic annotation of archaeological grey literature with respect to the CIDOC-CRM ontology.
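
The NegEx adaptation described above boils down to trigger terms whose scope extends over a short window of following tokens. A minimal sketch of that mechanism is given below; the trigger list and window size are illustrative, not the archaeology-adapted lexicon evaluated in the paper.

```python
import re

# Illustrative trigger terms; the adapted lexicon in the paper differs.
NEGATION_TRIGGERS = ["no evidence of", "absence of", "without", "not", "no"]
WINDOW = 5   # a trigger's scope extends up to five tokens to the right

def is_negated(sentence, target_term):
    """True if `target_term` falls within the scope of a preceding negation trigger."""
    text = sentence.lower()
    target = re.escape(target_term.lower())
    for trigger in NEGATION_TRIGGERS:
        pattern = (r"\b" + re.escape(trigger) +
                   r"\b(?:\W+\w+){0," + str(WINDOW) + r"}?\W+" + target + r"\b")
        if re.search(pattern, text):
            return True
    return False

# is_negated("No evidence of a ditch was found in trench 3", "ditch")   -> True
# is_negated("A ditch was recorded in trench 3", "ditch")               -> False
```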


2003 ◽  
Vol 29 (3) ◽  
pp. 485-502 ◽  
Author(s):  
Celina Santamaría ◽  
Julio Gonzalo ◽  
Felisa Verdejo

We describe an algorithm that combines lexical information (from WordNet 1.7) with Web directories (from the Open Directory Project) to associate word senses with such directories. Such associations can be used as rich characterizations to acquire sense-tagged corpora automatically, cluster topically related senses, and detect sense specializations. The algorithm is evaluated for the 29 nouns (147 senses) used in the Senseval 2 competition, obtaining 148 (word sense, Web directory) associations covering 88% of the domain-specific word senses in the test data with 86% accuracy. The richness of Web directories as sense characterizations is evaluated in a supervised word sense disambiguation task using the Senseval 2 test suite. The results indicate that, when the directory/word sense association is correct, the samples automatically acquired from the Web directories are nearly as valid for training as the original Senseval 2 training instances. The results support our hypothesis that Web directories are a rich source of lexical information: cleaner, more reliable, and more structured than the full Web as a corpus.
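
One way to picture the sense-to-directory association step is as a similarity search between sense glosses and directory descriptions; the sketch below uses TF-IDF cosine similarity as a stand-in and does not reproduce the actual algorithm evaluated in the article.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def associate(sense_glosses, directory_descriptions):
    """Return, for each sense, the index of its best-matching Web directory."""
    vec = TfidfVectorizer()
    matrix = vec.fit_transform(sense_glosses + directory_descriptions)
    gloss_vecs = matrix[:len(sense_glosses)]
    dir_vecs = matrix[len(sense_glosses):]
    sims = cosine_similarity(gloss_vecs, dir_vecs)
    return sims.argmax(axis=1)      # one directory index per sense
```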


Abbreviations are common in biomedical documents, and many are ambiguous in the sense that they have several potential expansions. Identifying the correct expansion is necessary for language understanding and important for applications such as document retrieval. Identifying the correct expansion can be viewed as a Word Sense Disambiguation (WSD) problem. Previous approaches to resolving this problem have made use of various sources of information, including linguistic features of the context in which the ambiguous term is used and domain-specific resources such as UMLS. This chapter compares a range of knowledge sources that have been previously used and introduces a novel one: MeSH terms. The best performance is obtained using linguistic features in combination with MeSH terms.
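
A hedged sketch of casting abbreviation expansion as supervised WSD with the knowledge sources named above: context words and (assumed, pre-extracted) MeSH descriptors are merged into one feature string and passed to a linear classifier. The toy training instances and the MeSH extraction step are placeholders, not the chapter's actual data or pipeline.

```python
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def featurise(context_text, mesh_terms):
    # Turn each MeSH descriptor into a single token so it cannot collide
    # with ordinary context words.
    mesh_tokens = ["MESH_" + re.sub(r"\W+", "_", t) for t in mesh_terms]
    return context_text + " " + " ".join(mesh_tokens)

# Toy instances of the ambiguous abbreviation "CAT".
train_texts = [
    featurise("patient underwent a CAT scan of the head", ["Tomography, X-Ray Computed"]),
    featurise("serum CAT activity was markedly elevated", ["Catalase"]),
]
train_labels = ["computed axial tomography", "catalase"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)
print(model.predict([featurise("CAT imaging revealed a lesion", ["Tomography, X-Ray Computed"])]))
```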

