A REVIEW OF WORD SENSE DISAMBIGUATION METHOD

2021 ◽  
Vol 6 (22) ◽  
pp. 01-14
Author(s):  
Mohammad Hafizi Jainal ◽  
Saidah Saad ◽  
Rabiah Abdul Kadir

Background: Word sense ambiguity is known to have a detrimental effect on the precision of information retrieval systems; Word Sense Disambiguation (WSD) is the ability to identify the intended meaning of a word in context. Inferring the correct sense of an ambiguous word remains a challenge. Over many years of research, various solutions to WSD have been proposed; they can be divided into supervised and knowledge-based (unsupervised) approaches. Objective: The first objective of this study was to explore the state of the art in hybrid WSD methods that use ontology concepts, so that the findings would show which tools are available for building WSD components. The second objective was to determine which method gives the best WSD performance, by analysing how the methods were used to answer specific WSD questions, how they were produced, and how their performance was evaluated. Methods: A literature review was conducted on the performance of WSD research, using a comparative method of information retrieval analysis. The study compared the types of methods used in each case, and examined tool production, tool training, and performance analysis. Results: In total, 12 papers satisfied all 3 inclusion criteria, and one anchor paper was assigned as a reference. We chose the knowledge-based unsupervised approach because it has fewer word-set constraints than supervised approaches, which require training data. A concept-based ontology helps WSD find the semantic concept of a word with respect to the other concepts around it. Conclusion: Many methods were explored and compared to determine the most suitable way to build a WSD model based on the semantics between words in query texts, which can be related to knowledge concepts using an ontological knowledge representation.

2019 ◽  
Vol 26 (5) ◽  
pp. 438-446 ◽  
Author(s):  
Ahmad Pesaranghader ◽  
Stan Matwin ◽  
Marina Sokolova ◽  
Ali Pesaranghader

Abstract Objective In biomedicine, there is a wealth of information hidden in unstructured narratives such as research articles and clinical reports. To exploit these data properly, a word sense disambiguation (WSD) algorithm prevents downstream difficulties in the natural language processing applications pipeline. Supervised WSD algorithms largely outperform un- or semisupervised and knowledge-based methods; however, they train 1 separate classifier for each ambiguous term, necessitating large amounts of expert-labeled training data, an unattainable goal in medical informatics. To alleviate this need, a single model that shares statistical strength across all instances and scales well with the vocabulary size is desirable. Materials and Methods Built on recent advances in deep learning, our deepBioWSD model leverages 1 single bidirectional long short-term memory network that makes sense predictions for any ambiguous term. In the model, first, the Unified Medical Language System sense embeddings are computed using their text definitions; then, after initializing the network with these embeddings, it is trained on all (available) training data collectively. This method also includes a novel technique for automatic collection of training data from PubMed to (pre)train the network in an unsupervised manner. Results We use the MSH WSD dataset to compare WSD algorithms, with macro and micro accuracies employed as evaluation metrics. deepBioWSD outperforms existing models in biomedical text WSD by achieving the state-of-the-art performance of 96.82% macro accuracy. Conclusions Apart from the disambiguation improvement and unsupervised training, deepBioWSD depends on considerably fewer expert-labeled data points, as it learns the target and context terms jointly. These merits make deepBioWSD conveniently deployable in real-time biomedical applications.
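The core idea of initializing sense representations from their text definitions can be sketched very simply: represent each candidate sense by a vector built from its definition, then score senses against the context. This is a hedged, minimal sketch (bag-of-words counts rather than the paper's learned LSTM embeddings); the toy sense inventory and all names are hypothetical, not from the UMLS.

```python
from collections import Counter
from math import sqrt

def bow_vector(text):
    """Bag-of-words count vector for a lowercased, whitespace-split text."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u if w in v)
    nu = sqrt(sum(c * c for c in u.values()))
    nv = sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def disambiguate(context, sense_definitions):
    """Pick the sense whose definition vector is closest to the context."""
    ctx = bow_vector(context)
    return max(sense_definitions,
               key=lambda s: cosine(ctx, bow_vector(sense_definitions[s])))

# Hypothetical toy inventory for the ambiguous term "cold"
senses = {
    "common_cold": "acute viral infection of the upper respiratory tract",
    "low_temperature": "having a low temperature relative to normal body heat",
}
print(disambiguate("patient presented with a viral respiratory infection", senses))
# → common_cold
```

In the actual model these definition vectors would be dense sense embeddings used to initialize the network, which is then trained jointly over all ambiguous terms.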


2016 ◽  
Vol 4 ◽  
pp. 197-213 ◽  
Author(s):  
Silvana Hartmann ◽  
Judith Eckle-Kohler ◽  
Iryna Gurevych

We present a new approach for generating role-labeled training data using Linked Lexical Resources, i.e., integrated lexical resources that combine several resources (e.g., WordNet, FrameNet, Wiktionary) by linking them on the sense or on the role level. Unlike resource-based supervision in relation extraction, we focus on complex linguistic annotations, more specifically FrameNet senses and roles. The automatically labeled training data (www.ukp.tu-darmstadt.de/knowledge-based-srl/) are evaluated on four corpora from different domains for the tasks of word sense disambiguation and semantic role classification. Results show that classifiers trained on our generated data equal those resulting from a standard supervised setting.
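The sense-level linking idea can be illustrated with a hedged sketch: given a mapping from one resource's sense identifiers to another's frames, annotated examples in the first resource yield labeled training data in the second. The link map, sense ids, and frame names below are hypothetical illustrations, not the actual linked resources.

```python
# Hypothetical sense-level links from Wiktionary-style sense ids to FrameNet-style frames.
links = {
    "wikt:buy#1": "Commerce_buy",
    "wikt:acquire#2": "Commerce_buy",
    "wikt:buy#2": "Believe",  # colloquial "buy" = accept as true
}

def project_labels(examples, links):
    """Transfer frame labels onto sense-tagged example sentences via sense links."""
    labeled = []
    for sentence, sense_id in examples:
        frame = links.get(sense_id)
        if frame is not None:  # drop examples whose sense is not linked
            labeled.append((sentence, frame))
    return labeled

examples = [
    ("She bought a new car.", "wikt:buy#1"),
    ("I don't buy that excuse.", "wikt:buy#2"),
    ("They acquired the startup.", "wikt:unlinked#9"),
]
print(project_labels(examples, links))
# → [("She bought a new car.", "Commerce_buy"), ("I don't buy that excuse.", "Believe")]
```

Examples whose senses have no link are simply discarded, which trades coverage for label quality.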


This paper discusses various techniques of word sense disambiguation (WSD), the task of identifying the correct sense of a target word present in a text. WSD is a challenging field in natural language processing; it helps in information retrieval, information extraction, and machine learning. There are two approaches to WSD: the machine learning approach and the knowledge-based approach. In the knowledge-based approach, an external resource is used to aid the disambiguation process, whereas in the machine learning approach a corpus is used, whether annotated, un-annotated, or both.
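The classic knowledge-based method is Lesk's gloss-overlap heuristic: choose the sense whose dictionary gloss shares the most words with the context. A minimal sketch, assuming a hypothetical two-sense toy inventory (a real system would draw glosses from an external resource such as WordNet):

```python
def simplified_lesk(context_words, sense_glosses):
    """Return the sense whose gloss shares the most words with the context
    (the simplified Lesk gloss-overlap heuristic)."""
    context = set(w.lower() for w in context_words)
    best, best_overlap = None, -1
    for sense, gloss in sense_glosses.items():
        overlap = len(context & set(gloss.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best

# Hypothetical glosses for the ambiguous word "bank"
glosses = {
    "bank_institution": "a financial institution that accepts deposits and lends money",
    "bank_river": "sloping land beside a body of water such as a river",
}
context = "he deposited money at the financial institution".split()
print(simplified_lesk(context, glosses))
# → bank_institution
```

The machine learning approach, by contrast, would learn a classifier from (annotated or unannotated) corpus examples instead of counting gloss overlaps.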


2019 ◽  
Vol 55 (2) ◽  
pp. 339-365
Author(s):  
Arkadiusz Janz ◽  
Maciej Piasecki

Abstract Automatic word sense disambiguation (WSD) has proven to be an important technique in many natural language processing tasks. For many years the problem of sense disambiguation has been approached with a wide range of methods; however, it is still a challenging problem, especially in the unsupervised setting. One of the well-known and successful approaches to WSD is the family of knowledge-based methods leveraging lexical knowledge resources such as wordnets. As the knowledge-based approaches mostly do not use any labelled training data, their performance relies strongly on the structure and quality of the knowledge sources used. However, a pure knowledge base such as a wordnet cannot reflect all the semantic knowledge necessary to correctly disambiguate word senses in text. In this paper we explore various expansions of plWordNet as a knowledge base for WSD. Semantic links extracted from a large valency lexicon (Walenty), glosses and usage examples, Wikipedia articles, and the SUMO ontology are combined with plWordNet and tested in a PageRank-based WSD algorithm. In addition, we also analyse the influence of lexical semantic vector models extracted with the help of distributional semantics methods. Several new Polish test data sets for WSD are also introduced. All the resources, methods, and tools are available under open licences.
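A PageRank-based WSD algorithm of this family typically runs personalized PageRank over the knowledge graph, seeding the walk at the senses of the surrounding context words and picking the candidate sense that accumulates the most rank. A minimal sketch over a hypothetical toy sense graph (not plWordNet; all node names invented for illustration):

```python
def pagerank(graph, personalization, damping=0.85, iters=50):
    """Power-iteration personalized PageRank over a sense graph
    given as {node: set_of_neighbours}."""
    nodes = list(graph)
    rank = {n: personalization.get(n, 0.0) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            incoming = sum(rank[m] / len(graph[m]) for m in graph if n in graph[m])
            new[n] = (1 - damping) * personalization.get(n, 0.0) + damping * incoming
        rank = new
    return rank

# Toy symmetric sense graph: two senses of "bass" linked to context-word senses
graph = {
    "bass_fish": {"fish", "water"},
    "bass_music": {"music", "instrument"},
    "fish": {"bass_fish", "water"},
    "water": {"bass_fish", "fish"},
    "music": {"bass_music", "instrument"},
    "instrument": {"bass_music", "music"},
}
# Personalize on the senses of the unambiguous context words ("fish", "water")
ranks = pagerank(graph, {"fish": 0.5, "water": 0.5})
print(max(["bass_fish", "bass_music"], key=ranks.get))
# → bass_fish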


Author(s):  
Pushpak Bhattacharyya ◽  
Mitesh Khapra

This chapter discusses the basic concepts of Word Sense Disambiguation (WSD) and the approaches to solving this problem. Both general-purpose WSD and domain-specific WSD are presented. The first part of the discussion focuses on existing approaches to WSD, including knowledge-based, supervised, semi-supervised, unsupervised, hybrid, and bilingual approaches. The current state-of-the-art accuracy for general-purpose WSD seems to be pegged at around 65%. This has motivated investigations into domain-specific WSD, which is the current trend in the field. In the latter part of the chapter, we present a greedy neural-network-inspired algorithm for domain-specific WSD and compare its performance with other state-of-the-art algorithms for WSD. Our experiments suggest that for domain-specific WSD, simply selecting the most frequent sense of a word does as well as any state-of-the-art algorithm.
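The most-frequent-sense (MFS) baseline mentioned in the closing finding is straightforward: count sense occurrences in sense-tagged data and always predict the majority sense per word. A minimal sketch with a hypothetical toy corpus:

```python
from collections import Counter

def most_frequent_sense(labeled_corpus):
    """Build the most-frequent-sense (MFS) baseline from (word, sense) pairs."""
    counts = {}
    for word, sense in labeled_corpus:
        counts.setdefault(word, Counter())[sense] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

# Hypothetical finance-domain corpus: "bank" is nearly always the institution
corpus = [
    ("bank", "institution"), ("bank", "institution"),
    ("bank", "river"), ("interest", "finance"),
]
mfs = most_frequent_sense(corpus)
print(mfs["bank"])
# → institution
```

In a narrow domain the sense distribution is heavily skewed, which is why this trivial baseline can match more elaborate algorithms there.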

