Spreading semantic information by Word Sense Disambiguation

2017 ◽  
Vol 132 ◽  
pp. 47-61 ◽  
Author(s):  
Yoan Gutiérrez ◽  
Sonia Vázquez ◽  
Andrés Montoyo
2017 ◽  
Vol 43 (3) ◽  
pp. 593-617 ◽  
Author(s):  
Sascha Rothe ◽  
Hinrich Schütze

We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource. The method is based on encoding and decoding the word embeddings and is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The obtained embeddings live in the same vector space as the input word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet, GermaNet, and Freebase as semantic resources. AutoExtend achieves state-of-the-art performance on Word-in-Context Similarity and Word Sense Disambiguation tasks.
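As a rough illustration of the shared-space idea in this abstract (not the AutoExtend autoencoder itself), a synset can be placed in the same vector space as its member words by averaging their embeddings, after which a word is disambiguated by its nearest synset. All embeddings and synset memberships below are toy values invented for the sketch:

```python
import numpy as np

# Toy pretrained word embeddings (in practice: word2vec/GloVe vectors).
word_vecs = {
    "bank":  np.array([0.9, 0.1, 0.0]),
    "river": np.array([0.1, 0.9, 0.0]),
    "money": np.array([0.8, 0.0, 0.2]),
    "shore": np.array([0.0, 0.8, 0.1]),
}

# Hypothetical WordNet-style synset membership, assumed for illustration.
synsets = {
    "bank.n.01": ["bank", "money"],           # financial institution
    "bank.n.02": ["bank", "river", "shore"],  # sloping land beside water
}

def synset_embedding(members):
    """Place a synset in the same space as its member words by
    averaging their embeddings (a crude stand-in for AutoExtend's
    learned encoding/decoding constraints)."""
    return np.mean([word_vecs[w] for w in members], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

syn_vecs = {s: synset_embedding(ms) for s, ms in synsets.items()}

# Disambiguate "bank" in a money-related context: pick the synset
# whose embedding is closest to the context vector.
context = word_vecs["money"]
best = max(syn_vecs, key=lambda s: cosine(syn_vecs[s], context))
```

Because synset and word embeddings share one space, the same cosine machinery works for both, which is the property the abstract highlights.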


Author(s):  
Roberto Navigli

This chapter is about ontologies: that is, knowledge models of a domain of interest. We introduce ontologies, their building blocks and sections, view them from the perspective of several fields of knowledge (computer science, philosophy, software engineering, etc.), and present existing ontologies and the different tasks of ontology building, learning, matching, mapping, and merging. We also review interfaces for building ontologies and the knowledge representation languages used to implement them. Finally, we discuss the different ways of evaluating an ontology and the applications in which it can be used, including word sense disambiguation, reasoning, question answering, semantic information retrieval, and machine translation.
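The building blocks the chapter describes can be made concrete with a toy taxonomy; the concepts and is-a links below are invented for illustration, and the small reasoning check is just upward traversal of the taxonomy:

```python
# Toy ontology fragment: concepts linked by is-a relations, illustrating
# two building blocks (a class taxonomy) and a tiny reasoning task
# (subsumption checking). All concepts here are invented examples.
is_a = {
    "dog": "mammal",
    "cat": "mammal",
    "mammal": "animal",
    "animal": "entity",
}

def subsumes(ancestor, concept):
    """Does `ancestor` subsume `concept` via the is-a taxonomy?"""
    while concept in is_a:
        concept = is_a[concept]
        if concept == ancestor:
            return True
    return False
```

Real ontologies add properties, axioms, and instances on top of such a taxonomy, but subsumption reasoning of this kind underlies applications like the question answering and semantic retrieval mentioned above.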


2018 ◽  
Vol 18 (1) ◽  
pp. 139-151 ◽  
Author(s):  
Alexander Popov

The following article presents an overview of the use of artificial neural networks for the task of Word Sense Disambiguation (WSD). More specifically, it surveys the advances in neural language models in recent years that have resulted in methods for the effective distributed representation of linguistic units. Such representations – word embeddings, context embeddings, sense embeddings – can be effectively applied for WSD purposes, as they encode rich semantic information, especially in conjunction with recurrent neural networks, which are able to capture long-distance relations encoded in word order, syntax, and information structuring.
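A minimal sketch of the setup this overview describes, with assumed (untrained, randomly initialized) embeddings and a hypothetical sense inventory for "bank": a vanilla recurrent cell folds the sentence into a context embedding, and the nearest sense embedding is picked. With trained vectors, the winning sense would be the predicted meaning:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# Toy word embeddings for the sentence (random, not trained).
sentence = ["deposit", "the", "money", "in", "the", "bank"]
embeds = {w: rng.normal(size=dim) for w in set(sentence)}

# Minimal vanilla RNN cell: the hidden state accumulates the whole
# left context, which is how recurrent models capture word order.
W_x = 0.5 * rng.normal(size=(dim, dim))
W_h = 0.5 * rng.normal(size=(dim, dim))

h = np.zeros(dim)
for w in sentence:
    h = np.tanh(W_x @ embeds[w] + W_h @ h)  # context embedding

# Hypothetical sense embeddings for "bank" (random stand-ins).
sense_vecs = {
    "bank%finance": rng.normal(size=dim),
    "bank%river": rng.normal(size=dim),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Predicted sense: nearest sense embedding to the context embedding.
pred = max(sense_vecs, key=lambda s: cosine(sense_vecs[s], h))
```

In the surveyed systems the same three ingredients appear with learned parameters: word/context embeddings, a recurrent context encoder, and sense embeddings compared against the encoded context.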


2022 ◽  
Vol 2022 ◽  
pp. 1-14
Author(s):  
Chun-Xiang Zhang ◽  
Shu-Yang Pang ◽  
Xue-Yao Gao ◽  
Jia-Qi Lu ◽  
Bo Yu

In order to improve the disambiguation accuracy of biomedical words, this paper proposes a disambiguation method based on an attention neural network. The ambiguous biomedical word is taken as the center, and morphology, part-of-speech, and semantic information from the 4 adjacent lexical units are extracted as disambiguation features. An attention layer is used to generate a feature matrix, from which average asymmetric convolutional neural networks (Av-ACNN) and bidirectional long short-term memory (Bi-LSTM) networks extract features. The softmax function is applied to determine the semantic category of the biomedical word. At the same time, CNN, LSTM, and Bi-LSTM are applied to biomedical WSD. The MSH corpus is adopted to optimize CNN, LSTM, Bi-LSTM, and the proposed method and to evaluate their disambiguation performance. Experimental results show that the proposed method improves average disambiguation accuracy over CNN, LSTM, and Bi-LSTM, achieving 91.38%.
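The attention step described above can be sketched roughly as follows; the feature vectors, attention scores, and final classifier are random stand-ins for the paper's trained Av-ACNN/Bi-LSTM stack, so only the mechanics (score, normalize, re-weight, pool, softmax) are illustrated:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
dim, n_senses = 6, 2

# Feature vectors for the 4 lexical units around the ambiguous word
# (stand-ins for morphology / part-of-speech / semantic features).
neighbors = rng.normal(size=(4, dim))
center = rng.normal(size=dim)

# Attention layer: score each neighbor against the center word and
# re-weight the feature matrix accordingly.
scores = neighbors @ center
weights = softmax(scores)
attended = weights[:, None] * neighbors  # attention-weighted feature matrix
context = attended.sum(axis=0)           # pooled representation

# Final softmax classifier over sense categories (a stand-in for the
# Av-ACNN / Bi-LSTM feature extractors followed by softmax).
W = rng.normal(size=(n_senses, dim))
probs = softmax(W @ context)
pred = int(probs.argmax())
```

The attention weights sum to 1, so the feature matrix is softly focused on the neighbors most relevant to the center word before any convolutional or recurrent feature extraction.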


Author(s):  
Tommaso Pasini

Word Sense Disambiguation (WSD) is the task of identifying the meaning of a word in a given context. It lies at the base of Natural Language Processing, as it provides semantic information for words. In the last decade, great strides have been made in this field, and much effort has been devoted to mitigating the knowledge acquisition bottleneck, i.e., the problem of semantically annotating texts at a large scale and in different languages. This issue is ubiquitous in WSD, as it hinders the creation of both multilingual knowledge bases and manually curated training sets. In this work, we first introduce the reader to the task of WSD through a short historical digression and then take stock of the advances made in alleviating the knowledge acquisition bottleneck. To that end, we survey the literature on manual, semi-automatic, and automatic approaches to creating English and multilingual corpora tagged with sense annotations, and present a clear overview of supervised models for WSD. Finally, we provide our view of the future directions we foresee for the field.


2012 ◽  
Vol 23 (4) ◽  
pp. 776-785 ◽  
Author(s):  
Zhi-Zhuo YANG ◽  
He-Yan HUANG
