Estimation of Power Spectral Density in SVPWM-Based Induction Motor Drives

This paper presents the simulation of SVPWM- and hybrid-PWM-based DTC of an induction motor drive for estimating the Power Spectral Density (PSD) and the Total Harmonic Distortion (THD) of the line currents. The algorithm employs three different PWM techniques, namely conventional SVPWM, AZPWM3 and hybrid PWM, for the evaluation of the power spectra and harmonic spectra. In the power-spectra evaluation, the magnitudes of the power accumulated at specific frequencies are considered, and in the harmonic spectra, the side-band magnitudes at different switching frequencies. To validate the PWM algorithms, numerical simulation is performed using MATLAB/Simulink.

Telugu (తెలుగు) is a morphologically rich Dravidian language. Like other languages, it contains polysemous words that take different meanings in different contexts. Several language models exist to solve the word sense disambiguation problem for particular languages such as English, Chinese, Hindi and Kannada. The proposed method gives a solution for the word sense disambiguation problem with the help of the n-gram technique, which has given good results in many other languages. The methodology presented in this paper finds the co-occurrence words of the target polysemous word, which we call n-grams. A Telugu corpus is given as input to the training phase to find n-gram joint probabilities. Using these joint probabilities, the target polysemous word is assigned its correct sense in the testing phase. We evaluate the proposed method on a set of polysemous Telugu nouns and verbs. The proposed methodology achieves an F-measure of 0.94 when tested on a Telugu corpus collected from CIIL, various newspapers and story books. The present methodology can give better results as the size of the training corpus grows, and in future we plan to evaluate it on all words, not only nouns and verbs.
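As a concrete illustration of the n-gram approach sketched in the abstract above, the snippet below counts co-occurrences around a target polysemous word during training and then assigns the sense with the highest smoothed joint probability of the context words. The toy English sentences, sense labels, window size and smoothing constant are all invented for illustration; they are not the authors' data or implementation.

```python
import math
from collections import defaultdict

def train(tagged_sentences, window=2):
    """Count co-occurrences of context words with each sense of the
    target word, within +/- `window` tokens of the target position."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for tokens, target, sense in tagged_sentences:
        lo, hi = max(0, target - window), min(len(tokens), target + window + 1)
        for i in range(lo, hi):
            if i != target:
                counts[sense][tokens[i]] += 1
                totals[sense] += 1
    return counts, totals

def disambiguate(tokens, target, counts, totals, window=2, alpha=1.0):
    """Assign the sense that maximizes the smoothed joint probability of
    the context words (log-sum of per-word co-occurrence probabilities)."""
    vocab = {w for by_word in counts.values() for w in by_word}
    lo, hi = max(0, target - window), min(len(tokens), target + window + 1)
    def score(sense):
        denom = totals[sense] + alpha * (len(vocab) + 1)
        return sum(math.log((counts[sense].get(tokens[i], 0) + alpha) / denom)
                   for i in range(lo, hi) if i != target)
    return max(counts, key=score)

# Invented toy data: the English "bank" stands in for a Telugu polysemous word.
training = [
    (["the", "river", "bank", "was", "muddy"], 2, "river"),
    (["steep", "river", "bank", "near", "water"], 2, "river"),
    (["the", "bank", "approved", "the", "loan"], 1, "finance"),
    (["deposit", "money", "bank", "account", "today"], 2, "finance"),
]
counts, totals = train(training)
print(disambiguate(["walked", "along", "bank", "of", "river"], 2, counts, totals))  # → river
```

The scoring is essentially a naive-Bayes product over the context window; the paper's actual joint-probability estimation over a large CIIL corpus would replace these toy counts.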

2012
Vol 41 (2)
pp. 241-260
Author(s):
Daniel Preotiuc-Pietro
Florentina Hristea

2021
pp. 1-55
Author(s):
Daniel Loureiro
Kiamehr Rezaee
Mohammad Taher Pilehvar
Jose Camacho-Collados

Abstract Transformer-based language models have taken many fields in NLP by storm. BERT and its derivatives dominate most of the existing evaluation benchmarks, including those for Word Sense Disambiguation (WSD), thanks to their ability to capture context-sensitive semantic nuances. However, there is still little knowledge about their capabilities and potential limitations in encoding and recovering word senses. In this article, we provide an in-depth quantitative and qualitative analysis of the celebrated BERT model with respect to lexical ambiguity. One of the main conclusions of our analysis is that BERT can accurately capture high-level sense distinctions, even when a limited number of examples is available for each word sense. Our analysis also reveals that in some cases language models come close to solving coarse-grained noun disambiguation under ideal conditions in terms of availability of training data and computing resources. However, this scenario rarely occurs in real-world settings and, hence, many practical challenges remain even in the coarse-grained setting. We also perform an in-depth comparison of the two main language-model-based WSD strategies, i.e., fine-tuning and feature extraction, finding that the latter approach is more robust with respect to sense bias and can better exploit limited available training data. In fact, the simple feature-extraction strategy of averaging contextualized embeddings proves robust even when using only three training sentences per word sense, with minimal improvements obtained by increasing the size of this training data.
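The feature-extraction strategy the abstract highlights — averaging contextualized embeddings per sense and assigning new occurrences by nearest neighbour — can be sketched as follows. The four-dimensional vectors below merely stand in for real BERT outputs, which would come from a transformer encoder; they and the sense labels are invented for illustration.

```python
import math

def average(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def build_sense_embeddings(examples):
    """examples: {sense: [contextual embedding of the target word, ...]}.
    Each sense is represented by the centroid of its (few) training embeddings."""
    return {sense: average(vecs) for sense, vecs in examples.items()}

def predict_sense(embedding, sense_embeddings):
    """1-nearest-neighbour assignment under cosine similarity."""
    return max(sense_embeddings, key=lambda s: cosine(embedding, sense_embeddings[s]))

# Toy stand-ins for BERT's contextualized vectors of "bank" in context.
examples = {
    "finance": [[0.9, 0.1, 0.0, 0.2], [0.8, 0.2, 0.1, 0.1], [0.85, 0.0, 0.05, 0.15]],
    "river":   [[0.1, 0.9, 0.3, 0.0], [0.0, 0.8, 0.4, 0.1], [0.2, 0.85, 0.35, 0.0]],
}
centroids = build_sense_embeddings(examples)
print(predict_sense([0.05, 0.9, 0.3, 0.05], centroids))  # → river
```

With only three example vectors per sense, the centroid is still a usable prototype, which mirrors the abstract's finding that three training sentences per sense already give robust results.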


A word having multiple senses in a text introduces the lexical-semantic task of finding which particular sense is appropriate for the given context. One such task is word sense disambiguation, which refers to the identification of the most appropriate meaning of a polysemous word in a given context using computational algorithms. Language processing research in Hindi, the official language of India, and other Indian languages is constrained by the non-availability of standard corpora. A large corpus is likewise not available for Hindi word sense disambiguation. In this work, we prepared text containing new senses of certain words, enriching the available sense-tagged Hindi corpus of sixty polysemous words. Furthermore, we analysed two novel lexical associations for Hindi word sense disambiguation based on the contextual features of the polysemous word. These methods are evaluated with learning algorithms, and favourable results are achieved.


2020
Vol 34 (05)
pp. 8758-8765
Author(s):
Bianca Scarlini
Tommaso Pasini
Roberto Navigli

Contextual representations of words derived by neural language models have proven to effectively encode the subtle distinctions that might occur between different meanings of the same word. However, these representations are not tied to a semantic network, hence they leave the word meanings implicit and thereby neglect the information that can be derived from the knowledge base itself. In this paper, we propose SensEmBERT, a knowledge-based approach that brings together the expressive power of language modelling and the vast amount of knowledge contained in a semantic network to produce high-quality latent semantic representations of word meanings in multiple languages. Our vectors lie in a space comparable with that of contextualized word embeddings, thus allowing a word occurrence to be easily linked to its meaning by applying a simple nearest neighbour approach. We show that, whilst not relying on manual semantic annotations, SensEmBERT is able to either achieve or surpass state-of-the-art results attained by most of the supervised neural approaches on the English Word Sense Disambiguation task. When scaling to other languages, our representations prove to be as effective as their English counterparts and outperform the existing state of the art on all the Word Sense Disambiguation multilingual datasets. The embeddings are released in five different languages at http://sensembert.org.


2018
Vol 18 (1)
pp. 139-151
Author(s):
Alexander Popov

Abstract The following article presents an overview of the use of artificial neural networks for the task of Word Sense Disambiguation (WSD). More specifically, it surveys the advances in neural language models in recent years that have resulted in methods for the effective distributed representation of linguistic units. Such representations – word embeddings, context embeddings, sense embeddings – can be effectively applied for WSD purposes, as they encode rich semantic information, especially in conjunction with recurrent neural networks, which are able to capture long-distance relations encoded in word order, syntax, and information structuring.


2013
Vol 21 (2)
pp. 251-269
Author(s):
Masoud Narouei
Mansour Ahmadi
Ashkan Sami

Abstract An open problem in natural language processing is word sense disambiguation (WSD). A word may have several meanings, and WSD is the task of selecting the correct sense of a polysemous word based on its context. Proposed solutions are based on supervised and unsupervised learning methods. The majority of researchers in the area have focused on choosing the proper size of 'n' in the n-grams used for the WSD problem. In this research, the concept is taken to a new level by using a variable 'n' and a variable-size window. The concept is based on iterative patterns extracted from the text. We show that this type of sequential pattern is more effective than many other solutions for WSD. Using standard data mining algorithms on the extracted features, we significantly outperformed most monolingual WSD solutions. The previous state-of-the-art results had been obtained using external knowledge such as various translations of the same sentence. Our method improved the accuracy of the multilingual system by more than 4 percent, even though we were using only monolingual features.
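A minimal sketch of the variable-'n', variable-window idea described above: rather than fixing a single n, the extractor below collects every contiguous n-gram up to a maximum length on either side of the target word. This is a simplified stand-in for the authors' iterative-pattern mining, with invented parameter names and data, not their actual algorithm.

```python
def variable_ngram_features(tokens, target, max_window=3, max_n=3):
    """Collect every contiguous n-gram (1 <= n <= max_n) that falls within
    +/- max_window tokens of the target word, excluding the target itself.
    Returns a set of tuples usable as features for a standard classifier."""
    lo = max(0, target - max_window)
    hi = min(len(tokens), target + max_window + 1)
    left = tokens[lo:target]          # context before the target word
    right = tokens[target + 1:hi]     # context after the target word
    features = set()
    for side in (left, right):        # never span across the target itself
        for n in range(1, max_n + 1):
            for i in range(len(side) - n + 1):
                features.add(tuple(side[i:i + n]))
    return features

# Toy sentence; target word "bank" at index 4 yields 7 variable-length features.
feats = variable_ngram_features(["deposit", "cash", "at", "the", "bank", "today"], 4)
print(sorted(feats))
```

These variable-length features would then feed an off-the-shelf classifier, which matches the abstract's claim of using regular data mining algorithms on the extracted patterns.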

