Advanced Tamil POS Tagger for Language Learners

Machine translation, the automatic translation of text from one language to another, plays an important role in the emerging field of Natural Language Processing. POS tagging is one of the most basic and important tasks in machine translation: it assigns a part-of-speech label to each word in a given sentence. This research work applies POS tagging to the Tamil language. Although numerous studies have addressed the same topic, the present work takes a different and more detailed approach, covering most of the fine-grained grammatical distinctions. It is also useful for learning basic Tamil grammar.
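At its simplest, POS tagging maps each token of a sentence to a label. A minimal lexicon-based sketch (the lexicon and tags below are invented for illustration and are not the paper's Tamil tagger, which uses far richer grammatical rules):

```python
# Minimal illustration of POS tagging: assign a part-of-speech label to
# each word of a sentence.  The lexicon is a toy; a real tagger would
# use a full dictionary plus contextual disambiguation.
LEXICON = {
    "the": "DET", "cat": "NOUN", "sat": "VERB",
    "on": "ADP", "mat": "NOUN",
}

def pos_tag(sentence, lexicon=LEXICON, default="NOUN"):
    """Tag each token; unknown words fall back to a default tag."""
    return [(w, lexicon.get(w.lower(), default)) for w in sentence.split()]

print(pos_tag("The cat sat on the mat"))
```

A real system must additionally resolve ambiguity (e.g. "book" as noun vs. verb) from context, which is where statistical and neural taggers come in.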

2019 ◽  
Vol 8 (2S11) ◽  
pp. 2468-2471

Sentiment analysis is a leading area of research. This paper proposes a model for describing verbs that provides a structure for developing sentiment analysis. Verbs are significant language elements and have long received the attention of linguistic researchers. The text is processed with part-of-speech (POS) tagging, and the POS tagger is used to extract the verbs from each sentence to show how they affect sentiment analysis values. The work performs POS tagging to obtain verb words and applies TextBlob and VADER to find the semantic orientation and mine opinion from movie reviews. The results were assessed for accuracy with and without the verb-form words; the findings show that accuracy increases when verb words are considered alongside emotion words. This introduces a new strategy for classifying online reviews using part-of-speech components.
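The with/without-verbs comparison can be sketched with a toy lexicon scorer (the scores and POS labels below are invented; the paper itself uses TextBlob and VADER over real POS-tagged reviews):

```python
# Toy sketch of the with/without-verbs comparison: score a review from
# a sentiment lexicon, optionally excluding verb tokens.  Lexicon
# values and POS labels are invented for illustration.
SENTIMENT = {"loved": 0.7, "brilliant": 0.8, "boring": -0.6, "film": 0.0}
POS = {"loved": "VERB", "brilliant": "ADJ", "boring": "ADJ", "film": "NOUN"}

def polarity(tokens, include_verbs=True):
    """Mean lexicon score over the kept tokens."""
    kept = [t for t in tokens if include_verbs or POS.get(t) != "VERB"]
    scores = [SENTIMENT.get(t, 0.0) for t in kept]
    return sum(scores) / len(scores) if scores else 0.0

review = ["loved", "brilliant", "film"]
print(polarity(review, include_verbs=True))   # verbs contribute
print(polarity(review, include_verbs=False))  # verbs removed
```

Comparing the two scores for the same review is the essence of the paper's experiment: dropping the verb "loved" lowers the overall polarity.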


Author(s):  
Sunita Warjri ◽  
Partha Pakray ◽  
Saralin A. Lyngdoh ◽  
Arnab Kumar Maji

Part-of-speech (POS) tagging is one of the challenging research fields in natural language processing (NLP). It requires good knowledge of a particular language and large amounts of data, or corpora, for feature engineering, which can lead to good tagger performance. Our main contribution in this research work is the designed Khasi POS corpus; to date, no Khasi corpus of any kind had been formally developed. In the present Khasi POS corpus, each word is tagged manually using the designed tagset. Deep learning methods have been used to experiment with the corpus: POS taggers based on BiLSTM, BiLSTM combined with CRF, and character-based embedding with BiLSTM are presented. The main challenges of understanding and handling natural language from a computational-linguistics perspective are also anticipated. In the designed corpus, we have tried to resolve ambiguities of words with respect to their context of use, as well as orthography problems that arise in the corpus. The Khasi corpus contains around 96,100 tokens and 6,616 distinct words. Initially, when running on the first set of around 41,000 tokens, the taggers were found to yield considerably accurate results; when the corpus was increased to 96,100 tokens, accuracy rose and the analyses became more pertinent. Accuracies of 96.81% were achieved for the BiLSTM method, 96.98% for BiLSTM with CRF, and 95.86% for the character-based LSTM. To situate the work within NLP research on Khasi, we also present some existing POS taggers and other NLP work on the Khasi language for comparison.
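In a BiLSTM-CRF tagger, the BiLSTM produces per-token emission scores and the CRF layer adds tag-transition scores, with Viterbi decoding recovering the best tag sequence. A minimal pure-Python sketch of that decoding step (all scores below are made-up toy values, not learned weights):

```python
# Sketch of the CRF layer's Viterbi decode over BiLSTM emission scores
# plus tag-transition scores.  Scores here are invented toy numbers.
def viterbi(emissions, transitions, tags):
    """emissions: list of {tag: score} per token;
    transitions: {(prev_tag, cur_tag): score}.  Returns best tag path."""
    best = {t: (emissions[0][t], [t]) for t in tags}
    for emit in emissions[1:]:
        new = {}
        for cur in tags:
            prev, (score, path) = max(
                ((p, best[p]) for p in tags),
                key=lambda kv: kv[1][0] + transitions[(kv[0], cur)],
            )
            new[cur] = (score + transitions[(prev, cur)] + emit[cur],
                        path + [cur])
        best = new
    return max(best.values(), key=lambda sp: sp[0])[1]

tags = ("N", "V")
emissions = [{"N": 2.0, "V": 1.0}, {"N": 1.0, "V": 2.0}]  # toy BiLSTM output
transitions = {("N", "N"): 0.0, ("N", "V"): 1.0,
               ("V", "N"): 0.0, ("V", "V"): -1.0}          # toy CRF weights
print(viterbi(emissions, transitions, tags))
```

The transition scores are what let the CRF penalise implausible tag sequences, which is why BiLSTM+CRF typically edges out a plain BiLSTM, as in the 96.98% vs. 96.81% result above.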


2022 ◽  
Vol 14 (1) ◽  
pp. 0-0

POS (part-of-speech) tagging, a vital step in diverse Natural Language Processing (NLP) tasks, has not drawn much attention in the case of Odia, a computationally under-resourced language. The proposed hybrid method yields a robust POS tagger for Odia. Given the rich morphology of the language and the unavailability of a sufficiently large annotated text corpus, a combination of machine learning and linguistic rules is adopted in building the tagger. The tagger is trained on a tagged text corpus from the tourism domain and achieves a perceptible improvement in results; appreciable performance is also observed on news-article texts from varied domains. Experiments on Odia show the proposed algorithm outperforming existing methods such as rule-based tagging, hidden Markov models (HMM), maximum entropy (ME), and conditional random fields (CRF).
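One common way to combine the two components of such a hybrid is to let hand-written linguistic rules post-correct the statistical tagger's output. A sketch of that idea (the suffix rules below are invented English placeholders, not the paper's Odia morphology rules):

```python
# Sketch of the hybrid idea: a rule-based post-correction pass over a
# statistical tagger's output.  The suffix rules are invented
# placeholders, not the paper's actual Odia rules.
SUFFIX_RULES = [("ing", "VERB"), ("ly", "ADV")]

def apply_rules(tagged):
    """Override a statistical tag whenever a suffix rule fires."""
    out = []
    for word, tag in tagged:
        for suffix, rule_tag in SUFFIX_RULES:
            if word.endswith(suffix):
                tag = rule_tag
                break
        out.append((word, tag))
    return out

# Toy input: a statistical tagger has mislabelled two words as NOUN.
print(apply_rules([("running", "NOUN"), ("quickly", "NOUN"), ("dog", "NOUN")]))
```

For a morphologically rich language, such rules capture regularities the statistical model misses when annotated data is scarce.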


2021 ◽  
pp. 587-595
Author(s):  
Alebachew Chiche ◽  
Hiwot Kadi ◽  
Tibebu Bekele

Natural language processing plays a great role in providing an interface for human-computer communication, enabling people to interact with the computer in their own language rather than in machine language. This study presents a part-of-speech tagger that can assign a word class to each word in a given sentence. Researchers have developed POS taggers for languages such as English, Amharic, Afan Oromo, and Tigrigna; many other languages, such as Shekki'noono, still lack one. A POS tagger is incorporated as a basic component in most natural language processing tools, such as machine translation and information extraction, so developing one for a language is a prerequisite for advanced natural-language applications, which in turn enhance machine-to-machine, machine-to-human, and human-to-human communication. A POS tagger for one language cannot, however, be directly applied to another. To develop the Shekki'noono POS tagger, we used a stochastic hidden Markov model (HMM). For the study, we used 1,500 sentences collected from sources including newspapers (covering social, economic, and political topics), modules, textbooks, radio programs, and bulletins. The collected sentences were labeled by language experts with the appropriate part of speech for each word. In the experiments, the tagger was trained on the training sets using the hidden Markov model and achieved 92.77% accuracy for Shekki'noono. The model is also compared with previous HMM-based experiments in related work. As future work, the proposed approach can be evaluated on a larger corpus.
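An HMM tagger of this kind is trained by counting tag-to-tag transitions and tag-to-word emissions in the hand-labeled corpus, then normalising the counts into probabilities. A minimal sketch of that estimation step (the two-sentence corpus is invented for illustration):

```python
# Sketch of HMM parameter estimation from a hand-tagged corpus: count
# tag transitions and word emissions, then normalise.  The toy corpus
# below is invented; the paper uses 1,500 expert-tagged sentences.
from collections import Counter

corpus = [
    [("the", "DET"), ("dog", "NOUN"), ("barks", "VERB")],
    [("the", "DET"), ("cat", "NOUN"), ("sleeps", "VERB")],
]

trans, emit, tag_count = Counter(), Counter(), Counter()
for sent in corpus:
    prev = "<s>"                      # sentence-start pseudo-tag
    for word, tag in sent:
        trans[(prev, tag)] += 1       # transition count
        emit[(tag, word)] += 1        # emission count
        tag_count[tag] += 1
        prev = tag

def p_emit(tag, word):
    """Maximum-likelihood emission probability P(word | tag)."""
    return emit[(tag, word)] / tag_count[tag]

print(p_emit("NOUN", "dog"))  # "dog" is 1 of 2 NOUN tokens -> 0.5
```

At tagging time these probabilities feed a Viterbi search over tag sequences; smoothing is needed in practice for unseen words.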


2020 ◽  
Vol 49 (4) ◽  
pp. 482-494
Author(s):  
Jurgita Kapočiūtė-Dzikienė ◽  
Senait Gebremichael Tesfagergish

Deep Neural Networks (DNNs) have proven especially successful in Natural Language Processing (NLP), including Part-Of-Speech (POS) tagging, the process of mapping words to their corresponding POS labels depending on the context. Despite recent developments in language technology, low-resourced languages such as Tigrinya, an East African language of the Northern-Ethiopic branch, have received too little attention. We investigate the effectiveness of Deep Learning (DL) solutions for low-resourced Tigrinya, selecting it as a testbed and evaluating state-of-the-art DL approaches to build the most accurate POS tagger possible. We evaluated DNN classifiers (Feed-Forward Neural Network, FFNN; Long Short-Term Memory, LSTM; Bidirectional LSTM; and Convolutional Neural Network, CNN) on top of neural word2vec word embeddings, with the small Nagaoka Tigrinya Corpus as training data. To determine the best DNN classifier type, architecture, and hyper-parameter set, both manual and automatic hyper-parameter tuning was performed. The BiLSTM method proved most suitable for the task, achieving the highest accuracy of 92%, which is 65 percentage points above the random baseline.
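Before any of these classifiers can run, each token must be turned into a fixed-size feature vector, typically by concatenating the word2vec embeddings of a context window around it. A sketch of that preprocessing step (the 2-dimensional toy vectors stand in for real word2vec embeddings, which would have hundreds of dimensions):

```python
# Sketch of context-window feature construction: concatenate the
# embeddings of the words around position i, padding at the edges.
# The 2-d vectors are toys standing in for real word2vec embeddings.
EMB = {"<pad>": [0.0, 0.0], "nagaoka": [0.1, 0.9], "corpus": [0.8, 0.2]}

def window_features(words, i, size=1):
    """Concatenated embeddings of words[i-size .. i+size]."""
    padded = ["<pad>"] * size + list(words) + ["<pad>"] * size
    feats = []
    for w in padded[i:i + 2 * size + 1]:
        feats.extend(EMB.get(w, EMB["<pad>"]))
    return feats

# Features for the first word: left pad + word + right neighbour.
print(window_features(["nagaoka", "corpus"], 0))
```

An FFNN classifies each window independently, whereas LSTM-family models consume the whole embedded sequence, which is one reason the BiLSTM came out ahead here.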


Author(s):  
Carlos Eduardo Silva ◽  
Lincoln Fernandes

This paper describes COPA-TRAD Version 2.0, a parallel corpus-based system developed at the Universidade Federal de Santa Catarina (UFSC) for translation research, teaching and practice. COPA-TRAD enables the user to investigate the practices of professional translators by identifying translational patterns related to a particular element or linguistic pattern. In addition, the system allows for the comparison between human translation and automatic translation provided by three well-known machine translation systems available on the Internet (Google Translate, Microsoft Translator and Yandex). Currently, COPA-TRAD incorporates five subcorpora (Children's Literature, Literary Texts, Meta-Discourse in Translation, Subtitles and Legal Texts) and provides the following tools: a parallel concordancer, a monolingual concordancer, a wordlist and a DIY Tool that enables users to create their own parallel disposable corpus. The system also provides a POS-tagging tool interface to analyze and classify the parts of speech of a text.
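The core of a parallel concordancer is a search over the source side of an aligned corpus that returns each hit together with its target-side counterpart. A minimal sketch of the idea (the two sentence pairs are invented examples, not COPA-TRAD data, and real systems add tokenisation, highlighting and alignment at finer granularity):

```python
# Sketch of a parallel concordancer: search the source side of an
# aligned corpus and return each hit with its translation.  The
# sentence pairs are invented English-Portuguese examples.
pairs = [
    ("The little prince smiled.", "O pequeno príncipe sorriu."),
    ("The fox was silent.", "A raposa ficou em silêncio."),
]

def concordance(term, corpus):
    """Return (source, target) pairs whose source contains the term."""
    return [(s, t) for s, t in corpus if term.lower() in s.lower()]

for src, tgt in concordance("prince", pairs):
    print(src, "||", tgt)
```

Seeing every aligned pair containing a search term is what lets a researcher spot how professional translators consistently render a given element.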


Analyzing phases of mood from written language can serve as an advancement in the field of psychology. The research work highlighted in this paper uses sentiment analysis to predict, via an algorithm and according to the most generic perceptions, the emotional state and vulnerability of written statements in the English language. The text pre-processing step discussed in this work analyzes each word of the user input, weighing its literal and emotional essence together with the other parts of speech to sum up the mood inclination of the statements and determine the specific mood of the writing. The vulnerability level of the document is also determined, with a view toward medical applications: a vulnerable mindset, such as one suffering from mental illness or depression and perceived as capable of inflicting harm upon oneself or others, can then be given proper help and counseling.


2021 ◽  
Vol 11 (4) ◽  
pp. 1-13
Author(s):  
Arpitha Swamy ◽  
Srinath S.

Part-of-speech (POS) tagging assigns a POS tag to every word present in a text, and named entity recognition (NER) identifies the proper nouns in a text and classifies them into predefined categories. A POS tagger and an NER system for Kannada text are proposed, both built on conditional random fields (CRFs). The dataset used for POS tagging consists of 147K tokens, of which 103K are used for training and the remainder for testing. The proposed CRF model for POS tagging of Kannada text obtained precision of 91.3%, recall of 91.6%, and an F-score of 91.4%. For the NER system, the required data was created manually using a modified tagset containing 40 labels. The NER dataset consists of 16.5K tokens, with 70% of the words used for training the model and the remaining 30% for testing. The developed NER model obtained precision of 94%, recall of 93.9%, and an F1-measure of 93.9%.
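The precision, recall, and F-score figures reported above follow the standard definitions; a minimal sketch computing them from raw counts (the counts below are illustrative, not the paper's):

```python
# Standard precision / recall / F1 from true positives, false
# positives, and false negatives.  The example counts are invented.
def prf(tp, fp, fn):
    """Return (precision, recall, F1) for one class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# e.g. 90 correct tags, 10 spurious, 10 missed -> all three near 0.9
print(prf(90, 10, 10))
```

For multi-class tasks like POS tagging and NER these are computed per label and then averaged, which is how single summary figures such as 91.4% arise.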


ReCALL ◽  
2007 ◽  
Vol 19 (3) ◽  
pp. 252-268 ◽  
Author(s):  
Sylviane Granger ◽  
Olivier Kraif ◽  
Claude Ponton ◽  
Georges Antoniadis ◽  
Virginie Zampa

Learner corpora, electronic collections of spoken or written data from foreign language learners, offer unparalleled access to many hitherto uncovered aspects of learner language, particularly in their error-tagged format. This article aims to demonstrate the role that the learner corpus can play in CALL, particularly when used in conjunction with web-based interfaces which provide flexible access to error-tagged corpora that have been enhanced with simple NLP techniques such as POS-tagging or lemmatization and linked to a wide range of learner and task variables such as mother tongue background or activity type. This new resource is of interest to three main types of users: teachers wishing to prepare pedagogical materials that target learners' attested difficulties; learners themselves, for editing or language-awareness purposes; and NLP researchers, for whom it serves as a benchmark for testing automatic error detection systems.


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Jiazhen He ◽  
Huifang You ◽  
Emil Sandström ◽  
Eva Nittinger ◽  
Esben Jannik Bjerrum ◽  
...  

A main challenge in drug discovery is finding molecules with a desirable balance of multiple properties. Here, we focus on the task of molecular optimization, where the goal is to optimize a given starting molecule towards desirable properties. This task can be framed as a machine translation problem in natural language processing, where in our case, a molecule is translated into a molecule with optimized properties based on the SMILES representation. Typically, chemists would use their intuition to suggest chemical transformations for the starting molecule being optimized. A widely used strategy is the concept of matched molecular pairs, where two molecules differ by a single transformation. We seek to capture the chemist's intuition from matched molecular pairs using machine translation models. Specifically, the sequence-to-sequence model with attention mechanism and the Transformer model are employed to generate molecules with desirable properties. As a proof of concept, three ADMET properties are optimized simultaneously: logD, solubility, and clearance, which are important properties of a drug. Since desirable properties often vary from project to project, the user-specified desirable property changes are incorporated into the input as an additional condition together with the starting molecules being optimized. Thus, the models can be guided to generate molecules satisfying the desirable properties. Additionally, we compare the two machine translation models based on the SMILES representation with a graph-to-graph translation model, HierG2G, which has shown state-of-the-art performance in molecular optimization. Our results show that the Transformer can generate more molecules with desirable properties by making small modifications to the given starting molecules, which can be intuitive to chemists. A further enrichment of diverse molecules can be achieved by using an ensemble of models.
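The conditioning scheme described above can be sketched as a data-preparation step: each matched molecular pair becomes a (source, target) training example whose source is prefixed with tokens encoding the desired property change. The SMILES strings, property deltas, and token format below are invented toy values, not the paper's actual encoding:

```python
# Sketch of turning matched molecular pairs into conditioned
# translation examples: "property-change tokens | source SMILES" as
# input, target SMILES as output.  All values are invented toys.
pairs = [
    # (source SMILES, target SMILES, {property: measured change})
    ("CCO", "CCN", {"logD": -0.4}),
    ("c1ccccc1", "c1ccccc1O", {"logD": -0.7}),
]

def make_examples(mmps):
    """Prefix each source with condition tokens encoding the deltas."""
    examples = []
    for src, tgt, deltas in mmps:
        cond = " ".join(f"{k}={v:+.1f}" for k, v in sorted(deltas.items()))
        examples.append((f"{cond} | {src}", tgt))
    return examples

for inp, out in make_examples(pairs):
    print(inp, "->", out)
```

At inference time the user supplies the desired deltas as the condition, and the trained model generates candidate molecules consistent with them.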

