PHRASE STRUCTURE BASED ENGLISH TO KANNADA SENTENCE TRANSLATION

Author(s): Sharanbasappa Honnashetty, Mallamma V. Reddy, M. Hanumanthappa

To build a natural language processing system, words are first arranged into a structured form that leads to a syntactically correct sentence; this syntactic analysis of a sentence is performed by parsing. This paper explores a novel approach in which the shift-reduce parsing technique is used to translate English sentences into grammatically correct Kannada sentences by reordering the English parse tree structure and by generating and implementing a phrase structure grammar (PSG) for Kannada sentences. The recursive descent parsing technique is used to generate the English phrase tree structure, the terminal symbols are tagged with equivalent Kannada words, and the shift-reduce parsing technique is then used to construct the Kannada sentence. A part-of-speech (POS) tagger is used to tag English words with their Kannada equivalents. The system is implemented using a supervised machine learning approach.
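As an illustration of the pipeline described above (not the authors' implementation), the sketch below uses NLTK's stock recursive descent parser on a toy English grammar, substitutes Kannada words at the terminals via an invented bilingual lexicon, and reorders the English SVO constituents into the SOV order Kannada requires. The grammar, sentence, and lexicon are all assumptions made for the example.

```python
# Minimal sketch of the described pipeline, assuming a toy grammar and an
# invented English->Kannada lexicon (Kannada given in transliteration).
import nltk

# Tiny English phrase structure grammar; a real PSG would be far larger.
english_grammar = nltk.CFG.fromstring("""
    S  -> NP VP
    VP -> V NP
    NP -> N
    N  -> 'rama' | 'book'
    V  -> 'reads'
""")

# Hypothetical bilingual lexicon used to tag the terminal symbols.
lexicon = {'rama': 'raama', 'reads': 'oduttaane', 'book': 'pustakavannu'}

parser = nltk.RecursiveDescentParser(english_grammar)

for tree in parser.parse('rama reads book'.split()):
    subject = tree[0]        # NP
    verb    = tree[1][0]     # V inside VP
    obj     = tree[1][1]     # NP inside VP
    # Reorder English S-V-O into Kannada S-O-V, substituting tagged words.
    kannada = [lexicon[w] for w in subject.leaves() + obj.leaves() + verb.leaves()]
    print(' '.join(kannada))  # raama pustakavannu oduttaane
```

NLTK also ships a ShiftReduceParser, so the reordered sequence could in principle be validated against a Kannada PSG in the manner the paper describes.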

2018, Vol. 25 (10), pp. 1339-1350
Author(s): Justin Mower, Devika Subramanian, Trevor Cohen

Abstract

Objective: The aim of this work is to leverage relational information extracted from biomedical literature using a novel synthesis of unsupervised pretraining, representational composition, and supervised machine learning for drug safety monitoring.

Methods: Using ≈80 million concept-relationship-concept triples extracted from the literature using the SemRep Natural Language Processing system, distributed vector representations (embeddings) were generated for concepts as functions of their relationships utilizing two unsupervised representational approaches. Embeddings for drugs and side effects of interest from two widely used reference standards were then composed to generate embeddings of drug/side-effect pairs, which were used as input for supervised machine learning. This methodology was developed and evaluated using cross-validation strategies and compared to contemporary approaches. To qualitatively assess generalization, models trained on the Observational Medical Outcomes Partnership (OMOP) drug/side-effect reference set were evaluated against a list of ≈1,100 drugs from an online database.

Results: The employed method improved performance over previous approaches. Cross-validation results advance the state of the art (AUC 0.96, F1 0.90 and AUC 0.95, F1 0.84 across the two sets), outperforming methods utilizing literature and/or spontaneous reporting system data. Examination of predictions for unseen drug/side-effect pairs indicates the ability of these methods to generalize, with over tenfold label-support enrichment in the top 100 predictions versus the bottom 100.

Discussion and Conclusion: Our methods can assist the pharmacovigilance process using information from the biomedical literature. Unsupervised pretraining generates a rich relationship-based representational foundation for machine learning techniques to classify drugs in the context of a putative side effect, given known examples.
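The core move, composing pretrained concept embeddings into drug/side-effect pair representations that feed a supervised classifier, can be illustrated in a few lines. The embedding dimensionality, the composition operator, and the classifier below are placeholder assumptions, not the paper's configuration.

```python
# Minimal sketch: compose concept embeddings into pair features and train a
# supervised classifier. Assumptions: random stand-in 100-dim embeddings,
# elementwise multiplication as composition, logistic regression as learner.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for embeddings pretrained on SemRep concept-relationship-concept triples.
embeddings = {c: rng.normal(size=100) for c in
              ['drug_a', 'drug_b', 'side_effect_x', 'side_effect_y']}

def compose(drug, effect):
    """Compose a drug/side-effect pair into a single feature vector."""
    return embeddings[drug] * embeddings[effect]   # elementwise product

# Toy reference-standard labels: 1 = known adverse reaction, 0 = negative control.
pairs = [('drug_a', 'side_effect_x', 1), ('drug_a', 'side_effect_y', 0),
         ('drug_b', 'side_effect_x', 0), ('drug_b', 'side_effect_y', 1)]
X = np.stack([compose(d, e) for d, e, _ in pairs])
y = np.array([label for _, _, label in pairs])

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba(X)[:, 1])   # P(adverse reaction) per pair
```

Trained on a real reference standard, the same classifier could then score unseen drug/side-effect pairs, which is how the generalization check against the ≈1,100-drug list works in outline.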


2020, Vol. 17 (4), pp. 1842-1846
Author(s): Praveen Edward James, Mun Hou Kit, Chockalingam Aravind Vaithilingam, Alan Tan Wee Chiat

Natural Language Processing (NLP) systems involve Natural Language Understanding (NLU), Dialogue Management (DM) and Natural Language Generation (NLG). This work integrates learning from examples with rule-based processing to design an NLP system. The design involves a three-stage processing framework that combines syntactic generation, semantic extraction and strong rule-based control. The syntactic generator produces syntax by aligning sentences with Part-of-Speech (POS) tags, limited by the number of words in the lexicon. The semantic extractor extracts meaningful keywords from user queries. A rule-based controller module governs these two modules through generalized rules. The system is evaluated across different domains. The results show an average accuracy of 92.33%. The design process is simple, and the processing time is 2.12 seconds, which is minimal compared to similar statistical models. The performance of an NLP tool on a given task can be estimated by the quality of its predictions when classifying unseen data. The results show performance comparable to existing systems, indicating the system's suitability for similar tasks. The system supports a vocabulary of about 700 words and can be used as an NLP module in a spoken dialogue system for various domains or task areas.
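The three-stage framework (syntactic generation, semantic extraction, rule-based control) can be sketched as a simple pipeline. The tag filter, stopword list, and rules below are invented stand-ins; the paper does not publish its rule base or lexicon.

```python
# Illustrative three-stage pipeline: POS-tag the query, keep content-bearing
# keywords, and map keywords to a dialogue action with generalized rules.
# Assumes nltk with the 'averaged_perceptron_tagger' model downloaded.
import nltk

STOPWORDS = {'the', 'is', 'a', 'what', 'of'}

def syntactic_generator(query):
    """Stage 1: align tokens with POS tags."""
    return nltk.pos_tag(query.lower().split())

def semantic_extractor(tagged):
    """Stage 2: keep meaningful keywords (nouns/verbs, non-stopwords)."""
    return [w for w, t in tagged
            if w not in STOPWORDS and t.startswith(('NN', 'VB'))]

def rule_based_controller(keywords):
    """Stage 3: generalized rules map keywords to a dialogue action."""
    if 'time' in keywords:
        return 'REPORT_TIME'
    if 'weather' in keywords:
        return 'REPORT_WEATHER'
    return 'FALLBACK'

tagged = syntactic_generator('What is the weather today')
print(rule_based_controller(semantic_extractor(tagged)))  # REPORT_WEATHER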


2021, pp. 1-42
Author(s): Maha J. Althobaiti

Abstract

The wide usage of multiple spoken Arabic dialects on social networking sites stimulates increasing interest in Natural Language Processing (NLP) for dialectal Arabic (DA). Arabic dialects represent true linguistic diversity and differ from modern standard Arabic (MSA). In fact, the complexity and variety of these dialects make it insufficient to build one NLP system suitable for all of them. In comparison with MSA, the available datasets for the various dialects are generally limited in size, genre and scope. In this article, we present a novel approach that automatically develops an annotated country-level dialectal Arabic corpus and builds lists of words encompassing 15 Arabic dialects. The algorithm uses an iterative procedure consisting of two main components: automatic creation of lists of dialectal words and automatic creation of an annotated Arabic dialect identification corpus. To our knowledge, our study is the first of its kind to examine and analyse the poor performance of an MSA part-of-speech tagger on dialectal Arabic content and to exploit that in order to extract dialectal words. The pointwise mutual information association measure and the geographical frequency of word occurrence online are used to classify dialectal words. The annotated dialectal Arabic corpus (Twt15DA), built using our algorithm, is collected from Twitter and consists of 311,785 tweets containing 3,858,459 words in total. We randomly selected a sample of 75 tweets per country, 1,125 tweets in total, and conducted a manual dialect identification task with native speakers. The results show an average inter-annotator agreement score of 64%, which reflects satisfactory agreement considering the overlapping features of the 15 Arabic dialects.
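The pointwise mutual information measure named in the abstract is the standard one, PMI(w, d) = log( P(w, d) / (P(w) P(d)) ), here applied to a candidate word w and a country-level dialect d. A minimal sketch from co-occurrence counts, with counts invented for illustration:

```python
# PMI of a candidate word with a country-level dialect, from co-occurrence
# counts. All counts below are invented for the example.
from math import log2

def pmi(count_wd, count_w, count_d, total):
    """PMI(w, d) = log2( P(w, d) / (P(w) * P(d)) )."""
    p_wd = count_wd / total
    p_w = count_w / total
    p_d = count_d / total
    return log2(p_wd / (p_w * p_d))

# A word seen 40 times overall, 30 of them in tweets from dialect d,
# where d accounts for 1,000 of 10,000 tweets.
score = pmi(count_wd=30, count_w=40, count_d=1000, total=10000)
print(f'{score:.2f}')   # ~2.91: strongly associated with dialect d
```

A high positive PMI flags the word as dialect-specific rather than shared MSA vocabulary, which is the signal the classification step relies on.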

