Semi-Supervised Learning of Statistical Models for Natural Language Understanding

2014, Vol. 2014, pp. 1-11
Author(s): Deyu Zhou, Yulan He

Natural language understanding aims to specify a computational model that maps sentences to their semantic meaning representations. In this paper, we propose a novel framework to train statistical models without using expensive fully annotated data. In particular, the input of our framework is a set of sentences labeled with abstract semantic annotations. These annotations encode the underlying embedded semantic structural relations without explicit word/semantic-tag alignment. The proposed framework can automatically induce derivation rules that map sentences to their semantic meaning representations. The learning framework is applied to two statistical models, conditional random fields (CRFs) and hidden Markov support vector machines (HM-SVMs). Our experimental results on the DARPA communicator data show that both CRFs and HM-SVMs outperform the baseline approach, the previously proposed hidden vector state (HVS) model, which is also trained on abstract semantic annotations. In addition, the proposed framework outperforms two other baseline approaches, a hybrid framework combining HVS and HM-SVMs and discriminative training of HVS, with relative error reductions of about 25% and 15% in F-measure.
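As a rough illustration of the kind of statistical sequence model compared in this abstract, the sketch below trains a CRF semantic tagger with the third-party sklearn-crfsuite package; the toy sentence, features, and semantic tags are invented for illustration and do not reproduce the authors' abstract-annotation framework or the DARPA communicator data.

```python
# Minimal sketch of a CRF semantic tagger (not the authors' framework).
# Requires: pip install sklearn-crfsuite
import sklearn_crfsuite

def word_features(sent, i):
    """Simple per-token features; real systems use much richer context."""
    w = sent[i]
    return {
        "word.lower": w.lower(),
        "is_first": i == 0,
        "prev.lower": sent[i - 1].lower() if i > 0 else "<BOS>",
    }

# Toy training data: a sentence paired with invented semantic tags.
sents = [["i", "want", "to", "fly", "from", "boston", "to", "denver"]]
tags  = [["O", "O", "O", "O", "O", "FROMLOC", "O", "TOLOC"]]

X = [[word_features(s, i) for i in range(len(s))] for s in sents]
y = tags

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X, y)
print(crf.predict(X))  # predicted tag sequence for each sentence
```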

2021, Vol. 11 (22), pp. 10995
Author(s): Samir Rustamov, Aygul Bayramova, Emin Alasgarov

The rapid increase in conversational AI and user chat data has led to intensive development of dialogue management systems (DMS) for various industries. Yet, for low-resource languages such as Azerbaijani, very little research has been conducted. The main purpose of this work is to experiment with various DMS pipeline set-ups to decide on the most appropriate natural language understanding and dialogue manager settings. In our project, we designed and evaluated different DMS pipelines on conversational text data obtained from one of the leading retail banks in Azerbaijan. The two main components of a DMS, Natural Language Understanding (NLU) and the Dialogue Manager, have been investigated. In the first step of NLU, we utilized a language identification (LI) component for language detection. We investigated both built-in LI methods such as fastText and custom machine learning (ML) models trained on the domain-based dataset. The second step of the work was a comparison of classic ML classifiers (logistic regression, neural networks, and SVM) and the Dual Intent and Entity Transformer (DIET) architecture for user intention detection. In these experiments we used different combinations of feature extractors, such as CountVectorizer, Term Frequency-Inverse Document Frequency (TF-IDF) Vectorizer, and word embeddings, for both word and character n-gram based tokens. To extract important information from the text messages, a Named Entity Recognition (NER) component was added to the pipeline. The best NER model was chosen among a conditional random fields (CRF) tagger, deep neural network (DNN) models, and the built-in entity extraction component of the DIET architecture. The obtained entity tags were fed to the Dialogue Management module as features. All NLU set-ups were followed by the Dialogue Management module, which contains a Rule-based Policy to handle FAQs and chitchat as well as a Transformer Embedding Dialogue (TED) Policy to handle more complex and unexpected dialogue inputs. As a result, we suggest a DMS pipeline for a financial assistant which is capable of identifying the intention, named entities, and language of a text, followed by policies that generate a proper response (based on the designed dialogues) and suggest the best next action.
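A minimal sketch of the language-identification step of such an NLU pipeline, using fastText's off-the-shelf lid.176 model rather than the paper's custom domain-trained LI model; the model file name and the Azerbaijani test utterance are assumptions for illustration only.

```python
# Minimal sketch of language identification (LI) with fastText's pretrained
# lid.176 model (not the paper's custom domain-trained model).
# Requires: pip install fasttext, plus downloading lid.176.ftz from the fastText site.
import fasttext

li_model = fasttext.load_model("lid.176.ftz")  # pretrained language-ID model

def detect_language(text: str) -> str:
    """Return an ISO language code such as 'az' for Azerbaijani."""
    labels, probs = li_model.predict(text.replace("\n", " "), k=1)
    return labels[0].replace("__label__", "")

print(detect_language("Salam, kart balansımı necə öyrənə bilərəm?"))  # expected: 'az'
```

In a full pipeline of this kind, the detected language would route the message to the appropriate intent classifier and NER model before the dialogue manager chooses the next action.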


Author(s): Dinda Ayu Permatasari, Devira Anggi Maharani

At present, popular messaging applications have evolved to the point where bots are increasingly part of their development. One application of chatbots is to help users book flights by applying Named Entity Recognition (NER) to the text, parsing sentences to detect user intentions, and responding even though the context of the conversation domain is limited. This study analyzes and designs chatbot interactions using Natural Language Understanding (NLU) so that the bot understands what the user means and provides the most appropriate response. Classification using the Support Vector Machine (SVM) method with Term Frequency-Inverse Document Frequency (TF-IDF) feature extraction is a suitable combination, producing the highest accuracy value of up to 97.5%. The conversation dialogue in the chatbot is developed using NLU, which consists of NER and intent classification, followed by a dialogue manager using Reinforcement Learning, which keeps the computational cost of the chatbot low.
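A minimal sketch of the TF-IDF + SVM intent classifier described in this abstract, built with scikit-learn; the toy intents and utterances are invented and do not come from the study's flight-booking dataset.

```python
# Minimal sketch of a TF-IDF + SVM intent classifier (illustrative toy data).
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

train_utterances = [
    "book a flight to jakarta tomorrow",
    "i need a ticket from surabaya to bali",
    "cancel my reservation please",
    "what is the status of my booking",
]
train_intents = ["book_flight", "book_flight", "cancel_booking", "booking_status"]

intent_clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # word uni/bigram TF-IDF features
    ("svm", LinearSVC()),                            # linear SVM classifier
])
intent_clf.fit(train_utterances, train_intents)

print(intent_clf.predict(["please book me a flight to bali"]))  # -> ['book_flight']
```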


1998, Vol. 37 (04/05), pp. 327-333
Author(s): F. Buekens, G. De Moor, A. Waagmeester, W. Ceusters

Abstract: Natural language understanding systems have to exploit various kinds of knowledge in order to represent the meaning behind texts. Getting this knowledge in place is often such a huge enterprise that it is tempting to look for systems that can discover such knowledge automatically. We describe how the distinction between conceptual and linguistic semantics may assist in reaching this objective, provided that distinguishing between them is not done too rigorously. We present several examples to support this view and argue that in a multilingual environment, linguistic ontologies should be designed as interfaces between domain conceptualizations and linguistic knowledge bases.
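A rough data-structure sketch of the interface idea argued for here: language-specific lexical entries point at a shared domain conceptualization. All concept identifiers and terms below are invented for illustration and are not drawn from the paper.

```python
# Toy sketch of a linguistic ontology acting as an interface between
# language-specific terms and a shared domain conceptualization.
# All identifiers and terms are invented.
DOMAIN_CONCEPTS = {
    "C001": {"definition": "inflammation of the liver"},
}

# Linguistic layer: per-language lexical entries mapped to domain concepts.
LINGUISTIC_ONTOLOGY = {
    ("en", "hepatitis"): "C001",
    ("nl", "leverontsteking"): "C001",
    ("fr", "hépatite"): "C001",
}

def concept_for(term: str, lang: str):
    """Resolve a surface term in a given language to its domain concept, if any."""
    return LINGUISTIC_ONTOLOGY.get((lang, term.lower()))

print(concept_for("Leverontsteking", "nl"))  # -> 'C001'
```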


1995, Vol. 34 (04), pp. 345-351
Author(s): A. Burgun, L. P. Seka, D. Delamarre, P. Le Beux

Abstract: In medicine, as in other domains, indexing and classification are natural human tasks used for information retrieval and representation. In the medical field, encoding of patient discharge summaries is still a manual, time-consuming task. This paper describes an automated system for coding patient discharge summaries from the field of coronary diseases into the ICD-9-CM classification. The system is developed in the context of the European AIM MENELAS project, a natural-language understanding system which uses the conceptual-graph formalism. Indexing is performed using a two-step processing scheme: a first recognition stage implemented by a matching procedure, and a second selection stage made according to the coding priorities. We present the general features required to translate the classification terms into the conceptual-graph model and to comply with the coding rules. An advantage of the system is that it provides an objective evaluation and assessment procedure for natural-language understanding.
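A highly simplified sketch of the two-step scheme described above: candidate codes are first matched against the concepts extracted from a discharge summary, then the matches are ordered by coding priority. The codes, concept sets, and priorities are invented for illustration and do not reproduce the MENELAS conceptual-graph machinery.

```python
# Toy sketch of two-step coding: (1) match candidate codes against the concepts
# found in a discharge summary, (2) select among matches by coding priority.
# Concepts and priorities are invented; MENELAS used conceptual graphs.
CANDIDATE_CODES = {
    "414.01": {"concepts": {"atherosclerosis", "coronary_artery"}, "priority": 1},
    "413.9":  {"concepts": {"angina"},                             "priority": 2},
}

def match_codes(document_concepts):
    """Stage 1: keep codes whose required concepts all appear in the document."""
    return [code for code, spec in CANDIDATE_CODES.items()
            if spec["concepts"] <= document_concepts]

def select_codes(matched):
    """Stage 2: order the surviving codes by their coding priority."""
    return sorted(matched, key=lambda c: CANDIDATE_CODES[c]["priority"])

doc_concepts = {"atherosclerosis", "coronary_artery", "angina"}
print(select_codes(match_codes(doc_concepts)))  # -> ['414.01', '413.9']
```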

