A Simple Connectionist Approach to Language Understanding in a Dialogue System

Author(s): María José Castro, Emilio Sanchis


2021
Author(s): Philippe Blache, Matthis Houlès

This paper presents a dialogue system for training doctors to break bad news. The originality of this work lies in its knowledge representation. All information known before the dialogue (the universe of discourse, the context, the scenario of the dialogue), as well as the knowledge transferred from the doctor to the patient during the conversation, is represented in a shared knowledge structure called the common ground, which constitutes the core of the system. The Natural Language Understanding and Natural Language Generation modules of the system take advantage of this structure, and this paper presents several original techniques that make it possible to implement them efficiently.
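The abstract leaves the implementation open; purely as an illustration, a shared common-ground store that both NLU and NLG consult might be sketched in Python as follows (the class name, fields and example facts are assumptions, not the authors' design):

```python
# Minimal sketch of a shared common-ground store (assumed design, not the paper's code).
from dataclasses import dataclass, field

@dataclass
class CommonGround:
    scenario: dict = field(default_factory=dict)   # facts known before the dialogue starts
    transferred: set = field(default_factory=set)  # facts already conveyed to the patient

    def transfer(self, fact: str) -> None:
        """Record that the doctor has conveyed this piece of information."""
        self.transferred.add(fact)

    def grounded(self, fact: str) -> bool:
        """Has this fact entered the common ground of the conversation?"""
        return fact in self.transferred

# Both NLU (interpreting the trainee's utterances) and NLG (generating the
# patient's replies) would read from and write to the same instance.
cg = CommonGround(scenario={"diagnosis": "known", "prognosis": "known"})
cg.transfer("diagnosis")
print(cg.grounded("diagnosis"), cg.grounded("prognosis"))  # True False
```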


Author(s): Nihal Potdar, Anderson Raymundo Avila, Chao Xing, Dong Wang, Yiran Cao, ...

End-to-end spoken language understanding (SLU) has recently attracted increasing interest. Compared to the conventional tandem approach, which combines speech recognition and language understanding as separate modules, the new approach extracts users' intentions directly from the speech signal, allowing joint optimization and low latency. Such an approach, however, is typically designed to process one intent at a time, forcing users to take multiple turns to fulfill their requirements when interacting with a dialogue system. In this paper, we propose a streaming end-to-end framework that can process multiple intentions in an online and incremental way. The backbone of our framework is a unidirectional RNN trained with the connectionist temporal classification (CTC) criterion. With this design, an intention is identified once sufficient evidence has accumulated, and multiple intentions are identified sequentially. We evaluate our solution on the Fluent Speech Commands (FSC) dataset, reaching a detection accuracy of about 97% on all multi-intent settings. This result is comparable to the performance of state-of-the-art non-streaming models, but is achieved in an online and incremental way. We also apply our model to a keyword spotting task using the Google Speech Commands dataset, where the results are also highly promising.
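As a rough illustration of this kind of backbone, the PyTorch sketch below pairs a unidirectional LSTM with the CTC criterion so that intent labels can be emitted incrementally as acoustic frames arrive; layer sizes, the number of intent labels and the dummy data are assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class StreamingIntentCTC(nn.Module):
    def __init__(self, n_feats=80, hidden=256, n_intents=31):
        super().__init__()
        # Unidirectional LSTM: inference can proceed frame by frame (streaming).
        self.rnn = nn.LSTM(n_feats, hidden, num_layers=2, batch_first=True)
        # One extra output class for the CTC blank symbol (index 0).
        self.proj = nn.Linear(hidden, n_intents + 1)

    def forward(self, feats):                  # feats: (batch, time, n_feats)
        out, _ = self.rnn(feats)
        return self.proj(out).log_softmax(-1)  # log-probs over intents + blank

model = StreamingIntentCTC()
ctc = nn.CTCLoss(blank=0)

feats = torch.randn(4, 200, 80)               # dummy batch of acoustic features
log_probs = model(feats).transpose(0, 1)      # CTCLoss expects (time, batch, classes)
targets = torch.randint(1, 32, (4, 3))        # up to three intent labels per utterance
loss = ctc(log_probs, targets,
           input_lengths=torch.full((4,), 200, dtype=torch.long),
           target_lengths=torch.full((4,), 3, dtype=torch.long))
```

Because the RNN is unidirectional and CTC allows blank outputs between labels, a deployed decoder can report each intent as soon as its evidence peaks, rather than waiting for the end of the utterance.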


Author(s): Robert Dale

A spoken language dialogue system is a computational system which engages in multi-turn dialogic interaction with human users using speech as the means of communication. Any system which does this requires many of the capabilities of other natural language processing applications, encompassing both language understanding and language generation; but the nature of spoken language dialogue means that it also poses a number of additional challenges not faced by other applications. This chapter discusses a number of key issues that need to be addressed when attempting to build systems that can engage in natural dialogue, and provides an overview of existing research in these areas.


2019, Vol 1 (2), pp. 176-186
Author(s): Yadi Lao, Weijie Liu, Sheng Gao, Si Li

One of the major challenges in building a task-oriented dialogue system is that dialogue state transitions frequently happen across multiple domains, such as booking hotels or restaurants. Recently, the encoder-decoder model based on end-to-end neural networks has become an attractive approach to meeting this challenge. However, it usually requires a sufficiently large amount of training data, and it is not flexible in handling dialogue state transitions. This paper addresses these problems by proposing a simple but practical framework called Multi-Domain KB-BOT (MDKB-BOT), which leverages both neural networks and a rule-based strategy in natural language understanding (NLU) and dialogue management (DM). Experiments on the dataset of the Chinese Human-Computer Dialogue Technology Evaluation Campaign show that MDKB-BOT achieves competitive performance on several evaluation metrics, including task completion rate and user satisfaction.
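The following Python sketch illustrates the general idea of pairing a neural intent scorer with a rule-based dialogue manager that handles cross-domain transitions; the function names, domains and thresholds are invented for illustration and are not taken from MDKB-BOT:

```python
from typing import Dict

def neural_intent(utterance: str) -> Dict[str, float]:
    """Stand-in for a trained neural classifier: returns intent scores."""
    scores = {"book_hotel": 0.1, "book_restaurant": 0.1, "chitchat": 0.1}
    if "hotel" in utterance:
        scores["book_hotel"] = 0.9
    elif "restaurant" in utterance:
        scores["book_restaurant"] = 0.9
    return scores

class RuleBasedDM:
    """Hand-written rules decide when to switch domains or request missing slots."""
    def __init__(self):
        self.domain = None

    def next_action(self, intent_scores: Dict[str, float]) -> str:
        intent = max(intent_scores, key=intent_scores.get)
        if intent.startswith("book_") and intent_scores[intent] > 0.5:
            new_domain = intent.split("_", 1)[1]
            if new_domain != self.domain:           # domain-transition rule
                self.domain = new_domain
                return f"switch_domain({new_domain})"
            return f"request_slots({new_domain})"
        return "fallback_clarify()"

dm = RuleBasedDM()
print(dm.next_action(neural_intent("I want to book a hotel in Beijing")))
print(dm.next_action(neural_intent("Actually, find me a restaurant instead")))
```

Splitting the work this way keeps the data-hungry part (intent scoring) learnable while the transition logic, which is hard to learn from small corpora, stays explicit and easy to audit.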


2021, Vol 7, pp. e615
Author(s): Javeria Hassan, Muhammad Ali Tahir, Adnan Ali

Navigation-based task-oriented dialogue systems provide users with a natural way of communicating with maps and navigation software. Natural language understanding (NLU) is the first step in a task-oriented dialogue system: it extracts the important entities from the user's utterance (slot tagging) and determines the user's objective (intent determination). Word embeddings are distributed representations of the input sentence that encompass its semantic and syntactic properties. We created word embeddings using different methods, namely FastText, ELMo, BERT and XLNet, and studied their effect on the natural language understanding output. Experiments are performed on the Roman Urdu navigation utterances dataset. The results show that for the intent determination task, XLNet-based word embeddings outperform the other methods, while for slot tagging, FastText- and XLNet-based word embeddings have much better accuracy than the other approaches.
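To make the setup concrete, the sketch below shows one way contextual embeddings (here BERT via the Hugging Face transformers library) can feed both an intent classifier and a token-level slot tagger; the model name, label counts and the Roman Urdu example are assumptions for illustration, not the paper's exact pipeline:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Multilingual BERT as one possible embedding source (assumed; the paper also
# compares FastText, ELMo and XLNet embeddings).
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")

utterance = "mujhe qareeb tareen petrol pump ka rasta batao"  # hypothetical Roman Urdu query
inputs = tokenizer(utterance, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state              # (1, tokens, 768)

# Intent determination: classify the sentence-level [CLS] representation.
intent_head = torch.nn.Linear(768, 5)                          # 5 navigation intents (assumed)
intent_logits = intent_head(hidden[:, 0])                      # (1, 5)

# Slot tagging: classify every token representation with BIO labels (assumed set).
slot_head = torch.nn.Linear(768, 9)
slot_logits = slot_head(hidden)                                # (1, tokens, 9)
```

Swapping the encoder (FastText averages, ELMo, XLNet) while keeping the two heads fixed is the kind of controlled comparison the abstract describes.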

