Generative Dialogue System Using Neural Network
2019
Author(s): Nikki Singh, Sachin Bojewar


Complexity
2020, Vol 2020, pp. 1-11
Author(s): Hai Liu, Yuanxia Liu, Leung-Pun Wong, Lap-Kei Lee, Tianyong Hao

User intent classification is a vital component of question-answering and task-based dialogue systems. To understand the goal of a user's question or discourse, the system categorizes user text into a set of pre-defined intent categories. User questions or discourses are usually short and lack sufficient context, so it is difficult to extract deep semantic information from such text, and the accuracy of user intent classification may suffer. To better identify user intents, this paper proposes BERT-Cap, a hybrid neural network model with focal loss for capturing user intents in dialogue. The model encodes user utterances with multiple transformer encoder blocks, initializing the encoder parameters from a pre-trained BERT, and then extracts essential features with a capsule network using dynamic routing. Experimental results on four publicly available datasets show that BERT-Cap achieves an F1 score of 0.967 and an accuracy of 0.967, outperforming a number of baseline methods and indicating its effectiveness for user intent classification.
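The focal loss mentioned in the abstract above down-weights well-classified examples so that training concentrates on hard ones. A minimal NumPy sketch; the γ and α values are common defaults, not necessarily the paper's settings:

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0, alpha=0.25):
    """Focal loss for multi-class classification.

    probs:  (N, C) array of predicted class probabilities (softmax output).
    labels: (N,) array of integer class indices.
    gamma down-weights easy examples; alpha is a balancing factor.
    gamma=2.0 and alpha=0.25 are common defaults, not the paper's settings.
    """
    eps = 1e-9
    p_t = probs[np.arange(len(labels)), labels]  # probability of the true class
    return float(np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t + eps)))

# A confident correct prediction incurs a much smaller loss than an
# uncertain one, because the (1 - p_t)^gamma factor shrinks toward zero.
confident = np.array([[0.95, 0.05]])
uncertain = np.array([[0.55, 0.45]])
y = np.array([0])
print(focal_loss(confident, y) < focal_loss(uncertain, y))  # True
```

With gamma set to 0 and alpha to 1, the expression reduces to ordinary cross-entropy, which is why focal loss is usually described as a generalization of it.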


2021
Vol 2037 (1), pp. 012047
Author(s): Xiaodong Shi, Yicheng Sun, Yang Ding, Dong Han, Jingyang Li

Author(s): Alexander Prange, Michael Barz, Daniel Sonntag

We present a speech dialogue system that facilitates medical decision support for doctors in a virtual reality (VR) application. The therapy prediction is based on a recurrent neural network model that incorporates the examination history of patients. A central supervised patient database provides input to our predictive model and allows us, first, to add new examination reports on the fly via a pen-based mobile application and, second, to obtain therapy prediction results in real time. The demo includes a visualisation of patient records, radiology image data, and the therapy prediction results in VR.
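The abstract's idea of a recurrent model over a patient's examination history can be sketched as follows; the feature dimensions, the plain Elman-style RNN cell, and the random weights are illustrative stand-ins, not the authors' trained architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: each examination report is encoded as a 16-dim
# feature vector; the model scores 3 candidate therapies.
D_IN, D_HID, N_THERAPY = 16, 32, 3

# Randomly initialized weights stand in for trained parameters.
W_xh = rng.normal(scale=0.1, size=(D_IN, D_HID))
W_hh = rng.normal(scale=0.1, size=(D_HID, D_HID))
W_hy = rng.normal(scale=0.1, size=(D_HID, N_THERAPY))

def predict_therapy(history):
    """Run a plain (Elman-style) RNN over a patient's examination history,
    one encoded report per time step, and return a probability
    distribution over candidate therapies."""
    h = np.zeros(D_HID)
    for exam in history:
        h = np.tanh(exam @ W_xh + h @ W_hh)
    logits = h @ W_hy
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()

history = rng.normal(size=(5, D_IN))    # five encoded examination reports
probs = predict_therapy(history)
print(probs.sum())                      # ≈ 1.0
```

The key property the sketch illustrates is that the hidden state carries information forward across reports, so the final prediction depends on the whole examination history rather than on the latest report alone.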


Information
2019, Vol 10 (5), pp. 161
Author(s): Kaspars Balodis, Daiga Deksne

Intent detection is one of the main tasks of a dialogue system. In this paper, we present our intent detection system, which is based on fastText word embeddings and a neural network classifier. We describe an improvement to fastText sentence vectorization that, in some cases, yields a significant increase in intent detection accuracy. We evaluate the system on languages commonly spoken in the Baltic countries: Estonian, Latvian, Lithuanian, English, and Russian. The results show that our intent detection system achieves state-of-the-art results on three previously published datasets, outperforming many popular services. In addition, for Latvian we explore how intent detection accuracy is affected by normalizing the text in advance.
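For context, baseline fastText sentence vectorization averages the (L2-normalized) word vectors of the sentence; an intent classifier is then trained on top of that vector. A toy sketch with a stand-in embedding table — the paper's improved vectorization is not reproduced here:

```python
import numpy as np

# Toy embedding table standing in for pretrained fastText word vectors.
rng = np.random.default_rng(1)
vocab = {"turn": 0, "on": 1, "the": 2, "lights": 3, "play": 4, "music": 5}
emb = rng.normal(size=(len(vocab), 8))

def sentence_vector(tokens):
    """Baseline fastText-style sentence vector: the average of the
    L2-normalized vectors of the tokens found in the vocabulary.
    Out-of-vocabulary sentences map to the zero vector here (real
    fastText would back off to subword n-grams instead)."""
    vecs = [emb[vocab[t]] for t in tokens if t in vocab]
    if not vecs:
        return np.zeros(emb.shape[1])
    vecs = [v / np.linalg.norm(v) for v in vecs]
    return np.mean(vecs, axis=0)

v = sentence_vector("turn on the lights".split())
print(v.shape)  # (8,)
```

An intent classifier then consumes this fixed-size vector regardless of sentence length, which is what makes the averaging step the natural place to improve vector quality.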


2018
Vol 20 (2), pp. 293-310
Author(s): In-Young Jhee, Hee-Dong Kim

2019
Vol 20 (11), pp. 686-695
Author(s): Yin Shuai, A. S. Yuschenko

The article discusses a dialogue-based control system for manipulation robots. It analyzes the basic methods used in dialogue systems: automatic speech recognition, speech understanding, dialogue management, and voice response synthesis. Three types of dialogue management are considered: system initiative, user initiative, and combined initiative. An object-oriented dialogue control system for a robot is proposed, based on finite-state-machine theory combined with a deep neural network. The main distinction of the proposed system is the separate implementation of the dialogue process and the robot's actions, which allows control at a pace close to that of natural dialogue. This design lets the system automatically correct both speech recognition results and the robot's actions according to the current task. Such correction may be needed because of a user's accent, noise in the working environment, or incorrect voice commands. Correction proceeds in three stages and operates in a special mode and a general mode. The special mode lets users directly control the manipulator with voice commands; the general mode extends users' capabilities by providing additional information in real time. At the first stage, continuous speech recognition (real-time voice-to-text conversion) is performed by a deep neural network that accounts for the accents and speaking rates of different users. At the second stage, the speech recognition result is corrected through dialogue management based on finite-automata theory. At the third stage, the robot's actions are corrected according to the robot's operating state and the dialogue management process.
To achieve natural dialogue between users and robots, a small database of possible dialogues is created and various training data are used. In the experiments, the dialogue system controls a KUKA manipulator (KRC4 controller) to place a desired block at a specified location; the system is implemented in Python using the RoboDK software. The processes and results of experiments confirming the operability of the interactive robot control system are reported. The system achieved fairly high recognition accuracy (92%) at an automatic speech recognition rate close to the rate of natural speech.
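The finite-state dialogue management described above can be sketched as a small transition table; the states, voice commands, and confirmation step here are illustrative, not the authors' exact design:

```python
# Minimal finite-state dialogue controller in the spirit of the article's
# separation of dialogue management from robot actions. The states and
# commands below are hypothetical examples.
TRANSITIONS = {
    ("idle", "pick block"):   ("await_confirm", "Pick the block?"),
    ("await_confirm", "yes"): ("executing",     "Picking the block."),
    ("await_confirm", "no"):  ("idle",          "Command cancelled."),
    ("executing", "done"):    ("idle",          "Ready for the next command."),
}

def step(state, recognized_text):
    """Advance the dialogue state machine one turn. Unrecognized input
    keeps the current state and asks for repetition -- the hook where
    speech recognition errors would be corrected in dialogue."""
    return TRANSITIONS.get((state, recognized_text),
                           (state, "Please repeat the command."))

state = "idle"
for utterance in ["pick block", "left", "yes", "done"]:
    state, reply = step(state, utterance)
    print(state, "-", reply)
```

Because the table only accepts commands valid in the current state, a misrecognized utterance (like "left" above) cannot trigger an unintended robot action; it simply prompts the user to repeat, which mirrors the correction stages described in the abstract.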


Author(s): Xiaowei Tong, Zhenxin Fu, Mingyue Shang, Dongyan Zhao, Rui Yan

Automatically evaluating the performance of an open-domain dialogue system is a challenging problem. Recent work on neural network-based metrics has shown promise for automatic dialogue evaluation. However, existing methods mainly focus on monolingual evaluation, where a trained metric is not flexible enough to transfer across languages. To address this issue, we propose an adversarial multi-task neural metric (ADVMT) for multi-lingual dialogue evaluation, with feature extraction shared across languages. We evaluate the proposed model on two different languages. Experiments show that the adversarial multi-task neural metric achieves a high correlation with human annotation, outperforming monolingual metrics and various existing metrics.
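Adversarial training of language-shared features is commonly implemented with a gradient reversal layer between the shared encoder and a language discriminator; whether ADVMT uses exactly this mechanism is an assumption. A minimal sketch of the layer itself:

```python
import numpy as np

def grl_forward(x):
    """Gradient reversal layer, forward pass: identity."""
    return x

def grl_backward(grad, lam=1.0):
    """Backward pass: flip the sign (scaled by lam) of gradients flowing
    from the language discriminator into the shared encoder. Minimizing
    the discriminator's loss then *maximizes* it with respect to the
    encoder, pushing the encoder toward language-invariant features."""
    return -lam * grad

g = np.array([0.3, -0.2])       # gradient from the discriminator's loss
print(grl_backward(g))          # [-0.3  0.2]
```

In a full training loop, the evaluation-metric head and the discriminator head would both sit on top of the shared encoder, with only the discriminator's gradient passing through this reversal.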


2000
Vol 25 (4), pp. 325-325
Author(s): J.L.N. Roodenburg, H.J. Van Staveren, N.L.P. Van Veen, O.C. Speelman, J.M. Nauta, ...
