End-to-End Task-Oriented Dialogue System with Distantly Supervised Knowledge Base Retriever

Author(s): Libo Qin, Yijia Liu, Wanxiang Che, Haoyang Wen, Ting Liu
2019, Vol 23 (3), pp. 1989-2002
Author(s): Haotian Xu, Haiyun Peng, Haoran Xie, Erik Cambria, Liuyang Zhou, ...

2019 ◽  
Author(s): Libo Qin, Yijia Liu, Wanxiang Che, Haoyang Wen, Yangming Li, ...

Author(s): Dinesh Raghu, Atishya Jain, Mausam, Sachindra Joshi

2017 ◽  
Author(s): Tsung-Hsien Wen, David Vandyke, Nikola Mrkšić, Milica Gasic, Lina M. Rojas Barahona, ...

Author(s): Yan Peng, Penghe Chen, Yu Lu, Qinggang Meng, Qi Xu, ...

Author(s): Bowen Zhang, Xiaofei Xu, Xutao Li, Yunming Ye, Xiaojun Chen, ...

Author(s): Silin Gao, Ryuichi Takanobu, Wei Peng, Qun Liu, Minlie Huang

Author(s): Florian Strub, Harm de Vries, Jérémie Mary, Bilal Piot, Aaron Courville, ...

End-to-end design of dialogue systems has recently become a popular research topic, thanks to powerful tools such as encoder-decoder architectures for sequence-to-sequence learning. Yet most current approaches cast human-machine dialogue management as a supervised learning problem, aiming to predict the next utterance of a participant given the full dialogue history. This view may fail to capture the planning problem inherent in dialogue, as well as its contextual and grounded nature. In this paper, we introduce a Deep Reinforcement Learning method, based on the policy gradient algorithm, for optimizing visually grounded task-oriented dialogues. The approach is tested on the question generation task from the GuessWhat?! dataset, which contains 120k dialogues, and yields encouraging results both at generating natural dialogues and at locating a specific object in a complex image.
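The training signal the abstract names, the policy gradient algorithm, reduces in its simplest form to a REINFORCE-style update: sample an utterance token by token, observe a scalar task reward at the end, and scale the summed log-probabilities of the sampled tokens by that reward. The Python/PyTorch sketch below illustrates this loop under stated assumptions; the toy recurrent generator, its sizes, the start-token convention, and the random 0/1 placeholder reward (standing in for GuessWhat?!'s success signal) are all illustrative, not the authors' actual implementation.

# Minimal REINFORCE-style policy-gradient sketch for utterance generation.
# Model sizes, the <start> token id, and the placeholder reward are
# illustrative assumptions, not the GuessWhat?! authors' implementation.
import torch
import torch.nn as nn
from torch.distributions import Categorical

class QuestionGenerator(nn.Module):
    """Toy recurrent policy over a word vocabulary."""
    def __init__(self, vocab_size=100, hidden_size=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.cell = nn.GRUCell(hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size, vocab_size)

    def step(self, token, hidden):
        hidden = self.cell(self.embed(token), hidden)
        return self.out(hidden), hidden

policy = QuestionGenerator()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def sample_utterance(max_len=10):
    """Sample one utterance, keeping per-token log-probs for the update."""
    hidden = torch.zeros(1, 64)
    token = torch.zeros(1, dtype=torch.long)  # assumed <start> token id = 0
    log_probs = []
    for _ in range(max_len):
        logits, hidden = policy.step(token, hidden)
        dist = Categorical(logits=logits)
        token = dist.sample()
        log_probs.append(dist.log_prob(token))
    return log_probs

for episode in range(100):
    log_probs = sample_utterance()
    # In GuessWhat?! the reward is task success (the guesser finds the
    # target object); a random 0/1 placeholder stands in here.
    reward = 1.0 if torch.rand(1).item() > 0.5 else 0.0
    # REINFORCE: ascend reward * grad log pi, i.e. minimize the negation.
    loss = -reward * torch.stack(log_probs).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In practice a learned baseline is usually subtracted from the reward to reduce gradient variance, and the paper's setting additionally conditions generation on the image and the dialogue history, both of which this toy omits.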

