Contextualize Knowledge Bases with Transformer for End-to-end Task-Oriented Dialogue Systems

Author(s): Yanjie Gou, Yinjie Lei, Lingqiao Liu, Yong Dai, Chunxu Shen
Author(s): Florian Strub, Harm de Vries, Jérémie Mary, Bilal Piot, Aaron Courville, ...

End-to-end design of dialogue systems has recently become a popular research topic thanks to powerful tools such as encoder-decoder architectures for sequence-to-sequence learning. Yet most current approaches cast human-machine dialogue management as a supervised learning problem, aiming to predict the next utterance of a participant given the full history of the dialogue. This formulation may fail to capture the planning problem inherent to dialogue, as well as its contextual and grounded nature. In this paper, we introduce a Deep Reinforcement Learning method, based on the policy gradient algorithm, to optimize visually grounded task-oriented dialogues. The approach is tested on the question generation task from the GuessWhat?! dataset, which contains 120k dialogues, and provides encouraging results both at generating natural dialogues and at discovering a specific object in a complex picture.
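To illustrate the policy-gradient idea mentioned in the abstract, below is a minimal REINFORCE-style sketch in PyTorch: a question is sampled token by token from a small policy network and the sampled sequence is reinforced with a scalar reward. The tiny GRU policy, the vocabulary size, and the toy reward function are placeholder assumptions for illustration only, not the authors' GuessWhat?! implementation.

```python
# Minimal REINFORCE sketch for a question-generation policy (illustrative only).
import torch
import torch.nn as nn

class QuestionPolicy(nn.Module):
    """Tiny recurrent language model acting as the dialogue policy."""
    def __init__(self, vocab_size=100, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, tokens, state=None):
        h, state = self.rnn(self.embed(tokens), state)
        return self.head(h), state

def sample_episode(policy, max_len=10, bos=1):
    """Sample one question token by token, keeping log-probs for REINFORCE."""
    tokens = torch.tensor([[bos]])
    state, log_probs = None, []
    for _ in range(max_len):
        logits, state = policy(tokens[:, -1:], state)
        dist = torch.distributions.Categorical(logits=logits[:, -1])
        tok = dist.sample()
        log_probs.append(dist.log_prob(tok))
        tokens = torch.cat([tokens, tok.unsqueeze(1)], dim=1)
    return tokens, torch.stack(log_probs)

def toy_reward(tokens):
    """Placeholder reward: +1 if the sampled question ends with token id 2."""
    return torch.tensor(1.0 if tokens[0, -1].item() == 2 else 0.0)

policy = QuestionPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for step in range(100):
    tokens, log_probs = sample_episode(policy)
    reward = toy_reward(tokens)           # stand-in for a task success signal
    loss = -(reward * log_probs.sum())    # REINFORCE objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the actual task, the reward would reflect task success (whether the target object in the image is found), and the policy would be conditioned on the image and the dialogue history rather than sampling unconditionally.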


Author(s): Shiquan Yang, Rui Zhang, Sarah M. Erfani, Jey Han Lau

Knowledge bases (KBs) are usually essential for building practical dialogue systems. Recently, there has been rapidly growing interest in integrating knowledge bases into dialogue systems. However, existing approaches mostly handle knowledge bases of a single modality, typically textual information. As today's knowledge bases become rich in multimodal information such as images, audio and video, this limitation greatly hinders the development of dialogue systems. In this paper, we focus on task-oriented dialogue systems and address this limitation by proposing a novel model that integrates external multimodal KB reasoning with pre-trained language models. We further enhance the model with a novel multi-granularity fusion mechanism that captures multi-grained semantics in the dialogue history. To validate the effectiveness of the proposed model, we collect MMDialKB, a new large-scale (14K) dialogue dataset built upon a multimodal KB. Both automatic and human evaluation results on MMDialKB demonstrate the superiority of our proposed framework over strong baselines.
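To make the KB-integration idea concrete, here is a minimal sketch of attention-based fusion between dialogue representations from a pre-trained language model and a multimodal KB, with a token-level (fine-grained) and an utterance-level (coarse-grained) attention path combined by a gate. All module names, dimensions, and the random tensors standing in for LM outputs and image features are assumptions made for illustration; this is not the paper's MMDialKB architecture.

```python
# Illustrative multi-granularity fusion over a multimodal KB (not the paper's model).
import torch
import torch.nn as nn

class MultimodalKBFusion(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.text_proj = nn.Linear(dim, dim)     # project KB textual attributes
        self.image_proj = nn.Linear(2048, dim)   # project KB image features (e.g. pooled CNN)
        self.token_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.utter_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, token_states, utter_state, kb_text, kb_image):
        # One embedding per KB entry, combining its textual and visual parts.
        kb = self.text_proj(kb_text) + self.image_proj(kb_image)        # [B, E, dim]
        # Fine-grained path: every dialogue token attends over KB entries.
        fine, _ = self.token_attn(token_states, kb, kb)                 # [B, T, dim]
        # Coarse-grained path: a pooled utterance vector attends over KB entries.
        coarse, _ = self.utter_attn(utter_state.unsqueeze(1), kb, kb)   # [B, 1, dim]
        # Gate the two granularities into one KB-aware context per token.
        g = torch.sigmoid(self.gate(torch.cat(
            [fine, coarse.expand_as(fine)], dim=-1)))
        return g * fine + (1 - g) * coarse.expand_as(fine)

# Toy usage with random tensors standing in for LM outputs and KB features.
B, T, E, dim = 2, 12, 5, 128
fusion = MultimodalKBFusion(dim)
out = fusion(torch.randn(B, T, dim),    # per-token states from a pre-trained LM
             torch.randn(B, dim),       # pooled utterance state
             torch.randn(B, E, dim),    # KB entry text embeddings
             torch.randn(B, E, 2048))   # KB entry image features
print(out.shape)  # torch.Size([2, 12, 128])
```

The sigmoid gate is just one plausible way to combine the two granularities; the paper's multi-granularity fusion mechanism may be realized differently.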


2018
Author(s): Bing Liu, Gokhan Tür, Dilek Hakkani-Tür, Pararth Shah, Larry Heck

Author(s): Andrea Madotto, Samuel Cahyawijaya, Genta Indra Winata, Yan Xu, Zihan Liu, ...

2021
Author(s): Qingyue Wang, Yanan Cao, Junyan Jiang, Yafang Wang, Lingling Tong, ...
