End-to-end optimization of goal-driven and visually grounded dialogue systems

Author(s):  
Florian Strub ◽  
Harm de Vries ◽  
Jérémie Mary ◽  
Bilal Piot ◽  
Aaron Courville ◽  
...  

End-to-end design of dialogue systems has recently become a popular research topic thanks to powerful tools such as encoder-decoder architectures for sequence-to-sequence learning. Yet most current approaches cast human-machine dialogue management as a supervised learning problem, aiming to predict the next utterance of a participant given the full history of the dialogue. This framing may fail to capture the planning problem inherent to dialogue, as well as its contextual and grounded nature. In this paper, we introduce a deep reinforcement learning method, based on the policy gradient algorithm, to optimize visually grounded task-oriented dialogues. The approach is tested on the question-generation task of the GuessWhat?! dataset, which contains 120k dialogues, and provides encouraging results both at generating natural dialogues and at locating a specific object in a complex image.
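As a rough illustration of the policy-gradient idea, the sketch below trains a toy question-generation policy with REINFORCE. The vocabulary size, network, and reward here are placeholder assumptions, not the authors' GuessWhat?! setup, where the scalar reward would come from whether the guesser identifies the correct object.

```python
# Minimal REINFORCE sketch for a question-generating dialogue agent.
# The tiny vocabulary, network sizes, and random stand-in reward are
# illustrative assumptions, not the paper's actual configuration.
import torch
import torch.nn as nn

VOCAB = 32          # hypothetical token vocabulary size
HIDDEN = 64
MAX_LEN = 8

class QuestionPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.GRUCell(HIDDEN, HIDDEN)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def forward(self, h, tok):
        h = self.rnn(self.embed(tok), h)
        return self.out(h), h

policy = QuestionPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def rollout():
    """Sample a question token by token, keeping log-probs for REINFORCE."""
    h = torch.zeros(1, HIDDEN)
    tok = torch.zeros(1, dtype=torch.long)   # hypothetical <start> token
    log_probs = []
    for _ in range(MAX_LEN):
        logits, h = policy(h, tok)
        dist = torch.distributions.Categorical(logits=logits)
        tok = dist.sample()
        log_probs.append(dist.log_prob(tok))
    return torch.stack(log_probs)

for step in range(100):
    log_probs = rollout()
    # Stand-in 0/1 reward drawn at random; in the paper's setting this
    # would be the outcome of the GuessWhat?! game.
    reward = float(torch.rand(()) > 0.5)
    loss = -(log_probs.sum() * reward)       # REINFORCE objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```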

2021 ◽  
Author(s):  
Yanjie Gou ◽  
Yinjie Lei ◽  
Lingqiao Liu ◽  
Yong Dai ◽  
Chunxu Shen

2018 ◽  
Author(s):  
Bing Liu ◽  
Gokhan Tür ◽  
Dilek Hakkani-Tür ◽  
Pararth Shah ◽  
Larry Heck

Author(s):  
Geoffrey Leech

This article introduces the linguistic subdiscipline of pragmatics and shows how it is being applied to the development of spoken dialogue systems, currently perhaps the most important application area for computational pragmatics. It traces the history of pragmatics from its philosophical roots and outlines some key notions of theoretical pragmatics: speech acts, illocutionary force, the cooperative principle, and relevance. It then discusses the application of pragmatics to dialogue modelling, especially the development of spoken dialogue systems intended to interact with human beings in task-oriented scenarios such as providing travel information, and shows how and why computational pragmatics differs from 'linguistic' pragmatics and how pragmatics contributes to the computational analysis of dialogues. One major illustration is the application of speech act theory to the analysis and synthesis of service interactions in terms of dialogue acts.


2020 ◽  
Vol 34 (09) ◽  
pp. 13622-13623
Author(s):  
Zhaojiang Lin ◽  
Peng Xu ◽  
Genta Indra Winata ◽  
Farhad Bin Siddique ◽  
Zihan Liu ◽  
...  

We present CAiRE, an end-to-end generative empathetic chatbot designed to recognize user emotions and respond in an empathetic manner. Our system adapts the Generative Pre-trained Transformer (GPT) to the empathetic response generation task via transfer learning. CAiRE is built primarily to focus on integrating empathy into fully data-driven generative dialogue systems. We create a web-based user interface that allows multiple users to chat with CAiRE asynchronously. CAiRE also collects user feedback and continues to improve its response quality by discarding undesirable generations via active learning and negative training.
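A minimal sketch of the transfer-learning step, assuming a Hugging Face GPT-2 checkpoint as the pretrained base and fine-tuning it with the standard language-modeling loss on (context, empathetic reply) pairs. The toy data and hyperparameters are illustrative; CAiRE's actual data pipeline, persona handling, and active-learning loop are not reproduced here.

```python
# Hedged sketch: fine-tune a pretrained GPT-2 so that it continues a
# dialogue context with an empathetic response. The training pairs
# below are hypothetical examples, not CAiRE's dataset.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
opt = torch.optim.AdamW(model.parameters(), lr=5e-5)

pairs = [
    ("I failed my exam today.", "That sounds really tough. I'm sorry."),
    ("I just got a new puppy!", "How exciting! What's its name?"),
]

model.train()
for context, reply in pairs:
    # Concatenate context and reply; the language-modeling loss teaches
    # the model to follow a context with an empathetic continuation.
    text = context + tok.eos_token + reply + tok.eos_token
    ids = tok(text, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```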


2021 ◽  
Author(s):  
Qingyue Wang ◽  
Yanan Cao ◽  
Junyan Jiang ◽  
Yafang Wang ◽  
Lingling Tong ◽  
...  

2021 ◽  
Vol 36 ◽  
Author(s):  
Arushi Jain ◽  
Khimya Khetarpal ◽  
Doina Precup

Designing hierarchical reinforcement learning algorithms that exhibit safe behaviour is not only vital for practical applications but also facilitates a better understanding of an agent's decisions. We tackle this problem in the options framework (Sutton, Precup & Singh, 1999), a way to specify temporally abstract actions that allow an agent to use sub-policies with start and end conditions. We consider behaviour safe if it avoids regions of the state space with high uncertainty in the outcomes of actions. We propose an optimization objective that learns safe options by encouraging the agent to visit states with higher behavioural consistency. The proposed objective results in a trade-off between maximizing the standard expected return and minimizing the effect of model uncertainty on the return. We propose a policy gradient algorithm to optimize the constrained objective function. We examine the quantitative and qualitative behaviour of the proposed approach in a tabular grid world, a continuous-state puddle world, and three games from the Arcade Learning Environment: Ms. Pac-Man, Amidar, and Q*bert. Our approach reduces the variance of the return, boosts performance in environments with intrinsic variability in the reward structure, and compares favourably both with primitive actions and with risk-neutral options.
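One way to read the trade-off described above is as a return objective penalized by model uncertainty accumulated along the trajectory. The form below is an illustrative sketch only: the symbols ψ and U are chosen for exposition and are not the paper's exact formulation.

```latex
% Illustrative sketch (not the paper's precise objective): expected
% return minus a discounted penalty on model uncertainty over options.
\[
  J(\theta) \;=\;
  \underbrace{\mathbb{E}_{\pi_\theta}\Big[\sum_{t} \gamma^{t} r_{t}\Big]}_{\text{expected return}}
  \;-\;
  \psi \,
  \underbrace{\mathbb{E}_{\pi_\theta}\Big[\sum_{t} \gamma^{t}\, U(s_{t}, o_{t})\Big]}_{\text{uncertainty penalty}}
\]
```

Here ψ ≥ 0 sets the strength of the penalty and U(s, o) stands for a measure of model uncertainty for option o in state s; maximizing J(θ) by policy gradient then trades return against visits to uncertain regions of the state space.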

