Designing ECAs to Improve Robustness of Human-Machine Dialogue

Author(s):  
Beatriz López Mencía ◽  
David D. Pardo ◽  
Alvaro Hernández Trapote ◽  
Luis A. Hernández Gómez

One of the major challenges for dialogue systems deployed in commercial applications is to improve robustness when common low-level problems related to speech recognition occur. We first discuss this important family of interaction problems, and then the features of non-verbal, visual communication that Embodied Conversational Agents (ECAs) bring ‘into the picture’, which may be tapped to improve spoken dialogue robustness and the general smoothness and efficiency of the interaction between the human and the machine. Our approach is centred on the information provided by ECAs. We cover all stages of the conversational system development process, from scenario description through gesture design to evaluation with comparative user tests. We conclude that ECAs can help improve the robustness of, as well as users’ subjective experience with, a dialogue system. However, they may also make users more demanding and intensify privacy and security concerns.

2006 ◽  
Vol 32 (3) ◽  
pp. 417-438 ◽  
Author(s):  
Diane Litman ◽  
Julia Hirschberg ◽  
Marc Swerts

This article focuses on the analysis and prediction of corrections, defined as turns in which a user tries to correct a prior error made by a spoken dialogue system. We describe our procedure for labeling various correction types and statistical analyses of their features in a corpus collected from a train information spoken dialogue system. We then present results of machine-learning experiments designed to identify user corrections of speech recognition errors. We investigate the predictive power of features automatically computable from the prosody of the turn, the speech recognition process, experimental conditions, and the dialogue history. Our best-performing features reduce classification error from baselines of 25.70–28.99% to 15.72%.
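To make the classification setup concrete, the sketch below shows the general shape of such an experiment: a handful of prosodic and ASR-derived turn features feeding a simple learner. The feature names, values, and the choice of a decision tree are illustrative stand-ins, not the authors' feature set or code.

```python
# Illustrative sketch (not the authors' code): classifying dialogue turns as
# corrections vs. non-corrections from prosodic and ASR-derived features.
# All feature names and values below are hypothetical placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Each row: [f0_max, f0_mean, rms_max, turn_duration_s, asr_confidence, prior_error]
X = np.array([
    [310.0, 210.0, 0.82, 1.9, 0.41, 1],   # hyperarticulated repeat after an error
    [240.0, 180.0, 0.55, 1.1, 0.87, 0],   # ordinary turn
    [295.0, 205.0, 0.78, 2.3, 0.38, 1],
    [230.0, 175.0, 0.50, 0.9, 0.91, 0],
])
y = np.array([1, 0, 1, 0])  # 1 = correction, 0 = non-correction

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
print(cross_val_score(clf, X, y, cv=2).mean())  # mean cross-validated accuracy
```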


2021 ◽  
Vol 12 (2) ◽  
pp. 1-33
Author(s):  
Mauajama Firdaus ◽  
Nidhi Thakur ◽  
Asif Ekbal

Multimodality in dialogue systems has opened up new frontiers for the creation of robust conversational agents. Any multimodal system aims to bridge the gap between language and vision by leveraging diverse, and often complementary, information from image, audio, and video, as well as text. For every task-oriented dialogue system, different aspects of the product or service are crucial for satisfying the user’s demands, and the user decides whether to select the product or service based on those aspects. The ability to generate responses with specified aspects in a goal-oriented dialogue setup facilitates user satisfaction by fulfilling the user’s goals. In our current work, we therefore propose the task of aspect-controlled response generation in a multimodal task-oriented dialogue system. We employ a multimodal hierarchical memory network that draws on information from both text and images to generate responses. As no data were readily available for building such multimodal systems, we create a Multi-Domain Multi-Modal Dialog (MDMMD++) dataset. The dataset comprises conversations containing both text and images across four domains: hotels, restaurants, electronics, and furniture. Quantitative and qualitative analysis on the newly created MDMMD++ dataset shows that the proposed methodology outperforms the baseline models on the proposed task of aspect-controlled response generation.
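As a rough illustration of this kind of architecture, the sketch below wires an utterance-level and a dialogue-level encoder to pooled image features and an aspect embedding that conditions the decoder. All module choices, dimensions, and names are assumptions for illustration; the paper's hierarchical memory network is more elaborate.

```python
# Minimal sketch (assumptions, not the authors' model): hierarchical text
# encoding fused with image features and an aspect embedding for decoding.
import torch
import torch.nn as nn

class AspectControlledResponder(nn.Module):
    def __init__(self, vocab=5000, d=256, n_aspects=12):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.utt_enc = nn.GRU(d, d, batch_first=True)   # words -> utterance state
        self.ctx_enc = nn.GRU(d, d, batch_first=True)   # utterances -> dialogue state
        self.img_proj = nn.Linear(2048, d)              # e.g. pooled CNN features
        self.aspect = nn.Embedding(n_aspects, d)        # aspect to generate for
        self.decoder = nn.GRU(d, d, batch_first=True)
        self.out = nn.Linear(d, vocab)

    def forward(self, turns, img_feats, aspect_id, resp_in):
        # turns: (batch, n_turns, seq_len); img_feats: (batch, 2048)
        b, t, s = turns.shape
        _, h = self.utt_enc(self.embed(turns.view(b * t, s)))
        utts = h[-1].view(b, t, -1)                     # one state per turn
        _, ctx = self.ctx_enc(utts)                     # dialogue-level state
        init = ctx[-1] + self.img_proj(img_feats) + self.aspect(aspect_id)
        dec_out, _ = self.decoder(self.embed(resp_in), init.unsqueeze(0))
        return self.out(dec_out)                        # token logits

model = AspectControlledResponder()
logits = model(torch.randint(0, 5000, (2, 4, 12)),     # 2 dialogues, 4 turns
               torch.randn(2, 2048),                   # image features
               torch.tensor([3, 7]),                   # target aspect ids
               torch.randint(0, 5000, (2, 10)))        # shifted response tokens
```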


Author(s):  
Alexandru-Lucian Georgescu ◽  
Alessandro Pappalardo ◽  
Horia Cucu ◽  
Michaela Blott

The last decade brought significant advances in automatic speech recognition (ASR) thanks to the evolution of deep learning methods. ASR systems evolved from pipeline-based systems, which modeled hand-crafted speech features with probabilistic frameworks and generated phone posteriors, to end-to-end (E2E) systems, which translate the raw waveform directly into words using a single deep neural network (DNN). Transcription accuracy greatly increased, leading to ASR technology being integrated into many commercial applications. However, few existing ASR technologies are suitable for integration in embedded applications, due to the hard constraints such applications place on computing power and memory usage. This overview paper serves as a guided tour through the recent literature on speech recognition and compares the most popular ASR implementations. The comparison emphasizes the trade-off between ASR performance and hardware requirements, to help decision makers choose the system that best fits their embedded application. To the best of our knowledge, this is the first study to provide this kind of trade-off analysis for state-of-the-art ASR systems.
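The trade-off analysis the survey supports can be pictured as a simple screening step: filter candidate ASR systems by an embedded hardware budget, then pick the most accurate survivor. The candidate names and all numbers below are placeholders, not figures from the paper.

```python
# Hedged sketch of accuracy-vs-resources screening for embedded ASR.
# All systems and metrics are hypothetical examples.
from dataclasses import dataclass

@dataclass
class ASRCandidate:
    name: str
    wer_percent: float      # word error rate on the target test set
    peak_mem_mb: int        # peak RAM during streaming inference
    gflops_per_s: float     # compute needed for real-time decoding

def feasible(c: ASRCandidate, mem_budget_mb: int, compute_gflops: float) -> bool:
    """True if the candidate fits the embedded platform's budget."""
    return c.peak_mem_mb <= mem_budget_mb and c.gflops_per_s <= compute_gflops

candidates = [
    ASRCandidate("hybrid-small", 9.8, 180, 1.2),
    ASRCandidate("e2e-large", 5.1, 2100, 14.0),
    ASRCandidate("e2e-quantized", 6.3, 350, 2.5),
]

# Pick the lowest-WER system that fits a hypothetical embedded budget.
fits = [c for c in candidates if feasible(c, mem_budget_mb=512, compute_gflops=4.0)]
best = min(fits, key=lambda c: c.wer_percent)
print(best.name)  # -> e2e-quantized
```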


Author(s):  
PHILIPPE MORIN ◽  
JEAN-PAUL HATON ◽  
JEAN-MARIE PIERREL ◽  
GUENTHER RUSKE ◽  
WALTER WEIGEL

In the framework of man-machine communication, oral dialogue holds a particular place, since human speech presents several advantages whether used alone or in multimedia interfaces. The last decade has witnessed a proliferation of research into speech recognition and understanding, but few systems have been designed with a view to managing and understanding an actual man-machine dialogue. The PARTNER system described in this paper proposes a solution for task-oriented dialogue using artificial languages. A description of the essential characteristics of dialogue systems is followed by a presentation of the architecture and principles of the PARTNER system. Finally, we present the most recent results obtained in the oral management of electronic mail in French and German.


Author(s):  
Ayda Saidane ◽  
Saleh Al-Sharieh

Regulatory compliance is a top priority for organizations in highly regulated ecosystems. As most operations are automated, compliance efforts focus on the information systems supporting the organizations' business processes and, to a lesser extent, on the humans using, managing, and maintaining them. Yet the human factor is an unpredictable and challenging component of a secure system, and it should be considered throughout the development process as both a legitimate user and a threat. In this chapter, the authors propose COMPARCH, a compliance-driven system engineering framework for privacy and security in socio-technical systems. It consists of (1) a risk-based requirement management process, (2) a test-driven security and privacy modeling framework, and (3) a simulation-based validation approach. Satisfaction of the regulatory requirements is evaluated through analysis of the simulation traces. The authors use as a running example an E-CITY system providing municipality services to local communities.
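As a toy illustration of the simulation-based validation step, the sketch below scans a simulation trace for a simple privacy violation (personal data accessed without prior consent). The event schema and rule are hypothetical and stand in for the framework's richer requirement models.

```python
# Hypothetical sketch of simulation-trace checking in the spirit of the
# chapter's validation step; event schema and rule are illustrative only.
from typing import Iterable

def accesses_without_consent(trace: Iterable[dict]) -> list[dict]:
    """Flag personal-data accesses not preceded by a consent event
    for the same subject (a simple privacy-requirement check)."""
    consented, violations = set(), []
    for event in trace:
        if event["type"] == "consent_granted":
            consented.add(event["subject"])
        elif event["type"] == "personal_data_access":
            if event["subject"] not in consented:
                violations.append(event)
    return violations

trace = [
    {"type": "consent_granted", "subject": "alice"},
    {"type": "personal_data_access", "subject": "alice"},
    {"type": "personal_data_access", "subject": "bob"},   # violation
]
print(accesses_without_consent(trace))
```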


Author(s):  
Lin Xu ◽  
Qixian Zhou ◽  
Ke Gong ◽  
Xiaodan Liang ◽  
Jianheng Tang ◽  
...  

Beyond current conversational chatbots and the task-oriented dialogue systems that have attracted increasing attention, we move forward to develop a dialogue system for automatic medical diagnosis that converses with patients to collect symptoms beyond their self-reports and automatically makes a diagnosis. Besides the challenges common to conversational dialogue systems (e.g. topic transition coherency and question understanding), automatic medical diagnosis poses more critical requirements for dialogue rationality in the context of medical knowledge and symptom-disease relations. Existing dialogue systems (Madotto, Wu, and Fung 2018; Wei et al. 2018; Li et al. 2017) mostly rely on data-driven learning and cannot encode an external expert knowledge graph. In this work, we propose an End-to-End Knowledge-routed Relational Dialogue System (KR-DS) that seamlessly incorporates a rich medical knowledge graph into topic transitions in dialogue management, working in cooperation with natural language understanding and natural language generation. A novel Knowledge-routed Deep Q-network (KR-DQN) is introduced to manage topic transitions; it integrates a relational refinement branch for encoding relations among different symptoms and symptom-disease pairs, and a knowledge-routed graph branch for topic decision-making. Extensive experiments on a public medical dialogue dataset show that our KR-DS significantly beats state-of-the-art methods (by more than 8% in diagnosis accuracy). We further show the superiority of KR-DS on a newly collected medical dialogue dataset, which is more challenging because it retains the original self-reports and conversational data between patients and doctors.
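The sketch below gives a minimal reading of the knowledge-routed idea: a plain Q-network's outputs are refined by a relation matrix over symptom/disease actions and biased by knowledge-graph priors. Shapes, names, and the combination rule are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed shapes, not the paper's KR-DQN): raw Q-values are
# refined by a symptom/disease relation matrix and biased by graph priors.
import torch
import torch.nn as nn

class KnowledgeRoutedDQN(nn.Module):
    def __init__(self, state_dim, n_actions, relation, kg_prior):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, n_actions)
        )
        # relation: (n_actions, n_actions) co-occurrence among symptoms/diseases
        # kg_prior: (n_actions,) prior preference from the medical knowledge graph
        self.register_buffer("relation", relation)
        self.register_buffer("kg_prior", kg_prior)

    def forward(self, state):
        q = self.mlp(state)               # basic Q-value branch
        q_rel = q @ self.relation         # relational refinement branch
        return q + q_rel + self.kg_prior  # knowledge-routed combination

n_actions = 6
net = KnowledgeRoutedDQN(
    state_dim=10, n_actions=n_actions,
    relation=torch.eye(n_actions) * 0.1,  # placeholder relation matrix
    kg_prior=torch.zeros(n_actions),      # placeholder graph prior
)
action = net(torch.randn(1, 10)).argmax(dim=-1)  # greedy next symptom/diagnosis
```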

