Bilateral Personalized Dialogue Generation with Contrastive Learning

Author(s):  
Bin Li ◽  
Hanjun Deng

Abstract Generating personalized responses is one of the major challenges in natural human-robot interaction. Current research in this field mainly focuses on generating responses consistent with the robot’s pre-assigned persona, while ignoring the user’s persona. Such responses may be inappropriate or even offensive, which may lead to a bad user experience. Therefore, we propose a Bilateral Personalized Dialogue Generation (BPDG) method for dyadic conversation, which integrates user and robot personas into dialogue generation by designing a dynamic persona-aware fusion method. To bridge the gap between the learning objective function and the evaluation metrics, the Conditional Mutual Information Maximum (CMIM) criterion is adopted with contrastive learning to select the proper response from the generated candidates. Moreover, a bilateral persona accuracy metric is designed to measure the degree of bilateral personalization. Experimental results demonstrate that, compared with several state-of-the-art methods, the responses produced by the proposed method are more personalized and more consistent with bilateral personas under both automatic and manual evaluations.
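The candidate-selection step described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual CMIM objective: the scoring function (conditional log-probability minus marginal log-probability, a common mutual-information-style criterion) and all numbers are invented placeholders.

```python
# Hypothetical sketch: pick the response candidate that maximizes a
# mutual-information style score, log p(r | context, personas) - log p(r).
# All log-probabilities below are toy numbers, not model outputs.

def cmim_select(candidates, cond_logprob, marg_logprob):
    """Return the candidate with the highest conditional-minus-marginal score."""
    scores = {r: cond_logprob[r] - marg_logprob[r] for r in candidates}
    return max(candidates, key=scores.get)

candidates = ["I love hiking too!", "OK.", "Sure."]
cond = {"I love hiking too!": -2.0, "OK.": -1.5, "Sure.": -1.6}  # p(r | context, personas)
marg = {"I love hiking too!": -6.0, "OK.": -1.8, "Sure.": -2.0}  # p(r)
best = cmim_select(candidates, cond, marg)
```

Generic replies like "OK." tend to have high marginal probability, so the score penalizes them even when their conditional probability is high, favoring context-tied, persona-consistent candidates.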

Author(s):  
Xinmeng Li ◽  
Mamoun Alazab ◽  
Qian Li ◽  
Keping Yu ◽  
Quanjun Yin

Abstract Knowledge graph question answering is an important technology in intelligent human–robot interaction, which aims to automatically answer natural language questions over a given knowledge graph. For multi-relation questions, which have higher variety and complexity, the tokens of the question have different priorities for triple selection in the reasoning steps. Most existing models take the question as a whole and ignore this priority information. To solve this problem, we propose a question-aware memory network for multi-hop question answering, named QA2MN, which dynamically updates the attention over the question during the reasoning process. In addition, we incorporate graph context information into the knowledge graph embedding model to increase its ability to represent entities and relations. We use it to initialize the QA2MN model and fine-tune it in the training process. We evaluate QA2MN on PathQuestion and WorldCup2014, two representative datasets for complex multi-hop question answering. The results demonstrate that QA2MN achieves state-of-the-art Hits@1 accuracy on the two datasets, which validates the effectiveness of our model.
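The core idea of re-weighting question tokens at each reasoning hop can be sketched as below. This is a generic memory-network-style hop under assumed shapes (token key vectors, token value vectors, and a reasoning state of the same dimension); it does not reproduce QA2MN's actual architecture or parameters.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def hop(question_keys, memory_state, token_values):
    """One reasoning hop: re-score question tokens against the current state,
    then read an attention-weighted value and fold it into the state."""
    att = softmax([sum(k * m for k, m in zip(key, memory_state))
                   for key in question_keys])
    read = [sum(a * v[i] for a, v in zip(att, token_values))
            for i in range(len(memory_state))]
    return [m + r for m, r in zip(memory_state, read)], att

# toy 2-d example: three question tokens, state initially aligned with token 0
keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
values = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
state, att = hop(keys, [1.0, 0.0], values)
```

Because the attention is recomputed from the evolving state at every hop, different question tokens can dominate at different reasoning steps, which is the priority behavior the abstract describes.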


2020 ◽  
Vol 12 (1) ◽  
pp. 58-73
Author(s):  
Sofia Thunberg ◽  
Tom Ziemke

Abstract Interaction between humans and robots will benefit if people have at least a rough mental model of what a robot knows about the world and what it plans to do. But how do we design human-robot interactions to facilitate this? Previous research has shown that one can change people’s mental models of robots by manipulating the robots’ physical appearance. However, this has mostly not been done in a user-centred way, i.e. without a focus on what users need and want. Starting from theories of how humans form and adapt mental models of others, we investigated how the participatory design method, PICTIVE, can be used to generate design ideas about how a humanoid robot could communicate. Five participants went through three phases based on eight scenarios from the state-of-the-art tasks in the RoboCup@Home social robotics competition. The results indicate that participatory design can be a suitable method to generate design concepts for robots’ communication in human-robot interaction.


AI & Society ◽  
2021 ◽  
Author(s):  
Nora Fronemann ◽  
Kathrin Pollmann ◽  
Wulf Loh

Abstract To integrate social robots in real-life contexts, it is crucial that they are accepted by the users. Acceptance is not only related to the functionality of the robot but also strongly depends on how the user experiences the interaction. Established design principles from usability and user experience research can be applied to the realm of human–robot interaction, to design robot behavior for the comfort and well-being of the user. Focusing the design on these aspects alone, however, comes with certain ethical challenges, especially regarding the user’s privacy and autonomy. Based on an example scenario of human–robot interaction in elder care, this paper discusses how established design principles can be used in social robotic design. It then juxtaposes these with ethical considerations such as privacy and user autonomy. Combining user experience and ethical perspectives, we propose adjustments to the original design principles and canvass our own design recommendations for a positive and ethically acceptable social human–robot interaction design. In doing so, we show that positive user experience and ethical design may sometimes be at odds, but can be reconciled in many cases, if designers are willing to adjust and amend time-tested design principles.


Author(s):  
J. Lindblom ◽  
B. Alenljung

A fundamental challenge of human interaction with socially interactive robots, compared to other interactive products, comes from them being embodied. The embodied nature of social robots questions to what degree humans can interact ‘naturally' with robots, and what impact the interaction quality has on the user experience (UX). UX is fundamentally about emotions that arise and form in humans through the use of technology in a particular situation. This chapter aims to contribute to the field of human-robot interaction (HRI) by addressing, in further detail, the role and relevance of embodied cognition for human social interaction, and consequently what role embodiment can play in HRI, especially for socially interactive robots. Furthermore, some challenges for socially embodied interaction between humans and socially interactive robots are outlined and possible directions for future research are presented. It is concluded that the body is of crucial importance in understanding emotion and cognition in general, and, in particular, for a positive user experience to emerge when interacting with socially interactive robots.


2020 ◽  
Vol 10 (24) ◽  
pp. 8871
Author(s):  
Kaisheng Yang ◽  
Guilin Yang ◽  
Chi Zhang ◽  
Chinyin Chen ◽  
Tianjiang Zheng ◽  
...  

Inspired by the structure of human arms, a modular cable-driven human-like robotic arm (CHRA) is developed for safe human–robot interaction. Due to the unilateral driving properties of the cables, the CHRA is redundantly actuated and its stiffness can be adjusted by regulating the cable tensions. Since the trajectory of the 3-DOF joint module (3DJM) of the CHRA is a curve on the Lie group SO(3), an enhanced stiffness model of the 3DJM is established by the covariant derivative of the load with respect to the displacement on SO(3). In this paper, we focus on analyzing the cable tension distribution problem oriented to the enhanced stiffness of the 3DJM of the CHRA for stiffness adjustment. Due to the complexity of the enhanced stiffness model, it is difficult to solve the cable tensions from the desired stiffness analytically. The problem of stiffness-oriented cable tension distribution (SCTD) is therefore formulated as a nonlinear optimization model. The optimization model is simplified using the symmetry of the enhanced stiffness model, the rank of the Jacobian matrix, and the equilibrium equation of the 3DJM. Since the objective function is too complicated to compute the gradient, a method based on the genetic algorithm is proposed for solving this optimization problem, which only utilizes the objective function values. A comprehensive simulation is carried out to validate the effectiveness of the proposed method.
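The derivative-free character of the proposed solver can be illustrated with a minimal genetic algorithm of the kind the abstract describes: only objective values are used, never gradients. The objective below is a toy stand-in (tensions matching a desired profile within bounds), not the paper's enhanced stiffness model, and all parameters are illustrative.

```python
import random

def ga_minimize(objective, dim, lo, hi, pop_size=30, generations=60, seed=0):
    """Minimize `objective` over a box [lo, hi]^dim using only function values."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)            # rank by objective value only
        elite = pop[: pop_size // 2]       # keep the better half (elitism)
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # averaging crossover
            i = rng.randrange(dim)
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1)))  # mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=objective)

# toy target: three cable tensions closest to a desired profile within bounds
target = [2.0, 3.0, 2.5]
best = ga_minimize(lambda t: sum((x - y) ** 2 for x, y in zip(t, target)),
                   dim=3, lo=0.5, hi=5.0)
```

Because selection, crossover, and mutation only ever evaluate the objective, the same loop applies unchanged when the objective is a stiffness-error term that is too complicated to differentiate analytically.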


Author(s):  
Youdi Li ◽  
Wei Fen Hsieh ◽  
Eri Sato-Shimokawara ◽  
Toru Yamaguchi

In our daily life, it is inevitable to encounter situations in which we feel confident or unconfident. Under these conditions, we may show different expressions and responses, and the same holds when a human communicates with a robot. It is necessary for robots to behave in various styles to show an adaptive degree of confidence; for example, previous work has shown that when the robot made mistakes during the interaction, different certainty expression styles influenced humans’ trust and acceptance. On the other hand, when humans feel uncertain about the robot’s utterance, how the robot recognizes the human’s uncertainty is crucial. However, related research is still scarce and tends to ignore individual characteristics. In the current study, we designed an experiment to obtain human verbal and non-verbal features under certain and uncertain conditions. From the certain/uncertain answer experiment, we extracted head movement and voice factors as features and investigated whether these features can be classified correctly. The results show that different people exhibit distinct features for different certainty degrees, although some participants may share a similar pattern given their relatively close psychological feature values. We aim to explore individuals’ certainty expression patterns because this can not only facilitate the detection of humans’ confidence status but is also expected to be utilized on the robot side to give proper responses adaptively and thus enrich Human-Robot Interaction.
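The classification step can be sketched with a simple nearest-centroid rule over feature vectors. This is an illustrative sketch only: the feature names (head-movement energy, pitch variation), values, and classifier are invented, as the abstract does not specify the actual features or model used.

```python
import math

def centroid(samples):
    # component-wise mean of a list of equal-length feature vectors
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def classify(x, centroids):
    # assign x to the class whose centroid is nearest in Euclidean distance
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# toy per-participant training data: [head_movement_energy, pitch_variation]
train = {
    "certain":   [[0.2, 0.8], [0.3, 0.9], [0.25, 0.85]],
    "uncertain": [[0.7, 0.3], [0.8, 0.2], [0.75, 0.25]],
}
centroids = {label: centroid(samples) for label, samples in train.items()}
label = classify([0.72, 0.28], centroids)
```

Fitting the centroids per participant reflects the abstract's finding that individuals express certainty with distinct feature patterns, so a single shared model may not transfer across users.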

