Expression and Identification of Confidence Based on Individual Verbal and Non-Verbal Features in Human-Robot Interaction

Author(s):  
Youdi Li ◽  
Wei Fen Hsieh ◽  
Eri Sato-Shimokawara ◽  
Toru Yamaguchi

In daily life, we inevitably encounter situations in which we feel confident or unconfident, and our expressions and responses differ accordingly. The same holds when a human communicates with a robot. Robots need to behave in various styles that convey an appropriate degree of confidence; in previous work, for example, when a robot made mistakes during an interaction, different certainty expression styles influenced humans' trust in and acceptance of the robot. Conversely, when a human feels uncertain about a robot's utterance, how the robot recognizes that uncertainty is crucial. However, related research is still scarce and tends to ignore individual characteristics. In the current study, we designed an experiment to collect human verbal and non-verbal features under certain and uncertain conditions. From this certain/uncertain answer experiment, we extracted head-movement and voice factors as features and investigated whether these features can be classified correctly. The results show that different people exhibited distinct features when expressing different degrees of certainty, although some participants shared similar patterns, consistent with their relatively close psychological feature values. We aim to explore individuals' certainty expression patterns because doing so not only facilitates the detection of a human's confidence status but is also expected to be used on the robot side to respond adaptively, thereby enriching Human-Robot Interaction.
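
As a rough illustration of the classification step described above, here is a minimal sketch that trains a per-participant classifier on synthetic head-movement and voice features. The feature set (head-pitch variance, F0 statistics, speech rate), the value ranges, and the data are all assumptions for illustration, not the study's actual pipeline.

```python
# Hedged sketch: per-participant certain/uncertain classification from
# head-movement and voice features. Features and data are illustrative
# assumptions, not the authors' actual pipeline.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical features per answer: [head-pitch variance, head-yaw range,
# mean F0 (Hz), F0 std, speech rate (syllables/s)]
X_certain = rng.normal([0.02, 5.0, 180.0, 20.0, 4.5],
                       [0.01, 2.0, 15.0, 5.0, 0.5], (40, 5))
X_uncertain = rng.normal([0.08, 15.0, 200.0, 35.0, 3.2],
                         [0.03, 4.0, 15.0, 8.0, 0.5], (40, 5))
X = np.vstack([X_certain, X_uncertain])
y = np.array([1] * 40 + [0] * 40)  # 1 = certain, 0 = uncertain

# One model per participant, since expression patterns are individual.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"per-participant CV accuracy: {scores.mean():.2f}")
```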

Robotics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 68
Author(s):  
Lei Shi ◽  
Cosmin Copot ◽  
Steve Vanlanduit

In gaze-based Human-Robot Interaction (HRI), it is important to determine human visual intention for interacting with robots. One typical HRI scenario is that a human selects an object by gaze and a robotic manipulator then picks up the object. In this work, we propose an approach, GazeEMD, that can be used to detect whether a human is looking at an object for HRI applications. We use Earth Mover's Distance (EMD) to measure the similarity between the hypothetical gazes at objects and the actual gazes. The similarity score is then used to determine whether the human's visual intention is on the object. We compare our approach with a fixation-based method and HitScan with a run length in the scenario of selecting daily objects by gaze. Our experimental results indicate that the GazeEMD approach has higher accuracy and is more robust to noise than the other approaches. Hence, users can reduce their cognitive load by using our approach in real-world HRI scenarios.
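
A minimal sketch of the core decision rule, under a 1-D simplification: SciPy's `wasserstein_distance` stands in for EMD, the hypothetical gaze is sampled around the object's position, and the spread and threshold values are invented for illustration. The paper's actual formulation operates on real gaze data and may differ in dimensionality and thresholding.

```python
# Hedged sketch of the GazeEMD idea: compare actual gaze samples against a
# hypothetical gaze distribution centred on an object, and decide "looking
# at object" when the Earth Mover's Distance falls below a threshold.
import numpy as np
from scipy.stats import wasserstein_distance

def looking_at_object(gaze_x, object_x, spread=10.0, threshold=15.0, n=200):
    """gaze_x: recent horizontal gaze samples (pixels);
    object_x: object centre; spread: assumed fixation scatter."""
    rng = np.random.default_rng(0)
    hypothetical = rng.normal(object_x, spread, n)  # gaze if user fixated the object
    emd = wasserstein_distance(gaze_x, hypothetical)
    return emd < threshold, emd

rng = np.random.default_rng(1)
on_target = rng.normal(320.0, 12.0, 200)   # gaze hovering near an object at x=320
off_target = rng.normal(500.0, 12.0, 200)  # gaze elsewhere

print(looking_at_object(on_target, 320.0))   # (True, small distance)
print(looking_at_object(off_target, 320.0))  # (False, large distance)
```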


Information ◽  
2020 ◽  
Vol 11 (2) ◽  
pp. 112
Author(s):  
Marit Hagens ◽  
Serge Thill

Perfect information about an environment allows a robot to plan its actions optimally, but often requires significant investments into sensors and possibly infrastructure. In applications relevant to human–robot interaction, the environment is by definition dynamic, and events close to the robot may be more relevant than distal ones. This suggests a non-trivial relationship between sensory sophistication on one hand and task performance on the other. In this paper, we investigate this relationship in a simulated crowd navigation task. We use three different environments with unique characteristics that a crowd-navigating robot might encounter and explore how the robot's sensor range correlates with performance in the navigation task. We find diminishing returns of increased range in our particular case, suggesting that task performance and sensory sophistication might follow non-trivial relationships and that increased sophistication on the sensor side does not necessarily yield a corresponding increase in performance. Although this result is a simple proof of concept, it illustrates the benefit of exploring the consequences of different hardware designs—rather than merely algorithmic choices—in simulation first. We also find surprisingly good performance in the navigation task, including a low number of collisions with simulated human agents, using a relatively simple A*/NavMesh-based navigation strategy, which suggests that navigation strategies for robots in crowds need not always be sophisticated.
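
A toy version of this kind of sweep, under heavy assumptions (point robot, random-walking agents, and a simple repulsive-force avoidance rule instead of the paper's A*/NavMesh planner), just to show the shape of the experiment: performance is measured across increasing sensor ranges.

```python
# Hedged sketch: vary the robot's sensor range and record task performance
# in a toy 2-D crowd simulation. Dynamics, ranges, and metrics are
# illustrative assumptions, not the authors' simulator.
import numpy as np

def run_trial(sensor_range, n_agents=20, steps=300, rng=None):
    rng = rng or np.random.default_rng()
    robot, goal = np.array([0.0, 0.0]), np.array([20.0, 0.0])
    agents = rng.uniform([0, -5], [20, 5], (n_agents, 2))
    collisions = 0
    for _ in range(steps):
        heading = goal - robot
        heading /= np.linalg.norm(heading) + 1e-9
        diffs = robot - agents
        dists = np.linalg.norm(diffs, axis=1)
        visible = dists < sensor_range
        # Steer away only from agents the robot can actually sense.
        for d, dist in zip(diffs[visible], dists[visible]):
            heading += 0.5 * d / (dist**2 + 1e-9)  # simple repulsive force
        robot = robot + 0.3 * heading / (np.linalg.norm(heading) + 1e-9)
        agents += rng.normal(0, 0.1, agents.shape)  # agents drift randomly
        collisions += int((dists < 0.5).any())
        if np.linalg.norm(goal - robot) < 0.5:
            break
    return collisions

rng = np.random.default_rng(42)
for r in [1.0, 2.0, 4.0, 8.0, 16.0]:
    trials = [run_trial(r, rng=rng) for _ in range(20)]
    print(f"sensor range {r:5.1f} m -> mean collision steps: {np.mean(trials):.2f}")
```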


2020 ◽  
Vol 32 (1) ◽  
pp. 224-235
Author(s):  
Wei-Fen Hsieh ◽  
Eri Sato-Shimokawara ◽  
Toru Yamaguchi

In our daily conversations, we obtain considerable information from our interlocutor's non-verbal behaviors, such as gaze and gestures. Several studies have shown that non-verbal messages are prominent factors in smoothing the process of human-robot interaction. Our previous studies have shown that not only a robot's appearance but also its gestures, tone, and other non-verbal factors influence a person's impression of it. This paper presents an analysis of the impressions made when human motions are implemented on a humanoid robot; experiments were conducted to evaluate the impressions made by robot expressions. The results show the relation between robot expression patterns and human preferences. To further investigate the biofeedback elicited by different robot styles of expression, a scenario-based experiment was conducted. The results revealed that people's emotions can indeed be affected by robot behavior, and that the robot's way of expressing itself most strongly influences whether or not it is perceived as friendly. These results indicate that it is potentially useful to incorporate our concept into a robot system to meet individual needs.


2018 ◽  
Vol 161 ◽  
pp. 01001
Author(s):  
Karsten Berns ◽  
Zuhair Zafar

Human-machine interaction is a major challenge in the development of complex humanoid robots. In addition to verbal communication, the use of non-verbal cues such as hand, arm, and body gestures or facial expressions can improve the understanding of the robot's intention. Conversely, by perceiving such cues from a human in a typical interaction scenario, the humanoid robot can better adapt its interaction skills. In this work, the perception systems of two social robots, ROMAN and ROBIN of the RRLAB of the TU Kaiserslautern, are presented in the context of human-robot interaction.


2014 ◽  
Vol 11 (04) ◽  
pp. 1442005 ◽  
Author(s):  
Youngho Lee ◽  
Young Jae Ryoo ◽  
Jongmyung Choi

With the development of computing technology, robots have become part of our daily life. Human–robot interaction is not restricted to direct communication between the two; the communication can also include various forms of human-to-human interaction. In this paper, we present a framework for enhancing the interaction among humans, robots, and environments. The proposed framework is composed of a robot part, a user part, and the DigiLog space. To evaluate the proposed framework, we applied it to a real-time remote robot-control platform in the smart DigiLog space, implementing real-time control and monitoring of a robot by using one smartphone as the robot's brain and another smartphone as the remote controller.
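
The smartphone-as-brain/smartphone-as-controller split could be prototyped with a plain TCP command channel, as in the hedged sketch below. The protocol, command names, and port are invented for illustration and are not the DigiLog implementation.

```python
# Hedged sketch: one device acts as the robot "brain" (server) executing
# motion commands, another as the remote controller (client).
import socket
import threading
import time

def robot_brain(host="0.0.0.0", port=9000):
    """Runs on the robot-side phone: receive commands, drive the actuators."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            for line in conn.makefile():
                cmd = line.strip()          # e.g. "FORWARD 0.5" or "TURN 30"
                print(f"executing: {cmd}")  # placeholder for actuator calls
                if cmd == "STOP":
                    break

def controller(host, port=9000):
    """Runs on the controller phone: send motion commands over the network."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((host, port))
        for cmd in ["FORWARD 0.5", "TURN 30", "STOP"]:
            s.sendall((cmd + "\n").encode())

threading.Thread(target=robot_brain, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening
controller("127.0.0.1")
```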


2021 ◽  
Vol 11 (5) ◽  
pp. 2358
Author(s):  
Mitsuhiko Kimoto ◽  
Takamasa Iio ◽  
Masahiro Shiomi ◽  
Katsunori Shimohara

This study proposes a robot conversation strategy involving speech and gestures to improve a robot's indicated object recognition, i.e., the recognition of an object indicated by a human. Research on improving the performance of indicated object recognition falls into two main approaches: the development approach and the interactive approach. The development approach addresses the development of new devices or algorithms. The interactive approach improves performance through human–robot interaction by decreasing the variability and ambiguity of the references. Inspired by the findings of entrainment and entrainment inhibition, this study proposes a robot conversation strategy that follows the interactive approach. While entrainment is a phenomenon in which people unconsciously tend to mimic the words and/or gestures of their interlocutor, entrainment inhibition is the opposite phenomenon, in which people reduce the amount of information contained in their words and gestures when their interlocutor provides excess information. Based on these phenomena, we designed a robot conversation strategy that elicits clear references. We experimentally compared this strategy with an alternative interactive strategy in which the robot explicitly requests clarification when a human refers to an object. We obtained the following findings: (1) the proposed strategy clarifies human references and improves indicated object recognition performance, and (2) the proposed strategy creates better impressions than the strategy that explicitly requests clarification.
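
One way to operationalize the entrainment idea, as a hedged sketch: the robot phrases its own references with just enough attribute detail to disambiguate, plus a little extra to entrain the human toward specific references, while avoiding the overload that triggers entrainment inhibition. The object set, attribute ordering, and detail heuristic are illustrative assumptions, not the paper's strategy.

```python
# Hedged sketch: choose the robot's reference wording so that it models
# moderate specificity (entrainment) without information overload.
OBJECTS = {
    "cup_1": {"type": "cup", "color": "red", "size": "small", "position": "left"},
    "cup_2": {"type": "cup", "color": "blue", "size": "small", "position": "right"},
}

def describe(obj_id, detail_level):
    """Build a reference using the first `detail_level` attributes (naive phrasing)."""
    attrs = OBJECTS[obj_id]
    order = ["color", "size", "position"]
    parts = [attrs[a] for a in order[:detail_level]]
    return f"the {' '.join(parts)} {attrs['type']}"

def robot_reference(obj_id):
    # Find the fewest attributes that uniquely identify the object, then add
    # one more to nudge the human toward similarly specific references.
    for level in range(1, 4):
        desc = describe(obj_id, level)
        if all(describe(o, level) != desc for o in OBJECTS if o != obj_id):
            return describe(obj_id, min(level + 1, 3))
    return describe(obj_id, 3)

print(robot_reference("cup_1"))  # "the red small cup"
```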


Author(s):  
Ryosuke Tanaka ◽  
Jinseok Woo ◽  
Naoyuki Kubota

Research and development of robot partners has been actively conducted to support human daily life. Human-robot interaction is an important research field in which verbal and non-verbal communication are essential elements for improving the interactions between humans and robots. The purpose of this research was therefore to establish a method for adapting a human-robot interaction mechanism for robot partners to various situations. In the proposed system, the robot analyzes human gestures in order to interact with people. Humans are able to interact according to dynamically changing environmental conditions; likewise, when a robot interacts with a human, it must judge the situation correctly from the human's gestures and respond appropriately to achieve natural human-robot interaction. In this paper, we propose a constructive methodology for a system that provides the non-verbal communication elements needed for human-robot interaction. The proposed method was validated through a series of experiments.
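
As a hedged sketch of the gesture-to-response step, the following maps pose-estimator keypoints to a coarse gesture label and then to a situation-dependent robot reaction. The keypoint layout, thresholds, and reaction table are illustrative assumptions, not the authors' recognizer.

```python
# Hedged sketch: rule-based gesture labeling from skeletal keypoints,
# followed by a simple situation-dependent response lookup.
from typing import Dict, Tuple

Point = Tuple[float, float]  # (x, y) in image coordinates, y grows downward

def classify_gesture(kp: Dict[str, Point]) -> str:
    head, r_sh, r_wr = kp["head"], kp["r_shoulder"], kp["r_wrist"]
    if r_wr[1] < head[1]:
        return "hand_raised"          # wrist above the head
    if abs(r_wr[0] - r_sh[0]) > 0.3 and abs(r_wr[1] - r_sh[1]) < 0.1:
        return "pointing_side"        # arm extended horizontally
    return "neutral"

def robot_reaction(gesture: str) -> str:
    # Situation-dependent response selection, as argued above.
    return {"hand_raised": "greet the user",
            "pointing_side": "look toward the indicated direction",
            "neutral": "keep idle behavior"}[gesture]

pose = {"head": (0.5, 0.2), "r_shoulder": (0.6, 0.4), "r_wrist": (0.55, 0.1)}
print(robot_reaction(classify_gesture(pose)))  # greet the user
```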


AI & Society ◽  
2020 ◽  
Vol 35 (4) ◽  
pp. 885-893
Author(s):  
Daniel W. Tigard ◽  
Niël H. Conradie ◽  
Saskia K. Nagel

Robotic and artificially intelligent (AI) systems are becoming prevalent in our day-to-day lives. As human interaction is increasingly replaced by human–computer and human–robot interaction (HCI and HRI), we occasionally speak and act as though we are blaming or praising various technological devices. While such responses may arise naturally, they are still unusual. Indeed, for some authors, it is the programmers or users—and not the system itself—that we properly hold responsible in these cases. Furthermore, some argue that since directing blame or praise at technology itself is unfitting, designing systems in ways that encourage such practices can only exacerbate the problem. On the other hand, there may be good moral reasons to continue engaging in our natural practices, even in cases involving AI systems or robots. In particular, daily interactions with technology may stand to impact the development of our moral practices in human-to-human interactions. In this paper, we put forward an empirically grounded argument in favor of some technologies being designed for social responsiveness. Although our usual practices will likely undergo adjustments in response to innovative technologies, some systems which we encounter can be designed to accommodate our natural moral responses. In short, fostering HCI and HRI that sustains and promotes our natural moral practices calls for a co-developmental process with some AI and robotic technologies.


2022 ◽  
Author(s):  
Bin Li ◽  
Hanjun Deng

Generating personalized responses is one of the major challenges in natural human-robot interaction. Current research in this field mainly focuses on generating responses consistent with the robot's pre-assigned persona, while ignoring the user's persona. Such responses may be inappropriate or even offensive, leading to a bad user experience. We therefore propose a Bilateral Personalized Dialogue Generation (BPDG) method for dyadic conversation, which integrates both the user's and the robot's personas into dialogue generation through a dynamic persona-aware fusion method. To bridge the gap between the learning objective and the evaluation metrics, the Conditional Mutual Information Maximum (CMIM) criterion is adopted with contrastive learning to select the proper response from the generated candidates. Moreover, a bilateral persona accuracy metric is designed to measure the degree of bilateral personalization. Experimental results demonstrate that, compared with several state-of-the-art methods, the responses produced by the proposed method are more personalized and more consistent with the bilateral personas in terms of both automatic and manual evaluations.
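
The selection step could look roughly like the following sketch: candidates are reranked by a mutual-information-style contrastive score that rewards dependence on the context and personas and penalizes generically likely replies. The scoring function and the log-probability numbers are stand-ins; the paper's CMIM criterion and models differ in detail.

```python
# Hedged sketch of contrastive candidate selection: prefer responses whose
# likelihood rises sharply once the context and both personas are given.
def score_candidate(logp_given_context: float, logp_unconditional: float,
                    lam: float = 0.5) -> float:
    # Higher when the response depends on the context and personas rather
    # than being a bland, high-frequency reply.
    return logp_given_context - lam * logp_unconditional

candidates = [
    # (response, log p(r | context, personas), log p(r)) -- illustrative numbers
    ("I love hiking too, just like you!", -8.0, -16.0),
    ("That's nice.", -6.0, -3.0),  # fluent but generic
    ("My persona prefers reading, but hiking sounds fun.", -9.0, -15.0),
]

best = max(candidates, key=lambda c: score_candidate(c[1], c[2]))
print(best[0])  # the persona-grounded response beats the generic one
```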


2021 ◽  
pp. 229-238
Author(s):  
Aurelie Clodic ◽  
Rachid Alami

Joint action in the sphere of human–human interrelations may be a model for human–robot interaction. Human–human interrelations are only possible when several prerequisites are met, inter alia: (1) each agent has a representation within itself of its distinction from the other, so that their respective tasks can be coordinated; (2) each agent attends to the same object, is aware of that fact, and the two sets of "attentions" are causally connected; and (3) each agent understands the other's action as intentional. The authors explain how human–robot interaction can benefit from the same threefold pattern. In this context, two key problems emerge. First, how can a robot be programmed to recognize its distinction from a human subject in the same space, to detect when a human agent is attending to something, to produce signals that exhibit its internal state, and to make decisions about the goal-directedness of the other's actions such that the appropriate predictions can be made? Second, what must humans learn about robots so that they are able to interact reliably with them in view of a shared goal? This dual process is examined here by reference to the laboratory case of a human and a robot teaming up to build a stack of four blocks.

