Nonverbal intimacy as a benchmark for human–robot interaction

2007 ◽  
Vol 8 (3) ◽  
pp. 411-422
Author(s):  
Billy Lee

Studies of human–human interactions indicate that relational dimensions, which are largely nonverbal, include intimacy/involvement, status/control, and emotional valence. This paper devises codes from a study of couples and strangers which may be behavior-mapped onto next-generation android bodies. The codes provide act specifications for a possible benchmark of nonverbal intimacy in human–robot interaction. The appropriateness of emotionally intimate behaviors for androids is considered, and the design and utility of an android counselor/psychotherapist, whose body is equipped with semi-autonomous visceral and behavioral capacities for ‘doing intimacy,’ is explored.

2018 ◽  
Vol 9 (1) ◽  
pp. 221-234 ◽  
Author(s):  
João Avelino ◽  
Tiago Paulino ◽  
Carlos Cardoso ◽  
Ricardo Nunes ◽  
Plinio Moreno ◽  
...  

Abstract Handshaking is a fundamental part of human physical interaction that cuts across cultural backgrounds. It is also a very challenging task in the field of Physical Human-Robot Interaction (pHRI), requiring compliant force control both to plan the arm’s motion and to achieve a confident yet pleasant grasp of the human user’s hand. In this paper, we focus on the hand grip strength needed for comfortable handshakes and perform three sets of physical interaction experiments, with twenty human subjects in the first experiment, thirty-five in the second, and thirty-eight in the third. Tests are made with a social robot whose hands are instrumented with tactile sensors that provide skin-like sensation. From these experiments, we: (i) learn the preferred grip closure for each user group; (ii) analyze the tactile feedback provided by the sensors for each closure; and (iii) develop and evaluate a hand grip controller based on these data. In addition to the robot-human interactions, we also analyze the robot’s handshakes with inanimate objects, in order to detect whether it is shaking hands with a human or with an inanimate object. This work adds physical human-robot interaction to the repertoire of social skills of our robot, fulfilling a demand previously identified by many of its users.
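The abstract does not give the controller’s form; as a rough illustration only, the sketch below shows what a proportional closure loop over averaged tactile readings might look like. Every name here (the `Hand` and `TactileSensor` stubs, the set-point, the gain) is a hypothetical placeholder, not the authors’ controller.

```python
import random
import time

# Everything below is an illustrative stand-in: the paper does not
# publish its controller, sensor API, set-point, or gains.

class TactileSensor:
    """Stub sensor returning a normalized pressure in [0, 1]."""
    def read(self):
        return random.random()

class Hand:
    """Stub hand with a normalized closure command in [0, 1]."""
    def __init__(self):
        self.closure = 0.0

TARGET_PRESSURE = 0.6   # assumed comfort set-point
GAIN = 0.05             # assumed proportional gain

def grip_step(hand, sensors):
    """One proportional step of closure toward the comfort set-point."""
    mean_p = sum(s.read() for s in sensors) / len(sensors)
    hand.closure = min(1.0, max(0.0, hand.closure + GAIN * (TARGET_PRESSURE - mean_p)))

def handshake(hand, sensors, duration_s=2.0, rate_hz=50):
    """Run the closure loop for the duration of the handshake."""
    for _ in range(int(duration_s * rate_hz)):
        grip_step(hand, sensors)
        time.sleep(1.0 / rate_hz)

if __name__ == "__main__":
    handshake(Hand(), [TactileSensor() for _ in range(5)])
```

Clamping the closure command keeps the stub safe regardless of sensor noise; a real controller would add compliance and safety limits in joint space.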


2014 ◽  
Vol 11 (04) ◽  
pp. 1442005 ◽  
Author(s):  
Youngho Lee ◽  
Young Jae Ryoo ◽  
Jongmyung Choi

With the development of computing technology, robots have become common in our daily life. Human–robot interaction is not restricted to direct communication between a person and a robot; it can also encompass various forms of human-to-human interaction. In this paper, we present a framework for enhancing the interaction among humans, robots, and environments. The proposed framework is composed of a robot part, a user part, and the DigiLog space. To evaluate the proposed framework, we applied it to a real-time remote robot-control platform in the smart DigiLog space, implementing real-time control and monitoring of a robot by using one smartphone as the robot’s brain and another smartphone as the remote controller.
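The abstract does not describe the platform’s wire protocol; purely as an assumed illustration, the following sketch shows one way the two smartphones could exchange commands, with the “robot brain” phone accepting newline-delimited JSON commands from the controller phone over TCP. The port, message schema, and function names are all invented for this example.

```python
import json
import socket

def serve_robot(host="0.0.0.0", port=9000):
    """Robot-brain side: accept JSON commands and dispatch them."""
    with socket.create_server((host, port)) as srv:
        conn, _ = srv.accept()
        with conn, conn.makefile("r") as stream:
            for line in stream:
                cmd = json.loads(line)   # e.g. {"action": "move", "speed": 0.2}
                print("executing", cmd)  # stand-in for motor dispatch

def send_command(action, host, port=9000, **params):
    """Controller side: send one command as a JSON line."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall((json.dumps({"action": action, **params}) + "\n").encode())
```

A line-delimited JSON channel like this keeps the monitoring direction symmetric: the robot phone can stream state back to the controller over the same connection.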


2020 ◽  
Vol 11 (1) ◽  
pp. 66-85
Author(s):  
Kheng Lee Koay ◽  
Dag Sverre Syrdal ◽  
Kerstin Dautenhahn ◽  
Michael L. Walters

Abstract This paper presents a proof-of-concept prototype study for domestic home robot companions, using a narrative-based methodology grounded in the principles of immersive engagement and fictional enquiry. Scenarios were inter-connected through a coherent narrative arc to encourage participant immersion within a realistic setting, with the aim of grounding human interactions with this technology in a coherent, meaningful experience. Nine participants interacted with a robotic agent in a smart home environment twice a week over a month, with each interaction framed within the greater narrative arc. Participant responses, both to the scenarios and to the robotic agents used within them, are discussed, suggesting that the prototyping methodology was successful in conveying a meaningful interaction experience.


AI & Society ◽  
2020 ◽  
Vol 35 (4) ◽  
pp. 885-893 ◽  
Author(s):  
Daniel W. Tigard ◽  
Niël H. Conradie ◽  
Saskia K. Nagel

Abstract Robotic and artificially intelligent (AI) systems are becoming prevalent in our day-to-day lives. As human interaction is increasingly replaced by human–computer and human–robot interaction (HCI and HRI), we occasionally speak and act as though we are blaming or praising various technological devices. While such responses may arise naturally, they are still unusual. Indeed, for some authors, it is the programmers or users—and not the system itself—that we properly hold responsible in these cases. Furthermore, some argue that since directing blame or praise at technology itself is unfitting, designing systems in ways that encourage such practices can only exacerbate the problem. On the other hand, there may be good moral reasons to continue engaging in our natural practices, even in cases involving AI systems or robots. In particular, daily interactions with technology may stand to impact the development of our moral practices in human-to-human interactions. In this paper, we put forward an empirically grounded argument in favor of some technologies being designed for social responsiveness. Although our usual practices will likely undergo adjustments in response to innovative technologies, some systems which we encounter can be designed to accommodate our natural moral responses. In short, fostering HCI and HRI that sustains and promotes our natural moral practices calls for a co-developmental process with some AI and robotic technologies.


2020 ◽  
Author(s):  
Bishakha Chaudhury ◽  
Ruud Hortensius ◽  
Martin Hoffmann ◽  
Emily S. Cross

As research examining human-robot interaction moves from the laboratory to the real world, studies seeking to examine how people interact with robots face the question of which robotic platform to employ to collect data in situ. To facilitate the study of a broad range of individuals, from children to clinical populations, across diverse environments, from homes to schools, a robust, reproducible, low-cost and easy-to-use robotic platform is needed. Here, we describe how a commercially available off-the-shelf robot, Cozmo, can be used to study embodied human-robot interactions in a wide variety of settings, including the user’s home. In this Tutorial, we describe the steps required to use this affordable and flexible platform for longitudinal human-robot interaction studies. First, we outline the technical specifications and requirements of the platform and its accessories. We present findings from validation work we performed to map the behavioural repertoire of the Cozmo robot, and introduce an accompanying interactive emotion classification tool for use with this robot. We then show how log files containing detailed data on the human-robot interaction can be collected and extracted. Finally, we detail the types of information that can be retrieved from these data. This low-cost robotic platform opens up a variety of valuable new possibilities for studying user-driven human-robot interactions, unconstrained in time and place, within and beyond the research laboratory.
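The abstract does not show the log format; assuming, for illustration only, a line-delimited JSON log with `timestamp` and `event` fields, a minimal extraction step might look like this. The field names and the example file name are hypothetical, not the tutorial’s actual schema.

```python
import json
from collections import Counter

def load_events(path):
    """Parse a line-delimited JSON interaction log into event dicts."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

def summarize(events):
    """Count event types and report the session's time span."""
    counts = Counter(e["event"] for e in events)
    start, end = events[0]["timestamp"], events[-1]["timestamp"]
    return {"n_events": len(events), "span": (start, end), "by_type": dict(counts)}

# Example (hypothetical file name):
# print(summarize(load_events("cozmo_session_01.log")))
```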


Author(s):  
Stephanie Lackey ◽  
Daniel Barber ◽  
Lauren Reinerman ◽  
Norman I. Badler ◽  
Irwin Hudson

2019 ◽  
Vol 39 (2-3) ◽  
pp. 233-249 ◽  
Author(s):  
Harold Soh ◽  
Yaqi Xie ◽  
Min Chen ◽  
David Hsu

Trust is essential in shaping human interactions with one another and with robots. In this article we investigate how human trust in robot capabilities transfers across multiple tasks. We present a human-subject study of two distinct task domains: a Fetch robot performing household tasks and a virtual reality simulation of an autonomous vehicle performing driving and parking maneuvers. The findings expand our understanding of trust and provide new predictive models of trust evolution and transfer via latent task representations: a rational Bayes model, a data-driven neural network model, and a hybrid model that combines the two. Experiments show that the proposed models outperform prevailing models when predicting trust over unseen tasks and users. These results suggest that (i) task-dependent functional trust models capture human trust in robot capabilities more accurately and (ii) trust transfer across tasks can be inferred to a good degree. The latter enables trust-mediated robot decision-making for fluent human–robot interaction in multi-task settings.
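The article’s rational Bayes model is not specified in the abstract; a textbook instantiation, shown here purely as an assumed illustration, tracks trust as a Beta posterior over the robot’s per-task success probability and transfers that evidence to a related task, discounted by task similarity.

```python
# Minimal sketch of a Bayesian trust update, assuming a Beta-Bernoulli
# model; this is a standard construction, not the model from the paper.

def update(alpha, beta, success):
    """Update Beta(alpha, beta) trust after observing one task outcome."""
    return (alpha + 1, beta) if success else (alpha, beta + 1)

def trust(alpha, beta):
    """Point estimate of trust: posterior mean success probability."""
    return alpha / (alpha + beta)

def transfer(alpha, beta, similarity):
    """Carry evidence to a related task, discounted by task similarity
    in [0, 1] (0 = unrelated, 1 = identical); a hypothetical scheme."""
    return (1 + similarity * (alpha - 1), 1 + similarity * (beta - 1))

# Example: trust after 3 successes and 1 failure, then moved to a
# moderately similar task, which pulls the estimate back to the prior.
a, b = 1.0, 1.0  # uniform prior
for outcome in [True, True, False, True]:
    a, b = update(a, b, outcome)
print(trust(a, b))                        # ~0.67
a2, b2 = transfer(a, b, similarity=0.5)
print(trust(a2, b2))                      # ~0.63, closer to the prior
```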

