A narrative approach to human-robot interaction prototyping for companion robots

2020, Vol 11 (1), pp. 66-85
Author(s): Kheng Lee Koay, Dag Sverre Syrdal, Kerstin Dautenhahn, Michael L. Walters

Abstract This paper presents a proof-of-concept prototype study for domestic home robot companions, using a narrative-based methodology grounded in the principles of immersive engagement and fictional enquiry. Scenarios were inter-connected through a coherent narrative arc to encourage participant immersion within a realistic setting. The aim was to ground human interactions with this technology in a coherent, meaningful experience. Nine participants interacted with a robotic agent in a smart home environment twice a week over a month, with each interaction framed within a greater narrative arc. Participant responses, both to the scenarios and to the robotic agents used within them, are discussed, suggesting that the prototyping methodology was successful in conveying a meaningful interaction experience.

Author(s): Farshid Amirabdollahian, Rieks op den Akker, Sandra Bedaf, Richard Bormann, Heather Draper, ...

Abstract A new stream of research and development responds to changes in life expectancy across the world. It includes technologies which enhance the well-being of individuals, specifically older people. The ACCOMPANY project focuses on home companion technologies and issues surrounding technology development for assistive purposes. The project addresses some overlooked aspects of technology design, divided into multiple areas such as empathic and social human-robot interaction, robot learning and memory visualisation, and monitoring of persons' activities at home. To bring these aspects together, a dedicated task ensures technological integration of these multiple approaches on an existing robotic platform, Care-O-Bot® 3, in the context of a smart-home environment utilising a multitude of sensor arrays. Formative and summative evaluation cycles are then used to assess the emerging prototype, identifying acceptable behaviours and roles for the robot (for example, as a butler or a trainer) while also comparing user requirements to achieved progress. In a novel approach, the project considers ethical concerns: by highlighting principles such as autonomy, independence, enablement, safety and privacy, it provides a discussion medium in which user views on these principles, and the tensions between some of them (for example, between privacy and autonomy on the one hand and safety on the other), can be captured and considered in design cycles and throughout project developments.


2018, Vol 9 (1), pp. 221-234
Author(s): João Avelino, Tiago Paulino, Carlos Cardoso, Ricardo Nunes, Plinio Moreno, ...

Abstract Handshaking is a fundamental part of human physical interaction that is transversal to various cultural backgrounds. It is also a very challenging task in the field of Physical Human-Robot Interaction (pHRI), requiring compliant force control in order to plan the arm's motion and to achieve a confident, yet pleasant, grasp of the human user's hand. In this paper, we focus on the study of hand grip strength for comfortable handshakes and perform three sets of physical interaction experiments, with twenty human subjects in the first experiment, thirty-five in the second, and thirty-eight in the third. Tests are made with a social robot whose hands are instrumented with tactile sensors that provide skin-like sensation. From these experiments, we: (i) learn the preferred grip closure for each user group; (ii) analyse the tactile feedback provided by the sensors for each closure; (iii) develop and evaluate a hand grip controller based on the previous data. In addition to the robot-human interactions, we also study the robot's handshake interactions with inanimate objects, in order to detect whether it is shaking hands with a human or an inanimate object. This work adds physical human-robot interaction to the repertoire of social skills of our robot, fulfilling a demand previously identified by many users of the robot.


2007, Vol 8 (1), pp. 53-81
Author(s): Luís Seabra Lopes, Aneesh Chauhan

This paper addresses word learning for human–robot interaction. The focus is on making a robotic agent aware of its surroundings by having it learn the names of the objects it can find. The human user, acting as instructor, helps the robotic agent ground the words used to refer to those objects. A lifelong learning system based on one-class learning (OCLL) was developed. The system is incremental and evolves with the presentation of each new word, which acts as a class to the robot, relying on instructor feedback. A novel experimental evaluation methodology that takes into account the open-ended nature of word learning is proposed and applied. This methodology is based on the realization that a robot's vocabulary will be limited by its discriminatory capacity, which in turn depends on its sensors and perceptual capabilities. The results indicate that the robot's representations are capable of evolving incrementally by correcting class descriptions, based on instructor feedback on classification results. In successive experiments, the robot was able to learn between 6 and 12 names of real-world office objects. Although these results are comparable to those obtained by other authors, there is a need to scale up. The limitations of the method are discussed and potential directions for improvement are pointed out.


2021, Vol 12 (1), pp. 402-422
Author(s): Kheng Lee Koay, Matt Webster, Clare Dixon, Paul Gainer, Dag Syrdal, ...

Abstract When studying the use of assistive robots in home environments, and especially how such robots can be personalised to meet the needs of the resident, key concerns are issues related to behaviour verification, behaviour interference and safety. Here, personalisation refers to the teaching of new robot behaviours by both technical and non-technical end users. In this article, we consider the issue of behaviour interference, caused by situations where newly taught robot behaviours may affect, or be affected by, existing behaviours and thus will not, or might not, ever be executed. We focus in particular on how such situations can be detected and presented to the user. We describe the human–robot behaviour teaching system that we developed as well as the formal behaviour checking methods used. The online use of behaviour checking, based on static analysis of behaviours during the operation of the robot, is demonstrated and evaluated in a user study. We conducted a proof-of-concept human–robot interaction study with an autonomous, multi-purpose robot operating within a smart home environment. Twenty participants individually taught the robot behaviours according to instructions they were given, some of which caused interference with other behaviours. A mechanism for detecting behaviour interference gave participants feedback and suggestions on how to resolve those conflicts. We assessed the participants' views on detected interference as reported by the behaviour teaching system. Results indicate that interference warnings given to participants during teaching provoked an understanding of the issue. We did not find a significant influence of participants' technical background. These results highlight a promising path towards verification and validation of assistive home companion robots that allow end-user personalisation.


Author(s): Tracy Sanders, Alexandra Kaplan, Ryan Koch, Michael Schwartz, P. A. Hancock

Objective: To understand the influence of trust on use choice in human-robot interaction via experimental investigation. Background: The general assumption that trusting a robot leads to using that robot has been previously identified, often by asking participants to choose between manually completing a task or using an automated aid. Our work further evaluates the relationship between trust and use choice and examines factors impacting choice. Method: An experiment was conducted wherein participants rated a robot on a trust scale, then made decisions about whether to use that robotic agent or a human agent to complete a task. Participants provided explicit reasoning for their choices. Results: While we found statistical support for the “trust leads to use” relationship, qualitative results indicate other factors are important as well. Conclusion: Results indicated that while trust leads to use, use is also heavily influenced by the specific task at hand. Users more often chose a robot for a dangerous task where loss of life is likely, citing safety as their primary concern. Conversely, users chose humans for the mundane warehouse task, mainly citing financial reasons, specifically fear of job and income loss for the human worker. Application: Understanding the factors driving use choice is key to appropriate interaction in the field of human-robot teaming.


2014, Vol 11 (04), pp. 1442005
Author(s): Youngho Lee, Young Jae Ryoo, Jongmyung Choi

With the development of computing technology, robots have become popular in our daily life. Human–robot interaction is not restricted to direct communication between the two; it can also involve various human-to-human interactions. In this paper, we present a framework for enhancing the interaction among humans, robots and environments. The proposed framework is composed of a robot part, a user part, and the DigiLog space. To evaluate the proposed framework, we applied it to a real-time remote robot-control platform in the smart DigiLog space. We implement real-time control and monitoring of a robot by using one smartphone as the robot's brain and another smartphone as the remote controller.


2019
Author(s): Cecilia Roselli, Francesca Ciardo, Agnieszka Wykowska

In the near future, robots will become a fundamental part of our daily life; therefore, it appears crucial to investigate how they can successfully interact with humans. Since several studies have already pointed out that a robotic agent can influence humans' cognitive mechanisms, such as decision-making and joint attention, we focus on the Sense of Agency (SoA). To this aim, we employed the Intentional Binding (IB) task to implicitly assess SoA in human-robot interaction (HRI). Participants were asked to perform an IB task alone (Individual condition) or with the Cozmo robot (Social condition). In the Social condition, participants were free to decide whether they wanted to let Cozmo press. Results showed that participants performed the action significantly more often than Cozmo. Moreover, participants were more precise in reporting the occurrence of a self-made action when Cozmo was also in charge of performing the task. However, this improvement in evaluating self-performance corresponded to a reduction in SoA. In conclusion, the present study highlights the double effect of robots as social companions: the social presence of the robot leads to a better evaluation of self-generated actions and, at the same time, to a reduction of SoA.


2007, Vol 8 (3), pp. 411-422
Author(s): Billy Lee

Studies of human–human interactions indicate that relational dimensions, which are largely nonverbal, include intimacy/involvement, status/control, and emotional valence. This paper devises codes from a study of couples and strangers which may be behavior-mapped on to next generation android bodies. The codes provide act specifications for a possible benchmark of nonverbal intimacy in human–robot interaction. The appropriateness of emotionally intimate behaviors for androids is considered. The design and utility of the android counselor/psychotherapist is explored, whose body is equipped with semi-autonomous visceral and behavioral capacities for ‘doing intimacy.’


2013, Vol 10 (01), pp. 1350010
Author(s): Ginevra Castellano, Iolanda Leite, André Pereira, Carlos Martinho, Ana Paiva, ...

Affect recognition for socially perceptive robots relies on representative data. While many of the existing affective corpora and databases contain posed and decontextualized affective expressions, affect resources for designing an affect recognition system in naturalistic human–robot interaction (HRI) must include context-rich expressions that emerge in the same scenario of the final application. In this paper, we propose a context-based approach to the collection and modeling of representative data for building an affect-sensitive robotic game companion. To illustrate our approach we present the key features of the Inter-ACT (INTEracting with Robots–Affect Context Task) corpus, an affective and contextually rich multimodal video corpus containing affective expressions of children playing chess with an iCat robot. We show how this corpus can be successfully used to train a context-sensitive affect recognition system (a valence detector) for a robotic game companion. Finally, we demonstrate how the integration of the affect recognition system in a modular platform for adaptive HRI makes the interaction with the robot more engaging.


AI & Society, 2020, Vol 35 (4), pp. 885-893
Author(s): Daniel W. Tigard, Niël H. Conradie, Saskia K. Nagel

Abstract Robotic and artificially intelligent (AI) systems are becoming prevalent in our day-to-day lives. As human interaction is increasingly replaced by human–computer and human–robot interaction (HCI and HRI), we occasionally speak and act as though we are blaming or praising various technological devices. While such responses may arise naturally, they are still unusual. Indeed, for some authors, it is the programmers or users—and not the system itself—that we properly hold responsible in these cases. Furthermore, some argue that since directing blame or praise at technology itself is unfitting, designing systems in ways that encourage such practices can only exacerbate the problem. On the other hand, there may be good moral reasons to continue engaging in our natural practices, even in cases involving AI systems or robots. In particular, daily interactions with technology may stand to impact the development of our moral practices in human-to-human interactions. In this paper, we put forward an empirically grounded argument in favor of some technologies being designed for social responsiveness. Although our usual practices will likely undergo adjustments in response to innovative technologies, some systems which we encounter can be designed to accommodate our natural moral responses. In short, fostering HCI and HRI that sustains and promotes our natural moral practices calls for a co-developmental process with some AI and robotic technologies.

