Use and usability of software verification methods to detect behaviour interference when teaching an assistive home companion robot: A proof-of-concept study

2021 ◽  
Vol 12 (1) ◽  
pp. 402-422
Author(s):  
Kheng Lee Koay ◽  
Matt Webster ◽  
Clare Dixon ◽  
Paul Gainer ◽  
Dag Syrdal ◽  
...  

Abstract: When studying the use of assistive robots in home environments, and especially how such robots can be personalised to meet the needs of the resident, key concerns are issues related to behaviour verification, behaviour interference and safety. Here, personalisation refers to the teaching of new robot behaviours by both technical and non-technical end users. In this article, we consider the issue of behaviour interference: situations where newly taught robot behaviours may affect, or be affected by, existing behaviours, so that some behaviours may never be executed. We focus in particular on how such situations can be detected and presented to the user. We describe the human–robot behaviour teaching system that we developed, as well as the formal behaviour checking methods used. The online use of behaviour checking, based on static analysis of behaviours during the operation of the robot, is demonstrated and evaluated in a user study. We conducted a proof-of-concept human–robot interaction study with an autonomous, multi-purpose robot operating within a smart home environment. Twenty participants individually taught the robot behaviours according to instructions they were given, some of which caused interference with other behaviours. A mechanism for detecting behaviour interference provided feedback to participants and suggestions on how to resolve those conflicts. We assessed the participants’ views on detected interference as reported by the behaviour teaching system. Results indicate that interference warnings given to participants during teaching prompted an understanding of the issue. We did not find a significant influence of participants’ technical background. These results highlight a promising path towards verification and validation of assistive home companion robots that allow end-user personalisation.
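As a minimal illustration of the kind of static interference check the abstract describes (a sketch, not the authors' actual system), taught behaviours can be modelled as trigger-condition/priority pairs: a newly taught behaviour is shadowed, and will never run, if some higher-priority behaviour fires on a subset of its trigger conditions. The behaviour names and priority scheme below are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Behaviour:
    name: str
    trigger: frozenset  # sensor conditions that must all hold to fire
    priority: int       # higher value wins when several triggers hold

def find_interference(behaviours, new):
    """Return existing behaviours that would always pre-empt `new`.

    If an existing behaviour needs only a subset of `new`'s trigger
    conditions and has higher priority, it fires whenever `new` would,
    so `new` can never execute (a static shadowing check).
    """
    return [b for b in behaviours
            if b.priority > new.priority and b.trigger <= new.trigger]

existing = [
    Behaviour("remind-medication", frozenset({"time=20:00"}), priority=5),
]
new = Behaviour("dim-lights",
                frozenset({"time=20:00", "user=sofa"}), priority=3)

conflicts = find_interference(existing, new)
for b in conflicts:
    print(f"Warning: '{new.name}' is shadowed by '{b.name}'")
```

A teaching interface could surface each detected conflict as a warning, together with a suggestion such as raising the new behaviour's priority or narrowing the existing trigger.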

2021 ◽  
Vol 12 ◽  
Author(s):  
Gregoire Pointeau ◽  
Solène Mirliaz ◽  
Anne-Laure Mealier ◽  
Peter Ford Dominey

How do people learn to talk about the causal and temporal relations between events, and the motivation behind why people do what they do? The narrative practice hypothesis of Hutto and Gallagher holds that children are exposed to narratives that provide training for understanding and expressing reasons for why people behave as they do. In this context, we have recently developed a model of narrative processing where a structured model of the developing situation (the situation model) is built up from experienced events, and enriched by sentences in a narrative that describe event meanings. The main interest is to develop a proof of concept for how narrative can be used to structure, organize and describe experience. Narrative sentences describe events, and they also define temporal and causal relations between events. These relations are specified by a class of narrative function words, including “because”, “before”, “after”, “first” and “finally”. The current research develops a proof of concept that, by observing how people describe social events, a developmental robotic system can begin to acquire early knowledge of how to explain the reasons for events. We collect data from naïve subjects who use narrative function words to describe simple scenes of human-robot interaction, and then employ algorithms for extracting the statistical structure of how narrative function words link events in the situation model. By using these statistical regularities, the robot can thus learn from human experience how to properly employ these function words in question-answering dialogues with the human, and in generating canonical narratives for new experiences. The behavior of the system is demonstrated over several behavioral interactions, and associated narrative interaction sessions, while a more formal extended evaluation and user study will be the subject of future research.
Clearly this is far removed from the power of the full-blown narrative practice capability, but it provides a first step in the development of an experimental infrastructure for the study of socially situated narrative practice in human-robot interaction.
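The statistical idea can be sketched in a few lines (an illustrative toy, not the authors' implementation): tally which narrative function word subjects use to link each pair of event types, then reuse the most frequent word when generating a canonical narrative for a new experience. The event labels and the tiny corpus below are invented:

```python
from collections import Counter, defaultdict

# Invented corpus: (first event, second event, linking function word)
# as might be extracted from subjects' descriptions of HRI scenes.
corpus = [
    ("robot_gives_toy", "child_smiles", "because"),
    ("robot_gives_toy", "child_smiles", "because"),
    ("robot_gives_toy", "child_smiles", "so"),
    ("robot_waves",     "child_waves",  "after"),
]

# Conditional counts: for each event pair, how often each word links them.
link_stats = defaultdict(Counter)
for ev1, ev2, word in corpus:
    link_stats[(ev1, ev2)][word] += 1

def narrate(ev1, ev2):
    """Generate a canonical sentence for an event pair, if one was observed."""
    counts = link_stats.get((ev1, ev2))
    if not counts:
        return None  # no statistical evidence for this pair
    word, _ = counts.most_common(1)[0]
    return f"{ev2} {word} {ev1}"

print(narrate("robot_gives_toy", "child_smiles"))
```

The same conditional counts could answer "why" questions in dialogue by looking up which event is most often linked to the queried one via "because".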


2021 ◽  
Author(s):  
Lauren Dwyer

Anxiety has a lifetime prevalence of 31% among Canadians (Katzman et al., 2014). In Canada, psychological services are only covered by provincial health insurance if the psychologist is employed in the public sector; this means long wait times in the public system or expensive private coverage (Canadian Psychological Association). Currently, social robots and Socially Assistive Robots (SARs) are used in the treatment of elderly individuals in nursing homes, as well as children with autism (Feil-Seifer & Matarić, 2011; Tapus et al., 2012). The following MRP is the first step in a long-term project that will contend with the issues faced by individuals with anxiety, using a combined communications, social robotics, and mental health approach to develop an anxiety-specific socially assistive robot companion. The focus of this MRP is the development of a communication model that includes three core aspects of a social robot companion: Human-Robot Interaction (HRI), anxiety disorders, and technical design. The model I am developing will consist of a series of suggestions for the robot that could be implemented in a long-term study. The model will include suggestions on the design, communication means, and technical requirements, as well as a model for evaluating the robot from a Human-Robot Interaction perspective. This will be done through an evaluation of three robots: Sphero’s BB-8 App-Enabled Droid, Aldebaran’s Nao, and the Spin Master Zoomer robot. Evaluation measures include modified versions of Shneiderman’s (1992) evaluation of human-factors goals, Feil-Seifer et al.’s (2007) SAR evaluative questions, prompts for the description of both the communication methods and the physical characteristics, and a record of the emotional response of the user when interacting with the robot.


Information ◽  
2020 ◽  
Vol 11 (2) ◽  
pp. 112
Author(s):  
Marit Hagens ◽  
Serge Thill

Perfect information about an environment allows a robot to plan its actions optimally, but often requires significant investments into sensors and possibly infrastructure. In applications relevant to human–robot interaction, the environment is by definition dynamic, and events close to the robot may be more relevant than distal ones. This suggests a non-trivial relationship between sensory sophistication on the one hand and task performance on the other. In this paper, we investigate this relationship in a simulated crowd navigation task. We use three different environments with unique characteristics that a crowd-navigating robot might encounter, and explore how the robot’s sensor range correlates with performance in the navigation task. We find diminishing returns of increased range in our particular case, suggesting that task performance and sensory sophistication might follow non-trivial relationships and that increased sophistication on the sensor side does not necessarily yield a corresponding increase in performance. Although this result is a simple proof of concept, it illustrates the benefit of exploring the consequences of different hardware designs—rather than merely algorithmic choices—in simulation first. We also find surprisingly good performance in the navigation task, including a low number of collisions with simulated human agents, using a relatively simple A*/NavMesh-based navigation strategy, which suggests that navigation strategies for robots in crowds need not always be sophisticated.
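The sensor-range/performance relationship can be caricatured in a few lines (an illustrative sketch, not the authors' simulator): a robot on a small grid only knows obstacles within its sensor range, replans with A* after every step, and travelled path length is compared across ranges. The grid, ranges and start/goal positions below are invented:

```python
import heapq

GRID = ["..........",
        "..####....",
        "..........",
        "....####..",
        ".........."]
OBST = {(r, c) for r, row in enumerate(GRID)
        for c, ch in enumerate(row) if ch == "#"}
ROWS, COLS = len(GRID), len(GRID[0])

def astar(start, goal, known_obst):
    """Shortest 4-connected path avoiding currently known obstacles."""
    frontier = [(0, start)]
    came, cost = {start: None}, {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        r, c = cur
        for nb in [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]:
            if 0 <= nb[0] < ROWS and 0 <= nb[1] < COLS and nb not in known_obst:
                nc = cost[cur] + 1
                if nc < cost.get(nb, 10**9):
                    cost[nb], came[nb] = nc, cur
                    h = abs(nb[0] - goal[0]) + abs(nb[1] - goal[1])  # Manhattan
                    heapq.heappush(frontier, (nc + h, nb))
    return None

def run(sensor_range, start=(0, 0), goal=(4, 9)):
    """Steps taken to reach the goal, sensing and replanning each step."""
    pos, steps, known = start, 0, set()
    while pos != goal and steps < 200:
        known |= {o for o in OBST
                  if abs(o[0] - pos[0]) + abs(o[1] - pos[1]) <= sensor_range}
        path = astar(pos, goal, known)
        if not path or len(path) < 2:
            return None
        pos = path[1]  # take one step, then re-sense and replan
        steps += 1
    return steps

for rng in (1, 3, 6, 12):
    print(f"sensor range {rng:2d}: {run(rng)} steps")
```

Beyond the range at which all relevant obstacles are seen early enough to avoid detours, extra range cannot shorten the path further, which is the diminishing-returns pattern the abstract reports.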


2021 ◽  
Vol 5 (11) ◽  
pp. 71
Author(s):  
Ela Liberman-Pincu ◽  
Amit David ◽  
Vardit Sarne-Fleischmann ◽  
Yael Edan ◽  
Tal Oron-Gilad

This study examines the effect of a COVID-19 Officer Robot (COR) on passersby’s compliance, and the effects of minor design manipulations on human–robot interaction. A robotic application was developed to ensure that participants entering a public building comply with COVID-19 restrictions requiring a green pass and a face mask. The participants’ attitudes toward the robot and their perception of its authoritativeness were explored using video and questionnaire data. Thematic analysis was used to define unique behaviors related to human–COR interaction. Direct and extended interactions with minor design manipulations of the COR were evaluated in a public scenario setting. The results demonstrate that even minor design manipulations may influence users’ attitudes toward officer robots. The outcomes of this research can support manufacturers in rapidly adjusting their robots to new domains and tasks, and guide future designs of authoritative socially assistive robots (SARs).


2019 ◽  
Vol 374 (1771) ◽  
pp. 20180433 ◽  
Author(s):  
Emily C. Collins

This opinion paper discusses how human–robot interaction (HRI) methodologies can be robustly developed by drawing on insights from fields outside of HRI that explore human–other interactions. The paper presents a framework that draws parallels between HRIs, and human–human, human–animal and human–object interaction literature, by considering the morphology and use of a robot to aid the development of robust HRI methodologies. The paper then briefly presents some novel empirical work as proof of concept to exemplify how the framework can help researchers define the mechanism of effect taking place within specific HRIs. The empirical work draws on known mechanisms of effect in animal-assisted therapy, and behavioural observations of touch patterns and their relation to individual differences in caring and attachment styles, and details how this trans-disciplinary approach to HRI methodology development was used to explore how an interaction with an animal-like robot was impacting a user. In doing so, this opinion piece outlines how useful objective, psychological measures of social cognition can be for deepening our understanding of HRI, and developing richer HRI methodologies, which take us away from questions that simply ask ‘Is this a good robot?’, and closer towards questions that ask ‘What mechanism of effect is occurring here, through which effective HRI is being performed?’ This paper further proposes that in using trans-disciplinary methodologies, experimental HRI can also be used to study human social cognition in and of itself. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.


Author(s):  
Wendy A. Rogers ◽  
Travis Kadylak ◽  
Megan A. Bayles

Objective: We reviewed human–robot interaction (HRI) participatory design (PD) research with older adults. The goal was to identify methods used, determine their value for design of robots with older adults, and provide guidance for best practices.
Background: Assistive robots may promote aging-in-place and quality of life for older adults. However, the robots must be designed to meet older adults’ specific needs and preferences. PD and other user-centered methods may be used to engage older adults in the robot development process to accommodate their needs and preferences and to assure usability of emergent assistive robots.
Method: This targeted review of HRI PD studies with older adults draws on a detailed review of 26 articles. Our assessment focused on the HRI methods and their utility for use with older adults who have a range of needs and capabilities.
Results: Our review highlighted the importance of using mixed methods and including multiple stakeholders throughout the design process. These approaches can encourage mutual learning (to improve design by developers and to increase acceptance by users). We identified key phases used in HRI PD workshops (e.g., initial interview phase, series of focus groups phase, and presentation phase). These approaches can provide inspiration for future efforts.
Conclusion: HRI PD strategies can support designers in developing assistive robots that meet older adults’ needs, capabilities, and preferences to promote acceptance. More HRI research is needed to understand potential implications for aging-in-place. PD methods provide a promising approach.


Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3033
Author(s):  
Soheil Keshmiri ◽  
Masahiro Shiomi ◽  
Hidenobu Sumioka ◽  
Takashi Minato ◽  
Hiroshi Ishiguro

Touch plays a crucial role in humans’ nonverbal social and affective communication. It is therefore no surprise that considerable effort has been placed on devising methodologies for automated touch classification. Such an ability allows, for instance, for the use of smart touch sensors in real-life application domains such as socially assistive robots and embodied telecommunication. Indeed, the touch classification literature reports steady progress. However, these results are limited in two important ways. First, they are mostly based on the overall (i.e., average) accuracy of different classifiers. As a result, they fall short of providing insight into the performance of these approaches for different types of touch. Second, they do not consider the same type of touch at different levels of strength (e.g., gentle versus strong touch). This is certainly an important factor that deserves investigation, since the intensity of a touch can utterly transform its meaning (e.g., from an affectionate gesture to a sign of punishment). The current study provides a preliminary investigation of these shortcomings by considering the accuracy of a number of classifiers for both within-touch (i.e., the same type of touch with differing strengths) and between-touch (i.e., different types of touch) classification. Our results help verify the strengths and shortcomings of different machine learning algorithms for touch classification. They also highlight some of the challenges whose solutions can pave the way for the integration of touch sensors in application domains such as human–robot interaction (HRI).
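The reporting point can be made concrete with a toy sketch (synthetic data and a simple nearest-centroid classifier, not the authors' sensor set-up or models): report per-class accuracy rather than a single overall score, with a "gentle" and a "strong" variant of the same gesture among the classes. All feature names and distribution parameters below are invented:

```python
import random
random.seed(0)

def sample(pressure, duration, n=40):
    """Synthetic touch samples: (mean pressure, contact duration)."""
    return [(random.gauss(pressure, 0.4), random.gauss(duration, 0.4))
            for _ in range(n)]

data = {
    "gentle_stroke": sample(1.0, 2.0),
    "strong_stroke": sample(2.0, 2.0),  # same gesture, higher intensity
    "poke":          sample(2.0, 0.3),
}

train = {k: v[:30] for k, v in data.items()}
test  = {k: v[30:] for k, v in data.items()}

# One centroid per touch class, from the training split.
centroids = {k: (sum(p for p, _ in v) / len(v),
                 sum(d for _, d in v) / len(v))
             for k, v in train.items()}

def classify(x):
    """Assign the class whose centroid is nearest (squared Euclidean)."""
    return min(centroids,
               key=lambda k: (x[0] - centroids[k][0]) ** 2
                           + (x[1] - centroids[k][1]) ** 2)

# Per-class accuracy exposes which touch types (and intensities) confuse
# the classifier, which a single averaged score would hide.
per_class = {}
for label, samples in test.items():
    correct = sum(classify(x) == label for x in samples)
    per_class[label] = correct / len(samples)
    print(f"{label:14s} accuracy: {per_class[label]:.2f}")

overall = sum(per_class.values()) / len(per_class)
print(f"overall: {overall:.2f}")
```

Here the within-touch confusion (gentle versus strong stroke, separated only by pressure) and the between-touch confusion (strong stroke versus poke, separated only by duration) show up in different per-class scores, while the overall average blurs them together.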

