Paladyn Journal of Behavioral Robotics
Latest Publications

Total documents: 246 (five years: 96)
H-index: 15 (five years: 5)

Published by De Gruyter Open Sp. z o.o.
ISSN: 2081-4836, 2080-9778

2021, Vol. 12(1), pp. 356-378
Author(s): Gabriella Cortellessa, Riccardo De Benedictis, Francesca Fracasso, Andrea Orlandini, Alessandro Umbrico, et al.

Abstract: This article is a retrospective overview of work performed in the domain of Active Assisted Living over a span of almost 18 years, during which the authors have been creating and refining artificial intelligence (AI) and robotics solutions to support older adults in maintaining their independence and improving their quality of life. The goal of this article is to identify strong features and general lessons learned from those experiences and to derive guidelines and new research directions for future deployments, drawing also on an analysis of similar research efforts. The work considers key points that have contributed to increasing the success of the innovative solutions, grounding them in known technology acceptance models. The analysis is presented from a threefold perspective: a Technological vision illustrates the characteristics that support systems need in order to operate in a real environment with continuity, robustness, and safety; a Socio-Health perspective highlights the role of experts in the socio-assistance domain in providing contextualized and personalized help based on people's actual needs; finally, a Human dimension takes into account the personal aspects that influence interaction with technology in long-term use. The article promotes the crucial role of AI and robotics in ensuring intelligent and situated assistive behaviours. Finally, considering that the produced solutions are socio-technical systems, the article suggests a transdisciplinary approach in which the relevant disciplines merge to form a complete, coordinated, and more informed vision of the problem.


2021, Vol. 12(1), pp. 402-422
Author(s): Kheng Lee Koay, Matt Webster, Clare Dixon, Paul Gainer, Dag Syrdal, et al.

Abstract: When studying the use of assistive robots in home environments, and especially how such robots can be personalised to meet the needs of the resident, key concerns are behaviour verification, behaviour interference, and safety. Here, personalisation refers to the teaching of new robot behaviours by both technical and non-technical end users. In this article, we consider behaviour interference: situations where newly taught robot behaviours may affect, or be affected by, existing behaviours, so that some behaviours might never be executed. We focus in particular on how such situations can be detected and presented to the user. We describe the human–robot behaviour teaching system that we developed as well as the formal behaviour checking methods used. The online use of behaviour checking, based on static analysis of behaviours during the operation of the robot, is demonstrated and evaluated in a user study: a proof-of-concept human–robot interaction study with an autonomous, multi-purpose robot operating within a smart home environment. Twenty participants individually taught the robot behaviours according to instructions they were given, some of which caused interference with other behaviours. A mechanism for detecting behaviour interference provided feedback to participants and suggestions on how to resolve the conflicts. We assessed the participants' views on detected interference as reported by the behaviour teaching system. Results indicate that interference warnings given to participants during teaching fostered an understanding of the issue, and we did not find a significant influence of participants' technical background. These results highlight a promising path towards verification and validation of assistive home companion robots that allow end-user personalisation.
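The abstract does not spell out the checking algorithm, but the kind of static interference check it describes can be illustrated with a small sketch. The following Python toy flags pairs of taught condition–action behaviours whose trigger sets overlap, so that one behaviour may shadow the other; `Behaviour`, `find_interference`, and the trigger strings are hypothetical illustrations, not the authors' formal verification machinery.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Behaviour:
    """A user-taught rule: if all trigger conditions hold, run the actions."""
    name: str
    triggers: frozenset[str]   # e.g. {"doorbell_rings", "after_22h"}
    actions: tuple[str, ...]   # e.g. ("move_to_door", "announce_visitor")
    priority: int = 0          # higher priority wins when triggers overlap

def find_interference(behaviours: list[Behaviour]) -> list[tuple[str, str, str]]:
    """Statically compare behaviour pairs and report potential conflicts.

    Two behaviours can interfere when one's trigger set is a subset of the
    other's: whenever the more specific behaviour is applicable, the more
    general one is too, and the lower-priority rule may never run.
    """
    warnings = []
    for i, a in enumerate(behaviours):
        for b in behaviours[i + 1:]:
            if a.triggers <= b.triggers or b.triggers <= a.triggers:
                shadowed = a if a.priority < b.priority else b
                warnings.append((a.name, b.name,
                                 f"'{shadowed.name}' may never execute"))
    return warnings

# Example: a newly taught behaviour overlaps an existing one.
existing = Behaviour("greet_visitor", frozenset({"doorbell_rings"}),
                     ("move_to_door",), priority=2)
new = Behaviour("quiet_hours", frozenset({"doorbell_rings", "after_22h"}),
                ("suppress_sound",), priority=1)
for warning in find_interference([existing, new]):
    print(warning)
```

In the study described above, a warning of this sort was surfaced to the participant during teaching, together with suggestions for resolving the conflict; the sketch only mimics the detection step.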


2021, Vol. 12(1), pp. 310-335
Author(s): Selmer Bringsjord, Naveen Sundar Govindarajulu, Michael Giancola

Abstract: Suppose an artificial agent $a_{\text{adj}}$, as time unfolds, (i) receives from multiple artificial agents (which may, in turn, themselves have received from yet other such agents…) propositional content, and (ii) must solve an ethical problem on the basis of what it has received. How should $a_{\text{adj}}$ adjudicate what it has received in order to produce such a solution? We consider an environment infused with logicist artificial agents $a_1, a_2, \ldots, a_n$ that sense and report their findings to "adjudicator" agents who must solve ethical problems. (Many if not most of these agents may be robots.) In such an environment, inconsistency is a virtual guarantee: $a_{\text{adj}}$ may, for instance, receive a report from $a_1$ that proposition $\phi$ holds, then from $a_2$ that $\neg\phi$ holds, and then from $a_3$ that neither $\phi$ nor $\neg\phi$ should be believed, but rather $\psi$ instead, at some level of likelihood. We further assume that agents receiving such incompatible reports will nonetheless sometimes simply need, before long, to make decisions on the basis of these reports, in order to try to solve ethical problems. We provide a solution to such a quandary: AI capable of adjudicating competing reports from subsidiary agents through time, and of delivering to humans a rational, ethically correct (relative to underlying ethical principles) recommendation based upon such adjudication. To illuminate our solution, we anchor it to a particular scenario.
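The authors carry out this adjudication in a formal logic with graded likelihood; as a rough, purely numeric sketch of the idea, the following Python toy aggregates contradictory reports about each proposition by ordinal likelihood and suspends judgment on ties. The `Report` shape and the weighting scheme are assumptions for illustration, not the authors' calculus.

```python
from collections import defaultdict

# One report: (source agent, proposition, claimed truth value,
# ordinal likelihood, e.g. 1 = "possible" .. 4 = "certain").
Report = tuple[str, str, bool, int]

def adjudicate(reports: list[Report]) -> dict[str, bool]:
    """Keep, per proposition, the side with the greater total likelihood;
    a tie means the adjudicator suspends judgment on that proposition."""
    weight: dict[str, dict[bool, int]] = defaultdict(lambda: {True: 0, False: 0})
    for _agent, prop, value, likelihood in reports:
        weight[prop][value] += likelihood
    return {prop: w[True] > w[False]
            for prop, w in weight.items()
            if w[True] != w[False]}

# a1 reports phi, a2 reports not-phi, a3 reports psi instead:
reports = [("a1", "phi", True, 3),
           ("a2", "phi", False, 2),
           ("a3", "psi", True, 4)]
print(adjudicate(reports))   # {'phi': True, 'psi': True}
```

A real adjudicator of the kind the article describes would reason over proofs and ethical principles rather than summed weights, but the contract is the same: inconsistent inputs in, a single defensible verdict (or suspension of judgment) out.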


2021, Vol. 12(1), pp. 336-355
Author(s): Trenton Schulz, Rebekka Soma, Patrick Holthaus

Abstract: Recovery procedures are aimed at correcting issues encountered by robots. What do people think of a robot while it carries out such procedures? During an experiment that examined how a mobile robot moved, the robot would unexpectedly pause or rotate itself to recover from a navigation problem. The serendipity of the recovery procedure and people's understanding of it became a case study for examining how future study designs could better account for breakdowns, and for suggesting better robot behaviours in such situations. We present the original experiment with the recovery procedure. We then qualitatively examine the participants' responses to see how they interpreted the breakdown situation when it occurred. Responses could be grouped into themes of sentience, competence, and the robot's forms. The themes indicate that the robot's movement communicated different information to different participants. This leads us to introduce the concept of movement acts to help examine the explicit and implicit parts of communication through movement. Given that we developed the concept by looking at an unexpected breakdown, we suggest that researchers should plan for the possibility of breakdowns in experiments, and should examine and report people's experiences around a robot breakdown to further explore unintended robot communication.


2021, Vol. 12(1), pp. 437-453
Author(s): Laurentiu Vasiliu, Keith Cortis, Ross McDermott, Aphra Kerr, Arne Peters, et al.

Abstract: This article explores rapidly advancing innovation to endow robots with social intelligence capabilities, in the form of multilingual and multimodal emotion recognition and emotion-aware decision-making, for contextually appropriate robot behaviours and cooperative social human–robot interaction in the healthcare domain. The objective is to enable robots to become trustworthy and versatile social robots capable of human-friendly and human-assistive interactions, better serving users' needs by sensing, adapting, and responding appropriately to their requirements while taking into consideration their wider affective and motivational states and behaviour. We propose an innovative approach to the difficult research challenge of endowing robots with social intelligence capabilities for human-assistive interactions, going beyond the conventional robotic sense-think-act loop. We propose an architecture that addresses a wide range of social cooperation skills and features required for real human–robot social interaction, including language and vision analysis, dynamic emotional analysis (long-term affect and mood), semantic mapping to improve the robot's knowledge of the local context, situational knowledge representation, and emotion-aware decision-making. Fundamental to this architecture is a normative ethical and social framework adapted to the specific challenges of robots engaging with caregivers and care-receivers.
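As a minimal sketch of how such a loop could go beyond sense-think-act by folding both momentary emotion and longer-term mood into decision-making, consider the toy below. The stage names (`Percepts`, `decide`) and the mood heuristic are hypothetical placeholders, not the proposed architecture's actual modules.

```python
from dataclasses import dataclass

@dataclass
class Percepts:
    """One tick of multimodal input, already analysed upstream."""
    utterance: str   # from language analysis
    emotion: str     # from multimodal emotion recognition
    location: str    # from semantic mapping of the environment

def decide(p: Percepts, mood_history: list[str]) -> str:
    """Emotion-aware decision step: choose an action that respects both
    the momentary emotion and the longer-term mood trend."""
    distressed_now = p.emotion in {"sad", "anxious"}
    low_mood_trend = mood_history.count("sad") > len(mood_history) // 2
    if distressed_now or low_mood_trend:
        return f"offer_comfort(at={p.location})"
    if "help" in p.utterance.lower():
        return f"assist(request='{p.utterance}', at={p.location})"
    return "continue_idle_companionship()"

# Sense -> interpret -> decide -> act, one loop iteration:
percepts = Percepts("Can you help me find my glasses?", "neutral", "living_room")
print(decide(percepts, mood_history=["neutral", "happy", "neutral"]))
```

The point of the sketch is the extra input: unlike a plain sense-think-act loop, the decision consults accumulated affective state (`mood_history`), which is what makes the behaviour emotion-aware rather than merely reactive.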


2021, Vol. 12(1), pp. 199-213
Author(s): Pauline Chevalier, Valentina Vasco, Cesco Willemse, Davide De Tommaso, Vadim Tikhanoff, et al.

Abstract: We investigated the influence of visual sensitivity on the performance of an imitation task with the robot R1 in its virtual and physical forms. Virtual and physical embodiments offer different sensory experiences to users, and since individuals respond differently to their sensory environment, their sensory sensitivity may play a role in the interaction with a robot. Investigating how sensory sensitivity influences such interactions is therefore a helpful tool for evaluating and designing them. Here we asked 16 participants to perform an imitation task with a virtual and a physical robot under conditions of full and occluded visibility, to report the strategy they used to perform the task, and to complete the Sensory Perception Quotient questionnaire. Sensory sensitivity in vision predicted the participants' performance in imitating the robot's upper-limb movements. From the self-report questionnaire, we observed that participants relied more on visual cues to perform the task with the physical robot than with the virtual one. From these results, we propose that a physical embodiment demands less cognitive effort from the user during an imitation task than a virtual embodiment does. These results are encouraging and suggest that this line of research is suitable for improving and evaluating the effects of the physical and virtual embodiment of robots in healthy and clinical settings.


2021, Vol. 12(1), pp. 379-391
Author(s): Matthew Story, Cyril Jaksic, Sarah R. Fletcher, Philip Webb, Gilbert Tang, et al.

Abstract: Although modern standards for interaction between humans and robots follow the First Law of Robotics popularized in science fiction in the 1960s, they emphasize the importance of physical safety and remain less developed in another key dimension: psychological safety. As sales of industrial robots have increased over recent years, so has the frequency of human–robot interaction (HRI). The present article looks at the current safety guidelines for HRI in an industrial setting and assesses their suitability. It then presents a means to improve current standards using lessons learned from studies of human-aware navigation (HAN), which has seen increasing use in mobile robotics. The article highlights limitations in current research, where the relationships established in mobile robotics have not been carried over to industrial robot arms. To address this, it is necessary to focus less on how a robot arm avoids humans and more on how humans react when a robot is within the same space. Currently, safety guidelines lag behind technological advances; however, with further studies aimed at understanding HRI and applying it to newly developed path-finding and obstacle-avoidance methods, science fiction can become science fact.


2021, Vol. 12(1), pp. 423-436
Author(s): Alexander M. Aroyo, Jan de Bruyne, Orian Dheu, Eduard Fosch-Villaronga, Aleksei Gudkov, et al.

Abstract: There is increasing attention to the concept of trustworthiness for artificial intelligence and robotics. However, trust is highly context-dependent, varies among cultures, and requires reflection on others' trustworthiness, appraising whether there is enough evidence to conclude that these agents deserve to be trusted. Moreover, little research exists on what happens when too much trust is placed in robots and autonomous systems: conceptual clarity and a shared framework for approaching overtrust are missing. In this contribution, we offer an overview of pressing topics in the context of overtrust in robots and autonomous systems. Our review mobilizes insights from in-depth conversations at a multidisciplinary workshop on trust in human–robot interaction (HRI), held at a leading robotics conference in 2020. A broad range of participants brought in their expertise, allowing the formulation of a forward-looking research agenda on overtrust and automation biases in robotics and autonomous systems. Key points include the need for multidisciplinary understandings situated in an ecosystem perspective, the consideration of adjacent concepts such as deception and anthropomorphization, a connection to ongoing legal discussions through the topic of liability, and a socially embedded understanding of overtrust in education and literacy matters. The article integrates diverse literature and provides a ground for a common understanding of overtrust in the context of HRI.


2021, Vol. 12(1), pp. 392-401
Author(s): Alexander Wilkinson, Michael Gonzales, Patrick Hoey, David Kontak, Dian Wang, et al.

Abstract: The design of user interfaces (UIs) for assistive robot systems can be improved through the set of design guidelines presented in this article. As an example, the article presents two contrasting UI designs for an assistive manipulation robot system and explores the design considerations behind each. The first, representing the state of the art, is a graphical user interface (GUI) that the user operates entirely through a touchscreen. The second is a novel tangible user interface (TUI) that makes use of devices in the real world, such as laser pointers and a projector–camera system that enables augmented reality. Each interface is designed to allow the system to be operated by an untrained user in an open environment such as a grocery store. Our goal is for these guidelines to aid researchers in the design of human–robot interaction for assistive robot systems, particularly when designing multiple interaction methods for direct comparison.


2021, Vol. 12(1), pp. 299-309
Author(s): Jamie Wallace

Abstract: Following ethnographic studies of Danish companies, this article examines how small- and medium-sized companies are implementing cobots into their manufacturing systems, and considers how this is changing the practices of technicians and operators alike, how it changes human values, and what ethical consequences it has for the companies involved. By presenting a range of dilemmas arising during these emergent processes, it raises questions about the extent to which ethics can be regulated and predetermined in processes of robot implementation and the resulting reconfiguration of work.

