A Proposal of Accessibility Guidelines for Human-Robot Interaction

Electronics, 2021, Vol. 10 (5), pp. 561
Author(s): Malak Qbilat, Ana Iglesias, Tony Belpaeme

We will become increasingly dependent on automation to support our manufacturing and daily living, and robots are likely to take an important place in this. Unfortunately, not all robots are currently accessible to all users: users with visual, hearing, motor or cognitive disabilities are often not considered during the design, implementation or interaction phases, which creates accessibility barriers for them. This research presents a proposal for accessibility guidelines for human-robot interaction (HRI). The guidelines were evaluated by seventeen HRI designers and/or developers. A questionnaire of nine five-point Likert-scale questions and six open-ended questions was developed to evaluate the proposed guidelines in terms of four main factors: usability, social acceptance, user experience and social impact, with the questions acting as indicators for each factor. The majority (15 of 17 participants) agreed that the guidelines help them design and implement accessible robot interfaces and applications. Some participants had considered ad hoc guidelines in their design practice, but none showed awareness of, or had applied, all the proposed guidelines: 72% of the proposed guidelines had been applied by eight or fewer participants each. Moreover, 16 of 17 participants would use the proposed guidelines in their future robot designs or evaluations. The participants stressed the importance of aligning the proposed guidelines with safety requirements, the environment of interaction (indoor or outdoor), cost and users’ expectations.
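
To make the evaluation design concrete, the following is a minimal sketch (with hypothetical question IDs, factor groupings, and response data, none of which come from the paper) of how nine Likert-scale responses could be grouped into the four factors and summarised per factor.

```python
# Minimal sketch, not the authors' analysis code: hypothetical mapping of the
# nine Likert-scale questions onto the four evaluation factors, plus a helper
# that averages participant responses per factor.
from statistics import mean

# Hypothetical question-to-factor assignment (assumption for illustration only).
FACTOR_INDICATORS = {
    "usability":         ["Q1", "Q2", "Q3"],
    "social_acceptance": ["Q4", "Q5"],
    "user_experience":   ["Q6", "Q7"],
    "social_impact":     ["Q8", "Q9"],
}

def factor_scores(responses):
    """responses: one dict per participant mapping question ID -> Likert value (1-5).
    Returns the mean score per factor."""
    scores = {}
    for factor, questions in FACTOR_INDICATORS.items():
        values = [r[q] for r in responses for q in questions if q in r]
        scores[factor] = mean(values) if values else None
    return scores

# Example with two hypothetical participants.
participants = [
    {"Q1": 5, "Q2": 4, "Q3": 5, "Q4": 4, "Q5": 5, "Q6": 4, "Q7": 4, "Q8": 5, "Q9": 4},
    {"Q1": 4, "Q2": 4, "Q3": 3, "Q4": 5, "Q5": 4, "Q6": 5, "Q7": 4, "Q8": 4, "Q9": 5},
]
print(factor_scores(participants))
```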

Author(s): Zita V. Farkas, Gergely Nádas, József Kolossa, Péter Korondi

Service robot technology is progressing at a fast pace. Accurate, robot-friendly indoor localization and the harmonization of the built environment with the digital, physical, and social environments are becoming increasingly important. This paper proposes the novel approach of the Robot Compatible Environment (RCE) within architectural space. The evolution of service robotics in connection with civil engineering and architecture is discussed, with optimum performance to be achieved based on robots’ capabilities and spatial affordances. For ubiquitous and safe human-robot interaction, robots must be integrated into the living environment. The aim of the research is to highlight solutions for various interconnected challenges within the built environment: comparing robotic and accessibility standards, and synthesizing navigation, access to information and social acceptance. Checklists, recommendations, and a design process are introduced within the RCE framework, proposing a holistic approach.


Mathematics, 2021, Vol. 9 (9), pp. 1063
Author(s): Eleni Vrochidou, Chris Lytridis, Christos Bazinas, George A. Papakostas, Hiroaki Wagatsuma, ...

Cyber-Physical System (CPS) applications, including human-robot interaction, call for automated reasoning for rational decision-making. In the latter context, audio-visual signals are typically employed. This work considers brain signals for emotion recognition towards effective human-robot interaction. An ElectroEncephaloGraphy (EEG) signal is represented here by an Intervals’ Number (IN). An IN-based, optimizable, parametric k-Nearest-Neighbor (kNN) classifier scheme for decision-making by fuzzy lattice reasoning (FLR) is proposed, in which the conventional distance between two points is replaced by a fuzzy order function (σ) for reasoning-by-analogy. A main advantage of employing INs is that no ad hoc feature extraction is required, since an IN may represent all-order data statistics; the latter are the features considered implicitly. Four different fuzzy order functions are employed in this work. Experimental results demonstrate the comparably good performance of the proposed techniques.
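
As an illustration of the classification scheme described above, here is a minimal sketch in which an IN is approximated by a list of per-level intervals and the paper’s fuzzy order function σ is replaced by a simple interval-overlap similarity; the kNN step then ranks training INs by largest similarity instead of smallest distance. All data and the similarity measure are illustrative assumptions, not the authors’ implementation.

```python
# Minimal sketch, not the paper's implementation: an Intervals' Number (IN) is
# approximated as a list of (low, high) intervals, one per membership level, and
# sigma is replaced by a placeholder overlap similarity. kNN ranks by similarity
# (largest first) rather than by smallest distance.
from collections import Counter

def interval_similarity(a, b):
    """Placeholder similarity between two intervals (low, high), in [0, 1]."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 1.0

def in_similarity(in_a, in_b):
    """Average the per-level interval similarities of two INs of equal depth."""
    return sum(interval_similarity(a, b) for a, b in zip(in_a, in_b)) / len(in_a)

def knn_predict(train, query_in, k=3):
    """train: list of (IN, label) pairs; returns the majority label of the
    k training INs most similar to query_in."""
    ranked = sorted(train, key=lambda item: in_similarity(item[0], query_in), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 3-level INs extracted from EEG segments, labelled by emotion.
train = [
    ([(0.1, 0.9), (0.2, 0.7), (0.3, 0.5)], "calm"),
    ([(0.4, 1.2), (0.5, 1.0), (0.6, 0.8)], "excited"),
    ([(0.0, 0.8), (0.1, 0.6), (0.2, 0.4)], "calm"),
]
query = [(0.2, 1.0), (0.3, 0.8), (0.4, 0.6)]
print(knn_predict(train, query, k=3))
```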


2021, Vol. 14
Author(s): Zhihao Li, Yishan Mu, Zhenglong Sun, Sifan Song, Jionglong Su, ...

With the rapid development of robotic and AI technology in recent years, human–robot interaction has advanced greatly and made a practical social impact. Verbal commands are one of the most direct and frequently used means of human–robot interaction. Currently, such technology enables robots to execute pre-defined tasks based on simple, direct and explicit language instructions, e.g., certain keywords must be used and detected. However, that is not the natural way for humans to communicate. In this paper, we propose a novel task-based framework that enables the robot to comprehend human intentions using visual semantic information, so that it can satisfy human intentions expressed as natural language instructions (three types in total, namely clear, vague, and feeling, are defined and tested). The proposed framework includes a language semantics module to extract the keywords regardless of how explicit the command instruction is, a visual object recognition module to identify the objects in front of the robot, and a similarity computation algorithm to infer the intention for the given task. The task is then translated into commands for the robot accordingly. Experiments are performed and validated on a humanoid robot with a defined task: to pick the desired item out of multiple objects on a table and hand it over to one desired user out of multiple human participants. The results show that our algorithm can handle different types of instructions, even with unseen sentence structures.
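
The pipeline described above can be sketched as follows: the language module is reduced to keyword spotting, the vision module to a fixed list of detected labels, and the learned visual-semantic similarity to a small hand-made association table. All names and scores are hypothetical and only illustrate the keyword–object matching idea, not the paper’s actual models.

```python
# Minimal sketch under stated assumptions: a hand-made association table stands
# in for the visual-semantic similarity model, and object detection is assumed
# to have already produced a list of labels.
ASSOCIATION = {  # hypothetical keyword -> object affinity scores
    "drink":   {"water bottle": 0.9, "cup": 0.8, "apple": 0.1},
    "thirsty": {"water bottle": 0.9, "cup": 0.7, "apple": 0.2},
    "hungry":  {"apple": 0.9, "water bottle": 0.1, "cup": 0.1},
    "apple":   {"apple": 1.0, "water bottle": 0.0, "cup": 0.0},
}

def extract_keywords(instruction):
    """Keep only the words the association table knows about."""
    return [w.strip(".,!?").lower() for w in instruction.split()
            if w.strip(".,!?").lower() in ASSOCIATION]

def infer_target(instruction, detected_objects):
    """Score each detected object against the instruction's keywords and
    return the best match, whether the instruction is explicit or vague."""
    keywords = extract_keywords(instruction)
    if not keywords:
        return None
    scores = {obj: sum(ASSOCIATION[k].get(obj, 0.0) for k in keywords)
              for obj in detected_objects}
    return max(scores, key=scores.get)

# A vague, "feeling"-style instruction still resolves to a target object.
print(infer_target("I am so thirsty.", ["apple", "water bottle", "cup"]))
```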


2021, Vol. 11 (19), pp. 9165
Author(s): Ruben Alonso, Emanuele Concas, Diego Reforgiato Recupero

Many people have neuromuscular problems that affect their lives, causing them to lose an important degree of autonomy in their daily activities. When their disabilities do not involve speech disorders, robotic wheelchairs with voice assistant technologies may provide appropriate human–robot interaction for them. Given the wide improvement and diffusion of Google Assistant, Apple’s Siri, Microsoft’s Cortana, Amazon’s Alexa, etc., such voice assistant technologies can be fully integrated into and exploited by robotic wheelchairs to improve the quality of life of affected people. In this paper, we therefore propose an abstraction layer capable of providing appropriate human–robot interaction. It allows the use of voice assistant tools that may trigger different kinds of applications for the interaction between the robot and the user. Furthermore, we propose a use case as a possible instance of the considered abstraction layer. Within the use case, we chose existing tools for each component of the proposed abstraction layer; for example, Google Assistant was employed as the voice assistant tool, and its functions and APIs were leveraged for some of the applications we deployed. On top of the use case thus defined, we created several applications that we detail and discuss. The benefit of the resulting human–computer interaction is therefore two-fold: on the one hand, the user may interact with any of the developed applications; on the other hand, the user can rely on voice assistant tools to receive open-domain answers when the user’s statement does not trigger any of the robot’s applications. An evaluation of the presented instance was carried out using the Software Architecture Analysis Method, while the user experience was evaluated through ad hoc questionnaires. Our proposed abstraction layer is general and can be instantiated on any robotic platform, including robotic wheelchairs.
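
The dispatch logic of such an abstraction layer might look like the following minimal sketch: registered robot applications are tried first, and the voice assistant’s open-domain answer is used as a fallback. The VoiceAssistant class and its open_domain_answer method are hypothetical stand-ins, not the authors’ API or a real assistant SDK.

```python
# Minimal sketch, not the authors' code: a dispatcher that first tries the
# robot's registered applications and falls back to an open-domain voice
# assistant when no application claims the utterance.
class VoiceAssistant:
    def open_domain_answer(self, utterance: str) -> str:
        # Hypothetical stand-in for a real assistant backend.
        return f"(assistant) I looked that up for you: '{utterance}'"

class AbstractionLayer:
    def __init__(self, assistant):
        self.assistant = assistant
        self.applications = []  # list of (can_handle, handle) callables

    def register(self, can_handle, handle):
        self.applications.append((can_handle, handle))

    def dispatch(self, utterance: str) -> str:
        for can_handle, handle in self.applications:
            if can_handle(utterance):
                return handle(utterance)                      # robot-side application
        return self.assistant.open_domain_answer(utterance)   # open-domain fallback

# Hypothetical wheelchair navigation application.
layer = AbstractionLayer(VoiceAssistant())
layer.register(lambda u: "go to" in u.lower(),
               lambda u: f"(wheelchair) navigating: {u}")

print(layer.dispatch("Go to the kitchen"))    # handled by the robot application
print(layer.dispatch("What's the weather?"))  # falls back to the assistant
```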


2009
Author(s): Matthew S. Prewett, Kristin N. Saboe, Ryan C. Johnson, Michael D. Coovert, Linda R. Elliott

2010
Author(s): Eleanore Edson, Judith Lytle, Thomas McKenna
