Human–robot interaction via voice-controllable intelligent user interface

Robotica, 2007, Vol. 25 (5), pp. 521–527
Author(s):  
Harsha Medicherla ◽  
Ali Sekmen

Summary: An understanding of how humans and robots can successfully interact to accomplish specific tasks is crucial in creating more sophisticated robots that may eventually become an integral part of human societies. A social robot needs to learn the preferences and capabilities of the people with whom it interacts so that it can adapt its behaviors for more efficient and friendly interaction. Advances in human–computer interaction technologies have been widely used to improve human–robot interaction (HRI), and it is now possible to interact with robots via natural communication means such as speech. This paper describes an innovative approach to HRI via voice-controllable intelligent user interfaces, covering the design and implementation of such interfaces. Traditional approaches to human–robot user interface design are explained and the advantages of the proposed approach are presented. The designed intelligent user interface, which learns user preferences and capabilities over time, can be controlled by voice. The system was successfully implemented and tested on a Pioneer 3-AT mobile robot. Twenty participants, assessed for spatial reasoning ability, directed the robot in spatial navigation tasks to evaluate the effectiveness of voice control in HRI; time to complete the task, number of steps, and errors were collected. Results indicated that spatial reasoning ability and voice control were reliable predictors of the efficiency of robot teleoperation. Seventy-five percent of the subjects with high spatial reasoning ability preferred voice control over manual control, and the effect of spatial reasoning ability on teleoperation was lower with voice control than with manual control.
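A voice-controlled teleoperation interface of the kind described above ultimately maps recognized utterances to motion primitives. The following is a minimal sketch of such a dispatcher; the command vocabulary and velocity values are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical mapping from spoken command words to (linear m/s, angular rad/s)
# velocity pairs for a differential-drive robot such as a Pioneer 3-AT.
COMMANDS = {
    "forward": (0.3, 0.0),
    "back":    (-0.3, 0.0),
    "left":    (0.0, 0.5),
    "right":   (0.0, -0.5),
    "stop":    (0.0, 0.0),
}

def interpret(utterance: str):
    """Map a recognized utterance to a (linear, angular) velocity pair.

    Returns None when no known command word is found, so the caller can
    ask the user to repeat instead of moving the robot unpredictably.
    """
    for word in utterance.lower().split():
        if word in COMMANDS:
            return COMMANDS[word]
    return None
```

In practice the utterance would come from a speech recognizer; scanning for keywords rather than requiring exact phrases keeps the interface tolerant of filler words ("please go forward").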

2021, Vol. 12 (1), pp. 392–401
Author(s):  
Alexander Wilkinson ◽  
Michael Gonzales ◽  
Patrick Hoey ◽  
David Kontak ◽  
Dian Wang ◽  
...  

Abstract: The design of user interfaces (UIs) for assistive robot systems can be improved through the use of a set of design guidelines presented in this article. As an example, the article presents two different UI designs for an assistive manipulation robot system. We explore the design considerations from these two contrasting UIs. The first is referred to as the graphical user interface (GUI), which the user operates entirely through a touchscreen as a representation of the state of the art. The second is a type of novel UI referred to as the tangible user interface (TUI). The TUI makes use of devices in the real world, such as laser pointers and a projector–camera system that enables augmented reality. Each of these interfaces is designed to allow the system to be operated by an untrained user in an open environment such as a grocery store. Our goal is for these guidelines to aid researchers in the design of human–robot interaction for assistive robot systems, particularly when designing multiple interaction methods for direct comparison.


Author(s):  
Randall Spain ◽  
Jason Saville ◽  
Barry Lui ◽  
Donia Slack ◽  
Edward Hill ◽  
...  

Because advances in broadband capabilities will soon allow first responders to access and use many forms of data when responding to emergencies, it is becoming critically important to design heads-up displays that present first responders with information in a manner that does not induce extraneous mental workload or cause undue interaction errors. Virtual reality offers a unique medium for envisioning and testing user interface concepts in a realistic and controlled environment. In this paper, we describe a virtual reality-based emergency response scenario that was designed to support user experience research for evaluating the efficacy of intelligent user interfaces for firefighters. We describe the results of a usability test that captured firefighters' feedback and reactions to the VR scenario and the prototype intelligent user interface that presented them with task-critical information through the VR headset. The paper concludes with lessons learned from our development process and a discussion of plans for future research.


2020
Author(s):  
Sebastijan Veselic ◽  
Claudio Zito ◽  
Dario Farina

Designing robotic assistance devices for manipulation tasks is challenging. This work aims to improve the accuracy and usability of physical human–robot interaction (pHRI), in which a user interacts with a physical robotic device (e.g., a human-operated manipulator or exoskeleton) by transmitting signals that the machine must interpret. Typically, these signals are used for open-loop control, but this approach has several limitations, such as low take-up and high cognitive burden for the user. In contrast, a control framework is proposed that responds robustly and efficiently to a user's intentions by reacting proactively to their commands. The key insight is to include context- and user-awareness in the controller, improving decision making on how to assist the user. Context-awareness is achieved by creating a set of candidate grasp targets and reach-to-grasp trajectories in a cluttered scene. User-awareness is implemented as a linear time-variant feedback controller (TV-LQR) over the generated trajectories to facilitate motion towards the most likely intention of the user, and the system dynamically recovers from incorrect predictions. Experimental results in a virtual environment with two-degree-of-freedom control show that this approach can outperform manual control. By robustly predicting the user's intention, the proposed controller allows the subject to achieve superhuman performance in terms of accuracy, and thereby usability.
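The core idea above, namely scoring candidate grasp targets by the user's commanded motion and then nudging the motion toward the most likely one, can be sketched in a few lines. This is an illustrative stand-in for the paper's TV-LQR controller (a simple shared-control blend, not an LQR), and all names and parameters are assumptions.

```python
import math

def score(goal, pos, user_vel):
    """Cosine similarity between the user's velocity and the direction to goal."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    n_g = math.hypot(dx, dy)
    n_u = math.hypot(*user_vel)
    if n_g == 0 or n_u == 0:
        return 0.0
    return (dx * user_vel[0] + dy * user_vel[1]) / (n_g * n_u)

def assist(pos, user_vel, goals, alpha=0.5):
    """Blend the user's velocity with a unit step toward the most likely goal.

    alpha is the assistance level; alpha=0 returns pure user control.
    Re-scoring at every step lets the controller recover when the user's
    motion starts pointing at a different candidate (wrong prediction).
    """
    best = max(goals, key=lambda g: score(g, pos, user_vel))
    dx, dy = best[0] - pos[0], best[1] - pos[1]
    n = math.hypot(dx, dy) or 1.0
    return ((1 - alpha) * user_vel[0] + alpha * dx / n,
            (1 - alpha) * user_vel[1] + alpha * dy / n)
```

Running `assist` in a control loop corresponds to the paper's closed-loop, proactive assistance: the commanded velocity is continuously corrected toward the inferred target rather than executed open-loop.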


2021
Author(s):  
Jide Ebenezer Taiwo Akinsola ◽  
Samuel Akinseinde ◽  
Olamide Kalesanwo ◽  
Moruf Adeagbo ◽  
Kayode Oladapo ◽  
...  

In recent years, cyber security threat modeling has been shown to be capable of combatting and mitigating online threats. To minimize the associated risk, these threats need to be modelled with an appropriate Intelligent User Interface (IUI) design, together with the development and evaluation of threat metrics. Artificial Intelligence (AI) has revolutionized every facet of our daily lives, and building a responsive cyber security threat model requires an IUI. Current threat models lack an IUI and hence cannot deliver convenience and efficiency. Moreover, as User Interface (UI) functionality and User Experience (UX) continue to expand and deliver more possibilities, present threat models lack predictive capacity, so Machine Learning paradigms must be incorporated. This deficiency can only be addressed through an AI-enabled UI that applies baseline principles in the design of interfaces for effective Human-Machine Interaction (HMI) with lasting UX. An IUI helps developers and designers enhance flexibility, usability, and the relevance of the interaction, improving communication between computer and human. Baseline principles must be applied when developing threat models to ensure a compelling UI-UX. Applying AI to UI design for cyber security threat modeling reduces critical design time and leads to better threat modeling applications and solutions.


Author(s):  
Fotios Papadopoulos ◽  
Kerstin Dautenhahn ◽  
Wan Ching Ho

Abstract: This article describes the design and evaluation of AIBOStory, a novel remote interactive storytelling system that allows users to create and share common stories through an integrated, autonomous robot companion acting as a social mediator between two remotely located people. The robot's behaviour was inspired by dog behaviour and includes a simple computational memory model. AIBOStory has been designed to work alongside online video communication software and aims to enrich remote communication experiences over the internet. An initial pilot study evaluated the proposed system's use and acceptance by users: five pairs of participants were exposed to the system, with the robot acting as a social mediator, and the results suggested an overall positive acceptance response. The main study involved long-term interactions of 20 participants using AIBOStory in order to study their preference between two modes: the game enhanced with an autonomous robot, and a non-robot mode which did not use the robot. Instruments used in this study include multiple questionnaires from different communication sessions, demographic forms, and logged data from the robots and the system. The data were analysed using quantitative and qualitative techniques to measure user preference and human–robot interaction. The statistical analysis suggests user preference for the robot mode.


Sensors, 2020, Vol. 20 (22), pp. 6529
Author(s):  
Masaya Iwasaki ◽  
Mizuki Ikeda ◽  
Tatsuyuki Kawamura ◽  
Hideyuki Nakanishi

Robotic salespeople are often ignored by people due to their weak social presence and thus have difficulty facilitating sales autonomously; robots that are remotely controlled by humans, on the other hand, require experienced, trained operators. In this paper, we propose crowdsourcing to allow general users on the internet to operate a robot remotely and facilitate customers' purchasing activities while flexibly responding to various situations through a user interface. To implement this system, we examined how our remote interface can improve a robot's social presence while being controlled by a human operator, including first-time users. We investigated the typical flow of a customer–robot interaction that was effective for sales promotion, and modeled it as a state transition with automatic functions driven by the robot's sensor information. We then created a user interface based on the model and examined whether it was effective in a real environment. Finally, we conducted experiments to examine whether the user interface could be operated by an amateur user and enhance the robot's social presence. The results revealed that our model improved the robot's social presence and facilitated customers' purchasing activity even when the operator was a first-time user.
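A sales interaction modeled as a state transition, as in the abstract above, can be expressed as a small lookup table. The states and triggering events below are assumptions sketched from the description, not the authors' actual model; sensor events (customer detected, customer left, product touched) drive the transitions automatically, while anything unmodeled falls back to the operator.

```python
# (current_state, sensor_event) -> next_state
TRANSITIONS = {
    ("idle",      "customer_detected"): "greeting",
    ("greeting",  "customer_stopped"):  "pitch",
    ("pitch",     "product_touched"):   "recommend",
    ("recommend", "purchase"):          "thanks",
    # At any stage the customer may simply walk away.
    ("greeting",  "customer_left"):     "idle",
    ("pitch",     "customer_left"):     "idle",
    ("recommend", "customer_left"):     "idle",
    ("thanks",    "customer_left"):     "idle",
}

def step(state: str, event: str) -> str:
    """Advance the interaction; unknown events leave the state unchanged,
    so the remote operator's UI can always fall back to manual control."""
    return TRANSITIONS.get((state, event), state)
```

Binding each state to a canned behavior (a greeting phrase, a product pitch) is what lets a first-time operator produce a coherent interaction: the operator chooses among a few valid actions instead of improvising the whole dialogue.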

