RTM (Robot Technology Middleware) Based Dynamic Interrupt System for Communication Between Humans and Robots

Author(s):  
Hai An Vu ◽  
Fangyan Dong ◽  
Kaoru Hirota

A dynamic fuzzy interrupt system is proposed to optimize the responsiveness of robots in human–robot communication. The system guarantees continuous and efficient human–robot interaction by assigning a priority to each given instruction, and it remains immune to distractions caused by conflicting instructions. The system is embedded in robot technology components that can be implemented in different programming languages and under different operating systems. Experiments simulating actions of a mascot robot system, such as the eye robot, mobile robot, and information retrieval engine, involving 10,000 instructions show a 14% improvement in system responsiveness and a delay time of less than 2 seconds. The implementation of a home-party scenario between four humans (a host and three guests) and five eye robots demonstrates affective communication, and confirms that the system facilitates casual communication between humans and robots with limited interruption.
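The abstract does not specify how priorities are assigned or scheduled; as a minimal sketch under that caveat, an instruction dispatcher that always serves the highest-priority pending instruction (all class, method, and instruction names below are hypothetical, not from the paper) might look like:

```python
import heapq

class InterruptDispatcher:
    """Sketch of a priority-based instruction queue. The paper's fuzzy
    priority assignment is not described in the abstract; priorities
    here are plain integers supplied by the caller."""

    def __init__(self):
        self._queue = []  # min-heap of (negated priority, seq, instruction)
        self._seq = 0     # tie-breaker that preserves arrival order

    def submit(self, instruction, priority):
        # Higher priority values are served first (hence the negation).
        heapq.heappush(self._queue, (-priority, self._seq, instruction))
        self._seq += 1

    def next_instruction(self):
        # Lower-priority conflicting instructions simply wait their turn,
        # so the robot is not distracted mid-task.
        if self._queue:
            return heapq.heappop(self._queue)[2]
        return None

dispatcher = InterruptDispatcher()
dispatcher.submit("patrol room", priority=1)
dispatcher.submit("answer host", priority=5)
first = dispatcher.next_instruction()  # the higher-priority instruction
```

A real system would additionally need preemption of an instruction already in progress; the sketch only orders pending instructions.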

Author(s):  
Xinmeng Li ◽  
Mamoun Alazab ◽  
Qian Li ◽  
Keping Yu ◽  
Quanjun Yin

Abstract: Knowledge graph question answering is an important technology in intelligent human–robot interaction, which aims at automatically answering a human natural-language question over a given knowledge graph. For multi-relation questions of higher variety and complexity, the tokens of the question have different priorities for triple selection in the reasoning steps. Most existing models take the question as a whole and ignore the priority information in it. To solve this problem, we propose a question-aware memory network for multi-hop question answering, named QA2MN, which updates the attention on the question in a timely manner during the reasoning process. In addition, we incorporate graph context information into the knowledge graph embedding model to increase its ability to represent entities and relations. We use it to initialize the QA2MN model and fine-tune it during training. We evaluate QA2MN on PathQuestion and WorldCup2014, two representative datasets for complex multi-hop question answering. The results demonstrate that QA2MN achieves state-of-the-art Hits@1 accuracy on the two datasets, which validates the effectiveness of our model.
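The abstract gives no equations, but the core idea, re-weighting question tokens at each reasoning hop before attending over memory, can be sketched as follows. Every shape, name, and the exact update rule here are illustrative assumptions, not QA2MN's actual formulation:

```python
import math
import random

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def weighted_sum(weights, vectors):
    dim = len(vectors[0])
    return [sum(w * v[i] for w, v in zip(weights, vectors)) for i in range(dim)]

def question_aware_hop(question_tokens, memory, control):
    """One reasoning hop: re-score the question tokens against the current
    control state (so token priorities change per hop), build a query from
    the re-weighted tokens, then attend over memory slots (e.g., candidate
    triple embeddings). Illustrative only; not the paper's equations."""
    token_attn = softmax([dot(t, control) for t in question_tokens])
    query = weighted_sum(token_attn, question_tokens)
    mem_attn = softmax([dot(m, query) for m in memory])
    readout = weighted_sum(mem_attn, memory)
    return readout, token_attn

random.seed(0)
dim = 8
question = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(5)]
memory = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(10)]
control = [random.gauss(0, 1) for _ in range(dim)]
readout, token_attn = question_aware_hop(question, memory, control)
```

Chaining several such hops, with `control` updated from each hop's readout, would give the multi-hop behavior the abstract describes.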


Robotica ◽  
2020 ◽  
Vol 38 (10) ◽  
pp. 1715-1716
Author(s):  
Nikos Aspragathos ◽  
Vassilis Moulianitis ◽  
Panagiotis Koustoumpardis

Human–robot interaction (HRI) is one of the most rapidly growing research fields in robotics and promising for the future of robotics technology. Despite the fact that numerous significant research results in HRI have been presented in recent years, there are still challenges in several critical topics of HRI, which can be summarized as: (i) collision and safety, (ii) virtual guides, (iii) cooperative manipulation, (iv) teleoperation and haptic interfaces, and (v) learning by observation or demonstration. In physical HRI research, the complementarity of human and robot capabilities is carefully considered to advance their cooperation in a safe manner. New advanced control systems should be developed so that the robot acquires the ability to adapt easily to human intentions and to the given task. Possible applications requiring co-manipulation include the cooperative transportation of bulky and heavy objects, manufacturing processes such as assembly, and surgery.


2019 ◽  
pp. 70-91
Author(s):  
A.A. Karpov ◽  
S.F. Sergeev ◽  
O.I. Lakhin ◽  
M.V. Mikhayluk ◽  
B.I. Kryuchkov ◽  
...  

The use of robotic systems (RSs) in future manned space missions requires giving the cosmonaut-researcher a holistic view of the forms of interaction within the “human – robot” system (HRS) under adverse environmental conditions. For these purposes, educational and reference materials (ERMs) are needed in the field of ergonomics and its representation in the design of human-machine interfaces (HMIs). The paper considers the application of the ontological approach in this subject area, the ergonomics of the HMI, as a way of interdisciplinary integration of various scientific fields: informatics, ergonomics, psychophysiology, etc.


i-com ◽  
2017 ◽  
Vol 16 (2) ◽  
pp. 71-85
Author(s):  
Philipp Graf ◽  
Manuela Marquardt ◽  
Diego Compagna

Abstract: We conducted a Human-Robot Interaction (HRI) study during a science event, using a mixed-method experimental approach with quantitative and qualitative data (an adapted version of the Godspeed Questionnaire and audio-visual material analysed videographically). The main purpose of the research was to gather insight into the relevance of the so-called “point of interaction” for a successful and user-friendly interaction with a non-anthropomorphic robot. We elaborate on this concept with reference to sociological theories under the heading of “addressability” and “social address” and generate hypotheses informed by former research and theoretical reflections. We implemented an interface on our robot system, comprising two LEDs that indicate the status of the robot/interaction and might possibly serve as a basal form of embodied social address. In one experimental condition, the movements were accompanied by a light choreography; the other was conducted without the LEDs. Our findings suggest a potential relevance of social address for the interaction partner to receive additional information, especially if the situation is a contingent one. Nevertheless, the overall ratings on the Godspeed scales showed no significant differences between the light conditions. Several possible reasons for this are discussed. Limitations and advantages are pointed out in the conclusion.


Robotics ◽  
2010 ◽  
Author(s):  
N. Elkmann ◽  
E. Schulenburg ◽  
M. Fritzsche

2020 ◽  
Vol 32 (1) ◽  
pp. 224-235
Author(s):  
Wei-Fen Hsieh ◽  
Eri Sato-Shimokawara ◽  
Toru Yamaguchi

In our daily conversation, we obtain considerable information from our interlocutor’s non-verbal behaviors, such as gaze and gestures. Several studies have shown that nonverbal messages are prominent factors in smoothing the process of human-robot interaction. Our previous studies have shown that not only a robot’s appearance but also its gestures, tone, and other nonverbal factors influence a person’s impression of it. This paper presents an analysis of the impressions made when human motions are implemented on a humanoid robot, and experiments were conducted to evaluate the impressions made by robot expressions. The results showed the relation between robot expression patterns and human preferences. To further investigate biofeedback elicited by different robot styles of expression, a scenario-based experiment was conducted. The results revealed that people’s emotions can indeed be affected by robot behavior, and that the robot’s way of expressing itself is what most influences whether or not it is perceived as friendly. The results show that it is potentially useful to incorporate our concept into a robot system to meet individual needs.


Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3033
Author(s):  
Soheil Keshmiri ◽  
Masahiro Shiomi ◽  
Hidenobu Sumioka ◽  
Takashi Minato ◽  
Hiroshi Ishiguro

Touch plays a crucial role in humans’ nonverbal social and affective communication. It therefore comes as no surprise that considerable effort has been placed on devising methodologies for automated touch classification. For instance, such an ability allows for the use of smart touch sensors in real-life application domains such as socially-assistive robots and embodied telecommunication. The touch classification literature indeed reports steadily improving results. However, these results are limited in two important ways. First, they are mostly based on the overall (i.e., average) accuracy of different classifiers. As a result, they fall short of providing insight into the performance of these approaches on different types of touch. Second, they do not consider the same type of touch at different levels of strength (e.g., gentle versus strong touch). This is certainly an important factor that deserves investigation, since the intensity of a touch can utterly transform its meaning (e.g., from an affectionate gesture to a sign of punishment). The current study provides a preliminary investigation of these shortcomings by considering the accuracy of a number of classifiers for both within-touch (i.e., the same type of touch with differing strengths) and between-touch (i.e., different types of touch) classification. Our results help verify the strengths and shortcomings of different machine learning algorithms for touch classification. They also highlight some of the challenges whose solution concepts can pave the path for the integration of touch sensors in application domains such as human–robot interaction (HRI).
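The within- versus between-touch distinction can be illustrated with a toy experiment. The data, the single "pressure" feature, and the trivial nearest-centroid classifier below are all invented for illustration; the study's actual sensors, features, and classifiers are not described in the abstract:

```python
import random
import statistics

def nearest_centroid_fit(samples):
    """Train a trivial nearest-centroid classifier: one mean per label.
    A stand-in for the classifiers compared in the study; illustrative only."""
    by_label = {}
    for feature, label in samples:
        by_label.setdefault(label, []).append(feature)
    return {label: statistics.fmean(vals) for label, vals in by_label.items()}

def nearest_centroid_predict(centroids, x):
    return min(centroids, key=lambda label: abs(centroids[label] - x))

def accuracy(centroids, samples):
    hits = sum(nearest_centroid_predict(centroids, x) == y for x, y in samples)
    return hits / len(samples)

random.seed(0)
# Synthetic pressure readings (invented values): gentle vs strong variants
# of one gesture (within-touch), and two different gestures with similar
# intensity (between-touch).
gentle_pat = [(random.gauss(1.0, 0.2), "gentle-pat") for _ in range(50)]
strong_pat = [(random.gauss(3.0, 0.2), "strong-pat") for _ in range(50)]
stroke     = [(random.gauss(1.2, 0.2), "stroke")     for _ in range(50)]

within  = nearest_centroid_fit(gentle_pat + strong_pat)  # same touch, two strengths
between = nearest_centroid_fit(gentle_pat + stroke)      # two touch types

acc_within = accuracy(within, gentle_pat + strong_pat)
acc_between = accuracy(between, gentle_pat + stroke)
```

In this toy setup the two strength levels are well separated while the two gesture types overlap, so `acc_within` exceeds `acc_between`; the point is only that averaging over both regimes, as the abstract notes much prior work does, would hide that difference. (Accuracy is measured on the training samples here, which a real evaluation would not do.)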

