Hands-Free Human-Robot Interaction Using Multimodal Gestures and Deep Learning in Wearable Mixed Reality

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Kyeong-Beom Park ◽  
Sung Ho Choi ◽  
Jae Yeol Lee ◽  
Yalda Ghasemi ◽  
Mustafa Mohammed ◽  
...  

Author(s):  
Soo-Han Kang ◽  
Ji-Hyeong Han

Abstract
Robot vision provides the most important information to robots so that they can read the context and interact with human partners successfully. Moreover, the best way to let humans recognize the robot's visual understanding during human-robot interaction (HRI) is for the robot to explain its understanding in natural language. In this paper, we propose a new approach to interpreting robot vision from an egocentric standpoint and generating descriptions that explain egocentric videos, particularly for HRI. Because robot vision corresponds to egocentric video on the robot's side, it contains exocentric view information as well as egocentric view information. We therefore propose a new dataset, referred to as the global, action, and interaction (GAI) dataset, which consists of egocentric video clips and natural-language GAI descriptions representing both egocentric and exocentric information. An encoder-decoder-based deep learning model is trained on the GAI dataset, and its performance on description-generation assessments is evaluated. We also conduct experiments in actual environments to verify whether the GAI dataset and the trained deep learning model can improve a robot vision system.
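The abstract describes an encoder-decoder model that maps egocentric video clips to natural-language descriptions, but it does not detail the architecture. Below is a minimal PyTorch sketch of such a video-to-text encoder-decoder, assuming pre-extracted per-frame CNN features and GRU layers on both sides; the class name, layer sizes, and vocabulary size are illustrative assumptions, not the GAI model's actual configuration.

```python
# Minimal encoder-decoder sketch for video description generation.
# Assumptions (not from the paper): pre-extracted 2048-d frame features,
# GRU encoder/decoder, a 5000-token vocabulary, teacher-forced training.
import torch
import torch.nn as nn


class VideoCaptioner(nn.Module):
    def __init__(self, feat_dim=2048, hidden_dim=512, vocab_size=5000, embed_dim=256):
        super().__init__()
        # Encoder: summarize the sequence of frame features into a hidden state.
        self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Decoder: generate the description token by token, conditioned on the video.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, frame_feats, captions):
        # frame_feats: (batch, n_frames, feat_dim); captions: (batch, seq_len) token ids
        _, h = self.encoder(frame_feats)      # h: (1, batch, hidden_dim) video summary
        emb = self.embed(captions)            # (batch, seq_len, embed_dim)
        dec_out, _ = self.decoder(emb, h)     # decoder initialized with video summary
        return self.out(dec_out)              # (batch, seq_len, vocab_size) logits


# Usage: one teacher-forced training step on dummy tensors.
model = VideoCaptioner()
feats = torch.randn(4, 16, 2048)              # 4 clips, 16 frames each
tokens = torch.randint(0, 5000, (4, 12))      # 12-token descriptions
logits = model(feats, tokens[:, :-1])         # predict next token at each step
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 5000), tokens[:, 1:].reshape(-1)
)
```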


2019 ◽  
Vol 14 (1) ◽  
pp. 22-30
Author(s):  
Dongkeon Park ◽  
Kyeong-Min Kang ◽  
Jin-Woo Bae ◽  
Ji-Hyeong Han

Author(s):  
Mikhail Ostanin ◽  
Stanislav Mikhel ◽  
Alexey Evlampiev ◽  
Valeria Skvortsova ◽  
Alexandr Klimchik

2020 ◽  
Vol 53 (5) ◽  
pp. 750-755
Author(s):  
Lei Shi ◽  
Cosmin Copot ◽  
Steve Vanlanduit
