Human Pointing Navigation Interface for Mobile Robot with Spherical Vision System

Author(s):
Yasutake Takahashi
Kyohei Yoshida
Fuminori Hibino
Yoichiro Maeda

Human-robot interaction requires an intuitive interface, which devices such as joysticks or teaching pendants cannot provide, since they require training. Instruction by gesture is one example of an intuitive interface requiring no training, and pointing is one of the simplest gestures. We propose simple pointing recognition for a mobile robot with an upward-directed camera system. Using this, the robot recognizes pointing and navigates to where the user points through simple visual feedback control. This paper explores the feasibility and utility of our proposal, as shown by the results of a questionnaire comparing the proposed and conventional interfaces.
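Although the abstract does not give the control law, the "simple visual feedback control" it mentions can be illustrated with a minimal sketch: a proportional controller that steers a differential-drive robot toward the pointed-at goal recovered from the camera image. All gains, speeds, and helper names below are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: proportional visual-feedback controller steering a
# differential-drive robot toward a pointed-at floor target. The pointing
# detector and motor interface are hypothetical placeholders; the paper's
# actual recognition pipeline is not reproduced here.
import math

K_ANGULAR = 1.2     # proportional gain on bearing error (assumed)
V_FORWARD = 0.15    # nominal forward speed in m/s (assumed)
STOP_RADIUS = 0.10  # stop within 10 cm of the goal (assumed)

def visual_feedback_step(target_bearing_rad, target_distance_m):
    """One control step: turn toward the pointed target, drive forward.

    target_bearing_rad: angle of the goal relative to the robot heading,
        as recovered from the upward-directed camera image.
    target_distance_m: estimated ground distance to the goal.
    Returns (linear_velocity, angular_velocity).
    """
    if target_distance_m < STOP_RADIUS:
        return 0.0, 0.0                      # goal reached
    w = K_ANGULAR * target_bearing_rad       # steer to null the bearing error
    # Slow down when the goal is far off-axis so the robot turns in place first.
    v = V_FORWARD * max(0.0, math.cos(target_bearing_rad))
    return v, w

# Example: goal 30 degrees to the left, 1.5 m away
v, w = visual_feedback_step(math.radians(30), 1.5)
print(f"v={v:.3f} m/s, w={w:.3f} rad/s")
```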

2017
Vol 2017
pp. 1-16
Author(s):
Enrique Fernández-Rodicio
Víctor González-Pacheco
José Carlos Castillo
Álvaro Castro-González
María Malfaz
...

Projectors have become a widespread tool for sharing information with large groups of people in Human-Robot Interaction in a comfortable way. Finding a suitable vertical surface becomes a problem when the projector changes position, as happens when a mobile robot searches for surfaces to project on. Two problems must be addressed to achieve a correct, undistorted image: (i) finding the largest suitable surface free from obstacles and (ii) adapting the output image to correct the distortion caused by the angle between the robot and a nonorthogonal surface. We propose a RANSAC-based method that detects a vertical plane inside a point cloud. Then, inside this plane, we apply a rectangle-fitting algorithm over the region in which the projector can work. Finally, the algorithm checks the surface for imperfections and occlusions and transforms the original image using a homography matrix to display it over the detected area. The proposed solution can detect projection areas in real time using a single Kinect camera, which makes it suitable for applications where a robot interacts with people in unknown environments. Our Projection Surfaces Detector and Image Correction module allow a mobile robot to find the right surface and display images without deformation, improving its ability to interact with people.
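As a rough illustration of the two core steps named above, here is a hedged sketch in Python: a small hand-rolled RANSAC that fits a near-vertical plane to a point cloud, followed by an OpenCV homography pre-warp of the output image. The thresholds, the synthetic test cloud, and the destination quadrilateral are all assumptions; the authors' rectangle fitting and occlusion checks are not reproduced.

```python
# Hedged sketch of the pipeline's two core steps, not the authors' code:
# (i) RANSAC fit of a near-vertical plane in a point cloud, and
# (ii) homography pre-warp so the projected image lands undistorted.
import numpy as np
import cv2

def ransac_vertical_plane(points, iters=500, dist_thresh=0.02, max_tilt_deg=15):
    """Fit a plane n.x + d = 0 to 3D points, keeping only near-vertical
    planes (normal roughly horizontal, i.e. small z-component)."""
    best_inliers, best_model = None, None
    rng = np.random.default_rng(0)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue
        n /= norm
        if abs(n[2]) > np.sin(np.radians(max_tilt_deg)):
            continue                          # reject floors and ceilings
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

# Synthetic cloud: a wall at y = 2 m plus random clutter (assumed test data).
wall = np.column_stack([np.random.uniform(-1, 1, 2000),
                        np.full(2000, 2.0) + np.random.normal(0, 0.005, 2000),
                        np.random.uniform(0, 2, 2000)])
clutter = np.random.uniform([-1, 0, 0], [1, 3, 2], (500, 3))
model, inliers = ransac_vertical_plane(np.vstack([wall, clutter]))
print("plane normal:", np.round(model[0], 2), "inliers:", int(inliers.sum()))

# (ii) Pre-warp: map the image corners onto the quadrilateral the detected
# surface occupies in projector coordinates (corner values assumed).
img = np.zeros((480, 640, 3), np.uint8)
src = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]])
dst = np.float32([[40, 20], [600, 60], [580, 460], [60, 430]])
H = cv2.getPerspectiveTransform(src, dst)
corrected = cv2.warpPerspective(img, H, (640, 480))
```

Rejecting candidate planes whose normal has a large vertical component is what keeps floors and ceilings out of the result, mirroring the paper's restriction to vertical projection surfaces.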


Author(s):
Soo-Han Kang
Ji-Hyeong Han

Robot vision provides the most important information to robots so that they can read context and interact with human partners successfully. Moreover, to allow humans to recognize the robot's visual understanding during human-robot interaction (HRI), the best way is for the robot to explain its understanding in natural language. In this paper, we propose a new approach to interpret robot vision from an egocentric standpoint and generate descriptions that explain egocentric videos, particularly for HRI. Because robot vision is equivalent to egocentric video on the robot's side, it contains as much egocentric-view information as exocentric-view information. Thus, we propose a new dataset, referred to as the global, action, and interaction (GAI) dataset, which consists of egocentric video clips and natural-language GAI descriptions representing both egocentric and exocentric information. An encoder-decoder-based deep learning model is trained on the GAI dataset, and its performance on description generation is evaluated. We also conduct experiments in real environments to verify whether the GAI dataset and the trained deep learning model can improve a robot vision system.
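The abstract does not give the architecture, but a generic encoder-decoder captioner of the kind described can be sketched in a few lines of PyTorch: an LSTM encoder summarizes per-frame features, and an LSTM decoder conditioned on that summary emits description tokens. Feature and vocabulary sizes, layer counts, and the dummy inputs below are assumptions, not the authors' exact model.

```python
# Hedged sketch of a generic encoder-decoder video captioner; dimensions
# and single-layer LSTMs are assumptions, not the GAI paper's model.
import torch
import torch.nn as nn

class VideoCaptioner(nn.Module):
    def __init__(self, feat_dim=2048, hidden=512, vocab=5000, emb=256):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)  # encode frame features
        self.embed = nn.Embedding(vocab, emb)                       # token embeddings
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)       # generate description
        self.out = nn.Linear(hidden, vocab)

    def forward(self, frame_feats, captions):
        # frame_feats: (B, T_frames, feat_dim); captions: (B, T_words) token ids
        _, state = self.encoder(frame_feats)        # video summary in final LSTM state
        words = self.embed(captions)
        dec_out, _ = self.decoder(words, state)     # decoder conditioned on the video
        return self.out(dec_out)                    # (B, T_words, vocab) logits

model = VideoCaptioner()
feats = torch.randn(2, 16, 2048)        # 2 clips x 16 frames of CNN features (dummy)
caps = torch.randint(0, 5000, (2, 12))  # 2 dummy captions of 12 tokens
logits = model(feats, caps)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 5000), caps.reshape(-1))
print(logits.shape, float(loss))
```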


Robotics
2010
Author(s):
N. Elkmann
E. Schulenburg
M. Fritzsche

2009
Vol 21 (6)
pp. 739-748
Author(s):
Albert Causo
Etsuko Ueda
Kentaro Takemura
Yoshio Matsumoto
...

Hand pose estimation using a multi-camera system allows natural non-contact interfacing, unlike bulky data gloves. To enable any user to use the system regardless of gender or physical differences such as hand size, we propose hand model individualization using only multiple cameras. From the calibration motion, our method estimates the finger link lengths as well as the hand shape by minimizing the gap between the hand model and the observation. We confirmed the feasibility of our proposal by comparing 1) actual and estimated link lengths and 2) hand pose estimation results using our calibrated hand model, a prior hand model, and data obtained from data glove measurements.
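The individualization step described above is, at its core, a nonlinear least-squares problem: find the link lengths that minimize the gap between the kinematic hand model and the multi-camera observations over the calibration motion. The sketch below illustrates this with an assumed planar two-link finger and synthetic observations via scipy.optimize.least_squares; it is not the authors' full hand-shape model.

```python
# Hedged sketch of model individualization as nonlinear least squares:
# estimate finger link lengths that best explain observed fingertip
# positions across calibration frames. The 2-link planar finger and
# synthetic observations are illustrative assumptions only.
import numpy as np
from scipy.optimize import least_squares

def fingertip(lengths, angles):
    """Forward kinematics of a planar 2-link finger: joint angles -> tip (x, y)."""
    l1, l2 = lengths
    t1, t2 = angles[..., 0], angles[..., 1]
    x = l1 * np.cos(t1) + l2 * np.cos(t1 + t2)
    y = l1 * np.sin(t1) + l2 * np.sin(t1 + t2)
    return np.stack([x, y], axis=-1)

# Synthetic "calibration motion": true lengths 40 mm and 25 mm (assumed),
# noisy tip positions standing in for multi-camera measurements.
rng = np.random.default_rng(1)
true_lengths = np.array([40.0, 25.0])
angles = rng.uniform(0, np.pi / 2, (50, 2))   # 50 calibration frames
observed = fingertip(true_lengths, angles) + rng.normal(0, 0.5, (50, 2))

def residuals(lengths):
    # Gap between the parameterized hand model and the observations
    return (fingertip(lengths, angles) - observed).ravel()

fit = least_squares(residuals, x0=np.array([30.0, 30.0]), bounds=(5.0, 100.0))
print("estimated link lengths (mm):", np.round(fit.x, 1))
```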

