Towards better eye tracking in human robot interaction using an affordable active vision system

Author(s):  
Oskar Palinko ◽  
Alessandra Sciutti ◽  
Francesco Rea ◽  
Giulio Sandini
2012 ◽  
Vol 09 (03) ◽  
pp. 1250024 ◽  
Author(s):  
Martin Hülse ◽  
Sebastian McBride ◽  
Mark Lee

Eye fixation and gaze fixation patterns play an important part when humans interact with each other. Moreover, human gaze fixation patterns are strongly determined by the task being performed. Our assumption is that meaningful human–robot interaction with robots that have active vision components (such as humanoids) is greatly supported if the robot system can create task-modulated fixation patterns. We present an architecture for a robot active vision system equipped with one manipulator, in which we demonstrate the generation of task-modulated gaze control, meaning that fixation patterns are in accordance with the specific task the robot has to perform. Experiments demonstrate different strategies of multi-modal task modulation for robotic active vision, where visual and non-visual features (tactile feedback) determine gaze fixation patterns. The results are discussed in comparison with purely saliency-based strategies for visual attention and gaze control. The major advantage of our approach to multi-modal task modulation is that the active vision system can generate, first, active avoidance of objects and, second, active engagement with objects. Such behaviors cannot be generated by current approaches to visual attention based on saliency models alone, but they are important for mimicking human-like gaze fixation patterns.
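As a rough illustration of how such task modulation could sit on top of a saliency model, consider the sketch below. It is not the authors' implementation; the additive combination rule, the tactile gain, and all names are assumptions. A signed task-weight map biases the bottom-up saliency map, so positive weights produce active engagement with an object and negative weights produce active avoidance, with tactile feedback scaling the bias:

```python
import numpy as np

def select_fixation(saliency_map, task_weight_map, tactile_gain=0.0):
    """Pick the next gaze target from a task-biased saliency map.

    saliency_map    : HxW array of bottom-up visual conspicuity
    task_weight_map : HxW array; positive values boost task-relevant
                      regions (engagement), negative values suppress
                      them (avoidance)
    tactile_gain    : extra weight applied when tactile contact is
                      reported, shifting attention to the touched region
    """
    combined = saliency_map + (1.0 + tactile_gain) * task_weight_map
    # Winner-take-all: the peak of the combined map is fixated next
    y, x = np.unravel_index(np.argmax(combined), combined.shape)
    return x, y
```

A pure saliency model corresponds to task_weight_map being all zeros; avoidance behaviour only becomes expressible once negative task weights are allowed.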


Robotics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 54
Author(s):  
Lorenzo Scalera ◽  
Stefano Seriani ◽  
Paolo Gallina ◽  
Mattia Lentini ◽  
Alessandro Gasparetto

In this paper, the authors present a novel architecture for controlling an industrial robot via an eye-tracking interface for artistic purposes. Humans and robots interact through an acquisition system based on an eye tracker device that allows the user to control the motion of a robotic manipulator with their gaze. The feasibility of the robotic system is evaluated in experimental tests in which the robot is teleoperated to draw artistic images. The tool can be used by artists to investigate novel forms of art, and by amputees or people with movement disorders or muscular paralysis as an assistive technology for artistic drawing and painting, since in these cases eye motion is usually preserved.
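A minimal sketch of how such a gaze-driven drawing interface might map fixations to manipulator targets follows. The calibration constants, the linear mapping, and the dwell-time filter are assumptions for illustration, not the authors' published pipeline:

```python
import numpy as np

SCREEN_W, SCREEN_H = 1920, 1080   # assumed eye tracker display resolution
PLANE_W, PLANE_H = 0.40, 0.30     # assumed reachable drawing area (metres)

def gaze_to_workspace(gx, gy):
    """Linearly map a gaze point on screen to a target on the canvas."""
    tx = (gx / SCREEN_W) * PLANE_W
    ty = (1.0 - gy / SCREEN_H) * PLANE_H   # flip y: screen-down is plane-up
    return tx, ty

def dwell_target(samples, radius_px=40.0, min_samples=30):
    """Accept a drawing target only after the gaze dwells in one spot,
    filtering out saccades and unintended fixations."""
    if len(samples) < min_samples:
        return None
    pts = np.asarray(samples[-min_samples:], dtype=float)
    centre = pts.mean(axis=0)
    if np.all(np.linalg.norm(pts - centre, axis=1) < radius_px):
        return gaze_to_workspace(*centre)
    return None
```

The dwell filter is one common way to address the "Midas touch" problem of gaze interfaces, where every glance would otherwise become a command.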


Author(s):  
Soo-Han Kang ◽  
Ji-Hyeong Han

Robot vision provides the most important information to robots, allowing them to read the context and interact with human partners successfully. Moreover, the best way to allow humans to recognize the robot's visual understanding during human-robot interaction (HRI) is for the robot to explain that understanding in natural language. In this paper, we propose a new approach to interpreting robot vision from an egocentric standpoint and generating descriptions that explain egocentric videos, particularly for HRI. Because robot vision corresponds to egocentric video on the robot's side, it contains exocentric-view information as well as egocentric-view information. We therefore propose a new dataset, referred to as the global, action, and interaction (GAI) dataset, which consists of egocentric video clips and GAI descriptions in natural language representing both egocentric and exocentric information. An encoder-decoder-based deep learning model is trained on the GAI dataset, and its performance on description generation is evaluated. We also conduct experiments in real environments to verify whether the GAI dataset and the trained deep learning model can improve a robot vision system.
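The abstract names an encoder-decoder deep learning model without detailing its layers; the PyTorch sketch below is a generic stand-in for such a video-description architecture. The dimensions, the LSTM choice, and the use of pre-extracted per-frame CNN features are assumptions, not the paper's reported configuration:

```python
import torch.nn as nn

class VideoCaptioner(nn.Module):
    """Generic encoder-decoder: frame features -> LSTM encoder ->
    LSTM decoder over a word vocabulary."""
    def __init__(self, feat_dim=2048, hidden=512, vocab=10000):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab, hidden)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, frame_feats, captions):
        # frame_feats: (B, T, feat_dim) pre-extracted CNN features per frame
        _, (h, c) = self.encoder(frame_feats)
        # Condition the decoder on the video via the encoder's final state
        dec_out, _ = self.decoder(self.embed(captions), (h, c))
        return self.out(dec_out)   # (B, L, vocab) next-word logits
```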


Author(s):  
Oliver Damm ◽  
Karoline Malchus ◽  
Petra Jaecks ◽  
Soeren Krach ◽  
Frieder Paulus ◽  
...  

Author(s):  
Yasutake Takahashi ◽  
Kyohei Yoshida ◽  
Fuminori Hibino ◽  
Yoichiro Maeda

Human-robot interaction requires an intuitive interface that is not achievable with devices such as the joystick or teaching pendant, which also require some training. Instruction by gesture is one example of an intuitive interface requiring no training, and pointing is one of the simplest gestures. We propose simple pointing recognition for a mobile robot with an upward-directed camera system. Using this, the robot recognizes pointing and navigates to where the user points through simple visual feedback control. This paper explores the feasibility and utility of our proposal, as shown by the results of a questionnaire comparing the proposed and conventional interfaces.
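The "simple visual feedback control" mentioned suggests something like a proportional controller on the image-plane error; the sketch below is an assumed minimal version (the gains, image resolution, and the pointing-detection step that produces target_px are all hypothetical):

```python
def visual_servo_step(target_px, image_centre=(160, 120),
                      k_turn=0.005, v_forward=0.15):
    """One control step: turn so the pointed-at target drifts toward
    the image centre of the upward-directed camera, while advancing.

    target_px : (u, v) pixel position of the detected pointing target
    returns   : (linear_velocity_m_s, angular_velocity_rad_s) command
    """
    u_error = target_px[0] - image_centre[0]
    omega = -k_turn * u_error      # proportional heading correction
    return v_forward, omega
```

Run in a loop against fresh camera frames, the intent is that the robot homes in on the pointed-at location without an explicit map or path planner.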


Author(s):  
Marie D. Manner

We describe experiments performed with a large number of preschool children (ages 1.5 to 4 years) in a two-task eye-tracking experiment and a human-robot interaction experiment. The resulting data, from mostly neurotypical children, form a baseline against which to compare children with autism, allowing us to further characterize the autism phenotype. Eye-tracking results indicate a strong preference for a humanoid robot and a social being (a four-year-old girl) over other robot types. Results from the human-robot interaction task, a semi-structured play interaction between child and robot, showed that we can cluster participants based on social distances and other social responsiveness metrics.
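As an illustration of how such clustering might be done (the feature set, the numbers, and the choice of k-means are assumptions, not the study's actual analysis):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-participant features: mean child-robot distance (m),
# fraction of trials with eye contact, and touch count during play.
features = np.array([
    [0.8, 0.65, 12],
    [1.6, 0.20,  1],
    [0.9, 0.70, 15],
    [1.4, 0.25,  3],
])

# Standardise so metres do not dominate counts, then cluster
z = (features - features.mean(axis=0)) / features.std(axis=0)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(z)
print(labels)   # e.g. [0 1 0 1]: higher- vs lower-engagement groups
```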

