Projection Surfaces Detection and Image Correction for Mobile Robots in HRI

2017, Vol 2017, pp. 1-16
Author(s): Enrique Fernández-Rodicio, Víctor González-Pacheco, José Carlos Castillo, Álvaro Castro-González, María Malfaz, et al.

Projectors have become a widespread tool for sharing information with large groups of people in Human-Robot Interaction in a comfortable way. Finding a suitable vertical surface becomes a problem when the projector changes position, as when a mobile robot searches for suitable surfaces to project on. Two problems must be addressed to achieve a correct, undistorted image: (i) finding the biggest suitable surface free from obstacles and (ii) adapting the output image to correct the distortion due to the angle between the robot and a nonorthogonal surface. We propose a RANSAC-based method that detects a vertical plane inside a point cloud. Then, inside this plane, we apply a rectangle-fitting algorithm over the region in which the projector can work. Finally, the algorithm checks the surface for imperfections and occlusions and transforms the original image using a homography matrix to display it over the detected area. The proposed solution can detect projection areas in real time using a single Kinect camera, which makes it suitable for applications where a robot interacts with people in unknown environments. Our Projection Surfaces Detector and Image Correction module allow a mobile robot to find the right surface and display images without deformation, improving its ability to interact with people.
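As a rough illustration of the two stages above, the sketch below pairs Open3D's RANSAC plane segmentation with OpenCV's homography utilities. It is a minimal sketch under stated assumptions, not the authors' implementation: the thresholds, the choice of y as the vertical axis, and the projected corner coordinates are all hypothetical.

import numpy as np
import open3d as o3d
import cv2

def find_vertical_plane(cloud, max_tilt=0.15):
    # RANSAC plane fit; keep the plane only if it is near-vertical,
    # assuming the camera's y axis is the vertical one (hypothetical).
    model, inliers = cloud.segment_plane(distance_threshold=0.02,
                                         ransac_n=3,
                                         num_iterations=1000)
    a, b, c, d = model
    normal = np.array([a, b, c]) / np.linalg.norm([a, b, c])
    if abs(normal[1]) > max_tilt:   # wall normals have ~no vertical part
        return None, None
    return model, cloud.select_by_index(inliers)

def correct_image(image, projected_corners):
    # Map the image corners onto the detected rectangle so the display
    # appears undistorted; projected_corners is a hypothetical 4x2 array
    # of where the corners should land in the projector's frame.
    h, w = image.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(projected_corners)
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, H, (w, h))

In a full pipeline, the rectangle-fitting and occlusion checks described above would run between these two steps to choose the corners passed to correct_image.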

2019, Vol 374 (1771), pp. 20180036
Author(s): Cesco Willemse, Agnieszka Wykowska

Initiating joint attention by leading someone's gaze is a rewarding experience which facilitates social interaction. Here, we investigate this experience of leading an agent's gaze while applying a more realistic paradigm than traditional screen-based experiments. We used an embodied robot as our main stimulus and recorded participants' eye movements. Participants sat opposite a robot that had either of two ‘identities’—‘Jimmy’ or ‘Dylan’. Participants were asked to look at either of two objects presented on screens to the left and the right of the robot. Jimmy then looked at the same object in 80% of the trials and at the other object in the remaining 20%. For Dylan, this proportion was reversed. Upon fixating on the object of choice, participants were asked to look back at the robot's face. We found that return-to-face saccades were conducted earlier towards Jimmy when he followed the gaze compared with when he did not. For Dylan, there was no such effect. Additional measures indicated that our participants also preferred Jimmy and liked him better. This study demonstrates (a) the potential of technological advances to examine joint attention where ecological validity meets experimental control, and (b) that social reorienting is enhanced when we initiate joint attention. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.
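For concreteness, the gaze-contingency manipulation reduces to a simple trial schedule. The fragment below is only a sketch of that 80/20 design; the trial count, shuffling scheme, and labels are assumptions rather than the authors' protocol.

import random

def make_schedule(identity, n_trials=100, seed=0):
    # 'Jimmy' follows the participant's gaze on 80% of trials,
    # 'Dylan' on only 20% (proportions from the abstract).
    follow_rate = 0.8 if identity == 'Jimmy' else 0.2
    n_follow = round(n_trials * follow_rate)
    trials = ['follow'] * n_follow + ['not_follow'] * (n_trials - n_follow)
    random.Random(seed).shuffle(trials)
    return trials

print(make_schedule('Jimmy').count('follow'))  # 80 of 100 trials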


Robotics, 2010
Author(s): N. Elkmann, E. Schulenburg, M. Fritzsche

Robotica, 2014, Vol 33 (1), pp. 1-18
Author(s): Alberto Poncela, Leticia Gallardo-Estrella

SUMMARY: Verbal communication is the most natural form of human-robot interaction. Such interaction is usually achieved by means of a human-robot interface (HRI). In this paper, an HRI is presented to teleoperate a robotic platform via the user's voice; hence, a speech recognition system is necessary. In this work, a user-dependent acoustic model for Spanish speakers has been developed to teleoperate a robot with a set of commands. Experimental results have been successful, both in terms of a high recognition rate and the navigation of the robot under the control of the user's voice.
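To make command-based teleoperation concrete, here is a minimal sketch of the dispatch step that would sit downstream of the recognizer. The Spanish vocabulary, velocity values, and send_velocity callback are hypothetical; the paper's HRI is built on its own user-dependent acoustic model.

COMMANDS = {
    'avanza':    (0.3, 0.0),    # forward: (linear m/s, angular rad/s)
    'retrocede': (-0.2, 0.0),   # backward
    'izquierda': (0.0, 0.5),    # turn left
    'derecha':   (0.0, -0.5),   # turn right
    'para':      (0.0, 0.0),    # stop
}

def on_recognized(word, send_velocity):
    # Dispatch a recognized word to the robot; ignore unknown words.
    if word in COMMANDS:
        linear, angular = COMMANDS[word]
        send_velocity(linear, angular)

# Example: print instead of driving a real platform.
on_recognized('izquierda', lambda v, w: print(f'v={v} m/s, w={w} rad/s'))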


2007, Vol 8 (3), pp. 363-390
Author(s): Peter H. Kahn, Hiroshi Ishiguro, Batya Friedman, Takayuki Kanda, Nathan G. Freier, et al.

In this paper, we move toward offering psychological benchmarks to measure success in building increasingly humanlike robots. By psychological benchmarks we mean categories of interaction that capture conceptually fundamental aspects of human life, specified abstractly enough to resist their identity as a mere psychological instrument, but capable of being translated into testable empirical propositions. Nine possible benchmarks are considered: autonomy, imitation, intrinsic moral value, moral accountability, privacy, reciprocity, conventionality, creativity, and authenticity of relation. Finally, we discuss how getting the right group of benchmarks in human–robot interaction will, in future years, help inform on the foundational question of what constitutes essential features of being human.


Author(s): Yasutake Takahashi, Kyohei Yoshida, Fuminori Hibino, Yoichiro Maeda

Human-robot interaction requires an intuitive interface, which is not possible with devices such as the joystick or teaching pendant, which also require some training. Instruction by gesture is one example of an intuitive interface requiring no training, and pointing is one of the simplest gestures. We propose simple pointing recognition for a mobile robot equipped with an upward-directed camera system. Using this, the robot recognizes pointing and navigates to where the user points through simple visual feedback control. This paper explores the feasibility and utility of our proposal, as shown by the results of a questionnaire comparing the proposed and conventional interfaces.
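The visual feedback control idea can be sketched as a simple proportional law: rotate until the pointed-at target sits at the image centre, then drive forward. The gain, deadband, and pixel input below are assumptions; in the paper, the target comes from the pointing recognizer on the upward-directed camera.

def visual_feedback_step(target_x, image_width, k_turn=0.004,
                         deadband_px=10, forward_speed=0.2):
    # Return (linear, angular) velocity from the target's pixel column.
    error = target_x - image_width / 2   # positive: target right of centre
    if abs(error) > deadband_px:
        return 0.0, -k_turn * error      # rotate toward the target
    return forward_speed, 0.0            # aligned: move forward

print(visual_feedback_step(420, 640))    # target right of centre -> turn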


Sensors, 2019, Vol 19 (20), pp. 4586
Author(s): Chunxu Li, Ashraf Fahmy, Johann Sienz

In this paper, an application of Augmented Reality (AR) for the control and adjustment of robots has been developed, with the aim of making interaction with and adjustment of robots easier and more accurate from a remote location. A LeapMotion sensor-based controller has been investigated to track the movement of the operator's hands. The data from the controller allow gestures and the position of the palm's central point to be detected and tracked. A Kinect V2 camera measures the corresponding motion velocities in the x, y, and z directions after our post-processing algorithm is applied. Unreal Engine 4 is used to create an AR environment in which the user can monitor the control process immersively. A Kalman filtering (KF) algorithm is employed to fuse the position signals from the LeapMotion sensor with the velocity signals from the Kinect camera. The fused, optimal data are sent to teleoperate a Baxter robot in real time over the User Datagram Protocol (UDP). Several experiments have been conducted to validate the proposed method.
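The fusion step can be illustrated with a toy one-axis Kalman filter whose measurement vector stacks a LeapMotion position reading and a Kinect velocity reading. This is a minimal sketch only: the sensor rate and all noise covariances below are assumed, not taken from the paper, and the real system would run one such filter per axis before packing the estimates into UDP datagrams for the Baxter controller.

import numpy as np

dt = 1 / 30.0                      # assumed common sensor rate
F = np.array([[1, dt], [0, 1]])    # constant-velocity model: [pos, vel]
H = np.eye(2)                      # we measure both position and velocity
Q = np.diag([1e-4, 1e-3])          # process noise (assumed)
R = np.diag([4e-4, 9e-4])          # LeapMotion / Kinect noise (assumed)

def kf_step(x, P, leap_pos, kinect_vel):
    # Predict with the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the stacked measurement.
    z = np.array([leap_pos, kinect_vel])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = kf_step(np.zeros(2), np.eye(2), leap_pos=0.10, kinect_vel=0.05)
print(x)                           # fused [position, velocity] for one axis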

