Advanced human-robot interaction system based on human communication recognition

Author(s):  
Adrian Bulzacki

This work describes the implementation of various human-robot interaction systems in a functioning mobile robot. The project integrates a tracking system for human faces and objects, face recognition, gesture recognition, body tracking, stereo vision, speech synthesis, and voice recognition. The majority of these systems are custom designed for this particular project, and their design is explained in detail throughout this report. A unique vector-based approach is used for gesture recognition. The focus is not on the mechanics and electronics of the human-robot interaction system but on the robot's information processing. Combining many information processing systems will allow robots to interact with human users more naturally and will provide a natural conduit for future cooperative human-robot efforts. This project lays the groundwork for what will be a large collaborative effort aimed at creating possibly one of the most advanced human-interactive robots in the world.
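The abstract does not detail the vector-based gesture approach, so the following is a purely illustrative sketch of one generic vector-based scheme, not the author's actual method: a gesture trajectory is reduced to unit direction vectors between consecutive positions and matched against labeled templates by mean cosine similarity. All names (`direction_vectors`, `match_gesture`) and the template-matching scheme are assumptions for illustration.

```python
import numpy as np

def direction_vectors(trajectory):
    """Unit direction vectors between consecutive points of a (T, D) trajectory."""
    d = np.diff(trajectory, axis=0)
    norms = np.linalg.norm(d, axis=1, keepdims=True)
    # Guard against zero-length steps to avoid division by zero.
    return d / np.where(norms == 0, 1.0, norms)

def match_gesture(trajectory, templates):
    """Return the template name whose directions best match the trajectory.

    Score = mean cosine similarity of aligned unit direction vectors.
    """
    v = direction_vectors(trajectory)
    best_name, best_score = None, -np.inf
    for name, tmpl in templates.items():
        t = direction_vectors(tmpl)
        n = min(len(v), len(t))
        score = np.mean(np.sum(v[:n] * t[:n], axis=1))
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

Because direction vectors are normalized, this representation is invariant to the gesture's absolute position and overall speed, which is one reason vector encodings are attractive for gesture recognition.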

2021 ◽  


2019 ◽  
Vol 2019 ◽  
pp. 1-12
Author(s):  
José Carlos Castillo ◽  
Fernando Alonso-Martín ◽  
David Cáceres-Domínguez ◽  
María Malfaz ◽  
Miguel A. Salichs

Human communication relies on several aspects beyond speech. One of them is gesture: gestures express intentions, interests, feelings, or ideas and complement speech. Social robots need to interpret these messages to allow more natural human-robot interaction. In this sense, our aim is to study the effect of position and speed features in dynamic gesture recognition. We use 3D information to extract the user's skeleton and calculate the normalized position of all of its joints, and from the temporal variation of these positions we calculate their speeds. Our three datasets comprise 1355 samples from 30 users. We consider 14 gestures common in HRI that involve upper-body movements. A set of classification techniques is evaluated on these three datasets to find which features perform best. Results indicate that the union of speed and position achieves the best results among the three possibilities, with an F-score of 0.999. The best-performing combination for detecting dynamic gestures in real time is finally integrated into our social robot with a simple HRI application to run a proof-of-concept test and check how the proposal behaves in a realistic scenario.


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Partha Chakraborty ◽  
Sabbir Ahmed ◽  
Mohammad Abu Yousuf ◽  
Akm Azad ◽  
Salem A. Alyami ◽  
...  

10.5772/60416 ◽  
2015 ◽  
Vol 12 (5) ◽  
pp. 57 ◽  
Author(s):  
Ludovico Orlando Russo ◽  
Giuseppe Airò Farulla ◽  
Daniele Pianu ◽  
Alice Rita Salgarella ◽  
Marco Controzzi ◽  
...  

Author(s):  
Stefan Schiffer ◽  
Alexander Ferrein

In this work we report on our effort to design and implement an early introduction to basic robotics principles for children of kindergarten age. The humanoid robot Pepper, a great platform for human-robot interaction experiments, presented the lecture by reading out the contents to the children using its speech synthesis capability. One of the main challenges of this effort was to explain complex robotics content in a way that pre-school children could follow the basic principles and ideas, using examples from their world of experience. A quiz in a Runaround-game-show style after the lecture prompted the children to recap the content they had acquired about how mobile robots work in principle. Besides the thrill of being exposed to a mobile robot that would also react to them, the children were very excited and at the same time very concentrated. What sets our effort apart from other work is that part of the lecturing is actually done by a robot itself and that the quiz at the end of the lesson is done using robots as well. To the best of our knowledge, this is one of only a few attempts to use Pepper not as a tele-teaching tool but as the teacher itself in order to engage pre-school children with complex robotics content. We received very positive feedback from the children as well as from their educators.

