Image Processing Methods for Gesture-Based Robot Control

Author(s):  
Abdelouahab Zaatri ◽  
Hamama Aboud

Abstract In this paper we discuss some image processing methods that can be used for motion recognition of human body parts, such as hands or arms, in order to interact with robots. This interaction is usually associated with gesture-based control. The considered image processing methods have been evaluated for feature recognition in applications involving human-robot interaction. They are: the Sequential Similarity Detection Algorithm (SSDA), an appearance-based approach that uses image databases to model objects, and the Kanade-Lucas-Tomasi (KLT) algorithm, which is usually used for feature tracking. We illustrate gesture-based interaction using the KLT algorithm, and we discuss the adaptation of each of these methods to the context of gesture-based robot interaction along with some of their related issues.
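As an illustration of how the KLT algorithm can serve as the tracking front end of such a gesture interface, the sketch below tracks Shi-Tomasi features across webcam frames with OpenCV's pyramidal Lucas-Kanade routine and reduces their displacement to a crude motion vector. The camera source, parameter values, and the mapping of the flow to a robot command are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of KLT feature tracking for gesture input, assuming OpenCV
# and a webcam; parameters and the command mapping are illustrative only.
import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Detect good features to track (Shi-Tomasi corners) in the first frame.
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                 qualityLevel=0.3, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok or points is None or len(points) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Track the features with the pyramidal Lucas-Kanade (KLT) algorithm.
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    good_new = new_points[status.flatten() == 1]
    good_old = points[status.flatten() == 1]

    # The mean feature displacement gives a crude motion vector that could be
    # mapped to a robot command (e.g., move left/right) in a gesture interface.
    if len(good_new) > 0:
        motion = (good_new - good_old).reshape(-1, 2).mean(axis=0)
        print("mean flow:", motion)

    prev_gray, points = gray, good_new.reshape(-1, 1, 2)

cap.release()
```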

Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6529
Author(s):  
Masaya Iwasaki ◽  
Mizuki Ikeda ◽  
Tatsuyuki Kawamura ◽  
Hideyuki Nakanishi

Robotic salespeople are often ignored by people due to their weak social presence, and thus have difficulty facilitating sales autonomously. However, robots that are remotely controlled by humans require experienced and trained operators. In this paper, we propose crowdsourcing to allow general users on the internet to operate a robot remotely and facilitate customers' purchasing activities while flexibly responding to various situations through a user interface. To implement this system, we examined how our developed remote interface can improve a robot's social presence while it is controlled by a human operator, including first-time users. We first investigated the typical flow of a customer–robot interaction that is effective for sales promotion and modeled it as a state transition with automatic functions that access the robot's sensor information. We then created a user interface based on this model and examined whether it was effective in a real environment. Finally, we conducted experiments to examine whether the user interface could be operated by an amateur user and enhance the robot's social presence. The results revealed that our model was able to improve the robot's social presence and facilitate customers' purchasing activity even when the operator was a first-time user.
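The following is a hypothetical sketch of the kind of state-transition model the abstract describes: interaction states advance automatically from the robot's sensor events, so a first-time operator only has to trigger high-level actions. The state names, events, and transition table are illustrative assumptions, not taken from the paper.

```python
# Hypothetical state-transition model of a sales interaction; states advance
# automatically on sensor events.  Names and transitions are illustrative.
from enum import Enum, auto

class State(Enum):
    IDLE = auto()          # no customer nearby
    GREETING = auto()      # customer detected, robot greets
    EXPLAINING = auto()    # customer engaged, product explanation
    CLOSING = auto()       # purchase encouragement / farewell

# (current state, sensor event) -> next state
TRANSITIONS = {
    (State.IDLE, "person_detected"): State.GREETING,
    (State.GREETING, "person_stopped"): State.EXPLAINING,
    (State.EXPLAINING, "item_picked_up"): State.CLOSING,
    (State.CLOSING, "person_left"): State.IDLE,
    (State.GREETING, "person_left"): State.IDLE,
    (State.EXPLAINING, "person_left"): State.IDLE,
}

def step(state: State, event: str) -> State:
    """Advance the interaction model on a sensor event."""
    return TRANSITIONS.get((state, event), state)

# Example: a customer walks up, listens, and picks up an item.
s = State.IDLE
for e in ["person_detected", "person_stopped", "item_picked_up"]:
    s = step(s, e)
    print(e, "->", s.name)
```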


2011 ◽  
Vol 23 (4) ◽  
pp. 557-566 ◽  
Author(s):  
Vincent Duchaine ◽  
Clément Gosselin

While the majority of industrial manipulators currently in use only need to perform autonomous motion, future generations of cooperative robots will also have to execute cooperative motion and intelligently react to contacts. These extended behaviours are essential to enable safe and effective physical Human-Robot Interaction (pHRI), but they inevitably increase the complexity of the controller. This paper presents a single variable admittance control scheme that handles the three modes of operation, thereby minimizing the complexity of the controller. First, the adaptive admittance controller previously proposed by the authors for cooperative motion is recalled. Then, a novel implementation of variable admittance control for the generation of smooth autonomous motion, including reaction to collisions anywhere on the robot, is presented. Finally, it is shown how the control equations for these three modes of operation can be unified into a single control scheme.
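For flavor, a minimal one-dimensional admittance law of the general form m·dv/dt + c·v = f_ext, discretized with explicit Euler, is sketched below; the gains and the simple structure are assumptions, and the paper's unified three-mode variable admittance scheme is considerably more elaborate.

```python
# Minimal 1-D admittance law  m*dv/dt + c*v = f_ext, explicit Euler step.
# Gains are illustrative; a variable admittance scheme would also adapt c.
def admittance_step(v, f_ext, dt=0.01, m=2.0, c=5.0):
    """Return the updated commanded velocity given the measured external force."""
    dv = (f_ext - c * v) / m
    return v + dt * dv

# Example: a constant 10 N push drives the commanded velocity toward f/c = 2 m/s.
v = 0.0
for _ in range(500):
    v = admittance_step(v, f_ext=10.0)
print(round(v, 3))  # approaches 2.0
```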


2014 ◽  
Vol 11 (04) ◽  
pp. 1442005 ◽  
Author(s):  
Youngho Lee ◽  
Young Jae Ryoo ◽  
Jongmyung Choi

With the development of computing technology, robots have become popular in our daily life. Human–robot interaction is not restricted to direct communication between a human and a robot; the communication can also include various human-to-human interactions. In this paper, we present a framework for enhancing the interaction among humans, robots, and environments. The proposed framework is composed of a robot part, a user part, and the DigiLog space. To evaluate the proposed framework, we applied it to a real-time remote robot-control platform in the smart DigiLog space, in which a robot is controlled and monitored in real time by using one smartphone as the robot brain and another smartphone as the remote controller.
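Purely as an illustration of the remote-control link between the two smartphones, the snippet below sends a JSON motion command from a "controller" device to the "robot brain" device over a TCP socket; the message format, host, and port are hypothetical and not taken from the paper.

```python
# Hypothetical remote-control message: the controller phone sends a JSON
# velocity command to the robot-brain phone over TCP.  All details assumed.
import json
import socket

def send_command(linear, angular, host="192.168.0.10", port=9000):
    """Send one velocity command to the robot-brain device."""
    msg = json.dumps({"cmd": "move", "linear": linear, "angular": angular})
    with socket.create_connection((host, port), timeout=1.0) as sock:
        sock.sendall(msg.encode("utf-8") + b"\n")

# Example: drive forward slowly while turning slightly left (requires a
# listening robot-brain device at the assumed address).
# send_command(0.2, 0.1)
```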


Robotics ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 113
Author(s):  
Diogo Carneiro ◽  
Filipe Silva ◽  
Petia Georgieva

Catching flying objects is a challenging task in human–robot interaction. Traditional techniques predict the intersection position and time using only the information obtained during the free-flying ball motion. A common limitation of these systems is the short ball flight time and the uncertainty in the ball's trajectory estimation. In this paper, we present the Robot Anticipation Learning System (RALS), which also accounts for the information obtained by observing the thrower's hand motion before the ball is released. This gives the robot extra time to start moving toward the target before the opponent finishes throwing. To the best of our knowledge, this is the first robot control system for ball catching with anticipation skills. Our results show that fusing information from both the throwing and flying motions improves the ball-catching rate by up to 20% compared to the baseline approach, in which the predictions rely only on information acquired during the flight phase.
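The sketch below illustrates, in simplified form, the idea of fusing an anticipatory pre-release estimate of the catch point with a prediction refined from observed flight samples; the polynomial fit, the fusion weight, and the numbers are illustrative assumptions, not the RALS algorithm.

```python
# Illustrative fusion of a pre-release guess (from the thrower's hand motion)
# with a prediction refined from observed flight samples.  Not the RALS method.
import numpy as np

def flight_prediction(times, xs, t_catch):
    """Fit x(t) = a*t^2 + b*t + c to flight samples and extrapolate to t_catch."""
    a, b, c = np.polyfit(times, xs, 2)
    return a * t_catch**2 + b * t_catch + c

def fused_prediction(pre_release_x, flight_x, n_flight_samples, k=10.0):
    """Weight the flight-based estimate more heavily as samples accumulate."""
    w = n_flight_samples / (n_flight_samples + k)
    return (1.0 - w) * pre_release_x + w * flight_x

# Example: a few early flight samples refine an anticipatory estimate of 1.0 m.
t = np.array([0.00, 0.05, 0.10, 0.15])
x = np.array([0.00, 0.12, 0.22, 0.30])     # position samples along one axis (m)
x_flight = flight_prediction(t, x, t_catch=0.5)
print(round(fused_prediction(1.0, x_flight, len(t)), 3))
```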


2020 ◽  
Vol 44 (8) ◽  
pp. 1411-1429
Author(s):  
Mahdi Khoramshahi ◽  
Aude Billard

Abstract Seamless interaction requires two robotic behaviors: a leader role, in which the robot rejects external perturbations and focuses on autonomous execution of the task, and a follower role, in which the robot ignores the task and complies with intentional human forces. The goal of this work is to provide (1) a unified robotic architecture that produces these two roles, and (2) a human-guidance detection algorithm that switches between them. In the absence of human guidance, the robot performs its task autonomously; upon detection of such guidance, it passively follows the human motions. We employ dynamical systems to generate task-specific motion and admittance control to generate reactive motions in response to human guidance. This structure enables the robot to reject undesirable perturbations, track motions precisely, react to human guidance with appropriately compliant behavior, and re-plan the motion reactively. We provide an analytical investigation of our method in terms of tracking and compliant behavior. Finally, we evaluate our method experimentally using a 6-DoF manipulator.
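A simplified one-dimensional sketch of the leader/follower switching idea is given below: a linear dynamical system drives the task motion, an admittance term provides compliance, and a detected-guidance flag switches between rejecting and following the human force. The gains, the dynamical system, and the detection rule are illustrative only, not the paper's architecture.

```python
# Simplified 1-D leader/follower blending: stiff admittance and active task
# motion in leader mode, compliant following in follower mode.  Illustrative.
def ds_velocity(x, x_goal, gain=1.0):
    """Simple linear dynamical system attracting the robot toward the goal."""
    return gain * (x_goal - x)

def commanded_velocity(x, x_goal, f_human, guided, d_leader=50.0, d_follower=5.0):
    """Leader: reject perturbations and track the task; follower: comply."""
    damping = d_follower if guided else d_leader
    task_term = 0.0 if guided else ds_velocity(x, x_goal)
    return task_term + f_human / damping

# Example: the same 10 N human push, with and without detected guidance.
x, x_goal = 0.0, 1.0
print(commanded_velocity(x, x_goal, f_human=10.0, guided=False))  # mostly task motion
print(commanded_velocity(x, x_goal, f_human=10.0, guided=True))   # compliant following
```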


Author(s):  
Prashant K. Jamwal ◽  
Sheng Quan Xie ◽  
Sean Quigley

Variants of fuzzy logic controllers (FLCs) have been widely used to control systems characterized by uncertain and ambiguous parameters. Control objectives for such systems become more challenging when they are subjected to uncertain environments. Human-robot interaction is one such phenomenon, wherein robot control difficulties are further compounded by human intervention. State-of-the-art research in FLCs has been limited to establishing a trade-off between accuracy and interpretability, since achieving both performance measures simultaneously is difficult. In the present research, an adaptive FLC has been designed to achieve better accuracy and higher interpretability. Supported by another FLC acting as a disturbance observer, the proposed controller has an adaptive mechanism specifically designed to alter its parameters. The adaptive FLC has been implemented to control the actuation of a pneumatic muscle actuator (PMA). Experimental results show excellent trajectory tracking performance of the PMA in the presence of a varying environment.
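The sketch below shows a generic small fuzzy logic controller (error and change-in-error inputs, singleton outputs, weighted-average defuzzification) to illustrate the controller family discussed above; it is not the authors' adaptive FLC or its disturbance-observer structure.

```python
# Generic Sugeno-style fuzzy controller sketch: triangular input memberships,
# singleton rule outputs, weighted-average defuzzification.  Illustrative only.
def tri(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def flc(error, d_error):
    # Membership degrees for NEG / ZERO / POS on both (normalized) inputs.
    e = {"N": tri(error, -2, -1, 0), "Z": tri(error, -1, 0, 1), "P": tri(error, 0, 1, 2)}
    de = {"N": tri(d_error, -2, -1, 0), "Z": tri(d_error, -1, 0, 1), "P": tri(d_error, 0, 1, 2)}

    # Rule base: (error label, d_error label) -> control singleton.
    rules = {("N", "N"): -1.0, ("N", "Z"): -0.7, ("N", "P"): -0.3,
             ("Z", "N"): -0.3, ("Z", "Z"):  0.0, ("Z", "P"):  0.3,
             ("P", "N"):  0.3, ("P", "Z"):  0.7, ("P", "P"):  1.0}

    # Fire each rule with min() and defuzzify by weighted average.
    num = den = 0.0
    for (le, lde), u in rules.items():
        w = min(e[le], de[lde])
        num += w * u
        den += w
    return num / den if den > 0 else 0.0

# Example: positive tracking error, small error rate -> moderate positive output.
print(round(flc(0.8, 0.1), 3))
```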

