Robotic Teleoperation
Recently Published Documents


TOTAL DOCUMENTS

71
(FIVE YEARS 27)

H-INDEX

7
(FIVE YEARS 2)

Actuators ◽  
2022 ◽  
Vol 11 (1) ◽  
pp. 24
Author(s):  
Guan-Yang Liu ◽  
Yi Wang ◽  
Chao Huang ◽  
Chen Guan ◽  
Dong-Tao Ma ◽  
...  

The goal of haptic feedback in robotic teleoperation is to enable users to accurately feel the interaction force measured at the slave side and to precisely understand what is happening in the slave environment. The accuracy of the feedback force, defined as the error between the actual feedback force felt by the user at the master side and the interaction force measured at the slave side, is the key performance indicator for haptic display in robotic teleoperation. In this paper, we evaluate haptic feedback accuracy in robotic teleoperation experimentally. A dedicated interface, iHandle, and two haptic devices designed for robotic teleoperation, iGrasp-T and iGrasp-R, are developed for the evaluation. The iHandle integrates a high-performance force sensor and a micro attitude and heading reference system, which can be used to characterize human upper-limb motor abilities such as posture maintenance and force application. When a user is asked to grasp the iHandle and hold a fixed position and posture, the fluctuation of hand posture is measured to be between 2 and 8 degrees. Based on the experimental results, human hand tremor, sensed by the haptic device as input noise, is found to be a major source of noise in the output force of the haptic device when a spring-damper model is used to render the feedback force. Haptic rendering algorithms should therefore be independent of hand motion information, to keep input noise from the human hand out of the haptic control loop. Moreover, the iHandle can be fixed at the end effector of either haptic device, iGrasp-T or iGrasp-R, to measure the output force/torque delivered to the user. Experimental results show that the accuracy of the output force from iGrasp-T is approximately 0.92 N, and that using the force sensor in the iHandle can reduce this output force error to 0.1 N. Using a force sensor as the feedback link to form a closed-loop force control system is an effective way to improve the accuracy of the feedback force and guarantee high-fidelity force feedback at the master side in robotic teleoperation.
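The abstract points to two mechanisms that a short sketch can make concrete: a spring-damper rendering law, through which hand tremor in the measured position and velocity propagates into the output force, and a closed-loop correction that uses a force sensor (such as the one in the iHandle) as the feedback link. The following is a minimal illustrative sketch, not the authors' implementation; the gains and function names are assumptions.

```python
import numpy as np

def spring_damper_force(x, x_ref, v, k=200.0, b=2.0):
    """Spring-damper haptic rendering (illustrative gains, N/m and N*s/m).
    Because x and v come from the user's hand, tremor in them shows up
    directly as noise in the rendered force."""
    return k * (np.asarray(x_ref) - np.asarray(x)) - b * np.asarray(v)

def closed_loop_force_command(f_desired, f_measured, f_cmd_prev, gain=0.5):
    """Closed-loop correction: adjust the force command using a force-sensor
    reading so the force actually felt at the master tracks the desired one."""
    return f_cmd_prev + gain * (np.asarray(f_desired) - np.asarray(f_measured))
```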


2021 ◽  
Author(s):  
Bestin Antu ◽  
T Amal ◽  
Janet Joby ◽  
Retty George

2021 ◽  
Vol 50 ◽  
pp. 101431
Author(s):  
Pooya Adami ◽  
Patrick B. Rodrigues ◽  
Peter J. Woods ◽  
Burcin Becerik-Gerber ◽  
Lucio Soibelman ◽  
...  

2021 ◽  
Author(s):  
David Black ◽  
Yas Oloumi Yazdi ◽  
Amir Hossein Hadi Hosseinabadi ◽  
Septimiu Salcudean

Current teleguidance methods include verbal guidance and robotic teleoperation, which present trade-offs between precision and latency versus flexibility and cost. We present a novel concept of "human teleoperation" which bridges the gap between these two methods. A prototype teleultrasound system was implemented to show the concept's efficacy. An expert remotely "teleoperates" a person (the follower) wearing a mixed reality headset by controlling a virtual ultrasound probe projected into the person's scene. The follower matches the pose and force of the virtual device with a real probe. The pose, force, video, ultrasound images, and 3-dimensional mesh of the scene are fed back to the expert. In this control framework, the input and the actuation are carried out by people, but with near robot-like latency and precision. This allows teleguidance that is more precise and faster than verbal guidance, yet more flexible and inexpensive than robotic teleoperation. The system was subjected to tests that show its effectiveness, including mean teleoperation latencies of 0.27 seconds and errors of 7 mm and 6° in pose tracking. The system was also tested with an expert ultrasonographer and four patients and was found to improve the precision and speed of two teleultrasound procedures.
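For reference, pose-tracking errors like the 7 mm and 6° reported above can be computed from the difference between the expert's virtual probe pose and the follower's real probe pose. The sketch below is an assumption about how such an error metric might be computed, not the paper's code; it uses quaternions in SciPy's (x, y, z, w) convention.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def pose_tracking_error(p_virtual, q_virtual, p_real, q_real):
    """Return (position error in mm, orientation error in degrees) between
    the virtual probe pose set by the expert and the follower's real probe."""
    pos_err_mm = 1000.0 * np.linalg.norm(np.asarray(p_real) - np.asarray(p_virtual))
    # Relative rotation between the two orientations; its angle is the error.
    r_rel = R.from_quat(q_real) * R.from_quat(q_virtual).inv()
    ang_err_deg = np.degrees(np.linalg.norm(r_rel.as_rotvec()))
    return pos_err_mm, ang_err_deg
```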


2021 ◽  
Vol 11 (16) ◽  
pp. 7190
Author(s):  
Sana Baklouti ◽  
Guillaume Gallot ◽  
Julien Viaud ◽  
Kevin Subrin

This paper deals with Yaskawa robots controlled through the Robot Operating System (ROS) for teleoperation tasks. Integrating an open-source ROS interface based on the standard Motoman packages into the control loop leads to large trajectory tracking errors and latency, which are unsuitable for robotic teleoperation. An improved version of the standard ROS-based control is proposed by adding a new velocity control mode to the standard Motoman ROS driver. The two approaches are compared in terms of response time and tracking delay. Experiments on the Yaskawa GP8 robot using the proposed improved ROS-based control confirm improvements in trajectory tracking and latency of up to 43% with respect to the standard control.
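As an illustration of what streaming velocity commands over ROS looks like in general, here is a minimal ROS 1 sketch. The topic name, message type, and rate are placeholders and do not reflect the actual Motoman driver interface extended in the paper.

```python
#!/usr/bin/env python
# Minimal sketch of streaming joint-velocity commands from a teleoperation
# loop over ROS 1. Topic name and message type are placeholders only.
import rospy
from std_msgs.msg import Float64MultiArray

def stream_velocities(get_command, rate_hz=250):
    pub = rospy.Publisher('/joint_velocity_command', Float64MultiArray, queue_size=1)
    rate = rospy.Rate(rate_hz)
    while not rospy.is_shutdown():
        pub.publish(Float64MultiArray(data=get_command()))  # joint velocities [rad/s]
        rate.sleep()

if __name__ == '__main__':
    rospy.init_node('velocity_streamer')
    stream_velocities(lambda: [0.0] * 6)  # replace with the teleoperation command source
```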


2021 ◽  
Author(s):  
Matteo Macchini ◽  
Fabrizio Schiano ◽  
Dario Floreano

Body-Machine Interfaces (BoMIs) for robotic teleoperation can improve a user's experience and performance. However, the implementation of such systems needs to be optimized for each robot independently, as a general approach has not been proposed to date. Here, we present a novel machine learning method to generate personalized BoMIs from an operator's spontaneous body movements. The method captures individual motor synergies that can be used for the teleoperation of robots. The proposed algorithm applies to people with diverse behavioral patterns and to robots with diverse morphologies and degrees of freedom, such as a fixed-wing drone, a quadrotor, and a robotic manipulator.
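The abstract does not detail the learning method, but a common way to capture individual motor synergies from spontaneous body movements is dimensionality reduction, for example PCA. The sketch below is a hypothetical illustration of that idea, not the authors' algorithm; the function names and normalization choice are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_body_to_robot_map(body_motion, n_robot_dof):
    """body_motion: (n_samples, n_body_signals) array of spontaneous movements.
    Extracts n_robot_dof motor synergies and returns a function that maps a
    new body posture sample to robot degree-of-freedom commands."""
    pca = PCA(n_components=n_robot_dof).fit(body_motion)
    scale = 1.0 / np.sqrt(pca.explained_variance_)  # normalize each synergy's range
    def body_to_robot(sample):
        return pca.transform(np.atleast_2d(sample))[0] * scale
    return body_to_robot
```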


2021 ◽  
Vol 126 ◽  
pp. 103674
Author(s):  
Qi Zhu ◽  
Jing Du ◽  
Yangming Shi ◽  
Paul Wei

2021 ◽  
Author(s):  
Bowen Xie ◽  
Mingjie Han ◽  
Jun Jin ◽  
Martin Barczyk ◽  
Martin Jagersand
