Positioning and Trajectory Following Tasks in Microsystems Using Model Free Visual Servoing

Author(s):  
Erol Ozgur ◽  
Mustafa Unel
2016 ◽  
Vol 24 (4) ◽  
pp. 1328-1339 ◽  
Author(s):  
Baoquan Li ◽  
Yongchun Fang ◽  
Guoqiang Hu ◽  
Xuebo Zhang

2020 ◽  
Vol 39 (14) ◽  
pp. 1739-1759 ◽  
Author(s):  
Andrea Cherubini ◽  
Valerio Ortenzi ◽  
Akansel Cosgun ◽  
Robert Lee ◽  
Peter Corke

We address the problem of shaping deformable plastic materials using non-prehensile actions. Shaping plastic objects is challenging because they are difficult to model and to track visually. We study this problem using kinetic sand, a plastic toy material that mimics the physical properties of wet sand. Inspired by a pilot study in which humans shaped kinetic sand, we define two types of actions: pushing the material from the sides and tapping it from above. The chosen actions are executed with a robotic arm using image-based visual servoing. From the current and desired views of the material, we define states based on visual features such as the outer contour shape and the pixel luminosity values. These states are mapped to actions, which are applied iteratively to reduce the image error until convergence is reached. For pushing, we propose three methods for mapping the visual state to an action: two heuristic methods and a neural network trained on human actions. We show that it is possible to obtain simple shapes with the kinetic sand without explicitly modeling the material. Our approach is limited in the types of shapes it can achieve; a richer set of action types and multi-step reasoning would be needed to achieve more sophisticated shapes.
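The iterative state-to-action loop the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the error metric, the push/tap selection rule, and the simulated "material response" (a simple relaxation toward the target image) are all hypothetical placeholders for the paper's contour- and luminosity-based features.

```python
import numpy as np

def image_error(current, desired):
    """Pixel-wise L2 error between current and desired feature images."""
    return float(np.linalg.norm(current - desired))

def choose_action(contour_err, luminosity_err):
    """Hypothetical heuristic mapping from visual state to action:
    a dominant contour error suggests pushing from the side,
    a dominant luminosity error suggests tapping from above."""
    return "push" if contour_err >= luminosity_err else "tap"

def servo_loop(current, desired, step=0.5, tol=1e-3, max_iters=100):
    """Repeat actions to reduce the image error until convergence.
    The action's effect on the material is simulated here as a
    fractional relaxation of the current image toward the desired one."""
    for i in range(max_iters):
        if image_error(current, desired) < tol:
            return current, i
        current = current + step * (desired - current)
    return current, max_iters
```

The loop converges geometrically under this toy dynamics; in the real system each iteration would instead capture a new camera image after the robot executes a push or tap.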


Robotica ◽  
2007 ◽  
Vol 25 (5) ◽  
pp. 615-626 ◽  
Author(s):  
Wen-Chung Chang

SUMMARY: Robotic manipulators interacting with uncalibrated environments typically have limited positioning and tracking capabilities if control tasks cannot be appropriately encoded using the features available in those environments. Specifically, to perform 3-D trajectory-following operations with binocular vision, it seems necessary to have a priori knowledge of pointwise correspondence information between the two image planes. However, such an assumption cannot be made for arbitrary smooth 3-D trajectories. This paper describes how one might enhance autonomous robotic manipulation for 3-D trajectory-following tasks using eye-to-hand binocular visual servoing. Based on a novel encoded error, an image-based feedback control law is proposed that does not assume pointwise binocular correspondence information. The proposed control approach guarantees task precision using only an approximately calibrated binocular vision system. The goal of the autonomous task is to drive a tool mounted on the end-effector of the robotic manipulator along a visually determined smooth 3-D target trajectory at a desired speed with precision. The proposed control architecture is suitable for applications that require precise 3-D positioning and tracking in unknown environments. The approach is validated in a real task environment through experiments with an industrial robotic manipulator.
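The control structure the summary outlines — stacking per-camera feature errors into one encoded error and feeding it through an image-based control law — can be sketched in the classical IBVS form. This is a generic sketch, not the paper's specific encoded error: the feature vectors, the composite image Jacobian estimate, and the gain are assumed quantities, and the key property illustrated is that each camera's features are compared only with its own desired features, so no left-right pointwise correspondence is required.

```python
import numpy as np

def stacked_binocular_error(f_left, f_right, f_left_des, f_right_des):
    """Stack left and right image-feature errors into one error vector.
    Each camera's features are compared only with that camera's desired
    features, so no pointwise left-right correspondence is assumed."""
    return np.concatenate([f_left - f_left_des, f_right - f_right_des])

def ibvs_control(error, jacobian, gain=0.8):
    """Classical image-based control law: v = -lambda * J^+ * e.
    'jacobian' is an estimate of the composite image Jacobian; only an
    approximate calibration of the binocular system is needed."""
    return -gain * np.linalg.pinv(jacobian) @ error
```

With an exponentially stabilizing gain, the commanded velocity drives the stacked image error toward zero, which is the sense in which the task is encoded directly in the two image planes.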


2020 ◽  
Vol 5 (4) ◽  
pp. 5252-5259 ◽  
Author(s):  
Romain Lagneau ◽  
Alexandre Krupa ◽  
Maud Marchal

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 21539-21558 ◽  
Author(s):  
Keyu Wu ◽  
Guoniu Zhu ◽  
Liao Wu ◽  
Wenchao Gao ◽  
Shuang Song ◽  
...  
