Improving Image-Based Visual Servoing with Three-Dimensional Features

2003 ◽  
Vol 22 (10-11) ◽  
pp. 821-839 ◽  
Author(s):  
E. Cervera ◽  
A. P. del Pobil ◽  
F. Berry ◽  
P. Martinet

2020 ◽
Vol 8 (4) ◽  
Author(s):  
Danming Wei ◽  
Mariah B. Hall ◽  
Andriy Sherehiy ◽  
Dan O. Popa

Abstract Microassembly systems utilizing precision robotics have long been used for realizing three-dimensional microstructures such as microsystems and microrobots. Prior to assembly, microscale components are fabricated using micro-electromechanical-system (MEMS) technology. The microassembly system then directs a microgripper through a series of automated or human-controlled pick-and-place operations. In this paper, we describe a novel custom microassembly system, named NEXUS, that can be used to prototype MEMS microrobots. The NEXUS integrates multi-degree-of-freedom (DOF) precision positioners, microscope computer vision, and microscale process tools such as a microgripper and a vacuum tip. A semi-autonomous human–machine interface (HMI) was programmed to allow the operator to interact with the microassembly system. The NEXUS human–machine interface includes multiple functions, such as positioning, target detection, visual servoing, and inspection. The microassembly system's HMI was used by operators to assemble various three-dimensional microrobots, such as the Solarpede, a novel light-powered stick-and-slip mobile microcrawler. Experimental results are reported to evaluate the system's semi-autonomous capabilities in terms of assembly rate and yield and to compare them with purely teleoperated assembly performance. Results show that the semi-automated capabilities of the microassembly system's HMI offer a more consistent assembly rate of microrobot components and are less reliant on the operator's experience and skill.
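As a rough illustration of the visual-servoing function mentioned above, the sketch below shows a minimal proportional image-based servo step in Python; the function name, gain, and microscope calibration value are assumptions for illustration, not NEXUS internals.

```python
import numpy as np

def servo_step(target_px, tool_px, gain=0.5, um_per_px=0.8):
    """One proportional visual-servoing step: map the pixel error
    between the detected target and the tool tip to a commanded
    stage displacement in micrometres (um_per_px is an assumed
    microscope calibration)."""
    error_px = np.asarray(target_px, float) - np.asarray(tool_px, float)
    return gain * um_per_px * error_px

# Hypothetical usage: iterate until the tool tip converges on the part.
tool, part = np.array([412.0, 300.0]), np.array([420.0, 288.0])
while np.linalg.norm(part - tool) > 1.0:        # 1-pixel tolerance
    move_um = servo_step(part, tool)
    tool += move_um / 0.8                        # simulated stage response
```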


Author(s):  
Zhenyu Li ◽  
Bin Wang ◽  
Haitao Yang ◽  
Hong Liu

Purpose Rapid satellite capture by a free-floating space robot is a challenging problem because of its non-fixed base and time-delay issues. This paper aims to present a modified target-capturing control scheme that improves the control performance. Design/methodology/approach To handle a control problem that includes time delay, the modified scheme adds a delay calibration algorithm to the visual servoing loop. To identify end-effector motions in real time, a motion predictor is developed by partly linearizing the space robot kinematics equation. With this approach, only ground-fixed robot kinematics are involved in the prediction computation, excluding the complex space robot kinematics calculations. Building on the newly developed predictor, a delay compensator is designed that takes error control into account. To determine the compensation parameters, the asymptotic stability condition of the proposed compensation algorithm is also presented. Findings The proposed method was validated on a credible three-dimensional ground experimental system, and the experimental results illustrate its effectiveness. Practical implications Because the delayed camera signals are compensated using only ground-fixed robot kinematics, the proposed satellite-capturing scheme is particularly suitable for commercial on-orbit services with cheaper on-board computers. Originality/value This paper is original in that it compensates for the time delay by taking both space robot motion prediction and compensation error control into consideration, and it is valuable for rapid and accurate satellite capture tasks.
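To make the predictor-plus-compensator idea concrete, the following is a minimal sketch assuming a fixed, known delay of a whole number of control steps and poses represented as plain vectors; the class name, buffering scheme, and blending gain are illustrative and are not taken from the paper.

```python
import numpy as np
from collections import deque

class DelayCompensator:
    """Advance a delayed camera measurement by replaying the velocity
    commands issued during the delay window, then blend with the
    previous estimate as a simple form of error control."""

    def __init__(self, delay_steps, dt, k_err=0.2):
        self.cmds = deque(maxlen=delay_steps)   # commands still "in flight"
        self.dt, self.k_err = dt, k_err
        self.prev = None

    def update(self, delayed_pose, cmd_vel):
        self.cmds.append(np.asarray(cmd_vel, float))
        # Motion prediction: delayed pose plus integrated commanded motion.
        pred = np.asarray(delayed_pose, float) + self.dt * sum(self.cmds)
        # Error control: low-pass blend so measurement noise in the
        # delayed signal does not make the estimate jump.
        if self.prev is not None:
            pred = pred + self.k_err * (self.prev - pred)
        self.prev = pred
        return pred
```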


Author(s):  
Simon Leonard ◽  
Ambrose Chan ◽  
Elizabeth Croft ◽  
James J. Little

This paper discusses work towards a vision-based solution to the problem of robot bin-picking. The problem is defined as searching for and recognizing a part among many lying jumbled in a bin, such that the robot is able to grasp and manipulate it. Despite decades of research in vision, robotics, and manufacturing, this problem remains open. In modern manufacturing, this seemingly simple task is currently performed by complex assembly lines or manual labor. The effort and cost associated with current solutions to bin-picking are a testament to the importance of a new one. The main objective of this research is a reliable and cost-effective automated solution to the bin-picking problem encountered in manufacturing. As a broader contribution, this research also provides a robust visual servoing method that enables safe interactions between a robot and its environment. Our system uses visual feedback to generate tasks autonomously and to control the interaction of the manipulator with its environment. First, the system relies on robust vision-based object localization to generate three-dimensional pose hypotheses for each identified part. Then, the hypotheses are filtered according to the feasibility of their picking configuration. Finally, a trajectory is generated to a picking position. The trajectory is specified to ensure that collisions with the bin and joint limits are avoided while servoing the robot to the part. To ensure the reliability of the system, each trajectory is tested in simulation before being executed by the manipulator. Our experiments target the automotive industry and involve real engine parts, a typical industrial robot, and a metal bin.
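The pipeline the abstract describes maps naturally onto a short control flow. The sketch below, in Python, shows the hypothesize-filter-plan-verify sequence; every callable and attribute here (score, pose, is_reachable, plan_trajectory, simulate_ok) is a hypothetical stand-in, not part of the authors' system.

```python
def pick_part(detections, is_reachable, plan_trajectory, simulate_ok):
    """Select and plan a pick: rank 3D pose hypotheses, drop infeasible
    picking configurations, plan a bin-collision- and joint-limit-aware
    trajectory, and verify it in simulation before execution."""
    # 1. Pose hypotheses for each identified part, best score first.
    hypotheses = sorted(detections, key=lambda h: h.score, reverse=True)
    # 2. Filter by feasibility of the picking configuration.
    feasible = [h for h in hypotheses if is_reachable(h.pose)]
    for h in feasible:
        # 3. Generate a trajectory to the picking position.
        traj = plan_trajectory(h.pose)
        # 4. Test in simulation before the manipulator executes it.
        if traj is not None and simulate_ok(traj):
            return traj
    return None  # nothing safely pickable this cycle
```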


Author(s):  
Alireza Rastegarpanah ◽  
Ali Aflakian ◽  
Rustam Stolkin

This study proposes an optimized hybrid visual servoing approach to overcome the imperfections of classical two-dimensional, three-dimensional, and hybrid visual servoing methods. These imperfections are mostly convergence issues, non-optimized trajectories, expensive calculations, and singularities. The proposed method provides more efficient, optimized trajectories with a shorter camera path than image-based and classical hybrid visual servoing methods. Moreover, it is less likely to lose the object from the camera's field of view, and it is more robust to camera calibration errors than the classical position-based and hybrid visual servoing methods. The drawbacks of two-dimensional visual servoing are mostly related to camera retreat and rotational motions. To tackle these drawbacks, rotations and translations along the Z-axis are controlled separately, based on a three-dimensional estimation of the visual features. The pseudo-inverse of the proposed interaction matrix is approximated by a neuro-fuzzy network called the local linear model tree. Using this network, the controller avoids the singularities and ill-conditioning of the proposed interaction matrix and is robust to image noise and camera parameter errors. The proposed method has been compared with classical image-based, position-based, and hybrid visual servoing methods, both in simulation and in the real world, using a 7-degree-of-freedom robot arm.
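To illustrate the decoupling idea, the sketch below separates the Z-axis translation and rotation, obtained from the 3D feature estimate, from the remaining degrees of freedom, which are driven through a pseudo-inverse of the interaction matrix. The paper approximates that pseudo-inverse with a local linear model tree; here a damped least-squares inverse stands in for clarity, and all names and gains are illustrative.

```python
import numpy as np

def hybrid_vs_step(L_xy, e_img, e_z, theta_z, lam=0.5, mu=1e-3):
    """One hybrid visual-servoing step.
    L_xy:    interaction matrix for the non-Z degrees of freedom
             (k image-feature errors x 4 DOF).
    e_img:   image-feature error vector (length k).
    e_z:     Z-translation error from the 3D feature estimate.
    theta_z: Z-rotation error from the 3D feature estimate."""
    # Damped pseudo-inverse avoids ill-conditioning near singularities
    # (the role the local linear model tree plays in the paper).
    L_pinv = L_xy.T @ np.linalg.inv(L_xy @ L_xy.T + mu * np.eye(L_xy.shape[0]))
    v_xy = -lam * (L_pinv @ e_img)            # X/Y translations and rotations
    v_z, w_z = -lam * e_z, -lam * theta_z     # decoupled Z-axis commands
    return v_xy, v_z, w_z
```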


2015 ◽  
Vol 54 (1) ◽  
pp. 013106 ◽  
Author(s):  
Xiaopeng Sha ◽  
Huiguang Li ◽  
Wenchao Li ◽  
Shuai Wang

2015 ◽  
Vol 2015 (0) ◽  
pp. _2A2-E08_1-_2A2-E08_4
Author(s):  
Kenta YONEMORI ◽  
Akira YANOU ◽  
Shota OHNISHI ◽  
Mamoru MINAMI ◽  
Katsuki FUJIMOTO ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5271
Author(s):  
Di Fan ◽  
Yanyang Liu ◽  
Xiaopeng Chen ◽  
Fei Meng ◽  
Xilong Liu ◽  
...  

Three-dimensional (3D) triangulation based on active binocular vision has a growing number of applications in computer vision and robotics. An active binocular vision system with non-fixed cameras needs to calibrate the stereo extrinsic parameters online to perform 3D triangulation, and the accuracy of the stereo extrinsic parameters and the disparity has a significant impact on triangulation precision. To reduce this impact, we propose a novel eye-gaze-based 3D triangulation method that does not use the stereo extrinsic parameters directly. Instead, we drive both cameras, through visual servoing, to gaze at a 3D spatial point P so that its projection falls on each camera's optical center. We can then obtain the 3D coordinates of P from the intersection of the two optical axes of the cameras. We have performed experiments on our robotic bionic eyes comparing against previous disparity-based work, the integrated two-pose calibration (ITPC) method. The experiments show that our method achieves results comparable to ITPC.
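The axis-intersection step admits a simple closed form: with noise, the two optical axes rarely intersect exactly, so P is usually taken as the midpoint of the common perpendicular between them. The sketch below shows this computation; the function name and the usage values (a 60 mm baseline with converging axes) are illustrative, not the paper's experimental setup.

```python
import numpy as np

def gaze_triangulate(c1, d1, c2, d2, eps=1e-9):
    """Triangulate the gaze point as the midpoint of the common
    perpendicular between two optical axes.
    c1, c2: camera optical centres; d1, d2: optical-axis directions."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = c1 - c2
    b = d1 @ d2
    denom = 1.0 - b * b
    if denom < eps:                 # near-parallel axes: no reliable fix
        return None
    s = (b * (d2 @ w0) - (d1 @ w0)) / denom
    t = s * b + (d2 @ w0)
    p1, p2 = c1 + s * d1, c2 + t * d2   # closest points on each axis
    return 0.5 * (p1 + p2)

# Hypothetical usage: both axes converge on a point ~0.5 m ahead.
P = gaze_triangulate(np.array([0.00, 0.0, 0.0]), np.array([0.10, 0.0, 1.0]),
                     np.array([0.06, 0.0, 0.0]), np.array([-0.02, 0.0, 1.0]))
# P is approximately [0.05, 0.0, 0.5]
```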

