Dynamic IBVS of a rotary wing UAV using line features

Robotica ◽  
2014 ◽  
Vol 34 (9) ◽  
pp. 2009-2026 ◽  
Author(s):  
Hui Xie ◽  
Alan F. Lynch ◽  
Martin Jagersand

SUMMARY
In this paper we propose a dynamic image-based visual servoing (IBVS) control for a rotary wing unmanned aerial vehicle (UAV) which directly accounts for the vehicle's underactuated dynamic model. The motion control objective is to follow parallel lines and is motivated by power line inspection tasks where the UAV's relative position and orientation to the lines are controlled. The design is based on a virtual camera whose motion follows the onboard physical camera but which is constrained to point downwards independent of the vehicle's roll and pitch angles. A set of image features is proposed for the lines projected into the virtual camera frame. These features are chosen to simplify the interaction matrix, which in turn leads to a simpler IBVS control design which is globally asymptotically stable. The proposed scheme is adaptive and therefore does not require depth estimation. Simulation results are presented to illustrate the performance of the proposed control and its robustness to calibration parameter error.
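The virtual-camera construction can be sketched as follows (a minimal toy example with an assumed rotation convention, not the authors' implementation): a normalized image point seen by the physical camera is rotated by the vehicle's roll and pitch so that it appears as seen by a camera constrained to point straight down.

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def to_virtual(u, v, roll, pitch):
    """Reproject a normalized image point (u, v) from the physical camera,
    whose attitude is given by roll/pitch, into a virtual camera that is
    constrained to point straight down (zero roll and pitch)."""
    ray = [u, v, 1.0]                    # bearing ray in the physical camera frame
    # undo the vehicle's roll and pitch (convention assumed: R = Ry(pitch) Rx(roll))
    ray_virtual = mat_vec(rot_y(pitch), mat_vec(rot_x(roll), ray))
    # re-normalize onto the virtual image plane z = 1
    return ray_virtual[0] / ray_virtual[2], ray_virtual[1] / ray_virtual[2]
```

With zero roll and pitch the virtual and physical features coincide; otherwise the virtual features change independently of the vehicle's attitude, which is what simplifies the interaction matrix.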

2017 ◽  
Vol 05 (01) ◽  
pp. 1-17 ◽  
Author(s):  
Geoff Fink ◽  
Hui Xie ◽  
Alan F. Lynch ◽  
Martin Jagersand

This paper presents a dynamic image-based visual servoing (IBVS) control law for a quadrotor unmanned aerial vehicle (UAV) equipped with a single fixed on-board camera. The motion control problem is to regulate the relative position and yaw of the vehicle to a moving planar target located within the camera's field of view. The control law is termed dynamic as it is based on the dynamics of the vehicle. To simplify the kinematics and dynamics, the control law relies on the notion of a virtual camera and image moments as visual features. The closed-loop system is proven to be globally asymptotically stable for a horizontal target. In the case of nonhorizontal targets, we modify the control using a homography decomposition. Experimental and simulation results demonstrate the control law's performance.
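As a rough illustration of image moments as visual features (first-order moments only; the paper's exact feature set is not reproduced here):

```python
def image_moments(pixels):
    """Raw moments m00, m10, m01 of a binary target region given as a
    list of (x, y) pixel coordinates, plus the centroid derived from them."""
    m00 = float(len(pixels))            # zeroth moment: region area in pixels
    m10 = sum(x for x, _ in pixels)
    m01 = sum(y for _, y in pixels)
    cx, cy = m10 / m00, m01 / m00       # centroid of the target region
    return m00, cx, cy
```

In such schemes the area m00 is typically tied to height above the target and the centroid to lateral position; second-order moments (not shown) can encode orientation for yaw control.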


2019 ◽  
Vol 40 (6) ◽  
pp. 819-831
Author(s):  
Chicheng Liu ◽  
Libin Song ◽  
Ken Chen ◽  
Jing Xu

Purpose
This paper aims to present an image-based visual servoing algorithm for multiple pin-in-hole assembly that avoids the matching and tracking of image features while remaining robust against image defects.

Design/methodology/approach
The authors derive a novel model in the set space and design three image errors to control the 3 degrees of freedom (DOF) of a single-lug workpiece in the alignment task. Analytic computations of the interaction matrix that links the time variations of the image errors to the single-lug workpiece motions are performed. The authors introduce two approximate hypotheses so that the interaction matrix has a decoupled form, and an auto-adaptive algorithm is designed to estimate the interaction matrix.

Findings
Image-based visual servoing in the set space avoids the matching and tracking of image features, and the method is not sensitive to image defects. The control law using the auto-adaptive algorithm is more efficient than that using a static interaction matrix. Simulations and real-world experiments are performed to demonstrate the effectiveness of the proposed algorithm.

Originality/value
This paper proposes a new visual servoing method to achieve pin-in-hole assembly tasks. The main advantage of this new approach is that it does not require tracking or matching of image features; a supplementary advantage is that it is not sensitive to image defects.
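The general idea of estimating an interaction matrix online can be illustrated with a generic gradient-style update (a sketch with assumed notation and gain, not the authors' auto-adaptive algorithm): given the measured feature velocity and the workpiece velocity, the estimate is corrected in proportion to its prediction error.

```python
def update_interaction_matrix(L_hat, v, s_dot, gain=0.1):
    """One gradient step L_hat += gain * (s_dot - L_hat v) v^T, shrinking the
    prediction error of the estimated interaction matrix.
    L_hat: m x n nested list, v: length-n velocity, s_dot: length-m feature rate."""
    pred = [sum(L_hat[i][j] * v[j] for j in range(len(v)))
            for i in range(len(L_hat))]
    err = [s_dot[i] - pred[i] for i in range(len(s_dot))]
    return [[L_hat[i][j] + gain * err[i] * v[j] for j in range(len(v))]
            for i in range(len(L_hat))]
```

Repeated over sufficiently rich motions, updates of this form drive the predicted feature velocity toward the measured one without requiring an analytic model of the matrix.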


2020 ◽  
Author(s):  
Fuyuki Tokuda ◽  
Shogo Arai ◽  
Kazuhiro Kosuge

We propose a CNN-based visual servoing scheme for precise positioning of an eye-to-hand manipulator, in which the control input of the robot is calculated directly from images by a neural network. Specifically, we propose the Difference of Encoded Features driven Interaction matrix Network (DEFINet), a new convolutional neural network (CNN), for eye-to-hand visual servoing. DEFINet estimates the relative pose between the desired and current end-effector poses from the desired and current images captured by an eye-to-hand camera. DEFINet includes two branches of the same CNN that share weights and encode the target and current images, an architecture inspired by Siamese networks. Regressing the relative pose from the difference of the encoded target and current image features leads to high positioning accuracy of visual servoing using DEFINet. The training dataset is generated from sample data collected by moving the manipulator randomly in task space. The performance of the proposed visual servoing is evaluated through numerical simulation and experiments using a six-DOF industrial manipulator in a real environment. Both simulation and experimental results show the effectiveness of the proposed method.
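The difference-of-encoded-features idea can be sketched framework-free (the linear "encoder" and all weights here are toy stand-ins, not DEFINet's CNN): both images pass through the same encoder with the same weights, and a regression head maps the feature difference to a pose estimate.

```python
def encode(image, weights):
    """Toy shared-weight encoder: one linear layer over flattened pixels.
    Both images go through this SAME function with the SAME weights,
    mimicking the weight sharing of a Siamese branch."""
    return [sum(w * p for w, p in zip(row, image)) for row in weights]

def definet_like(current, desired, enc_w, head_w):
    """Regress a relative-pose vector from the DIFFERENCE of the encoded
    current and desired image features (the core idea behind DEFINet)."""
    diff = [c - d for c, d in zip(encode(current, enc_w),
                                  encode(desired, enc_w))]
    return [sum(w * f for w, f in zip(row, diff)) for row in head_w]
```

When current and desired images coincide, the feature difference and hence the regressed pose are zero, which is the fixed point the servo loop drives toward.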




Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3474 ◽  
Author(s):  
Shijie Zhang ◽  
Xiangtian Zhao ◽  
Botian Zhou

This paper investigates the problem of using an unmanned aerial vehicle (UAV) to track and hover above an uncooperative target, such as an unvisited area or a newly discovered object. A vision-based strategy integrating metrology and control is employed to achieve target tracking and hovering observation. First, by introducing a virtual camera frame, the reprojected image features can change independently of the rotational motion of the vehicle. The image centroid and an optimal observation area on the virtual image plane are exploited to regulate the relative horizontal and vertical distances. Then, optic-flow and gyro measurements are utilized to estimate the relative UAV-to-target velocity. Further, a gain-switching proportional-derivative (PD) control scheme is proposed to compensate for external interference and model uncertainties. The closed-loop system is proven to be exponentially stable based on the Lyapunov method. Finally, simulation results are presented to demonstrate the effectiveness of the proposed vision-based strategy in both hovering and tracking scenarios.
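A gain-switching PD law of the kind described might look like this (the threshold and gains are hypothetical placeholders, not the paper's tuned values):

```python
def switching_pd(error, error_rate, threshold=0.5,
                 kp_far=2.0, kd_far=0.8, kp_near=1.0, kd_near=1.5):
    """PD control whose gains switch on the error magnitude: aggressive
    proportional action far from the setpoint, heavier damping close to it."""
    if abs(error) > threshold:
        kp, kd = kp_far, kd_far      # far regime: drive the error down fast
    else:
        kp, kd = kp_near, kd_near    # near regime: damp out oscillation
    return kp * error + kd * error_rate
```

Switching the gains on the error magnitude is one simple way to trade convergence speed against overshoot near the hover point.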


2021 ◽  
pp. 106891
Author(s):  
Chengbin Chen ◽  
Sifan Chen ◽  
Guangsheng Hu ◽  
Baihe Chen ◽  
Pingping Chen ◽  
...  

2012 ◽  
Vol 162 ◽  
pp. 487-496 ◽  
Author(s):  
Aurelien Yeremou Tamtsia ◽  
Youcef Mezouar ◽  
Philippe Martinet ◽  
Haman Djalo ◽  
Emmanuel Tonye

Among region-based descriptors, geometric moments have been widely exploited to design visual servoing schemes. However, they present several disadvantages, such as high sensitivity to measurement noise, high dynamic range and information redundancy (since they are not computed on an orthogonal basis). In this paper, we propose to use a class of orthogonal moments (namely, Legendre moments) instead of geometric moments to improve the behavior of moment-based control schemes. The descriptive form of the interaction matrix related to the Legendre moments computed from a set of points is first derived. Six visual features are then selected to design a partially decoupled control scheme. Finally, simulated and experimental results are presented to illustrate the validity of our proposal.
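For reference, Legendre moments of an image f on [-1, 1]^2 can be approximated as in this sketch (the normalization and grid resolution are illustrative choices, not the paper's implementation):

```python
def legendre(p, x):
    """Legendre polynomial P_p(x) via the recurrence
    (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}."""
    if p == 0:
        return 1.0
    prev, cur = 1.0, x
    for n in range(1, p):
        prev, cur = cur, ((2 * n + 1) * x * cur - n * prev) / (n + 1)
    return cur

def legendre_moment(p, q, f, n=64):
    """Legendre moment lambda_pq of an image f(x, y) on [-1, 1]^2,
    approximated on an n x n midpoint grid."""
    h = 2.0 / n
    norm = (2 * p + 1) * (2 * q + 1) / 4.0   # orthonormalization factor
    total = 0.0
    for i in range(n):
        x = -1 + (i + 0.5) * h
        for j in range(n):
            y = -1 + (j + 0.5) * h
            total += legendre(p, x) * legendre(q, y) * f(x, y)
    return norm * total * h * h
```

Because the Legendre polynomials are orthogonal, each moment captures distinct image information, avoiding the redundancy of raw geometric moments.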


Author(s):  
Ruoyu Tan ◽  
Manish Kumar

This paper addresses the problem of controlling a rotary wing Unmanned Aerial Vehicle (UAV) tracking a target moving on the ground. The target tracking problem for UAVs has received much attention recently, and several techniques have been developed in the literature, most of which have been applied to fixed-wing aircraft. The use of quadrotor UAVs, the subject of this paper, for target tracking presents several challenges, especially for highly maneuvering targets, since developing a time-optimal controller (required if the target maneuvers quickly) for quadrotor UAVs is extremely difficult due to their highly nonlinear dynamics. The primary contribution of this paper is the development of a proportional navigation (PN) based method and its implementation on quadrotor UAVs to track a moving ground target. PN techniques are known to be time-optimal in nature and have been used in the literature for developing missile guidance systems. Several types of guidance laws fall under the broad umbrella of the PN method; the paper compares their performance for application on quadrotors and chooses the one that performs best. Furthermore, to apply this method to target tracking instead of the traditional objective of target interception, a switching strategy has also been designed. The method has been compared with the commonly used proportional-derivative (PD) method for target tracking. The experiments and numerical simulations performed using maneuvering targets show that the proposed tracking method not only carries out effective tracking but also results in smaller oscillations and errors than the widely used PD tracking method.
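The core PN idea, commanding lateral acceleration proportional to the line-of-sight rate times the closing velocity, can be sketched in 2D (a generic textbook form of true PN, not the paper's quadrotor implementation):

```python
import math

def pn_accel(rel_pos, rel_vel, N=3.0):
    """2D true proportional navigation: a = N * Vc * lambda_dot, where
    lambda_dot is the line-of-sight (LOS) rate and Vc the closing velocity.
    rel_pos, rel_vel: target position and velocity relative to the pursuer."""
    rx, ry = rel_pos
    vx, vy = rel_vel
    r2 = rx * rx + ry * ry
    los_rate = (rx * vy - ry * vx) / r2             # d/dt atan2(ry, rx)
    closing = -(rx * vx + ry * vy) / math.sqrt(r2)  # negative range rate
    return N * closing * los_rate
```

On a collision course the LOS rate is zero and no acceleration is commanded; any LOS rotation produces a corrective lateral command, which is what makes PN near time-optimal against maneuvering targets.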


Robotica ◽  
1991 ◽  
Vol 9 (2) ◽  
pp. 203-212 ◽  
Author(s):  
Won Jang ◽  
Kyungjin Kim ◽  
Myungjin Chung ◽  
Zeungnam Bien

SUMMARY
For efficient visual servoing of an "eye-in-hand" robot, the concepts of Augmented Image Space and Transformed Feature Space are presented in the paper. A formal definition of image features as functionals is given, along with a technique to use the defined image features for visual servoing. Compared with other known methods, the proposed concepts reduce the computational burden of visual feedback and enhance flexibility in describing the vision-based task. Simulations and real experiments demonstrate that the proposed concepts are useful and versatile tools for industrial robot vision tasks, and thus the visual servoing problem can be dealt with more systematically.
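The notion of an image feature as a functional can be illustrated with a discrete toy example (the weighting kernel g is a hypothetical choice): each feature is a weighted integral of the image, approximated here by a pixel sum, and different kernels yield different scalar features of the same image.

```python
def feature_functional(image, g):
    """A toy image feature defined as the functional F[I] = sum g(x, y) I(x, y)
    over all pixels. image: nested list of intensities; g: weighting kernel."""
    return sum(g(x, y) * val
               for y, row in enumerate(image)
               for x, val in enumerate(row))
```

For example, g(x, y) = 1 recovers total intensity, while g(x, y) = x recovers the first raw moment used for centroid-style features.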

