Prescribed performance image based visual servoing under field of view constraints

Author(s):  
Shahab Heshmati-alamdari ◽  
Charalampos P. Bechlioulis ◽  
Minas V. Liarokapis ◽  
Kostas J. Kyriakopoulos
2019 ◽  
Vol 35 (4) ◽  
pp. 1063-1070 ◽  
Author(s):  
Charalampos P. Bechlioulis ◽  
Shahab Heshmati-alamdari ◽  
George C. Karras ◽  
Kostas J. Kyriakopoulos

Author(s):  
J. Li-Chee-Ming ◽  
C. Armenakis

This paper presents a novel application of the Visual Servoing Platform (ViSP) for pose estimation in indoor and GPS-denied outdoor environments. Our proposed solution integrates the trajectory solution from RGB-D SLAM into ViSP's pose estimation process. Li-Chee-Ming and Armenakis (2015) explored the application of ViSP to mapping large outdoor environments and tracking large objects (i.e., building models). Their experiments revealed that tracking was often lost, both because of a lack of model features in the camera's field of view and because of rapid camera motion; furthermore, the pose estimate was often biased by incorrect feature matches. This work proposes a solution that improves ViSP's pose estimation performance, aiming specifically to reduce the frequency of tracking losses and the biases present in the pose estimate. We discuss the performance of the combined ViSP and RGB-D SLAM tracker in mapping indoor environments and tracking 3D wireframe indoor building models, and present preliminary results from our experiments.
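The fallback logic described above — trusting the model-based tracker while enough model features are visible, and re-seeding from the RGB-D SLAM trajectory otherwise — can be sketched as follows. The function name and the feature-count threshold are illustrative assumptions, not the actual ViSP or RGB-D SLAM API:

```python
MIN_FEATURES = 8  # assumed threshold below which tracking is declared lost

def fuse_pose(tracker_pose, slam_pose, n_features, min_features=MIN_FEATURES):
    """Return the pose to use for the next frame.

    Falls back to the SLAM trajectory pose when too few model features
    are visible in the camera's field of view (the failure mode reported
    above); otherwise keeps the model-based estimate.
    """
    if n_features < min_features:
        return slam_pose      # re-seed from the RGB-D SLAM trajectory
    return tracker_pose       # keep the model-based estimate
```

In a real integration the switch would also consider the tracker's residual, not just the feature count, but the feature-count test alone captures the "lost due to lack of model features" case the authors report.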


2020 ◽  
Vol 1 (2) ◽  
Author(s):  
Navid Fallahinia ◽  
Stephen A. Mascaro

Abstract: Fingernail imaging has been shown in previous work to be effective in estimating finger pad forces along all three directions simultaneously. However, the method had never been used to measure forces during a grasping task with multiple fingers. The objective of this paper is to demonstrate the grasp force-sensing capabilities of fingernail imaging integrated with a visual servoing robotic system. In this study, fingernail imaging is used in both constrained and unconstrained multi-digit grasping studies. Visual servoing is employed to keep the fingernail images within the camera's field of view during grasping motions. Two grasping experiments were designed and conducted to assess the performance and accuracy of fingernail imaging for grasping studies. The maximum root-mean-square (RMS) errors of the estimated normal and shear forces during constrained grasping were 0.58 N (5.7%) and 0.49 N (9.2%), respectively. Moreover, a visual servoing system implemented on a 6-degree-of-freedom (DOF) robot ensures that all of the fingers remain in the camera frame at all times. Comparing unconstrained and constrained forces shows that force collaboration among the fingers can change with the grasping condition.
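For reference, RMS figures like those quoted above follow from the standard definition; a minimal sketch, assuming the percentage is taken relative to the measured force range (the paper's exact normalization is not stated here):

```python
def rms_error(estimated, measured):
    """Root-mean-square error between two equal-length force traces (newtons)."""
    n = len(estimated)
    return (sum((e - m) ** 2 for e, m in zip(estimated, measured)) / n) ** 0.5

def rms_percent(estimated, measured):
    """RMS error as a percentage of the measured force range — an assumed
    convention for figures like the 5.7% / 9.2% quoted above."""
    span = max(measured) - min(measured)
    return 100.0 * rms_error(estimated, measured) / span
```

For example, an estimate of [1, 11] N against a measurement of [0, 10] N has an RMS error of 1 N, i.e., 10% of the 10 N measured range.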


2011 ◽  
Vol 5 (2) ◽  
pp. 241-246
Author(s):  
Yukinari Inoue ◽  
Noriaki Maru

The authors have previously proposed foot tip control for quadruped robots using linear visual servoing (LVS) with a normal stereo camera. However, a normal stereo camera has a narrow field of view and cannot see all four legs simultaneously, so controlling all of the legs has required controlling the rotation of the camera as well. This article proposes a method in which a stereo omnidirectional camera mounted low on the body controls all four legs through LVS. We first present a transformation equation from an omnidirectional image to a binocular visual space, and then develop a servo equation for LVS that uses the omnidirectional image. Through simulation, we confirm the trajectories obtained when LVS is applied to foot tip control, and we conduct an experiment using TITAN-VIII to demonstrate the efficacy of the proposed method.


Author(s):  
Khaled Hammemi ◽  
Mohamed Atri

In this work, we developed the NSSD-DT method, which tracks a target robustly. The method effectively handles geometric deformation of the target and partial occlusion, and allows recovery after the target leaves the field of view. The originality of our algorithm lies in a new model that does not depend on a probabilistic process and does not require prior training data. Experimental results on several difficult video sequences demonstrate its performance benefits. The algorithm is implemented on a BCS 2835 system based on a quad-core ARM processor and is also compared against a software-only solution. NSSD-DT can be used in several applications such as video surveillance, active vision, and industrial visual servoing.
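The paper's NSSD-DT model is not spelled out here, but the core mechanism — normalised sum-of-squared-differences matching in a local window, with a full-frame rescan once the best local score degrades — can be sketched in a few lines. The `LOST` threshold, window radius, and recovery policy below are illustrative assumptions, not the authors' exact algorithm:

```python
LOST = 0.5  # assumed NSSD score above which the target is declared lost

def nssd(patch, template):
    """Normalised SSD between two equal-size grey-level patches; 0 = perfect match."""
    num = den_p = den_t = 0.0
    for row_p, row_t in zip(patch, template):
        for p, t in zip(row_p, row_t):
            num += (p - t) ** 2
            den_p += p * p
            den_t += t * t
    norm = (den_p * den_t) ** 0.5
    return num / norm if norm else num

def extract(frame, r, c, h, w):
    """h-by-w sub-image of `frame` with top-left corner (r, c)."""
    return [row[c:c + w] for row in frame[r:r + h]]

def search(frame, template, rows, cols):
    """Best-scoring template position over the given row/column ranges."""
    h, w = len(template), len(template[0])
    best, pos = float("inf"), None
    for r in rows:
        for c in cols:
            score = nssd(extract(frame, r, c, h, w), template)
            if score < best:
                best, pos = score, (r, c)
    return pos, best

def track(frame, template, prev, radius=2):
    """Local search around the previous position; rescan the whole frame
    when the local match fails — the recovery-after-leaving-the-view
    behaviour described above."""
    h, w = len(template), len(template[0])
    H, W = len(frame), len(frame[0])
    r0, c0 = prev
    rows = range(max(0, r0 - radius), min(H - h, r0 + radius) + 1)
    cols = range(max(0, c0 - radius), min(W - w, c0 + radius) + 1)
    pos, score = search(frame, template, rows, cols)
    if score > LOST:  # target left the local window: rescan the full frame
        pos, score = search(frame, template, range(H - h + 1), range(W - w + 1))
    return pos
```

The normalisation is what buys robustness to global intensity changes relative to plain SSD; the full-frame rescan is what allows reacquisition after the target re-enters the field of view.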


2017 ◽  
Vol 05 (01) ◽  
pp. 1-17 ◽  
Author(s):  
Geoff Fink ◽  
Hui Xie ◽  
Alan F. Lynch ◽  
Martin Jagersand

This paper presents a dynamic image-based visual servoing (IBVS) control law for a quadrotor unmanned aerial vehicle (UAV) equipped with a single fixed on-board camera. The motion control problem is to regulate the relative position and yaw of the vehicle with respect to a moving planar target located within the camera's field of view. The control law is termed dynamic because it is based on the dynamics of the vehicle. To simplify the kinematics and dynamics, the control law relies on the notion of a virtual camera and uses image moments as visual features. The closed loop is proven to be globally asymptotically stable for a horizontal target; for nonhorizontal targets, we modify the control using a homography decomposition. Experimental and simulation results demonstrate the control law's performance.
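In the simplest case, image-moment features of the kind used here reduce to the area and centroid of the target region in the (virtual) image plane. A minimal sketch, assuming a binary target mask; the particular normalized moments of the paper are not reproduced:

```python
def image_moments(mask):
    """Raw moments m00, m10, m01 of a binary target mask (rows of 0/1)."""
    m00 = m10 = m01 = 0
    for v, row in enumerate(mask):
        for u, pix in enumerate(row):
            if pix:
                m00 += 1   # area
                m10 += u   # sum of column indices
                m01 += v   # sum of row indices
    return m00, m10, m01

def features(mask):
    """(x_g, y_g, area): centroid and area of the target region — the
    kind of moment features an IBVS law can regulate."""
    m00, m10, m01 = image_moments(mask)
    return m10 / m00, m01 / m00, m00
```

The centroid drives lateral position regulation, while the area acts as a depth cue; the virtual-camera construction keeps these features well behaved under the quadrotor's roll and pitch.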


2007 ◽  
Vol 129 (4) ◽  
pp. 541-543 ◽  
Author(s):  
Graziano Chesi ◽  
Domenico Prattichizzo ◽  
Antonio Vicino

This paper deals with visual servoing for 6-degree-of-freedom robot manipulators, and considers the problem of establishing whether and how the desired location can be reached while keeping all features in the field of view and following a straight line in Euclidean space. A path-planning technique based on a parametrization of the camera path through polynomials is proposed, which outperforms existing methods for this problem. The generated image trajectory can be tracked with an image-based visual servoing controller.
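A minimal sketch of the flavour of such a parametrization: the camera position stays on the straight segment between start and goal, while a polynomial timing law reshapes the traversal. The cubic 3s² − 2s³ below is an assumed, illustrative choice, not the paper's parametrization (which also covers orientation and field-of-view constraints):

```python
def straight_line_path(p0, p1):
    """Return s ↦ camera position along the straight segment p0 → p1,
    with the path parameter reshaped by a smooth polynomial timing law."""
    def sigma(s):
        # cubic with sigma(0)=0, sigma(1)=1 and zero slope at both ends
        return 3 * s * s - 2 * s ** 3

    def pose(s):
        t = sigma(s)
        return tuple(a + t * (b - a) for a, b in zip(p0, p1))

    return pose
```

Because the image of the path under sigma is still the segment p0 → p1, the Euclidean straight-line property is preserved no matter which polynomial timing law is chosen; the free polynomial coefficients are what a planner can exploit to keep features visible.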


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Panfeng Huang ◽  
Lu Chen ◽  
Bin Zhang ◽  
Zhongjie Meng ◽  
Zhengxiong Liu

In the ultra-close approaching phase of a tethered space robot, highly stable attitude control is essential. However, due to the limited field of view of the cameras, typical point features are difficult to extract, so the commonly adopted position-based visual servoing is no longer valid. To provide the robot's relative position and attitude with respect to the target, we propose a monocular visual servoing control method that uses only the edge lines of the satellite brackets. First, real-time detection of the edge lines is achieved based on the image gradient and region growing. Then, we build an edge-line-based model to estimate the relative position and attitude between the robot and the target. Finally, we design a visual servoing controller combined with a PD controller. Experimental results demonstrate that our algorithm extracts the edge lines stably and adjusts the robot's attitude to satisfy the grasping requirements.
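The gradient-plus-region-growing step can be illustrated as follows; the forward-difference gradient and the fixed threshold are simplifying assumptions, not the paper's exact formulation:

```python
from collections import deque

def gradient_magnitude(img):
    """Squared forward-difference gradient magnitude (kept squared to
    stay in integer arithmetic); last row/column are left at zero."""
    H, W = len(img), len(img[0])
    g = [[0] * W for _ in range(H)]
    for r in range(H - 1):
        for c in range(W - 1):
            dx = img[r][c + 1] - img[r][c]
            dy = img[r + 1][c] - img[r][c]
            g[r][c] = dx * dx + dy * dy
    return g

def grow_edge(g, seed, thresh):
    """Region-grow (4-connected BFS) from a high-gradient seed, collecting
    the connected set of pixels whose gradient exceeds `thresh` — a sketch
    of the gradient + region-growing detection described above."""
    H, W = len(g), len(g[0])
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < H and 0 <= nc < W and (nr, nc) not in region \
                    and g[nr][nc] > thresh:
                region.add((nr, nc))
                queue.append((nr, nc))
    return region
```

A line would then be fitted to each grown region; the fitted lines feed the edge-line model used for pose estimation.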


2011 ◽  
Vol 5 (3) ◽  
pp. 452-457 ◽  
Author(s):  
Akimitsu Imasato ◽  
Noriaki Maru

The gaze guidance and control we propose for a nursing robot uses a gaze point detector (GPD) and linear visual servoing (LVS). The robot captures stereo camera images, presents them to the user via a head-mounted display (HMD), calculates the user's gaze point from the tracked gaze, and moves toward that point using LVS. Because persons requiring nursing care share the robot's field of view via the GPD, control becomes more accurate the closer the robot gets to the target. The GPD, worn on the user's head, consists of an HMD and a CCD camera.


Author(s):  
Yuan Fang ◽  
Zhang Xiaoyong ◽  
Huang Zhiwu ◽  
Wentao Yu ◽  
...  

In this paper, a switched Kalman filter (KF) is used to predict the state of feature points that leave the field of view (FOV), one of the most common constraints in visual servoing. By using the predicted state to compensate for the true state of the feature points, nonholonomic robots can conduct visual servoing tasks efficiently. Simulation and experimental results verify the effectiveness of the proposed approach.
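The switching idea — correct while the feature is measured, predict-only once it leaves the FOV — can be sketched with a fixed-gain (alpha-beta style) simplification of the Kalman filter for a single image coordinate. The constant-velocity model and the fixed gain are illustrative assumptions, not the paper's filter design:

```python
class SwitchedFilter:
    """Constant-velocity filter for one image coordinate of a feature
    point. When the point leaves the field of view (measurement is None)
    the filter switches to prediction only, so the servo loop can keep
    using a compensated feature position."""

    def __init__(self, x0, v0=0.0, gain=0.5):
        self.x, self.v, self.gain = x0, v0, gain

    def step(self, z=None):
        self.x += self.v                 # predict under constant velocity
        if z is not None:                # feature visible: correct
            r = z - self.x               # innovation
            self.x += self.gain * r
            self.v += self.gain * r
        return self.x
```

A full switched KF would also propagate the error covariance and let the gain adapt between the two modes; the fixed gain here keeps the switching behaviour visible in a few lines.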

