Visual Servoing
Recently Published Documents


TOTAL DOCUMENTS

2463
(FIVE YEARS 400)

H-INDEX

62
(FIVE YEARS 8)

2022 ◽  
Vol 73 ◽  
pp. 102237
Author(s):  
Kleber Roberto da Silva Santos ◽  
Emília Villani ◽  
Wesley Rodrigues de Oliveira ◽  
Augusto Dttman

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 642
Author(s):  
Zubair Arif ◽  
Yili Fu

Assistive robotic arms (ARAs) that provide care to the elderly and people with disabilities are a significant part of Human-Robot Interaction (HRI). Presently available ARAs provide non-intuitive interfaces such as joysticks for control and thus lack the autonomy to perform daily activities. This study proposes that, to induce autonomous behavior in ARAs, the integration of visual sensors is vital, and that visual servoing in the direct Cartesian control mode is the preferred method. Generally, ARAs are designed in a configuration where the end-effector’s position is defined in the fixed base frame while its orientation is expressed in the end-effector frame. We denote this configuration as ‘mixed frame robotic arms’. Consequently, conventional visual servo controllers, which operate in a single frame of reference, are incompatible with mixed frame ARAs. Therefore, we propose a mixed-frame visual servo control framework for ARAs. Moreover, we elucidate the task-space kinematics of mixed frame ARAs, which leads to the development of a novel “mixed frame Jacobian matrix”. The proposed framework was validated on a mixed frame JACO-2 7-DoF ARA using an adaptive proportional-derivative controller to achieve image-based visual servoing (IBVS), and showed a significant 31% increase in convergence rate, outperforming conventional IBVS joint controllers, especially in outstretched arm positions and near the base frame. Our results demonstrate the need for a mixed frame controller when deploying visual servo control on modern ARAs, one that can inherently cater to the robotic arm’s joint limits, singularities, and self-collision problems.
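The paper's mixed frame Jacobian is not reproduced in the abstract, but the classical IBVS law it is compared against can be sketched. The snippet below is a minimal, illustrative implementation of standard point-feature IBVS (interaction matrix plus pseudo-inverse control law); all names and the choice of gain are assumptions, not the authors' code.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard 2x6 image Jacobian for one normalized point feature (x, y)
    at depth Z, mapping the camera twist [vx, vy, vz, wx, wy, wz] to the
    feature's image-plane velocity."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classical IBVS control law: v = -lambda * L^+ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error
```

With four coplanar point features this yields the full 6-DoF camera velocity; the mixed-frame framework in the paper replaces the single-frame Jacobian here with one split between base and end-effector frames.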


Author(s):  
Xiaoqian Huang ◽  
Mohamad Halwani ◽  
Rajkumar Muthusamy ◽  
Abdulla Ayyad ◽  
Dewald Swart ◽  
...  

Robotic vision plays a key role in perceiving the environment in grasping applications. However, conventional frame-based robotic vision, suffering from motion blur and a low sampling rate, may not meet the automation needs of evolving industrial requirements. This paper, for the first time, proposes an event-based robotic grasping framework for multiple known and unknown objects in a cluttered scene. Leveraging the event camera's microsecond-level sampling rate and freedom from motion blur, model-based and model-free approaches are developed for grasping known and unknown objects, respectively. In the model-based approach, an event-based multi-view method localizes the objects in the scene, and point cloud processing is then used to cluster and register them. The model-free approach, on the other hand, uses the developed event-based object segmentation, visual servoing and grasp planning to localize, align to, and grasp the target object. Using a UR10 robot with an eye-in-hand neuromorphic camera and a Barrett hand gripper, the proposed approaches are experimentally validated with objects of different sizes. Furthermore, the framework demonstrates robustness and a significant advantage over grasping with a traditional frame-based camera in low-light conditions.
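Event cameras report asynchronous per-pixel brightness changes rather than frames, so a common first step, and presumably a building block of the segmentation described above, is to accumulate a short window of events into a sparse "event frame". The sketch below is a generic illustration of that idea, not the authors' pipeline; the event tuple layout and the centroid-based localization are assumptions.

```python
import numpy as np

def accumulate_events(events, shape):
    """Accumulate a batch of events (x, y, polarity, timestamp) into a
    per-pixel count image over one short time window."""
    img = np.zeros(shape, dtype=np.int32)
    for x, y, _, _ in events:
        img[y, x] += 1
    return img

def event_centroid(img, min_count=1):
    """Centroid (x, y) of active pixels: a crude stand-in for object
    segmentation, usable as the tracked feature for visual servoing.
    Returns None if no pixel fired."""
    ys, xs = np.nonzero(img >= min_count)
    if len(xs) == 0:
        return None
    return xs.mean(), ys.mean()
```

In a real pipeline the accumulation window would be on the order of milliseconds, exploiting the microsecond-level timestamps the abstract highlights.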


Author(s):  
Qingxuan Gongye ◽  
Peng Cheng ◽  
Jiuxiang Dong

For the depth estimation problem in image-based visual servoing (IBVS) control, this paper proposes a new observer structure based on the Kalman filter (KF) to recover feature depth in real time. First, according to the number of states, two different mathematical models of the system are established. The first extracts the depth information from the Jacobian matrix as the state vector of the system; the other uses the depth information together with the coordinates of the point on the two-dimensional image plane as the state vector. The KF is then used to estimate the unknown depth information in real time. In addition, an IBVS controller gain-adjustment method for a 6-degree-of-freedom (6-DOF) manipulator is obtained using a fuzzy controller, which derives the gain matrix by taking the depth and error information as its inputs. Compared with existing works, the proposed observer exhibits less redundant motion while solving the Jacobian matrix depth estimation problem, and it also helps reduce the time needed for the camera to reach the target. Conclusively, experimental results on a 6-DOF robot with an eye-in-hand configuration demonstrate the effectiveness and practicability of the proposed method.
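The abstract does not give the observer equations, but the predict/update cycle at its core is the standard KF recursion. Below is a deliberately minimal scalar sketch for a single feature depth Z, assuming a constant-depth process model; the paper's actual state vectors are richer (depth alone, or depth plus image coordinates), and all names here are illustrative.

```python
class ScalarKF:
    """Minimal scalar Kalman filter, sketched for estimating one feature
    depth Z. The constant-depth process model is an assumption made for
    brevity, not the paper's full observer."""

    def __init__(self, z0, p0, q, r):
        self.z, self.p = z0, p0  # state estimate and its covariance
        self.q, self.r = q, r    # process and measurement noise variances

    def predict(self, dz=0.0):
        # Propagate: optional known depth change dz (e.g. from odometry),
        # covariance grows by the process noise.
        self.z += dz
        self.p += self.q

    def update(self, meas):
        # Correct with a depth measurement using the standard KF gain.
        k = self.p / (self.p + self.r)
        self.z += k * (meas - self.z)
        self.p *= (1.0 - k)
        return self.z
```

In the paper's setting the "measurement" is not direct: depth is inferred through the image Jacobian from observed feature motion, which is what makes the observer structure, rather than this toy filter, the contribution.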


2021 ◽  
Vol 104 (1) ◽  
Author(s):  
Jing Xin ◽  
Caixia Dong ◽  
Youmin Zhang ◽  
Yumeng Yao ◽  
Ailing Gong

Aiming to satisfy the increasing demand on family service robots for housework, this paper proposes a robot visual servoing scheme based on randomized trees to complete visual servoing tasks on unknown objects in natural scenes. Here, “unknown” means that there is no prior information on object models, such as a template or a database of the object. Firstly, the object to be manipulated is randomly selected by the user prior to task execution. The raw image information about the object is then obtained and used to train a randomized-tree classifier online. Secondly, the current image features are computed using the trained classifier. Finally, the visual controller is designed according to the image-feature error, defined as the difference between the desired and current image features. Five visual positioning experiments on unknown objects, including a 2D rigid object and a 3D non-rigid object, were conducted on a MOTOMAN-SV3X six-degree-of-freedom (DOF) manipulator. Experimental results show that the proposed scheme can effectively position an unknown object in complex natural scenes with occlusion and illumination changes. Furthermore, the developed scheme achieves excellent positioning accuracy, with a positioning error within 0.05 mm.
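The role the randomized-tree classifier plays here is to relocate the user-selected object's features in each new frame. As a much simpler stand-in for that step (the paper's classifier is not given in the abstract), the sketch below does brute-force normalized cross-correlation template matching in pure numpy; it serves the same purpose, producing the current feature location that feeds the image-feature error.

```python
import numpy as np

def match_template(image, template):
    """Return the (row, col) of the patch in `image` most similar to
    `template` under normalized cross-correlation. A simple stand-in for
    the paper's online-trained randomized-tree classifier: both answer
    'where is the selected object in the current frame?'."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p**2).sum() * (t**2).sum())
            score = (p * t).sum() / denom if denom > 0 else -np.inf
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

Unlike this template matcher, the randomized-tree approach remains robust to the occlusion and illumination changes the experiments emphasize, which is why the authors train a classifier rather than match pixels.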


2021 ◽  
Vol 11 (23) ◽  
pp. 11566
Author(s):  
Alireza Rastegarpanah ◽  
Ali Aflakian ◽  
Rustam Stolkin

This study proposes a hybrid visual servoing technique optimised to tackle the shortcomings of classical 2D, 3D and hybrid visual servoing approaches, chiefly convergence issues, image and robot singularities, and trajectories unreachable by the robot. To address these deficiencies, 3D estimation of the visual features was used to control the translation along the Z-axis as well as all rotations. To speed up the visual servoing (VS) operation, adaptive gains were used. A damped least squares (DLS) approach was used to mitigate robot singularities and smooth out discontinuities. Finally, manipulability was established as a secondary task, and the redundancy of the robot was resolved using the classical projection operator. The proposed approach is compared with the classical 2D, 3D and hybrid visual servoing methods in both simulation and real-world experiments. It yields more efficient trajectories for the robot, with shorter camera paths than 2D image-based and classical hybrid VS methods. Compared with the traditional position-based approach, the proposed method is less likely to lose the object from the camera's field of view and is more robust to camera calibration errors. Moreover, the proposed approach offers greater robot controllability (higher manipulability) than the other approaches.
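Two of the ingredients named above, the DLS inverse and the adaptive gain, have well-known standard forms that can be sketched compactly. The constants below are illustrative assumptions (the adaptive-gain schedule follows the common form popularized by the ViSP library, not necessarily the exact one used in the study).

```python
import numpy as np

def dls_pinv(J, damping=0.01):
    """Damped least squares inverse J^T (J J^T + lambda^2 I)^-1.
    Unlike the plain pseudo-inverse, it stays bounded near singular
    configurations, trading a small tracking error for smoothness,
    which is how DLS suppresses discontinuities near singularities."""
    m = J.shape[0]
    return J.T @ np.linalg.inv(J @ J.T + damping**2 * np.eye(m))

def adaptive_gain(err_norm, g0=4.0, ginf=0.4, slope=30.0):
    """Common adaptive-gain schedule: a high gain g0 near convergence
    (small error) decaying to ginf far from the goal, speeding up the
    final approach without large initial velocities. Constants are
    illustrative."""
    return (g0 - ginf) * np.exp(-slope * err_norm / (g0 - ginf)) + ginf
```

A servo loop would then compute `v = -adaptive_gain(norm(e)) * dls_pinv(J) @ e`, with the manipulability gradient projected into the null space of `J` as the secondary task.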
