Combining stereo vision and fuzzy image based visual servoing for autonomous object grasping using a 6-DOF manipulator

Author(s):  
Le Duc Hanh ◽  
Chyi-Yeu Lin
Author(s):  
Hiroshi KASE ◽  
Noriaki MARU ◽  
Atsushi NISHIKAWA ◽  
Fumio MIYAZAKI

2017 ◽  
Vol 12 (1) ◽  
pp. 34-39
Author(s):  
Lei Shi

Abstract In this paper, an object recognition method and a pose estimation approach using stereo vision are presented. The proposed approach was used for position-based visual servoing of a 6-DoF manipulator. The object detection and recognition method was designed to increase robustness: an RGB color-based object descriptor and an online correction method are proposed for object detection and recognition. Pose was estimated using the depth information derived from the stereo vision camera together with an SVD-based method. The transformation between the desired pose and the object pose was calculated and later used for position-based visual servoing. Experiments were carried out to verify the proposed object recognition approach, and the stereo camera was also tested to confirm that its depth accuracy is adequate. The proposed object recognition method is invariant to scale, orientation, and lighting conditions, which increases its robustness. The accuracy of the stereo vision camera can reach 1 mm, which is adequate for tasks such as grasping and manipulation.
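The SVD-based pose step is not detailed in the abstract. A common way to recover a rigid transform between two corresponding 3D point sets (for example, model points and their stereo-derived observations) is the Kabsch/Umeyama procedure; the sketch below is a generic NumPy illustration of that technique under assumed inputs, not the implementation from the paper.

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Estimate rotation R and translation t such that dst ~= R @ src + t.

    src, dst: (N, 3) arrays of corresponding 3D points (assumed inputs,
    e.g. model points and their stereo-derived counterparts). Generic
    Kabsch/Umeyama step, not the paper's specific implementation.
    """
    src_c = src.mean(axis=0)                 # centroids
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force a proper rotation with det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy usage: recover a known transform from noiseless correspondences
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.uniform(-0.5, 0.5, size=(30, 3))
    angle = np.deg2rad(25.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0,            0.0,           1.0]])
    t_true = np.array([0.10, -0.05, 0.30])
    dst = src @ R_true.T + t_true
    R, t = rigid_transform_svd(src, dst)
    print(np.allclose(R, R_true), np.allclose(t, t_true))
```

The recovered rotation and translation can then be composed into the transformation between the object pose and the desired pose that drives the position-based visual servoing loop.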


2011 ◽  
Vol 55-57 ◽  
pp. 868-871
Author(s):  
Qin Jun Du ◽  
Xue Yi Zhang ◽  
Xing Guo Huang

A humanoid robot is expected not only to walk stably but also to perform manipulation tasks autonomously in our working and living environments. This paper discusses the visual perception and object manipulation of a humanoid robot based on visual servoing. An active robot vision model is built, and the 3D location principle, the calibration method, and the precision of this model are analyzed. The active robot vision system, with two DOFs, enlarges the visual field, and a stereo pair is the simplest camera configuration for obtaining 3D position information.
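The 3D location principle for a calibrated, rectified stereo pair is the standard disparity-based triangulation. The sketch below illustrates that relation with assumed intrinsics (focal lengths, principal point, baseline); it is not the calibration or precision analysis described in the paper.

```python
import numpy as np

def triangulate_rectified(u_left, v_left, u_right, fx, fy, cx, cy, baseline):
    """Recover a 3D point (in the left-camera frame) from a rectified stereo
    correspondence. Intrinsics and baseline are assumed known from calibration;
    this is the textbook disparity relation, not the paper's procedure.
    """
    disparity = u_left - u_right            # pixels; must be positive
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at infinity or bad match")
    z = fx * baseline / disparity           # depth along the optical axis
    x = (u_left - cx) * z / fx              # lateral offset
    y = (v_left - cy) * z / fy              # vertical offset
    return np.array([x, y, z])

# Hypothetical numbers: fx = fy = 800 px, 0.10 m baseline, a matched feature
# at (400, 260) in the left image and (370, 260) in the right image.
point = triangulate_rectified(400, 260, 370, fx=800.0, fy=800.0,
                              cx=320.0, cy=240.0, baseline=0.10)
print(point)   # depth ~ 800 * 0.10 / 30 ~ 2.67 m
```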


Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1437
Author(s):  
Petar Durdevic ◽  
Daniel Ortiz-Arroyo

This paper describes a novel stereo vision sensor based on deep neural networks that can be used to produce a feedback signal for visual servoing in unmanned aerial vehicles such as drones. Two deep convolutional neural networks attached to the stereo camera in the drone are trained to detect wind turbines in images, and stereo triangulation is used to calculate the distance from a wind turbine to the drone. Our experimental results show that the sensor produces data accurate enough to be used for servoing, even in the presence of noise generated when the drone is not completely stable. Our results also show that appropriate filtering of the signals is needed and that, to produce correct results, it is very important to keep the wind turbine within the field of view of both cameras so that both deep neural networks can detect it.
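One plausible way to combine the two per-camera detections into a filtered distance signal is sketched below: the horizontal centers of the detectors' bounding boxes serve as the stereo correspondence, disparity gives the range, and a median filter rejects detection jitter. The box format, intrinsics, and filter choice are assumptions for illustration, not the authors' design.

```python
import numpy as np

def distance_from_detections(box_left, box_right, fx, baseline):
    """Estimate distance to a detected object from a rectified stereo pair.

    box_*: (x_min, y_min, x_max, y_max) bounding boxes in pixels from each
    detector (an assumed format). The horizontal box centers stand in for a
    point correspondence; disparity then gives range as fx * baseline / d.
    Returns None when the object is missing in one view or disparity is
    too small to be usable.
    """
    if box_left is None or box_right is None:
        return None                      # object left one camera's field of view
    u_left = 0.5 * (box_left[0] + box_left[2])
    u_right = 0.5 * (box_right[0] + box_right[2])
    disparity = u_left - u_right
    if disparity <= 1.0:                 # guard against near-zero disparity
        return None
    return fx * baseline / disparity

def median_filter(signal, window=5):
    """Median-filter the distance sequence to suppress detection outliers."""
    signal = np.asarray(signal, dtype=float)
    half = window // 2
    padded = np.pad(signal, half, mode="edge")
    return np.array([np.median(padded[i:i + window]) for i in range(len(signal))])

# Toy usage with made-up boxes and intrinsics (fx in pixels, baseline in metres)
d = distance_from_detections((300, 100, 360, 220), (280, 102, 340, 218),
                             fx=900.0, baseline=0.12)
print(d)                                  # ~ 900 * 0.12 / 20 = 5.4 m
print(median_filter([5.4, 5.5, 9.9, 5.3, 5.4]))
```

The filtered distance would then serve as the feedback signal for the servoing controller; any filter with similar outlier rejection could be substituted.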

