unknown objects
Recently Published Documents


TOTAL DOCUMENTS: 259 (FIVE YEARS: 61)

H-INDEX: 21 (FIVE YEARS: 2)

Author(s):  
Xiaoqian Huang ◽  
Mohamad Halwani ◽  
Rajkumar Muthusamy ◽  
Abdulla Ayyad ◽  
Dewald Swart ◽  
...  

Abstract: Robotic vision plays a key role in perceiving the environment for grasping applications. However, conventional frame-based robotic vision, which suffers from motion blur and a low sampling rate, may not meet the automation needs of evolving industrial requirements. This paper proposes, for the first time, an event-based robotic grasping framework for multiple known and unknown objects in a cluttered scene. Exploiting the microsecond-level sampling rate and motion-blur-free output of the event camera, model-based and model-free approaches are developed for grasping known and unknown objects, respectively. In the model-based approach, an event-based multi-view method localizes the objects in the scene, and point cloud processing then clusters and registers them. The model-free approach, on the other hand, uses the developed event-based object segmentation, visual servoing, and grasp planning to localize, align to, and grasp the target object. The proposed approaches are experimentally validated with objects of different sizes, using a UR10 robot with an eye-in-hand neuromorphic camera and a Barrett hand gripper. Furthermore, the framework demonstrates robustness and a significant advantage over grasping with a traditional frame-based camera in low-light conditions.
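The model-based branch described above relies on clustering a reconstructed scene point cloud and registering a known object model against a cluster. The following is a minimal sketch of that idea, not the authors' code: it assumes Open3D, and the file names and parameter values are placeholders.

```python
# Minimal sketch (assumption, not the paper's implementation): cluster a scene
# point cloud into candidate objects, then register a known object model
# against the largest cluster to estimate its pose.
import numpy as np
import open3d as o3d

scene = o3d.io.read_point_cloud("scene.pcd")          # placeholder: scene reconstructed from multi-view data
model = o3d.io.read_point_cloud("object_model.pcd")   # placeholder: known object model

# Cluster the scene into candidate objects with DBSCAN.
labels = np.array(scene.cluster_dbscan(eps=0.02, min_points=50))
clusters = [scene.select_by_index(np.where(labels == i)[0])
            for i in range(labels.max() + 1)]

# Register the model to the largest cluster with point-to-point ICP.
target = max(clusters, key=lambda c: len(c.points))
result = o3d.pipelines.registration.registration_icp(
    model, target, max_correspondence_distance=0.01,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
print("Estimated object pose:\n", result.transformation)  # 4x4 homogeneous transform
```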


2021 ◽  
Vol 104 (1) ◽  
Author(s):  
Jing Xin ◽  
Caixia Dong ◽  
Youmin Zhang ◽  
Yumeng Yao ◽  
Ailing Gong

Abstract: To meet the growing demand for family service robots that assist with housework, this paper proposes a robot visual servoing scheme based on randomized trees to accomplish visual servoing tasks on unknown objects in natural scenes. Here, "unknown" means that there is no prior information about the object model, such as a template or a database of the object. First, the object to be manipulated is randomly selected by the user before the visual servoing task is executed; raw image information about the object is then acquired and used to train a randomized-tree classifier online. Second, the current image features are computed with the trained classifier. Finally, the visual controller is designed from the image-feature error, defined as the difference between the desired and current image features. Five visual positioning experiments on unknown objects, including a 2D rigid object and a 3D non-rigid object, are conducted on a six degree-of-freedom (DOF) MOTOMAN-SV3X manipulator. Experimental results show that the proposed scheme can effectively position an unknown object in complex natural scenes with occlusion and illumination changes. Furthermore, the developed scheme achieves excellent positioning accuracy, with a positioning error within 0.05 mm.
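The controller described above follows the classic image-based visual servoing pattern: the error is the difference between desired and current image features, mapped to a camera velocity. The sketch below illustrates that control law only; it is an assumption for illustration (the paper's randomized-tree feature extraction is not shown), and the feature values and interaction matrix are placeholders.

```python
# Minimal sketch (assumption, not the paper's code) of an image-based visual
# servoing law: v = -lambda * pinv(L) * (s - s*), where s are current features,
# s* are desired features, and L is the interaction (image Jacobian) matrix.
import numpy as np

def ibvs_velocity(s_current, s_desired, L, gain=0.5):
    """Return a 6-DOF camera velocity twist from the image-feature error."""
    error = s_current - s_desired               # feature error in image space
    return -gain * np.linalg.pinv(L) @ error    # map error to camera velocity

# Example with two point features (4 feature coordinates, 4x6 interaction matrix).
s_cur = np.array([0.12, -0.05, -0.08, 0.10])    # placeholder current features
s_des = np.zeros(4)                             # desired features at convergence
L = np.random.randn(4, 6)                       # placeholder interaction matrix
print("camera velocity command:", ibvs_velocity(s_cur, s_des, L))
```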


2021 ◽  
Vol 103 (4) ◽  
Author(s):  
Hui Zhang ◽  
Jef Peeters ◽  
Eric Demeester ◽  
Karel Kellens

2021 ◽  
Vol 2083 (4) ◽  
pp. 042030
Author(s):  
Ziang Xu

Abstract: This paper presents a lightweight Hierarchical Fusion Convolutional Neural Network (HF-CNN) for grasp detection. The network mainly employs residual structures, atrous spatial pyramid pooling (ASPP), and encoder-decoder feature fusion. Compared with common grasp-detection networks, it substantially improves robustness and generalizability on detection tasks by extracting richer feature information from the images. In our test on the Cornell University dataset, we achieve 85% accuracy when detecting unknown objects.
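The abstract names ASPP as one of the network's building blocks. Below is a minimal PyTorch sketch of a generic ASPP module of that kind, an assumption for illustration rather than the HF-CNN architecture itself; the channel counts and dilation rates are placeholders.

```python
# Minimal sketch (assumption, not the paper's HF-CNN) of atrous spatial pyramid
# pooling: parallel dilated convolutions whose outputs are concatenated and
# fused with a 1x1 convolution.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True))
            for d in dilations])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        # Concatenate multi-rate context from all branches, then fuse.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

feat = torch.randn(1, 64, 56, 56)    # placeholder feature map
print(ASPP(64, 128)(feat).shape)     # torch.Size([1, 128, 56, 56])
```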


Author(s):  
Abdulrahman Al-Shanoon ◽  
Haoxiang Lang ◽  
Ying Wang ◽  
Yunfei Zhang ◽  
Wenxin Hong

2021 ◽  
Vol 8 ◽  
Author(s):  
Muhammad Sami Siddiqui ◽  
Claudio Coppola ◽  
Gokhan Solak ◽  
Lorenzo Jamone

Grasp stability prediction of unknown objects is crucial to enable autonomous robotic manipulation in an unstructured environment. Even if prior information about the object is available, real-time local exploration might be necessary to mitigate object modelling inaccuracies. This paper presents an approach to predict safe grasps of unknown objects using depth vision and a dexterous robot hand equipped with tactile feedback. Our approach does not assume any prior knowledge about the objects. First, an object pose estimation is obtained from RGB-D sensing; then, the object is explored haptically to maximise a given grasp metric. We compare two probabilistic methods (i.e. standard and unscented Bayesian Optimisation) against random exploration (i.e. uniform grid search). Our experimental results demonstrate that these probabilistic methods can provide confident predictions after a limited number of exploratory observations, and that unscented Bayesian Optimisation can find safer grasps, taking into account the uncertainty in robot sensing and grasp execution.
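The comparison above pits Bayesian optimisation of a grasp metric against uniform grid search with the same exploration budget. The sketch below illustrates only that contrast, using scikit-optimize's standard Gaussian-process optimiser (the unscented variant is not covered); the grasp metric and search space are synthetic placeholders, not the authors' setup.

```python
# Minimal sketch (assumption, not the authors' implementation): Bayesian
# optimisation vs. uniform grid search of a grasp-quality cost over a 2-D
# grasp parameterisation (position offset, wrist angle).
import numpy as np
from skopt import gp_minimize

def grasp_metric(params):
    """Placeholder grasp-quality cost (lower is better); in practice this would
    come from tactile/vision feedback after each exploratory observation."""
    x, theta = params
    return (x - 0.3) ** 2 + 0.5 * (theta - 1.0) ** 2 + 0.01 * np.random.randn()

space = [(-1.0, 1.0), (0.0, float(np.pi))]   # position offset, wrist angle

# Bayesian optimisation with a budget of 20 exploratory observations.
res = gp_minimize(grasp_metric, space, n_calls=20, random_state=0)
print("BO best grasp:", res.x, "cost:", res.fun)

# Uniform grid search baseline with a comparable budget.
grid = [(x, t) for x in np.linspace(-1, 1, 5) for t in np.linspace(0, np.pi, 4)]
best = min(grid, key=grasp_metric)
print("Grid best grasp:", best)
```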


Author(s):  
Metin Ozkan ◽  
Sezgin Secil ◽  
Kaya Turgut ◽  
Helin Dutagaci ◽  
Cihan Uyanik ◽  
...  
