robot grasping
Recently Published Documents


TOTAL DOCUMENTS: 136 (five years: 44)
H-INDEX: 18 (five years: 6)

2021 ◽  
Author(s):  
Gang Peng ◽  
Jinhu Liao ◽  
Shangbin Guan ◽  
Jin Yang ◽  
Xinde Li

Abstract In the field of intelligent manufacturing, robot grasping and sorting is an important task. However, traditional 2D camera-based robotic arm grasping methods suffer from low grasping efficiency and low grasping accuracy when facing stacked and occluded scenes. To address these issues, a novel pushing-grasping collaboration method based on a deep Q-network with dual viewpoints is proposed in this paper. The method adopts an improved deep Q-network algorithm, using an RGB-D camera to obtain RGB images and point clouds of the objects from two viewpoints, and combines pushing and grasping actions so that the trained manipulator can rearrange scenes to make them easier to grasp, allowing it to perform well in more complicated grasping scenes. Moreover, we improve the reward function of the deep Q-network and propose a piecewise reward function to speed up its convergence. We trained different models and tried different methods in the V-REP simulation environment, and conclude that the proposed method converges quickly and that its success rate for grasping objects in unstructured scenes reaches 83.5%. The method also generalizes well, performing reliably when novel objects that the manipulator has never grasped before appear in the scene.
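The abstract above proposes a piecewise reward to speed up DQN convergence but does not publish the exact values. A minimal sketch of what such a reward might look like for a push-grasp agent, with entirely illustrative reward magnitudes:

```python
def piecewise_reward(action, grasp_success, scene_changed):
    """Hypothetical piecewise reward for a push-grasp deep Q-network.

    The values below are illustrative assumptions, not the authors'
    published numbers: a successful grasp is rewarded most, a failed
    grasp is penalized, and a push is rewarded only when it actually
    rearranges the scene (making later grasps easier).
    """
    if action == "grasp":
        return 1.0 if grasp_success else -0.5
    if action == "push":
        # A push that changes nothing gets no reward, discouraging
        # wasted actions without penalizing exploration outright.
        return 0.5 if scene_changed else 0.0
    return 0.0
```

Splitting the reward this way gives the agent a denser learning signal than a sparse grasp-only reward, which is the usual motivation for shaping in push-grasp pipelines.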


2021 ◽  
Vol 11 (16) ◽  
pp. 7593
Author(s):  
Hyun-Chul Kang ◽  
Hyo-Nyoung Han ◽  
Hee-Chul Bae ◽  
Min-Gi Kim ◽  
Ji-Yeon Son ◽  
...  

We propose a simple and robust HSV color-space-based algorithm that can automatically extract object position information without human intervention or prior knowledge. In manufacturing sites with high variability, it is difficult to recognize products through robot machine vision, especially to extract object information accurately, owing to environmental factors such as noise around objects, shadows, light reflections, and illumination interference. The proposed algorithm, which does not require users to reset the HSV color threshold whenever a product is changed, uses an ROI referencing method to solve this problem. The algorithm automatically identifies the object's location through HSV color-space-based ROI random sampling, ROI similarity comparison, and ROI merging. The proposed system utilizes an IoT device with several modules for the detection, analysis, control, and management of object data. The experimental results show that the proposed algorithm is very useful for industrial automation applications in complex and highly variable manufacturing environments.
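The ROI similarity comparison and merging steps above can be sketched with two small helpers. The similarity metric (mean HSV color distance) and the bounding-box merge rule are assumptions for illustration; the paper does not specify its exact formulas:

```python
import numpy as np

def roi_similarity(img_hsv, roi_a, roi_b):
    """Compare two ROIs by their mean HSV color (hypothetical metric).

    Each ROI is (x, y, w, h) over an HSV image of shape (H, W, 3);
    similarity is 1 / (1 + L2 distance between the mean colors),
    so identical-colored regions score 1.0.
    """
    def mean_hsv(roi):
        x, y, w, h = roi
        return img_hsv[y:y + h, x:x + w].reshape(-1, 3).mean(axis=0)
    dist = np.linalg.norm(mean_hsv(roi_a) - mean_hsv(roi_b))
    return 1.0 / (1.0 + dist)

def merge_rois(roi_a, roi_b):
    """Merge two ROIs into the bounding box covering both."""
    x = min(roi_a[0], roi_b[0])
    y = min(roi_a[1], roi_b[1])
    x2 = max(roi_a[0] + roi_a[2], roi_b[0] + roi_b[2])
    y2 = max(roi_a[1] + roi_a[3], roi_b[1] + roi_b[3])
    return (x, y, x2 - x, y2 - y)
```

Randomly sampled ROIs whose similarity exceeds a threshold would be merged iteratively until the surviving boxes localize the object.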


2021 ◽  
Vol 15 ◽  
Author(s):  
Guoyu Zuo ◽  
Jiayuan Tong ◽  
Hongxing Liu ◽  
Wenbai Chen ◽  
Jianfeng Li

To grasp target objects stably and in the correct order in object-stacking scenes, it is important for the robot to reason about the relationships between objects and derive an intelligent manipulation order, enabling more advanced interaction between the robot and the environment. This paper proposes a novel graph-based visual manipulation relationship reasoning network (GVMRN) that directly outputs object relationships and manipulation order. The GVMRN model first extracts features and detects objects from RGB images, then adopts a graph convolutional network (GCN) to collect contextual information between objects. To improve the efficiency of relation reasoning, a relationship filtering network is built to reduce the number of object pairs before reasoning. Experiments on the Visual Manipulation Relationship Dataset (VMRD) show that our model significantly outperforms previous methods at reasoning object relationships in object-stacking scenes. The GVMRN model is also tested on images we collected and applied on a robot grasping platform. The results demonstrate the generalization and applicability of our method in real environments.
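The contextual aggregation that a GCN performs over detected objects can be illustrated with a single layer. The normalization and ReLU activation below are the standard GCN choices, not necessarily the exact formulation GVMRN uses:

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution step aggregating neighbor features.

    H: (n, d) node features for n detected objects,
    A: (n, n) adjacency matrix (1 where two objects may be related),
    W: (d, d_out) learned weight matrix.
    Each object's new feature mixes its own with its neighbors',
    which is how contextual information between objects is collected.
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # row-normalize by degree
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)  # ReLU activation
```

A relationship filtering network, as described above, would prune entries of `A` before this aggregation so that only plausible object pairs are reasoned over.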


2021 ◽  
pp. 027836492110272
Author(s):  
Yu She ◽  
Shaoxiong Wang ◽  
Siyuan Dong ◽  
Neha Sunil ◽  
Alberto Rodriguez ◽  
...  

Cables are complex, high-dimensional, and dynamic objects. Standard approaches to manipulating them often rely on conservative strategies involving long series of very slow, incremental deformations, or on mechanical fixtures such as clamps, pins, or rings. We are interested in manipulating freely moving cables, in real time, with a pair of robotic grippers and no added mechanical constraints. The main contribution of this paper is a perception and control framework that moves in that direction, using real-time tactile feedback to accomplish the task of following a dangling cable. The approach relies on a vision-based tactile sensor, GelSight, which estimates the pose of the cable in the grip and the friction forces during cable sliding. We achieve the behavior by combining two tactile-based controllers: (1) a cable grip controller, in which a PD controller combined with a leaky integrator regulates the gripping force to keep the frictional sliding forces close to a suitable value; and (2) a cable pose controller, in which a linear-quadratic regulator (LQR) based on a learned linear model of the cable sliding dynamics keeps the cable centered and aligned on the fingertips to prevent it from falling from the grip. This behavior is made possible by a reactive gripper fitted with GelSight-based high-resolution tactile sensors. The robot can follow 1 m of cable in random configurations within two to three hand regrasps, adapting to cables of different materials and thicknesses. We demonstrate the robot grasping a headphone cable, sliding its fingers to the jack connector, and inserting it. To the best of the authors' knowledge, this is the first implementation of real-time cable following without the aid of mechanical fixtures. Videos are available at http://gelsight.csail.mit.edu/cable/
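The cable grip controller described above (a PD controller combined with a leaky integrator) can be sketched as follows. The gains and leak factor are illustrative assumptions, not the paper's tuned values:

```python
class GripController:
    """PD controller plus leaky integrator regulating grip force.

    Drives the measured frictional sliding force toward a target
    value, as in the cable grip controller described above. The leak
    factor slowly forgets accumulated error, which avoids integrator
    wind-up during long sliding motions. All gains are hypothetical.
    """

    def __init__(self, kp=0.8, kd=0.1, ki=0.05, leak=0.99):
        self.kp, self.kd, self.ki, self.leak = kp, kd, ki, leak
        self.prev_err = 0.0
        self.integral = 0.0

    def step(self, friction_force, target_force):
        """Return a grip-force adjustment for one control tick."""
        err = target_force - friction_force
        self.integral = self.leak * self.integral + err  # leaky integral
        d_err = err - self.prev_err                      # derivative term
        self.prev_err = err
        return self.kp * err + self.kd * d_err + self.ki * self.integral
```

In the full system this output would command the gripper width, while the separate LQR pose controller keeps the cable centered on the fingertips.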


Author(s):  
Junying Yao ◽  
Yongkui Liu ◽  
Tingyu Lin ◽  
Xubin Ping ◽  
He Xu ◽  
...  

Abstract For the past few years, training robots to learn various manipulation skills using deep reinforcement learning (DRL) has attracted wide attention. However, a large search space, low sample quality, and difficulties in network convergence pose great challenges to robot training. This paper deals with assembly-oriented robot grasping training and proposes a DRL algorithm with a new mechanism, namely, a policy guidance mechanism (PGM). The PGM can effectively transform useless or low-quality samples into useful or high-quality ones. Based on an improved Deep Q-Network algorithm, an end-to-end policy model that takes images as input and outputs actions is established. Through continuous interaction with the environment, robots learn to grasp objects optimally according to the location of the maximum Q value. A number of experiments in different scenarios are conducted in simulation and on physical robots. Results indicate that the proposed DRL algorithm with PGM is effective in increasing the success rate of robot grasping and, moreover, is robust to changes in the environment and objects.
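The "grasp at the location of the maximum Q value" policy mentioned above is a pixel-wise argmax over the network's Q map. A minimal sketch (the Q map here stands in for the output of the paper's end-to-end policy network):

```python
import numpy as np

def select_grasp(q_map):
    """Pick the grasp pixel with the maximum predicted Q value.

    q_map: (H, W) array of per-pixel Q values produced by a policy
    network from an input image. Returns the (row, col) location
    where the grasp action is executed.
    """
    idx = np.argmax(q_map)              # flat index of the best pixel
    return np.unravel_index(idx, q_map.shape)
```

A policy guidance mechanism as described would act upstream of this selection, reshaping which experience samples train the network rather than changing the argmax rule itself.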


Machines ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 119
Author(s):  
Tong Li ◽  
Xuguang Sun ◽  
Xin Shu ◽  
Chunkai Wang ◽  
Yifan Wang ◽  
...  

As an essential perceptual device, the tactile sensor can efficiently improve robot intelligence by providing contact force perception, enabling algorithms based on contact force feedback. However, current tactile grasping technology lacks high-performance sensors and high-precision grasping prediction models, which limits its broad application. Herein, an intelligent robot grasping system built around a highly sensitive tactile sensor array was constructed. A dataset reflecting the grasping contact forces of various objects was built from the tactile sensor array's feedback over many grasping operations, with the stability state of each operation also recorded. On this basis, grasp stability prediction models with good performance in grasp state judgment were proposed. By feeding the training data into different machine learning algorithms and comparing their judgments, the best grasp prediction model for different scenes can be obtained. The model was validated to be efficient, with judgment accuracy of over 98% in grasp stability prediction from limited training data. Further experiments prove that real-time contact force input from the tactile sensor array can drive periodic robot control to realize stable grasping according to the prediction model's real-time estimate of the grasping state.
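The pipeline above feeds labeled tactile force vectors into machine-learning classifiers and compares them. As a stand-in for those models, here is a deliberately trivial nearest-centroid baseline; the paper evaluates several stronger algorithms, and both the class labels and features below are illustrative:

```python
import numpy as np

class GraspStabilityModel:
    """Nearest-centroid classifier over tactile force vectors.

    fit() stores one mean feature vector per stability label
    (e.g. "stable" vs "slip"); predict() returns the label whose
    centroid is closest to a new tactile reading. A placeholder for
    the machine-learning models compared in the work above.
    """

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.centroids = {label: X[y == label].mean(axis=0)
                          for label in np.unique(y)}
        return self

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(x - self.centroids[c]))
```

In a closed-loop system, each new tactile frame would pass through `predict`, and a "slip" result would trigger a grip-force correction before the object is lost.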


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Zhengtuo Wang ◽  
Yuetong Xu ◽  
Guanhua Xu ◽  
Jianzhong Fu ◽  
Jiongyan Yu ◽  
...  

Purpose In this work, the authors aim to provide a set of convenient methods for generating training data, and then develop a deep learning method based on point clouds to estimate the pose of a target for robot grasping. Design/methodology/approach This work presents PointSimGrasp, a deep learning method on point clouds for robot grasping. In PointSimGrasp, a point cloud emulator is introduced to generate training data, and a deep-learning-based pose estimation algorithm is designed. After training with the emulated data set, the pose estimation algorithm can estimate the pose of the target. Findings For the experiments, an experimental platform was built containing a six-axis industrial robot, a binocular structured-light sensor and a base platform with adjustable inclination. A data set containing three subsets was collected on this platform. After training with the emulated data set, PointSimGrasp was tested on the experimental data set, achieving an average translation error of about 2–3 mm and an average rotation error of about 2–5 degrees. Originality/value The contributions are as follows: first, a deep learning method on point clouds is proposed to estimate the 6D pose of a target; second, a convenient training method for the pose estimation algorithm is presented, with a point cloud emulator introduced to generate training data; finally, an experimental platform is built and PointSimGrasp is tested on it.
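Translation and rotation errors like the 2–3 mm and 2–5 degree figures above are conventionally computed as a Euclidean distance and a geodesic angle between rotations; the sketch below uses those standard metrics, which is an assumption about how the reported numbers are defined:

```python
import numpy as np

def translation_error_mm(t_pred, t_true):
    """Euclidean distance between predicted and true translation (mm)."""
    return float(np.linalg.norm(np.asarray(t_pred) - np.asarray(t_true)))

def rotation_error_deg(R_pred, R_true):
    """Geodesic angle between two 3x3 rotation matrices, in degrees.

    The relative rotation R_pred @ R_true.T has trace 1 + 2*cos(theta),
    so theta recovers the smallest angle separating the two poses.
    """
    R = np.asarray(R_pred) @ np.asarray(R_true).T
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))
```

Averaging these two quantities over a test set yields exactly the kind of per-axis-free summary statistics quoted in the findings.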

