GCCN: Geometric Constraint Co-attention Network for 6D Object Pose Estimation
2021 ◽ Author(s): Yongming Wen ◽ Yiquan Fang ◽ Junhao Cai ◽ Kimwa Tung ◽ Hui Cheng

Sensors ◽ 2021 ◽ Vol 21 (4) ◽ pp. 1299
Author(s): Honglin Yuan ◽ Tim Hoogenkamp ◽ Remco C. Veltkamp

Deep learning has achieved great success on robotic vision tasks. However, compared with other vision-based tasks, collecting a representative and sufficiently large training set for six-dimensional (6D) object pose estimation is inherently difficult. In this paper, we propose the RobotP dataset, consisting of commonly used objects, for benchmarking 6D object pose estimation. To create the dataset, we apply a 3D reconstruction pipeline to produce high-quality depth images, ground truth poses, and 3D models for well-selected objects. Subsequently, based on the generated data, we automatically produce object segmentation masks and two-dimensional (2D) bounding boxes. To further enrich the data, we synthesize a large number of photo-realistic color-and-depth image pairs with ground truth 6D poses. Our dataset is freely distributed to research groups through the Shape Retrieval Challenge benchmark on 6D pose estimation. Based on our benchmark, different learning-based approaches are trained and tested on the unified dataset. The evaluation results indicate that there is considerable room for improvement in 6D object pose estimation, particularly for objects with dark colors, and that photo-realistic images help increase the performance of pose estimation algorithms.
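The abstract notes that object segmentation masks and 2D bounding boxes are produced automatically from the generated data. As a rough illustration only (not the authors' pipeline), the sketch below shows one common way a tight 2D bounding box can be read off a per-object binary mask; the function name and image shape are assumptions.

```python
import numpy as np

def bbox_from_mask(mask: np.ndarray):
    """Return a tight 2D bounding box (x_min, y_min, x_max, y_max) for a binary mask.

    Hypothetical helper, not from the RobotP pipeline.
    """
    ys, xs = np.nonzero(mask)          # pixel coordinates of the object
    if ys.size == 0:
        return None                    # object not visible in this frame
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Toy example: a 480x640 mask with the object occupying a small rectangle.
mask = np.zeros((480, 640), dtype=bool)
mask[100:220, 300:420] = True
print(bbox_from_mask(mask))            # -> (300, 100, 419, 219)
```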


Author(s): Alexander Krull ◽ Eric Brachmann ◽ Sebastian Nowozin ◽ Frank Michel ◽ Jamie Shotton ◽ ...

Sensors ◽ 2021 ◽ Vol 21 (12) ◽ pp. 4064
Author(s): Can Li ◽ Ping Chen ◽ Xin Xu ◽ Xinyu Wang ◽ Aijun Yin

In this work, we propose a novel coarse-to-fine method for object pose estimation, coupled with admittance control, to facilitate robotic shaft-in-hole assembly. Since traditional approaches that locate the hole by force sensing are time-consuming, we employ 3D vision to estimate the axis pose of the hole. Robots can thus locate the target hole in both position and orientation and move the shaft into the hole along the axis orientation. In our method, the raw point cloud of the hole is first processed to acquire keypoints. A coarse axis is then extracted according to the geometric constraints between the surface normals and the axis. Finally, axis refinement is performed on the coarse axis to achieve higher precision. Practical experiments verified the effectiveness of the axis pose estimation, and the assembly strategy combining axis pose estimation and admittance control was effectively applied to robotic shaft-in-hole assembly.
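The coarse step relies on a geometric constraint between the wall's surface normals and the hole axis. A minimal sketch of one such constraint, assuming a roughly cylindrical hole wall (an illustration, not the authors' exact algorithm): every wall normal is ideally perpendicular to the axis, so a coarse axis direction can be taken as the right singular vector of the stacked normals with the smallest singular value. The function name and data layout below are assumptions.

```python
import numpy as np

def coarse_hole_axis(normals: np.ndarray) -> np.ndarray:
    """Estimate a coarse hole-axis direction from unit surface normals (N x 3)
    sampled on the cylindrical wall of the hole.

    Constraint used: each wall normal is perpendicular to the axis, so the axis
    is the direction least represented by the normals, i.e. the right singular
    vector of the normal matrix with the smallest singular value.
    The returned direction is defined up to sign.
    """
    N = np.asarray(normals, dtype=float)
    N = N / np.linalg.norm(N, axis=1, keepdims=True)   # re-normalize defensively
    _, _, vt = np.linalg.svd(N, full_matrices=False)   # rows of vt: principal directions
    axis = vt[-1]                                       # smallest singular value -> axis
    return axis / np.linalg.norm(axis)

# Toy example: noisy normals of a cylinder whose true axis is the z-axis.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 500)
normals = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
normals += 0.02 * rng.standard_normal(normals.shape)
print(coarse_hole_axis(normals))                        # approximately [0, 0, +/-1]
```

In this reading, the subsequent refinement stage would only need to correct the small residual error of this least-squares direction, which is consistent with the coarse-to-fine strategy described in the abstract.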


2021 ◽ Vol 218 ◽ pp. 106839
Author(s): Pengshuai Yin ◽ Jiayong Ye ◽ Guoshen Lin ◽ Qingyao Wu
