Point cloud deep learning for multiple object pose estimation

Author(s): Yukihiro TODA, Naoya CHIBA, Koichi HASHIMOTO
2021, Vol. 27 (8), pp. 593-601
Author(s): You Chan No, YoungWoo Kim, Daegun Kim, Hyeon-Gyu Han, Young-ki Song, ...
2020, Vol. 108 (4), pp. 1217-1231
Author(s): Zhengtuo Wang, Yuetong Xu, Quan He, Zehua Fang, Guanhua Xu, ...
Sensors, 2020, Vol. 20 (23), pp. 6790
Author(s): Chi Xu, Jiale Chen, Mengyang Yao, Jun Zhou, Lijun Zhang, ...

6DoF object pose estimation is a foundation for many important applications, such as robotic grasping and autonomous driving. However, estimating the 6DoF pose of transparent objects, which are common in daily life, is very challenging because the optical characteristics of transparent materials cause significant depth errors, which in turn lead to false estimates. To address this problem, a two-stage approach is proposed to estimate the 6DoF pose of a transparent object from a single RGB-D image. In the first stage, the influence of depth error is eliminated through transparent-region segmentation, surface normal recovery, and RANSAC plane estimation. In the second stage, an extended point-cloud representation is introduced to estimate the object pose accurately and efficiently. To the best of our knowledge, this is the first deep-learning-based approach focused on 6DoF pose estimation of transparent objects from a single RGB-D image. Experimental results show that the proposed approach effectively estimates the 6DoF pose of transparent objects and outperforms the state-of-the-art baselines by a large margin.
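
The abstract does not describe the pipeline in code, but the RANSAC plane-estimation step of the first stage can be illustrated with a short numpy sketch. The function name ransac_plane, the inlier threshold, and the synthetic test scene below are assumptions made for illustration only, not the authors' implementation.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.01, rng=None):
    """Fit a plane to an (N, 3) point cloud with RANSAC.

    Returns ((normal, d), inlier_mask) such that normal . p + d ~= 0
    for inlier points.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Sample 3 distinct points and derive the plane they span.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Count points within `threshold` of the candidate plane.
        dist = np.abs(points @ normal + d)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

if __name__ == "__main__":
    # Synthetic scene: a noisy table plane plus scattered outliers.
    rng = np.random.default_rng(0)
    xy = rng.uniform(-0.5, 0.5, size=(1000, 2))
    plane_pts = np.column_stack([xy, 0.002 * rng.standard_normal(1000)])
    outliers = rng.uniform(-0.5, 0.5, size=(200, 3))
    cloud = np.vstack([plane_pts, outliers])
    (normal, d), mask = ransac_plane(cloud, rng=0)
    print("plane normal:", np.round(normal, 3), "inliers:", mask.sum())
```

In a pipeline like the one described, the recovered plane can serve as the support surface against which depth values corrupted by the transparent material are corrected or discarded.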


Author(s): Yongbin Sun, Sai Nithin Reddy Kantareddy, Joshua Siegel, Alexandre Armengol-Urpi, Xiaoyu Wu, ...
Sensors, 2021, Vol. 21 (19), pp. 6473
Author(s): Tyson Phillips, Tim D’Adamo, Peter McAree

The capability to estimate the pose of known geometry from point cloud data is a frequently arising requirement in robotics and automation applications. This problem is directly addressed by Iterative Closest Point (ICP); however, that method has several limitations and lacks robustness. This paper makes the case for an alternative method that seeks the most likely solution given the available evidence. Specifically, an evidence-based metric is described that finds the pose of the object that would maximise the conditional likelihood of reproducing the observed range measurements. A seedless search heuristic is also provided to find the most likely pose estimate in light of these measurements. The method is demonstrated for pose estimation (2D and 3D shape poses as well as joint-space searches), object identification/classification, and platform localisation. Furthermore, the method is shown to be robust in cluttered or non-segmented point cloud data, as well as to measurement uncertainty and extrinsic sensor calibration error.
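
The paper's exact metric is not reproduced here, but the underlying idea, scoring a candidate pose by the conditional likelihood of reproducing the observed range measurements and then searching for the maximiser, can be sketched in numpy. The 2D circle geometry, the Gaussian noise model, the mismatch penalty, and the grid search standing in for the seedless search heuristic are all illustrative assumptions.

```python
import numpy as np

def predicted_ranges(bearings, center, radius):
    """Range along each bearing (from a sensor at the origin) to a circle of
    known radius at `center`; NaN where the ray misses the circle."""
    u = np.column_stack([np.cos(bearings), np.sin(bearings)])   # unit rays
    b = u @ center
    disc = b ** 2 - (center @ center - radius ** 2)
    t = b - np.sqrt(np.maximum(disc, 0.0))                      # nearest root
    t[(disc < 0) | (t < 0)] = np.nan
    return t

def pose_log_likelihood(ranges_obs, bearings, center, radius, sigma=0.02):
    """Gaussian log-likelihood of reproducing the observed ranges, given the
    candidate object pose (here just a 2D position)."""
    r_pred = predicted_ranges(bearings, center, radius)
    valid = ~np.isnan(r_pred) & ~np.isnan(ranges_obs)
    resid = (ranges_obs[valid] - r_pred[valid]) / sigma
    # Beams that only one of observation/prediction explains count against
    # the candidate pose (heuristic penalty, chosen for illustration).
    mismatched = np.isnan(r_pred) ^ np.isnan(ranges_obs)
    return -0.5 * np.sum(resid ** 2) - 50.0 * mismatched.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    bearings = np.linspace(-0.6, 0.6, 60)
    true_center, radius = np.array([1.0, 0.15]), 0.25
    obs = predicted_ranges(bearings, true_center, radius)
    obs += 0.02 * rng.standard_normal(obs.shape)                # sensor noise
    # Coarse grid search over candidate positions (a simple stand-in for the
    # seedless search heuristic described in the paper).
    xs, ys = np.meshgrid(np.linspace(0.5, 1.5, 41), np.linspace(-0.3, 0.5, 33))
    scores = np.array([pose_log_likelihood(obs, bearings, np.array([x, y]), radius)
                       for x, y in zip(xs.ravel(), ys.ravel())])
    best = np.unravel_index(scores.argmax(), xs.shape)
    print("estimated center:", xs[best], ys[best])
```

The same scoring idea extends to 3D poses and joint-space searches by predicting ranges from the posed geometry instead of a circle; only the range-prediction function changes.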


Author(s): Mathieu Turgeon-Pelchat, Samuel Foucher, Yacine Bouroubi
