An improved 3D object feature points correspondence algorithm

Author(s):  
Haiyan Zhang ◽  
Jianxin Wang ◽  
Wei Meng
2010 ◽  
Vol 36 ◽  
pp. 485-493
Author(s):  
Abu Bakar Elmi ◽  
Tetsuo Miyake ◽  
Shinya Naito ◽  
Takashi Imamura ◽  
Zhong Zhang

On a production line, the pose of a 3D product must be estimated in advance. To perform shape measurement of objects at the speed of a mass-production line, before any contact measurement is made, information about the object's pose and its matching is required. In this paper, we study the performance of model-based and view-based pose estimation methods using an image sequence of a rotating 3D object. In the model-based method, we use object feature points defined relative to the center of gravity; in the view-based method, a subspace computed by block diagonalization of a matrix represents the transformation from one image to another. We confirmed the performance of both methods and consider them useful for pose estimation.
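As a rough illustration of the model-based idea described in this abstract (feature points referenced to the center of gravity), the following sketch estimates the rotation between two sets of matched 3D feature points by centering them on their centroids and applying a standard Kabsch/SVD alignment. This is a generic technique standing in for the authors' method, not their exact formulation; the function name and interface are assumptions.

```python
# Hypothetical sketch: estimate the rotation of a rigid object between two frames
# from matched 3D feature points expressed relative to the center of gravity
# (standard Kabsch/SVD alignment, not necessarily the authors' exact method).
import numpy as np

def estimate_rotation(points_a: np.ndarray, points_b: np.ndarray) -> np.ndarray:
    """Return the 3x3 rotation that best maps points_a onto points_b.

    points_a, points_b: (N, 3) arrays of corresponding feature points.
    """
    # Express both point sets relative to their centers of gravity.
    a = points_a - points_a.mean(axis=0)
    b = points_b - points_b.mean(axis=0)

    # Cross-covariance and SVD give the optimal rotation (Kabsch algorithm).
    u, _, vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T
```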


Electronics ◽  
2021 ◽  
Vol 10 (10) ◽  
pp. 1205
Author(s):  
Zhiyu Wang ◽  
Li Wang ◽  
Bin Dai

Object detection in 3D point clouds remains a challenging task in autonomous driving. Due to the inherent occlusion and density changes of the point cloud, the data distribution of the same object can change dramatically; in particular, incomplete data affected by sparsity or occlusion cannot represent the complete characteristics of the object. In this paper, we propose a novel strong–weak feature alignment algorithm between complete and incomplete objects for 3D object detection, which explores the correlations within the data. It is an end-to-end adaptive network that requires no additional data and can be easily applied to other object detection networks. Through a complete-object feature extractor, we obtain a robust feature representation of the object, which serves as a guiding feature to help the incomplete-object feature generator produce effective features. The strong–weak feature alignment reduces the gap between different states of the same object and enhances the ability to represent incomplete objects. The proposed adaptation framework is validated on the KITTI object benchmark and achieves an improvement of about 6% in 3D detection average precision at moderate difficulty compared to the base model. The results show that our adaptation method improves the detection performance on incomplete 3D objects.
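To make the alignment idea concrete, here is a minimal sketch of one plausible form of a strong–weak alignment loss: pooled features of the complete object act as a fixed guide that the incomplete-object features are pulled toward. The specific loss (MSE with a detached strong branch) and the function signature are assumptions, not the architecture reported in the paper.

```python
# Hypothetical sketch of a strong-weak feature alignment loss: features of the
# complete object (strong branch) guide the incomplete-object features (weak
# branch). The paper's actual network and loss are not reproduced here.
import torch
import torch.nn.functional as F

def strong_weak_alignment_loss(complete_feat: torch.Tensor,
                               incomplete_feat: torch.Tensor) -> torch.Tensor:
    """complete_feat, incomplete_feat: (B, C) pooled object features."""
    # The strong branch is detached so gradients only flow into the weak branch,
    # which learns to mimic the complete-object representation.
    guide = complete_feat.detach()
    return F.mse_loss(incomplete_feat, guide)
```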


2011 ◽  
Vol 464 ◽  
pp. 24-27
Author(s):  
Zhen Ying Xu ◽  
Ran Ran Xu ◽  
Dan Dan Cao ◽  
Yun Wang

A new robust structured-light technique based on a multi-valued pseudo-random color-encoded pattern is presented in this paper. After analyzing the advantages and disadvantages of existing pseudo-random coding patterns in computer vision, a new multi-valued pseudo-random color-encoded pattern is designed by combining feature points with feature lines. With this pattern, the feature points are easy to extract, and the problems of missed points and pseudo-feature points are greatly reduced. Furthermore, the feature lines also reduce the difficulty and complexity of feature matching.
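The key property of a pseudo-random encoded pattern is that every local window of symbols is unique, so a decoded window identifies its position in the projected pattern. The sketch below builds such a multi-valued color sequence with a simple greedy construction; the color set, window length, and construction strategy are illustrative assumptions, not the pattern design used in the paper.

```python
# Hypothetical sketch: build a multi-valued pseudo-random color sequence in which
# every window of `window` consecutive symbols is unique. Greedy construction with
# bounded retries; parameters are illustrative only.
import random

def pseudo_random_sequence(colors=("R", "G", "B", "C", "M", "Y"),
                           length=64, window=3, seed=0):
    rng = random.Random(seed)
    seq = list(rng.sample(colors, window))   # initial window
    seen = {tuple(seq)}
    while len(seq) < length:
        for _ in range(100):                 # bounded retries per symbol
            c = rng.choice(colors)
            win = tuple(seq[-(window - 1):]) + (c,)
            if win not in seen:              # keep only windows not used before
                seen.add(win)
                seq.append(c)
                break
        else:
            raise RuntimeError("could not extend sequence; relax parameters")
    return seq
```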


Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, because of the limited carrying capacity of a UAV, the sensors integrated into the ULS must be small and lightweight, which lowers the density of the collected scanning points and affects the registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds, which converts the problem of registering point cloud data with image data into one of matching feature points between two images. First, a point cloud is selected and used to produce an intensity image. Next, corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using the collinearity equations based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical images, to generate a true-color point cloud. The experimental results show that the proposed method achieves higher registration accuracy and faster fusion, demonstrating its effectiveness.
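As a rough sketch of the first two steps described in this abstract, the code below rasterizes laser points into an intensity image and then matches feature points between that image and an optical image. ORB with brute-force matching is used here purely as a stand-in feature detector; the grid resolution, function names, and the assumption of an (x, y, z, intensity) point format are all hypothetical.

```python
# Hypothetical sketch: point cloud -> intensity image, then feature matching
# against an optical image (ORB + brute-force matching stand in for whatever
# detector the authors actually use).
import numpy as np
import cv2

def intensity_image(points: np.ndarray, resolution: float = 0.05) -> np.ndarray:
    """points: (N, 4) array of x, y, z, intensity; returns an 8-bit grid image."""
    xy = points[:, :2]
    cols, rows = ((xy - xy.min(axis=0)) / resolution).astype(int).T
    img = np.zeros((rows.max() + 1, cols.max() + 1), dtype=np.float32)
    img[rows, cols] = points[:, 3]           # keep the last intensity per cell
    return cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def match_features(intensity_img: np.ndarray, optical_img: np.ndarray):
    """Detect and match feature points between the intensity and optical images."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(intensity_img, None)
    kp2, des2 = orb.detectAndCompute(optical_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return kp1, kp2, matcher.match(des1, des2)
```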

