Algorithm for pose estimation based on objective function with uncertainty-weighted measuring error of feature point cling to the curved surface

2018 ◽  
Vol 57 (12) ◽  
pp. 3306 ◽  
Author(s):  
Ju Huo ◽  
Guiyang Zhang ◽  
Ming Yang

2018 ◽  
Vol 26 (4) ◽  
pp. 834-842 ◽  
Author(s):  
HUO Ju ◽  
ZHANG Gui-yang ◽  
CUI Jia-shan ◽  
YANG Ming

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5670
Author(s):  
Gwangsoo Park ◽  
Byungjin Lee ◽  
Sangkyung Sung

Point cloud data is an essential type of measurement information that has extended the functional horizon of urban mobility. While 3D lidar and image-depth sensors are superior for mapping and localization, sense-and-avoid, and cognitive exploration of unknown areas, 2D lidar remains unavoidable for systems with tight weight and computational budgets, for instance aerial mobility systems. In this paper, we propose a new pose estimation scheme that incorporates feature points extracted from 2D lidar into the NDT framework to improve point cloud registration. In a 2D lidar point cloud, vertices and corners can be viewed as representative feature points. Based on this feature point information, a point-to-point relationship is formulated and incorporated into the voxelized map matching process to achieve more efficient and reliable matching performance. To assess the navigation performance of a mobile platform running the proposed algorithm, the matching result is fused with inertial navigation through an integration filter. The proposed algorithm was then verified through a simulation study using a high-fidelity flight simulator and through an indoor experiment. For performance validation, both results were compared with and analyzed against previous techniques. In conclusion, the proposed algorithms were demonstrated to achieve improved accuracy and computational efficiency.
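A minimal sketch of the two ingredients described above, not the authors' implementation: extracting vertex/corner feature points from a 2D lidar scan and giving them extra weight in a voxelized, NDT-style matching score. The function names, thresholds, and the per-voxel Gaussian model are illustrative assumptions.

```python
import numpy as np

def scan_to_points(ranges, angle_min, angle_inc):
    """Convert a 2D lidar scan (range readings in metres) to Cartesian points."""
    angles = angle_min + angle_inc * np.arange(len(ranges))
    valid = np.isfinite(ranges)
    r, a = np.asarray(ranges)[valid], angles[valid]
    return np.stack([r * np.cos(a), r * np.sin(a)], axis=1)

def corner_features(points, k=5, angle_thresh_deg=30.0):
    """Flag points where the local scan direction bends sharply (corners/vertices)."""
    is_corner = np.zeros(len(points), dtype=bool)
    for i in range(k, len(points) - k):
        v1 = points[i] - points[i - k]      # incoming direction
        v2 = points[i + k] - points[i]      # outgoing direction
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        is_corner[i] = angle > angle_thresh_deg
    return is_corner

def weighted_ndt_score(points, weights, voxel_means, voxel_covs, voxel_size=1.0):
    """NDT-style score: weighted sum of per-point Gaussian likelihoods in their voxels."""
    score = 0.0
    for p, w in zip(points, weights):
        key = tuple(np.floor(p / voxel_size).astype(int))
        if key not in voxel_means:
            continue
        d = p - voxel_means[key]
        score += w * np.exp(-0.5 * d @ np.linalg.inv(voxel_covs[key]) @ d)
    return score
```

In this sketch, points flagged by `corner_features` would simply receive larger entries in `weights`, so corner and vertex measurements dominate the registration score.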


2021 ◽  
Author(s):  
Tianyi Liu ◽  
Yan Wang ◽  
Xiaoji Niu ◽  
Chang Le ◽  
Tisheng Zhang ◽  
...  

The KITTI dataset is collected from three types of environments, i.e., country, urban, and highway, so the feature points cover a variety of scenes. The KITTI dataset provides 22 sequences of LiDAR data; the 11 sequences from sequence 00 to sequence 10 are the "training" data and are provided with ground-truth translation and rotation. In addition, field experiment data were collected with a low-resolution LiDAR (Velodyne VLP-16) at the Wuhan Research and Innovation Center.
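For reference, a short helper for reading the KITTI odometry ground-truth poses mentioned above, assuming the standard benchmark file layout (one text file per sequence, 12 values per line forming a row-major 3x4 pose matrix); the file path is only an example.

```python
import numpy as np

def load_kitti_poses(path):
    """Read KITTI odometry ground-truth poses (12 values per line, row-major 3x4)."""
    poses = []
    with open(path) as f:
        for line in f:
            vals = np.array(line.split(), dtype=float)
            T = np.eye(4)
            T[:3, :4] = vals.reshape(3, 4)
            poses.append(T)
    return poses  # list of 4x4 homogeneous transforms

# poses = load_kitti_poses("poses/00.txt")  # e.g. sequence 00 of the training split
```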


Author(s):  
M. Coenen ◽  
F. Rottensteiner ◽  
C. Heipke

Abstract. Interactive motion planning and collaborative positioning will play a key role in future autonomous driving applications. For this purpose, the precise reconstruction and pose estimation of other traffic participants, especially of other vehicles, is a fundamental task, and it is tackled in this paper based on street-level stereo images obtained from a moving vehicle. We learn a shape prior, consisting of vehicle geometry and appearance features, and we fit a vehicle model to initially detected vehicles. This is achieved by minimising an energy function that jointly incorporates 3D and 2D information to infer the model's optimal and precise pose parameters. For evaluation we use the object detection and orientation benchmark of the KITTI dataset (Geiger et al., 2012). We show a significant benefit of each of the individual energy terms of the overall objective function. We achieve good results with up to 94.8% correct and precise pose estimations, with an average absolute error smaller than 3° for the orientation and 33 cm for the position.
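A hypothetical sketch of the kind of joint 3D/2D energy minimisation over pose parameters described above; the energy terms, their weights, the pose parametrisation, and the index-based 2D correspondences are placeholders, not the authors' actual formulation.

```python
import numpy as np
from scipy.optimize import minimize

def transform(model_pts, pose):
    """Apply a yaw-plus-translation pose (x, y, z, yaw) to 3D model points."""
    x, y, z, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return model_pts @ R.T + np.array([x, y, z])

def energy(pose, model_pts, stereo_pts, keypoints_2d, K, w3d=1.0, w2d=0.1):
    pts = transform(model_pts, pose)
    # 3D term: distance of reconstructed stereo points to the nearest model point
    d3 = np.min(np.linalg.norm(stereo_pts[:, None, :] - pts[None, :, :], axis=2), axis=1)
    # 2D term: reprojection error of model points against detected 2D keypoints
    # (correspondence by index is a stand-in for a real association step)
    proj = (K @ pts.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    d2 = np.linalg.norm(proj[: len(keypoints_2d)] - keypoints_2d, axis=1)
    return w3d * np.sum(d3 ** 2) + w2d * np.sum(d2 ** 2)

# res = minimize(energy, x0=np.array([10.0, 0.0, 0.0, 0.0]),
#                args=(model_pts, stereo_pts, keypoints_2d, K), method="Powell")
```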


Author(s):  
SU YAN ◽  
Lei Yu

Abstract Simultaneous Localization and Mapping (SLAM) is one of the key technologies used in sweeping robots, autonomous vehicles, virtual reality and other fields. This paper presents a dense RGB-D SLAM reconstruction algorithm based on a convolutional neural network with multi-layer image invariant feature transformation. The main contribution of the system lies in the construction of a convolutional neural network based on multi-layer image invariant features, which improves the extraction of ORB (Oriented FAST and Rotated BRIEF) feature points and the reconstruction quality. After feature point matching, pose estimation, loop detection and other steps, the 3D point clouds are finally stitched into a complete and smooth spatial model. The system improves accuracy and robustness in feature point processing and pose estimation. Comparative experiments show that the optimized algorithm saves 0.093 s compared with the ordinary extraction algorithm while still guaranteeing a high accuracy rate. The reconstruction experiments show that the resulting spatial models have clearer details and smoother connections without faulted layers compared with the original ones. The reconstruction results are generally better than those of other common algorithms, such as Kintinuous, ElasticFusion and ORB-SLAM2 dense reconstruction.
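A minimal sketch of the standard ORB-based RGB-D pose-estimation step that a pipeline like the one above builds on, using OpenCV's plain ORB rather than the paper's CNN-optimised features; the camera intrinsic matrix K and the depth scale are assumed values.

```python
import cv2
import numpy as np

def estimate_pose_rgbd(rgb1, depth1, rgb2, K, depth_scale=1000.0):
    """Estimate the pose of frame 2 relative to frame 1 from ORB matches and depth."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(rgb1, None)
    kp2, des2 = orb.detectAndCompute(rgb2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    obj_pts, img_pts = [], []
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    for m in matches:
        u, v = kp1[m.queryIdx].pt
        z = depth1[int(v), int(u)] / depth_scale          # depth of the point in frame 1
        if z <= 0:
            continue
        obj_pts.append([(u - cx) * z / fx, (v - cy) * z / fy, z])  # back-project to 3D
        img_pts.append(kp2[m.trainIdx].pt)

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.float32(obj_pts), np.float32(img_pts), K, None)
    return rvec, tvec, inliers
```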




2015 ◽  
Vol 63 (4) ◽  
pp. 897-905
Author(s):  
K. Sudars ◽  
R. Cacurs ◽  
I. Homjakovs ◽  
J. Judvaitis

Abstract For 3D object localization and tracking with multiple cameras, the camera poses have to be known with high precision. This paper evaluates camera pose estimation via the fundamental matrix and via a known object in an environment with multiple static cameras. A special feature point extraction technique based on LED (Light Emitting Diode) point detection and matching has been developed for this purpose. LED point detection is solved by searching for local maxima in the images, and LED point matching is solved by assigning a patterned time function to each light source. The emitting LEDs are used as sources of known reference points instead of the typically used feature point extractors such as ORB, SIFT or SURF, which makes the pose estimation more robust. Camera pose estimation is essential for object localization in networks with multiple cameras, which are going to play an increasingly important role in modern Smart City environments.
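An illustrative sketch of the two ingredients discussed above: detecting bright LED blobs as local intensity maxima and recovering relative camera pose from matched points via the essential/fundamental matrix. The brightness threshold, non-maximum-suppression window and intrinsic matrix K are assumptions, and the blink-pattern matching step is omitted.

```python
import cv2
import numpy as np

def detect_led_points(gray, min_intensity=200, nms_size=15):
    """Return (x, y) positions of local intensity maxima above a brightness threshold."""
    dilated = cv2.dilate(gray, np.ones((nms_size, nms_size), np.uint8))
    peaks = (gray == dilated) & (gray >= min_intensity)
    ys, xs = np.nonzero(peaks)
    return np.stack([xs, ys], axis=1).astype(np.float64)

def relative_pose(pts1, pts2, K):
    """Relative rotation/translation (up to scale) between two cameras from matched points."""
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```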

