feature point
Recently Published Documents


TOTAL DOCUMENTS

700
(FIVE YEARS 165)

H-INDEX

23
(FIVE YEARS 4)

Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 223
Author(s):  
Zihao Wang ◽  
Sen Yang ◽  
Mengji Shi ◽  
Kaiyu Qin

In this study, a multi-level scale stabilizer for visual odometry (MLSS-VO), combined with a self-supervised feature matching method, is proposed to address the scale uncertainty and scale drift encountered in monocular visual odometry. First, the architecture of an instance-level recognition model is adopted to build a feature matching model based on a Siamese neural network. Combined with the traditional approach to feature point extraction, feature baselines on different levels are extracted and then treated as references for estimating the motion scale of the camera. On this basis, the size of the target in the tracking task is taken as the top-level feature baseline, while the motion matrix parameters obtained by the original feature-point-based visual odometry are used to solve the real motion scale of the current frame. The multi-level feature baselines are solved to update the motion scale while reducing scale drift. Finally, the spatial target localization algorithm and the MLSS-VO are combined into a framework for tracking targets on a mobile platform. According to the experimental results, the root mean square error (RMSE) of localization is less than 3.87 cm and the RMSE of target tracking is less than 4.97 cm, which demonstrates that the MLSS-VO method is effective in resolving scale uncertainty and restricting scale drift in target tracking scenes, thereby ensuring reliable spatial positioning and tracking of the target.
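A minimal sketch of the scale-recovery idea described above: a known real-world length, here the tracked target's size acting as the "top-level feature baseline", fixes the metric scale of the otherwise up-to-scale monocular motion. This is not the authors' implementation; the function names and numbers are illustrative assumptions.

```python
# Hedged sketch: recover a metric scale factor from a known-size reference and
# apply it to the up-to-scale translation of the current frame.
import numpy as np

def metric_scale(baseline_metric_m: float, baseline_reconstructed: float) -> float:
    """Ratio between the known metric length and the same length measured
    in the up-to-scale monocular reconstruction."""
    return baseline_metric_m / baseline_reconstructed

def to_metric_pose(R: np.ndarray, t, scale: float) -> np.ndarray:
    """Assemble a 4x4 pose whose translation is rescaled to metric units;
    the rotation is unaffected by the scale ambiguity."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = scale * np.asarray(t, dtype=float).ravel()
    return T

# Example (assumed values): the target is known to be 0.40 m wide but spans
# 0.10 units in the reconstruction, so translations are scaled by 4.0.
s = metric_scale(0.40, 0.10)
T = to_metric_pose(np.eye(3), [0.01, 0.0, 0.025], s)
print(T[:3, 3])  # metric translation of the current frame
```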


Author(s):  
Wu Shulei ◽  
Suo Zihang ◽  
Chen Huandong ◽  
Zhao Yuchen ◽  
Zhang Yang ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Di Wang ◽  
Hongying Zhang ◽  
Yanhua Shao

The precise evaluation of camera position and orientation is an important step in most machine vision tasks, especially visual localization. To address the shortcomings of local features in dealing with changing scenes and the difficulty of building a robust end-to-end network that works from feature detection to matching, an invariant local feature matching method for changing-scene image pairs is proposed, realized as a network that integrates feature detection, descriptor construction, and feature matching. In the feature point detection and descriptor construction stage, joint training is carried out based on a neural network. To obtain local features with strong robustness to viewpoint and illumination changes, the Vector of Locally Aggregated Descriptors based on Neural Network (NetVLAD) module is introduced to compute the degree of correlation between the description vectors of one image and those of its counterpart. Then, to strengthen the relationship between relevant local features of the image pairs, an attentional graph neural network (AGNN) is introduced, and the Sinkhorn algorithm is used to match them; finally, the local feature matching results between the image pairs are output. The experimental results show that, compared with existing algorithms, the proposed method enhances the robustness of local features under varying scenes, performs better in terms of homography estimation, matching precision, and recall, and realizes end-to-end network tasks while meeting the requirements that the visual localization system places on the environment.
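The abstract above uses Sinkhorn normalization to turn a descriptor-similarity matrix into a near doubly-stochastic assignment before extracting matches. Below is a minimal sketch of that matching stage only, under assumed settings (entropic temperature, iteration count, mutual-nearest filtering); it is not the paper's NetVLAD/AGNN pipeline.

```python
# Hedged sketch: Sinkhorn-normalized matching of two descriptor sets.
import numpy as np

def sinkhorn_match(desc_a: np.ndarray, desc_b: np.ndarray,
                   n_iters: int = 50, temperature: float = 0.1) -> np.ndarray:
    """Return index pairs (i, j) that are mutual best matches under a
    Sinkhorn-normalized similarity matrix."""
    # Cosine similarity between L2-normalized descriptors.
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    log_p = (a @ b.T) / temperature

    # Sinkhorn iterations: alternately normalize rows and columns in log space.
    for _ in range(n_iters):
        log_p -= np.log(np.exp(log_p).sum(axis=1, keepdims=True))
        log_p -= np.log(np.exp(log_p).sum(axis=0, keepdims=True))
    p = np.exp(log_p)

    # Keep only mutual argmax assignments as the final matches.
    row_best = p.argmax(axis=1)
    col_best = p.argmax(axis=0)
    return np.array([(i, j) for i, j in enumerate(row_best) if col_best[j] == i])

# Example usage with random 128-D descriptors (placeholder data).
matches = sinkhorn_match(np.random.rand(100, 128), np.random.rand(120, 128))
print(matches.shape)  # (n_matches, 2)
```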


2021 ◽  
Author(s):  
Tianyi Liu ◽  
Yan Wang ◽  
Xiaoji Niu ◽  
Chang Le ◽  
Tisheng Zhang ◽  
...  

The KITTI dataset is collected from three types of environments, i.e., country, urban, and highway, so the feature points cover a variety of scenes. The KITTI dataset provides 22 sequences of LiDAR data; the 11 sequences from sequence 00 to sequence 10 are "training" data and are provided with ground-truth translation and rotation. In addition, field experiment data are collected with a low-resolution LiDAR (VLP-16) at the Wuhan Research and Innovation Center.
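Since sequences 00-10 are described as training data with ground-truth translation and rotation, a short sketch of loading those poses may help. Each line of a KITTI odometry poses/<seq>.txt file holds a 3x4 row-major camera pose; the directory layout assumed below is illustrative, not prescribed by the dataset description above.

```python
# Hedged sketch: parse KITTI odometry ground-truth poses for sequences 00-10.
import numpy as np
from pathlib import Path

def load_kitti_poses(pose_file: Path) -> np.ndarray:
    """Return an (N, 4, 4) array of homogeneous ground-truth poses."""
    rows = np.loadtxt(pose_file).reshape(-1, 3, 4)   # N lines of 12 values
    poses = np.tile(np.eye(4), (rows.shape[0], 1, 1))
    poses[:, :3, :] = rows
    return poses

# Example usage for the "training" split (assumed dataset_root location).
dataset_root = Path("kitti_odometry")
for seq in [f"{i:02d}" for i in range(11)]:          # sequences 00 .. 10
    pose_file = dataset_root / "poses" / f"{seq}.txt"
    if pose_file.exists():
        poses = load_kitti_poses(pose_file)
        print(seq, poses.shape)
```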


2021 ◽  
Author(s):  
Liang Zhang ◽  
Qiujin Xu ◽  
Xing Li ◽  
Xiaomin Zhao ◽  
Yongfang Qi ◽  
...  
