Registration using adaptive particle filter and natural features matching techniques for augmented reality systems

2009 ◽ Vol 29 (1) ◽ pp. 75-84
Author(s): Guan Tao, Li Lijun, Liu Wei, Wang Cheng

Purpose: The purpose of this paper is to provide a flexible registration method for markerless augmented reality (AR) systems.
Design/methodology/approach: The proposed method distinguishes itself in three ways. First, it is simple and efficient: no man-made markers are needed for either indoor or outdoor AR applications. Second, an adaptation method is presented to tune the particle filter dynamically, producing a system that tolerates fast motion and drift during the tracking process. Third, the authors use a reduced scale-invariant feature transform (SIFT) together with scale prediction to match natural features, which makes camera pose estimation straightforward under large changes in illumination and viewing angle.
Findings: Experiments are provided to validate the performance of the proposed method.
Originality/value: The paper proposes a novel camera pose estimation method based on an adaptive particle filter and natural feature matching techniques.
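The adaptive particle filter idea can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' implementation: it assumes a 6-DoF pose state, an effective-sample-size heuristic for scaling the process noise, and particle weights derived from feature reprojection error; all class and parameter names are hypothetical.

```python
# Illustrative sketch only: a particle filter for 6-DoF camera pose tracking
# whose process noise is adapted from the effective sample size (ESS).
# The adaptation rule and parameters are assumptions, not the paper's algorithm.
import numpy as np

class AdaptivePoseParticleFilter:
    def __init__(self, n_particles=200, base_noise=0.01):
        self.n = n_particles
        self.base_noise = base_noise
        # State per particle: [x, y, z, roll, pitch, yaw]
        self.particles = np.zeros((self.n, 6))
        self.weights = np.full(self.n, 1.0 / self.n)

    def predict(self):
        # Inflate the process noise when the ESS drops (particles disagree),
        # which helps the tracker survive fast camera motion.
        ess = 1.0 / np.sum(self.weights ** 2)
        scale = 1.0 + (self.n - ess) / self.n      # stays in [1, 2)
        noise = self.base_noise * scale
        self.particles += np.random.normal(0.0, noise, self.particles.shape)

    def update(self, reprojection_errors):
        # reprojection_errors[i]: mean reprojection error of the matched
        # natural features under particle i's pose; smaller error -> larger weight.
        self.weights = np.exp(-0.5 * (reprojection_errors / 2.0) ** 2) + 1e-12
        self.weights /= np.sum(self.weights)

    def resample(self):
        idx = np.random.choice(self.n, self.n, p=self.weights)
        self.particles = self.particles[idx]
        self.weights = np.full(self.n, 1.0 / self.n)

    def estimate(self):
        # Weighted mean pose (adequate for small rotations in this sketch).
        return np.average(self.particles, axis=0, weights=self.weights)
```

In a full system, the per-particle error fed to update() would come from reprojecting the matched SIFT features under each hypothesized camera pose.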

2009 ◽ Vol 28 (10) ◽ pp. 2679-2682
Author(s): Wei LIU, Li-jun LI, Jun HAN, Tao GUAN

2020 ◽ Vol 10 (24) ◽ pp. 8866
Author(s): Sangyoon Lee, Hyunki Hong, Changkyoung Eem

Deep learning has been utilized in end-to-end camera pose estimation. To improve performance, we introduce a camera pose estimation method based on a 2D-3D matching scheme with two convolutional neural networks (CNNs). The scene is divided into voxels, whose size and number are computed according to the scene volume and the number of 3D points. We extract inlier points from the 3D point set in each voxel using random sample consensus (RANSAC)-based plane fitting, obtaining a set of interest points that lie on a major plane. These points are then reprojected onto the image using the ground-truth camera pose, after which a polygonal region is identified in each voxel using the convex hull. We designed a training dataset for 2D-3D matching consisting of the inlier 3D points, their correspondences across image pairs, and the voxel regions in the image. We trained a hierarchical learning structure with two CNNs on this dataset to detect the voxel regions and to obtain the locations and descriptions of the interest points. After successful 2D-3D matching, the camera pose is estimated with an n-point pose solver inside a RANSAC loop. The experimental results show that our method estimates the camera pose more precisely than previous end-to-end estimators.
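The final step, an n-point pose solver inside RANSAC, can be sketched with OpenCV's solvePnPRansac. The intrinsic matrix, threshold values, and the estimate_pose helper below are placeholders for illustration, not values or code from the paper.

```python
# Illustrative sketch: given 2D-3D correspondences produced by a matching stage,
# recover the camera pose with an n-point solver inside RANSAC (OpenCV's
# solvePnPRansac). Intrinsics and thresholds below are assumed placeholders.
import numpy as np
import cv2

def estimate_pose(points_3d, points_2d, camera_matrix):
    """points_3d: (N, 3) scene points; points_2d: (N, 2) matched pixels."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float32),
        points_2d.astype(np.float32),
        camera_matrix,
        distCoeffs=None,
        reprojectionError=3.0,   # inlier threshold in pixels (assumed)
        iterationsCount=200,
    )
    if not ok:
        raise RuntimeError("PnP RANSAC failed to find a pose")
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    return R, tvec, inliers

# Placeholder pinhole intrinsics (fx, fy, cx, cy) for illustration only.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
```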


2011 ◽ Vol 31 (3) ◽ pp. 56-68
Author(s): Tao Guan, Liya Duan, Junqing Yu, Yongjian Chen, Xu Zhang
