Improved SfM-Based Indoor Localization with Occlusion Removal

Author(s): Yushi Li, George Baciu, Yu Han, Chenhui Li

This article describes a novel image-based 3D indoor localization system that integrates an improved SfM (structure-from-motion) approach with an obstacle removal component. In contrast with existing state-of-the-art localization techniques that focus on static outdoor or indoor environments, this work considers the adverse effects generated by moving obstacles in busy indoor spaces. In particular, the problem of occlusion removal is recast as separating the moving foreground from the static background, which is solved efficiently with a low-rank and sparse matrix decomposition. Moreover, an SfM with RT (re-triangulation) is adopted to handle the drift problem of incremental SfM in indoor scene reconstruction. To evaluate the performance of the system, three data sets and the corresponding query sets are established to simulate different states of the indoor environment. Quantitative experimental results demonstrate that both the query registration rate and the localization accuracy increase significantly after integrating the authors' improvements.
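The abstract gives no implementation details, but the occlusion-removal step it describes, separating a static background from a moving foreground via low-rank and sparse matrix decomposition, corresponds to the Robust PCA formulation. Below is a minimal sketch of that decomposition using the common inexact augmented Lagrangian scheme; the frame layout, the weight lam = 1/sqrt(max(m, n)), the step-size heuristic, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal Robust PCA sketch (inexact ALM / principal component pursuit).
# Assumption: each column of D is one vectorized grayscale frame; the
# low-rank part L recovers the static background, the sparse part S the
# moving occluders.
import numpy as np

def shrink(X, tau):
    """Elementwise soft-thresholding operator."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular value thresholding: soft-threshold the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def robust_pca(D, max_iter=500, tol=1e-7):
    """Decompose D into low-rank L plus sparse S by minimizing
    ||L||_* + lam * ||S||_1 subject to L + S = D."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4.0 * np.abs(D).sum() + 1e-12)  # common heuristic step size
    S = np.zeros_like(D)
    Y = np.zeros_like(D)                          # Lagrange multipliers
    for _ in range(max_iter):
        L = svd_threshold(D - S + Y / mu, 1.0 / mu)
        S = shrink(D - L + Y / mu, lam / mu)
        residual = D - L - S
        Y += mu * residual
        if np.linalg.norm(residual) <= tol * np.linalg.norm(D):
            break
    return L, S

# Usage sketch: frames is an (n_frames, h, w) array of grayscale images.
# D = frames.reshape(n_frames, -1).T          # pixels x frames
# background, foreground = robust_pca(D)
# clean = background[:, 0].reshape(h, w)      # occlusion-free background frame
```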

2020, Vol 125, pp. 41-52
Author(s): Jishnu Sadasivan, Jitendra K. Dhiman, Chandra Sekhar Seelamantula

Sensors, 2021, Vol 21 (10), pp. 3406
Author(s): Jie Jiang, Yin Zou, Lidong Chen, Yujie Fang

Precise localization and pose estimation in indoor environments are required by a wide range of applications, including robotics, augmented reality, and navigation and positioning services. These tasks can be addressed with visual localization against a pre-built 3D model. The larger search space associated with large scenes can be handled by first retrieving candidate images and then estimating the pose. However, most current deep learning-based image retrieval methods require labeled data, which increases annotation costs and complicates data acquisition. In this paper, we propose an unsupervised hierarchical indoor localization framework that integrates an unsupervised variational autoencoder (VAE) with a visual Structure-from-Motion (SfM) approach in order to extract global and local features. During localization, global features are used for image retrieval at the scene-map level to obtain candidate images, and local features are subsequently used to estimate the pose from 2D-3D matches between the query and candidate images. Only RGB images are used as input to the proposed localization system, which is both convenient and challenging. Experimental results show that the proposed method localizes images within 0.16 m and 4° on the 7-Scenes data sets, and 32.8% of queries within 5 m and 20° on the Baidu data set. Furthermore, the proposed method achieves higher precision than advanced methods.
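The abstract outlines a coarse-to-fine pipeline: global descriptors retrieve candidate images, and 2D-3D matches against the SfM model then yield the 6-DoF pose. A minimal sketch of those two steps is given below, assuming the VAE encoder, the local feature matching, and the camera intrinsics are already available; encode_global, match_2d3d, and K are hypothetical placeholders rather than the authors' API, and the pose step here uses OpenCV's solvePnPRansac.

```python
# Sketch of the coarse-to-fine localization step: global descriptors narrow
# the search to a few candidate database images, then 2D-3D matches against
# the SfM model give the 6-DoF pose via PnP + RANSAC.
# Assumptions: encode_global() and match_2d3d() stand in for the paper's
# VAE encoder and local feature matching, which are not specified here.
import numpy as np
import cv2

def retrieve_candidates(query_desc, db_descs, top_k=5):
    """Rank database images by cosine similarity of global descriptors."""
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    scores = db @ q
    return np.argsort(-scores)[:top_k]

def estimate_pose(points_2d, points_3d, K):
    """Recover the camera pose from 2D-3D matches with PnP + RANSAC."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(points_3d, dtype=np.float64),
        np.asarray(points_2d, dtype=np.float64),
        K, None, reprojectionError=8.0, iterationsCount=1000)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix of the camera pose
    return R, tvec, inliers

# Usage sketch (names are placeholders):
# candidates = retrieve_candidates(encode_global(query_img), db_global_descs)
# pts2d, pts3d = match_2d3d(query_img, candidates, sfm_model)
# pose = estimate_pose(pts2d, pts3d, K)
```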


2018, Vol 15 (8), pp. 118-125
Author(s): Junsheng Mu, Xiaojun Jing, Hai Huang, Ning Gao

ETRI Journal, 2014, Vol 36 (1), pp. 167-170
Author(s): Jianjun Huang, Xiongwei Zhang, Yafei Zhang, Xia Zou, Li Zeng

2018, Vol 35 (11), pp. 1549-1566
Author(s): Zhichao Xue, Jing Dong, Yuxin Zhao, Chang Liu, Ryad Chellali
