Smartphone-Based Indoor Visual Navigation with Leader-Follower Mode

2021 ◽  
Vol 17 (2) ◽  
pp. 1-22
Author(s):  
Jingao Xu ◽  
Erqun Dong ◽  
Qiang Ma ◽  
Chenshu Wu ◽  
Zheng Yang

Existing indoor navigation solutions usually require pre-deployed, comprehensive location services with precise indoor maps and, more importantly, all rely on dedicated or pre-existing infrastructure. In this article, we present Pair-Navi, an infrastructure-free indoor navigation system that circumvents these requirements by reusing a previous traveler's (i.e., the leader's) trace experience to navigate future users (i.e., followers) in a peer-to-peer mode. Our system leverages advances in visual simultaneous localization and mapping (SLAM) on commercial smartphones. Visual SLAM systems, however, lose precision and robustness under environmental dynamics and involve intensive computation that prohibits real-time applications. To combat environmental changes, we propose to cull non-rigid contexts and keep only static, rigid contents in use. To enable real-time navigation on mobile devices, we decouple and reorganize the highly coupled SLAM modules for leaders and followers. We implement Pair-Navi on commodity smartphones and validate its performance in three diverse buildings and on two standard datasets (TUM and KITTI). Our results show that Pair-Navi achieves an immediate navigation success rate of 98.6%, which remains at 83.4% even two weeks after the leaders' traces were collected, outperforming state-of-the-art solutions by more than 50%. Being truly infrastructure-free, Pair-Navi sheds light on practical indoor navigation for mobile users.
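The "cull non-rigid contexts" idea can be sketched as a filter that drops feature points landing on movable or deformable objects before the SLAM tracker consumes them. The class names and the labeling interface below are illustrative assumptions, not Pair-Navi's actual design:

```python
# Hypothetical sketch: keep only keypoints on static, rigid structure.
# The set of non-rigid classes and the labeller API are assumptions.
NON_RIGID = {"person", "chair", "door", "screen"}

def cull_non_rigid(features, semantic_label):
    """features: list of (x, y) pixel keypoints.
    semantic_label: callable mapping (x, y) -> class name."""
    return [p for p in features if semantic_label(*p) not in NON_RIGID]

# Toy usage: a labeller that marks the left half of the image as a person.
label = lambda x, y: "person" if x < 320 else "wall"
kept = cull_non_rigid([(100, 50), (400, 50), (500, 200)], label)
# Only the two right-half (static) keypoints survive.
```

Culling at the feature level keeps the downstream SLAM modules unchanged, which is what allows leader and follower pipelines to reuse the same map representation.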

Indoor navigation systems are gaining importance these days; they are particularly useful for locating places inside a large university campus, airport, railway station, or museum. Many mobile applications using different techniques have been developed recently. The work proposed in this paper focuses on the needs of visually impaired people navigating indoor environments. The approach proposed here implements the system using beacons, and the accompanying application gives the user audio guidance for navigation.


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Xiaoji Niu ◽  
Tong Yu ◽  
Jian Tang ◽  
Le Chang

Multisensor (LiDAR/IMU/camera) integrated Simultaneous Localization and Mapping (SLAM) technology is a promising solution for navigation and mobile mapping in GNSS-denied environments such as indoor areas, dense forests, or urban canyons. An online (real-time) version of such a system can greatly extend its applications, especially for indoor mobile mapping. However, real-time response is a big challenge for an online multisensor SLAM system because the sensors differ in sampling frequency and the algorithms differ in processing time. In this paper, an online Extended Kalman Filter (EKF) algorithm integrating LiDAR scan matching and IMU mechanization for an Unmanned Ground Vehicle (UGV) indoor navigation system is introduced. Since LiDAR scan matching is considerably more time consuming than IMU mechanization, the real-time synchronization issue is solved via a one-step error-state-transition method in the EKF. Stationary and dynamic field tests were performed using a UGV platform along a typical office-building corridor. Compared with the traditional sequential postprocessed EKF algorithm, the proposed method significantly mitigates the time delay of the navigation outputs while preserving positioning accuracy, making it usable as an online navigation solution for indoor mobile mapping.
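The timing problem the abstract describes can be illustrated with a minimal error-state EKF: IMU mechanization runs at a fast rate, the LiDAR scan-matching result arrives late, and the accumulated state-transition matrix maps the delayed measurement to the current epoch. The 3-state model, noise values, and identity transition below are toy assumptions, not the paper's actual formulation:

```python
import numpy as np

n = 3                      # toy error state: [dx, dy, dheading]
P = np.eye(n) * 0.01       # error-state covariance
Q = np.eye(n) * 1e-4       # process noise per IMU step
Phi_acc = np.eye(n)        # transition accumulated since the last LiDAR scan

def imu_predict(Phi):
    """One IMU-rate prediction; also accumulate Phi for the late update."""
    global P, Phi_acc
    P = Phi @ P @ Phi.T + Q
    Phi_acc = Phi @ Phi_acc

def delayed_lidar_update(z, R):
    """Scan-matching result z refers to the scan epoch, not now; map the
    observation back through the accumulated transition."""
    global P, Phi_acc
    H = np.eye(n) @ np.linalg.inv(Phi_acc)   # observe the past error state
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    dx = K @ z                               # correction at the current epoch
    P = (np.eye(n) - K @ H) @ P
    Phi_acc = np.eye(n)                      # reset for the next scan
    return dx

for _ in range(10):                          # ten fast IMU steps
    imu_predict(np.eye(n))
correction = delayed_lidar_update(np.array([0.05, -0.02, 0.0]),
                                  np.eye(n) * 0.01)
```

The key point is that the filter never stalls waiting for scan matching: the IMU loop keeps predicting, and the late measurement is absorbed in one step when it arrives.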


Mathematics ◽  
2021 ◽  
Vol 9 (23) ◽  
pp. 3048
Author(s):  
Boyu Kuang ◽  
Mariusz Wisniewski ◽  
Zeeshan A. Rana ◽  
Yifan Zhao

Visual navigation is an essential part of planetary rover autonomy. Rock segmentation has emerged as an important interdisciplinary topic spanning image processing, robotics, and mathematical modeling. It is a challenging topic for rover autonomy because of its high computational cost, real-time requirements, and annotation difficulty. This research proposes a rock segmentation framework and a rock segmentation network (NI-U-Net++) to aid the visual navigation of rovers. The framework consists of two stages: a pre-training process and a transfer-training process. The pre-training process applies a synthetic algorithm to generate synthetic images, which are then used to pre-train NI-U-Net++. The synthetic algorithm increases the size of the image dataset and provides pixel-level masks, both of which are common challenges in machine learning tasks. The pre-training process achieves state-of-the-art results compared with related studies, with an accuracy, intersection over union (IoU), Dice score, and root mean squared error (RMSE) of 99.41%, 0.8991, 0.9459, and 0.0775, respectively. The transfer-training process fine-tunes the pre-trained NI-U-Net++ using real-life images and achieves an accuracy, IoU, Dice score, and RMSE of 99.58%, 0.7476, 0.8556, and 0.0557, respectively. Finally, the transfer-trained NI-U-Net++ is integrated into the planetary rover's navigation vision and achieves real-time performance of 32.57 frames per second (an inference time of 0.0307 s per frame). The framework manually annotates only about 8% (183 images) of the 2250 images in the navigation vision, a labor-saving solution for rock segmentation tasks. The proposed rock segmentation framework and NI-U-Net++ improve on the performance of state-of-the-art models, and the synthetic algorithm improves the process of creating valid data for the rock segmentation challenge.
All source codes, datasets, and trained models of this research are openly available in Cranfield Online Research Data (CORD).
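The four numbers reported above (accuracy, IoU, Dice, RMSE) are standard binary-mask metrics. The following is a generic sketch of how they are commonly computed from a predicted mask and a ground-truth mask, not NI-U-Net++'s actual evaluation code:

```python
import numpy as np

def mask_metrics(pred, truth):
    """pred, truth: boolean numpy arrays of the same shape."""
    tp = np.logical_and(pred, truth).sum()          # true positives
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    acc = (pred == truth).mean()                    # pixel accuracy
    iou = tp / union if union else 1.0              # intersection over union
    dice = 2 * tp / total if total else 1.0         # Dice / F1 on pixels
    rmse = np.sqrt(((pred.astype(float) - truth.astype(float)) ** 2).mean())
    return acc, iou, dice, rmse

pred = np.array([[1, 1], [0, 0]], dtype=bool)
truth = np.array([[1, 0], [0, 0]], dtype=bool)
acc, iou, dice, rmse = mask_metrics(pred, truth)
# acc = 0.75, iou = 0.5, dice ~= 0.667, rmse = 0.5
```

Note that for binary masks RMSE reduces to the square root of the pixel error rate, which is why accuracy and RMSE in the abstract move together.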


2019 ◽  
Vol 113 (2) ◽  
pp. 140-155 ◽  
Author(s):  
Nicholas A. Giudice ◽  
William E. Whalen ◽  
Timothy H. Riehle ◽  
Shane M. Anderson ◽  
Stacy A. Doore

Introduction: This article describes an evaluation of MagNav, a speech-based, infrastructure-free indoor navigation system. The research was conducted in the Mall of America, the largest shopping mall in the United States, to empirically investigate the impact of memory load on route-guidance performance. Method: Twelve participants who are blind and 12 age-matched sighted controls participated in the study. Route-guidance performance is compared between use of updated, real-time route instructions (system-aided condition) and a system-unaided (memory-based) condition in which the same instructions were provided only in advance of route travel. The sighted controls (who navigated under typical visual perception but used the system for route guidance) represent a best-case comparison benchmark for the blind participants who used the system. Results: Results across all three test measures provide compelling behavioral evidence that blind navigators receiving real-time verbal information from the MagNav system performed route travel faster (navigation time), more accurately (fewer errors in reaching the destination), and more confidently (fewer requests for bystander assistance) than under conditions where the same route information was available only in advance of travel. In addition, no statistically reliable differences were observed on any measure in the system-aided conditions between the blind and sighted participants. Posttest survey results corroborate the empirical findings, further supporting the efficacy of the MagNav system. Discussion: This research provides compelling quantitative and qualitative evidence for the utility of an infrastructure-free, low-memory-demand navigation system supporting route guidance through complex indoor environments, and it supports the theory that functionally equivalent navigation performance is possible when access to real-time environmental information is available, irrespective of visual status.
Implications for designers and practitioners: The findings highlight the importance, for developers of accessible navigation systems, of employing interfaces that minimize memory demands.


Sensors ◽  
2011 ◽  
Vol 11 (8) ◽  
pp. 7606-7624 ◽  
Author(s):  
Gabriel Girard ◽  
Stéphane Côté ◽  
Sisi Zlatanova ◽  
Yannick Barette ◽  
Johanne St-Pierre ◽  
...  

Many solutions have been proposed for indoor pedestrian navigation. Some rely on pre-installed sensor networks, which offer good accuracy but are limited to areas prepared for that purpose, an expensive and possibly time-consuming process; such methods are also inappropriate for navigation in emergency situations, since the power supply may be disrupted. Other solutions track the user without requiring a prepared environment, but they may have low accuracy. Offline tracking has been proposed to increase accuracy; however, it prevents users from knowing their position in real time. This paper describes a real-time indoor navigation system that does not require prepared building environments and provides tracking accuracy superior to previously described tracking methods. The system combines four techniques: a foot-mounted IMU (Inertial Motion Unit), ultrasonic ranging, particle filtering, and model-based navigation. The purpose of the project is to combine these four well-known techniques in a novel way to provide better indoor tracking results for pedestrians.
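Two of the four techniques, particle filtering and model-based navigation, are often combined by killing particles whose motion would pass through a wall of the building model. The 1-D corridor, wall position, and noise levels below are invented purely for illustration and are not this system's parameters:

```python
import random

WALL_X = 10.0          # a wall the pedestrian cannot walk through

def pf_step(particles, stride):
    """Propagate 1-D particles by a noisy IMU stride; cull wall-crossers."""
    moved = [x + stride + random.gauss(0.0, 0.1) for x in particles]
    alive = [x for x in moved if x < WALL_X]         # model-based check
    if not alive:                                    # all crossed: pin to wall
        alive = [WALL_X - 0.01 for _ in particles]
    # resample with replacement back to the original particle count
    return [random.choice(alive) for _ in particles]

random.seed(0)
particles = [0.0] * 100
for _ in range(8):                       # eight strides of roughly 1 m
    particles = pf_step(particles, 1.0)
estimate = sum(particles) / len(particles)   # fused position estimate
```

The building model thus acts as an implicit measurement: IMU drift that would walk the estimate through a wall is discarded, which is what lets the filter stay accurate without a pre-installed sensor network.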


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6355
Author(s):  
Muhammad Sualeh ◽  
Gon-Woo Kim

The idea of SLAM (Simultaneous Localization and Mapping) being a solved problem rests on the static-world assumption, even as autonomous systems gain environmental perception capabilities by exploiting advances in computer vision and data-driven approaches. Computational demands and time complexity remain the main impediments to an effective fusion of the two paradigms. In this paper, a framework to solve the dynamic SLAM problem is proposed. Dynamic regions of the scene are handled using visual-LiDAR-based MODT (Multiple Object Detection and Tracking), while minimal computational demands and real-time performance are ensured. The framework is tested on the KITTI datasets and evaluated with publicly available evaluation tools for a fair comparison with state-of-the-art SLAM algorithms. The results suggest that the proposed dynamic SLAM framework can run in real time with budgeted computational resources. In addition, the fused MODT provides rich semantic information that can be readily integrated into SLAM.
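One simple way to let MODT output protect a SLAM back end, sketched below under simplifying assumptions (axis-aligned 3-D boxes, point tuples), is to drop LiDAR returns that fall inside the bounding box of any tracked mover before the scan is used for mapping; this is an illustration of the general idea, not the paper's actual fusion pipeline:

```python
def inside(p, box):
    """p: (x, y, z) LiDAR return; box: axis-aligned (xmin, ymin, zmin,
    xmax, ymax, zmax) from the tracker."""
    xmin, ymin, zmin, xmax, ymax, zmax = box
    return xmin <= p[0] <= xmax and ymin <= p[1] <= ymax and zmin <= p[2] <= zmax

def static_points(scan, tracked_boxes):
    """Keep only returns outside every tracked dynamic object."""
    return [p for p in scan if not any(inside(p, b) for b in tracked_boxes)]

scan = [(1.0, 0.0, 0.0), (5.0, 2.0, 0.5), (9.0, -1.0, 0.2)]
car = (4.0, 1.0, 0.0, 6.0, 3.0, 2.0)      # one tracked vehicle box
kept = static_points(scan, [car])          # the middle return is removed
```

Because the test is a handful of comparisons per point, the filtering cost stays negligible next to scan matching, consistent with the abstract's real-time claim.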


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3442 ◽  
Author(s):  
Pei-Huang Diao ◽  
Naai-Jung Shih

Traditional egress routes are normally indicated on floor plans and function as designed, assuming that people can identify their relative location and orientation. However, the evacuation process can easily become complicated in a dark or hazardous environment with unexpected obstacles potentially blocking the way. This study developed the mobile AR indoor navigation system (MARINS), which uses a smartphone to guide users to exits in a 0-lux setting, with the path illuminated only by the phone camera's LED. The system is developed in four modules using the Apple ARKit SDK, with its associated simultaneous localization and mapping (SLAM) function, on the Unity platform. A maze scenario was set up in an environment built from carton walls, and the time and distance traveled by an experimental group and a control group were measured. The results of statistical analysis demonstrate that MARINS reduces travel time in known space, and in total, compared with the use of a traditional map. The system also reduces travel distance and misjudgments, with higher system usability than a traditional map.

