An Online Solution of LiDAR Scan Matching Aided Inertial Navigation System for Indoor Mobile Mapping

2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Xiaoji Niu ◽  
Tong Yu ◽  
Jian Tang ◽  
Le Chang

Multisensor (LiDAR/IMU/camera) integrated Simultaneous Localization and Mapping (SLAM) technology has become a promising solution for navigation and mobile mapping in GNSS-denied environments such as indoor areas, dense forests, or urban canyons. An online (real-time) version of such a system can greatly extend its applications, especially for indoor mobile mapping. However, real-time response is a major challenge for an online SLAM system because the sensors have different sampling frequencies and the algorithms have different processing times. In this paper, an online Extended Kalman Filter (EKF) algorithm that integrates LiDAR scan matching and IMU mechanization for an Unmanned Ground Vehicle (UGV) indoor navigation system is introduced. Since LiDAR scan matching is considerably more time-consuming than IMU mechanization, the real-time synchronization issue is solved via a one-step error-state-transition method in the EKF. Stationary and dynamic field tests were performed using a UGV platform along a typical office-building corridor. Compared to the traditional sequential postprocessed EKF algorithm, the proposed method significantly mitigates the time delay of navigation outputs while preserving positioning accuracy, so it can serve as an online navigation solution for indoor mobile mapping.
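The one-step error-state-transition idea is easier to see with a toy example. The paper's exact state vector and formulation are not reproduced in the abstract, so the Python sketch below assumes a simplified planar pose error state, unicycle kinematics standing in for IMU mechanization, and a direct pose measurement from scan matching; it only illustrates the general pattern of propagating at the IMU rate and folding in a delayed scan-match result through the accumulated error-state transition matrix instead of rolling the filter back.

```python
import numpy as np

class DelayedUpdateEKF:
    """Toy error-state filter: high-rate propagation with unicycle kinematics
    standing in for IMU mechanization, plus a delayed scan-match pose update
    projected through the accumulated error-state transition matrix."""

    def __init__(self):
        self.x = np.zeros(3)          # nominal pose [x, y, heading]
        self.P = np.eye(3) * 1e-2     # error-state covariance
        self.Phi_acc = np.eye(3)      # transition accumulated since the last scan epoch

    def propagate(self, v, omega, dt):
        """High-rate step: dead-reckon the nominal pose and accumulate the
        linearized error-state transition for later use."""
        th = self.x[2]
        self.x += np.array([v * np.cos(th) * dt,
                            v * np.sin(th) * dt,
                            omega * dt])
        F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                      [0.0, 1.0,  v * np.cos(th) * dt],
                      [0.0, 0.0,  1.0]])
        Q = np.diag([1e-3, 1e-3, 1e-4]) * dt      # assumed process noise
        self.P = F @ self.P @ F.T + Q
        self.Phi_acc = F @ self.Phi_acc

    def delayed_pose_update(self, z, x_at_scan_epoch,
                            R=np.diag([2.5e-3, 2.5e-3, 1e-4])):
        """Low-rate step: z is the scan-match pose referring to the earlier scan
        epoch (whose nominal state was cached by the caller). The error state at
        that epoch maps to the current one through Phi_acc, so the effective
        measurement Jacobian is H @ inv(Phi_acc); no reprocessing is needed."""
        H = np.eye(3) @ np.linalg.inv(self.Phi_acc)
        y = z - x_at_scan_epoch                   # innovation formed at the scan epoch
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x += K @ y                           # fold the correction into the nominal state
        self.P = (np.eye(3) - K @ H) @ self.P
        self.Phi_acc = np.eye(3)                  # reset for the next scan interval
```

A caller would run propagate() at the IMU rate, cache the nominal pose when a scan is captured, and call delayed_pose_update() once the (slow) scan-matching result becomes available.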

Indoor navigation systems are gaining importance these days. They are particularly useful for locating places inside a large university campus, airport, railway station, or museum. Many mobile applications using different techniques have been developed recently. The work proposed in this paper focuses on the needs of visually challenged people navigating indoor environments. The approach proposed here implements the system using beacons. The application developed with the system gives the user audio guidance for navigation.
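The abstract does not describe how the beacon signals are turned into guidance, so the following Python sketch is a hypothetical minimal version: pick the beacon with the strongest signal and hand its stored prompt to a text-to-speech engine. The beacon IDs, messages, and RSSI values are invented for illustration.

```python
# Hypothetical sketch: choose the strongest beacon heard in a BLE scan and
# look up the audio prompt stored for that spot.

BEACON_INSTRUCTIONS = {
    "beacon-entrance": "You are at the main entrance. Walk straight ahead.",
    "beacon-stairs":   "Stairs ahead. The elevator is ten steps to your right.",
    "beacon-lab-101":  "You have reached Lab 101.",
}

def nearest_beacon(scans):
    """Return the beacon ID with the strongest RSSI from a
    {beacon_id: rssi_dbm} scan result, or None if nothing was heard."""
    return max(scans, key=scans.get) if scans else None

def guidance_for(scans):
    """Map a BLE scan result to the audio prompt that a text-to-speech
    engine would read out to the user."""
    beacon = nearest_beacon(scans)
    return BEACON_INSTRUCTIONS.get(beacon)

# The phone hears two beacons; the stronger (less negative) RSSI wins.
print(guidance_for({"beacon-entrance": -62.0, "beacon-stairs": -78.5}))
```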


Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5890
Author(s):  
Bo-Chen Huang ◽  
Jiun Hsu ◽  
Edward T.-H. Chu ◽  
Hui-Mei Wu

Due to the popularity of indoor positioning technology, indoor navigation applications have been deployed in large buildings, such as hospitals, airports, and train stations, to guide visitors to their destinations. A commonly used user interface, shown on the smartphone, is a 2D floor map with a route to the destination. Navigation instructions, such as turn left, turn right, and go straight, pop up on the screen when users come to an intersection. However, owing to the restrictions of a 2D navigation map, users may experience mental pressure and become confused while connecting the real environment with the 2D navigation map before moving forward. For this reason, we developed ARBIN, an augmented reality-based navigation system, which overlays navigation instructions on the live view of the real-world environment for ease of use. Thus, there is no need for users to make a connection between the navigation instructions and the real-world environment. In order to evaluate the applicability of ARBIN, a series of experiments was conducted in the outpatient area of the National Taiwan University Hospital YunLin Branch, which covers nearly 1800 m² and contains 35 destinations and points of interest, such as a cardiovascular clinic, X-ray examination room, pharmacy, and so on. Four different smartphone models were used in the evaluation. Our results show that ARBIN achieves 3 to 5 m accuracy and provides users with correct instructions on their way to their destinations. ARBIN proved to be a practical solution for indoor navigation, especially for large buildings.
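ARBIN's internal logic is not detailed in the abstract, so the sketch below only illustrates one plausible way a navigation app could decide when to surface the next turn instruction given a position estimate with a few metres of error, as the reported 3 to 5 m accuracy implies: trigger the instruction when the user comes within a radius chosen to be larger than that error. All names, coordinates, and thresholds here are assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Waypoint:
    x: float           # indoor coordinates in metres
    y: float
    instruction: str   # e.g. "Turn right toward the X-ray examination room"

TRIGGER_RADIUS_M = 6.0  # chosen larger than the reported 3-5 m position error

def next_instruction(route, position):
    """Return the instruction of the first waypoint the estimated position
    falls within, so the AR layer can pin it onto the live camera view."""
    px, py = position
    for wp in route:
        if math.hypot(wp.x - px, wp.y - py) <= TRIGGER_RADIUS_M:
            return wp.instruction
    return None

route = [Waypoint(0.0, 0.0, "Go straight along the corridor"),
         Waypoint(18.0, 0.0, "Turn right toward the X-ray examination room")]
print(next_instruction(route, (17.2, 1.5)))  # within the trigger radius of the turn
```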


2019 ◽  
Vol 113 (2) ◽  
pp. 140-155 ◽  
Author(s):  
Nicholas A. Giudice ◽  
William E. Whalen ◽  
Timothy H. Riehle ◽  
Shane M. Anderson ◽  
Stacy A. Doore

Introduction: This article describes an evaluation of MagNav, a speech-based, infrastructure-free indoor navigation system. The research was conducted in the Mall of America, the largest shopping mall in the United States, to empirically investigate the impact of memory load on route-guidance performance. Method: Twelve participants who are blind and 12 age-matched sighted controls participated in the study. Route-guidance performance is compared between a system-aided condition, in which updated, real-time route instructions were provided, and a system-unaided (memory-based) condition, in which the same instructions were provided only in advance of route travel. The sighted controls (who navigated under typical visual perception but used the system for route guidance) represent a best-case comparison benchmark for the blind participants who used the system. Results: Results across all three test measures provide compelling behavioral evidence that blind navigators receiving real-time verbal information from the MagNav system performed route travel faster (navigation time), more accurately (fewer errors in reaching the destination), and more confidently (fewer requests for bystander assistance) than under conditions where the same route information was available only in advance of travel. In addition, no statistically reliable differences were observed for any measure in the system-aided conditions between the blind and sighted participants. Posttest survey results corroborate the empirical findings, further supporting the efficacy of the MagNav system. Discussion: This research provides compelling quantitative and qualitative evidence for the utility of an infrastructure-free, low-memory-demand navigation system for supporting route guidance through complex indoor environments, and it supports the theory that functionally equivalent navigation performance is possible when access to real-time environmental information is available, irrespective of visual status. Implications for designers and practitioners: Findings highlight the importance for developers of accessible navigation systems of employing interfaces that minimize memory demands.


2020 ◽  
Vol 13 (1) ◽  
pp. 27
Author(s):  
Shaaban Ali Salman ◽  
Qais A. Khasawneh ◽  
Mohammad A. Jaradat ◽  
Mansour Y. Alramlawi
