Let's Meet: A Smartphone Co-Navigation System Based on Relative Direction and Proximity Change for Indoor Environments

Author(s):  
Wensong Li ◽  
Ying Chen ◽  
Chanxin Zhou ◽  
Bang Wang
Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6238


Author(s):
Payal Mahida ◽  
Seyed Shahrestani ◽  
Hon Cheung

Wayfinding and navigation can present substantial challenges to visually impaired (VI) people. Some of the significant aspects of these challenges arise from the difficulty of knowing the location of a moving person with enough accuracy. Positioning and localization in indoor environments require unique solutions. Furthermore, positioning is one of the critical aspects of any navigation system that can assist a VI person with their independent movement. The other essential features of a typical indoor navigation system include pathfinding, obstacle avoidance, and capabilities for user interaction. This work focuses on positioning a VI person with enough precision for use in indoor navigation. We aim to achieve this by utilizing only the capabilities of a typical smartphone; more specifically, our proposed approach is based on the smartphone's accelerometer, gyroscope, and magnetometer. We consider the indoor environment to be divided into microcells, with the vertex of each microcell assigned two-dimensional local coordinates. A regression-based analysis is used to train a multilayer perceptron neural network to map the inertial sensor measurements to the coordinates of the vertex of the microcell corresponding to the position of the smartphone. To test our proposed solution, we used IPIN2016, a publicly available multivariate dataset that divides the indoor environment into cells tagged with the inertial sensor data of a smartphone, to generate the training and validation sets. Our experiments show that the proposed approach achieves a prediction accuracy of more than 94%, with a 0.65 m positioning error.
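The regression setup described above lends itself to a compact illustration. The following is a minimal sketch, not the authors' code: it assumes a 9-value feature vector (three axes each from the accelerometer, gyroscope, and magnetometer) and stands in random arrays for the IPIN2016 data; the network size and train/validation split are likewise assumptions.

```python
# Minimal sketch (not the authors' code): regressing 2D microcell-vertex
# coordinates from smartphone inertial features with an MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for IPIN2016-style data: each row holds accelerometer,
# gyroscope and magnetometer readings (9 values); each target is the
# (x, y) local coordinate of the matching microcell vertex.
X = rng.normal(size=(2000, 9))          # inertial features (placeholder)
y = rng.uniform(0, 20, size=(2000, 2))  # vertex coordinates in metres (placeholder)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
mlp.fit(X_train, y_train)

# Positioning error: mean Euclidean distance between predicted and true vertices.
pred = mlp.predict(X_val)
err = np.linalg.norm(pred - y_val, axis=1).mean()
print(f"mean positioning error: {err:.2f} m")
```

The mean Euclidean error computed at the end corresponds to the positioning error the abstract reports (0.65 m for the authors' trained model on real data).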


2016 ◽  
Vol 14 (1) ◽  
pp. 172988141667813 ◽  
Author(s):  
Clara Gomez ◽  
Alejandra Carolina Hernandez ◽  
Jonathan Crespo ◽  
Ramon Barber

The aim of the work presented in this article is to develop a navigation system that allows a mobile robot to move autonomously in an indoor environment using perceptions of multiple events. A topological navigation system based on events that imitates human navigation using sensorimotor abilities and sensorial events is presented. The increasing interest in building autonomous mobile systems makes the detection and recognition of perceptions a crucial task. The proposed system can be considered a perceptive navigation system, as the navigation process is based on the perception and recognition of natural and artificial landmarks, among other cues. The innovation of this work resides in the use of an integration interface to handle multiple events concurrently, leading to a more complete and advanced navigation system. The developed architecture eases the integration of new elements thanks to its modularity and the decoupling between modules. Finally, experiments have been carried out on several mobile robots, and their results show the feasibility of the proposed navigation system and the effectiveness of the sensorial data integration managed as events.
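The abstract's central idea, an integration interface through which decoupled modules exchange events, can be sketched with a simple publish/subscribe bus. The class and event names below are hypothetical; the paper's actual interface is not public.

```python
# Minimal sketch (an assumption, not the authors' architecture): an
# integration interface that lets decoupled perception modules publish
# events and navigation handlers consume them without direct coupling.
from collections import defaultdict
from typing import Callable

class EventBus:
    """Routes named sensorial events to every subscribed handler."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]):
        self._handlers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict):
        for handler in self._handlers[event_type]:
            handler(payload)

bus = EventBus()
bus.subscribe("landmark_detected", lambda e: print("replan near", e["id"]))
bus.subscribe("obstacle", lambda e: print("stop, obstacle at", e["range_m"], "m"))

# Perception modules emit events; the navigator reacts to each in turn.
bus.publish("landmark_detected", {"id": "door_3"})
bus.publish("obstacle", {"range_m": 0.4})
```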


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2477 ◽  
Author(s):  
Kamal M. Othman ◽  
Ahmad B. Rad

In this paper, we propose a novel algorithm to detect a door and its orientation in indoor settings from the view of a social robot equipped with only a monocular camera. The challenge is to achieve this goal using only a 2D image. The proposed system is designed through the integration of several modules, each of which serves a special purpose. The detection of the door is addressed by training a convolutional neural network (CNN) model on a new dataset for Social Robot Indoor Navigation (SRIN). The direction of the door (from the robot's observation) is obtained by three further modules: a Depth module, a Pixel-Selection module, and a Pixel2Angle module. We include simulation results and real-time experiments to demonstrate the performance of the algorithm. The outcome of this study could be beneficial in any robotic navigation system for indoor environments.
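Of the modules listed, the Pixel2Angle step admits a simple geometric reading: under a pinhole camera model, a selected door pixel's image column maps to a horizontal bearing relative to the optical axis. The sketch below is an assumption about how such a module could work, with made-up intrinsics; it is not the paper's implementation.

```python
# Minimal sketch of a pixel-to-angle step (an assumption; the paper's
# Pixel2Angle module is not public): with a pinhole model, a selected
# door pixel column u maps to a bearing about the optical axis.
import math

def pixel_to_angle(u: float, cx: float, fx: float) -> float:
    """Horizontal bearing (radians) of image column u.

    cx: principal point x-coordinate (pixels); fx: focal length (pixels).
    """
    return math.atan2(u - cx, fx)

# Example with hypothetical intrinsics: 640-px-wide image,
# principal point at 320 px, focal length 525 px.
bearing = pixel_to_angle(450.0, cx=320.0, fx=525.0)
print(f"door bearing: {math.degrees(bearing):.1f} deg")
```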


2021 ◽  
pp. 1-10
Author(s):  
Zi-Hao Wang ◽  
Kai-Yu Qin ◽  
Te Zhang ◽  
Bo Zhu

In the future, heterogeneous robots are expected to perform more complex tasks in a cooperative manner, and their onboard navigation systems will be required to work safely and effectively in areas where the GNSS signal is weak or unavailable. To demonstrate this concept, we have developed a cooperative navigation system based on ground-aerial vehicle cooperation. The key innovations of the development lie in the following aspects: (1) a local scalable self-organizing network is constructed for data interaction between a small UAV and a reusable ground robot; (2) a new navigation framework is proposed to achieve visual simultaneous localization and mapping (SLAM), in which the carrying capacities of both the ground vehicle and the UAV are systematically considered; (3) an OctoMap-based 3D environment reconstruction method is developed to achieve map pre-establishment in complex navigation environments, and the classic ORB-SLAM2 system is improved to adapt to actual environment exploration and perception. Indoor flight experiments demonstrate the effectiveness of the proposed solution. More interestingly, by implementing a centroid tracking algorithm, the cooperative system is further capable of tracking a person moving in indoor environments.
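Centroid tracking, mentioned in the last sentence, is a standard association scheme: detections in consecutive frames are matched to existing tracks by nearest centroid. The sketch below illustrates the general technique; the threshold and class design are assumptions, not the authors' code.

```python
# Minimal sketch of centroid tracking (an assumption; the paper's exact
# algorithm is not given): associate detections across frames by
# nearest-centroid matching so the system can follow a moving person.
import numpy as np

class CentroidTracker:
    def __init__(self, max_dist: float = 50.0):
        self.next_id = 0
        self.objects = {}        # track id -> last centroid (x, y)
        self.max_dist = max_dist # pixels; beyond this, treat as a new target

    def update(self, centroids: np.ndarray) -> dict:
        """centroids: (N, 2) array of detection centres in this frame."""
        updated = {}
        unmatched = list(range(len(centroids)))
        for oid, prev in self.objects.items():
            if not unmatched:
                break
            dists = [np.linalg.norm(centroids[i] - prev) for i in unmatched]
            j = int(np.argmin(dists))
            if dists[j] < self.max_dist:        # same target, moved slightly
                updated[oid] = centroids[unmatched.pop(j)]
        for i in unmatched:                     # brand-new targets
            updated[self.next_id] = centroids[i]
            self.next_id += 1
        self.objects = updated
        return updated

tracker = CentroidTracker()
print(tracker.update(np.array([[100.0, 80.0]])))   # frame 1: one person
print(tracker.update(np.array([[104.0, 83.0]])))   # frame 2: same id persists
```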


2011 ◽  
Vol 301-303 ◽  
pp. 201-207
Author(s):  
Sheng Bei Wang ◽  
Jian Ming Wang ◽  
Xi Wang ◽  
Ling Ma ◽  
Ru Zhen Dou ◽  
...  

Vision for navigation has been an active area of research for more than three decades, and a vision-based navigation system needs real-time image acquisition and processing to extract navigation information. In indoor scenarios, illuminant reflection often appears in navigation images because of smooth surfaces in the environment, such as marble floors and furniture surfaces. The negative effect of illuminant reflection in navigation images is obvious and can degrade the performance of the navigation system. To resolve this problem, the illuminant reflection must first be detected. This paper proposes an automatic detection algorithm that segments illuminant-reflection regions in a color image using saturation and brightness characteristics as well as the brightness distribution of the reflective regions. To verify the robustness and accuracy of the algorithm, experiments were carried out in different indoor environments where illuminant reflection appears in navigation images; the results indicate that the algorithm handles the problem well, providing good detection results.
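The two cues the paper relies on, low saturation and high brightness, map naturally onto a simple HSV threshold. The following OpenCV sketch is a baseline consistent with the abstract, not the paper's algorithm; the specific thresholds, the morphological clean-up, and the file names are assumptions.

```python
# Minimal sketch (thresholds are assumptions): segmenting candidate
# illuminant-reflection regions as pixels with low saturation and high
# brightness, the two cues the paper builds on.
import cv2
import numpy as np

def reflection_mask(bgr: np.ndarray,
                    sat_max: int = 40,
                    val_min: int = 220) -> np.ndarray:
    """Binary mask of likely specular-reflection pixels."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    s, v = hsv[..., 1], hsv[..., 2]
    mask = ((s <= sat_max) & (v >= val_min)).astype(np.uint8) * 255
    # Open the mask so only coherent reflective regions survive.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

img = cv2.imread("corridor.png")           # hypothetical navigation image
if img is not None:
    cv2.imwrite("reflection_mask.png", reflection_mask(img))
```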


Sensors ◽  
2015 ◽  
Vol 16 (1) ◽  
pp. 17 ◽  
Author(s):  
Georg Gerstweiler ◽  
Emanuel Vonach ◽  
Hannes Kaufmann
