Egocentric Landmark-Based Indoor Guidance System for the Visually Impaired

2018 ◽  
pp. 1483-1499
Author(s):  
Zhuorui Yang ◽  
Aura Ganz

In this paper, we introduce an egocentric landmark-based guidance system that enables visually impaired users to interact with indoor environments. The user, who wears Google Glass, captures the surroundings within their field of view. Using this information, we provide the user with an accurate landmark-based description of the environment, including their relative distance and orientation to each landmark. To achieve this functionality, we developed an accurate, near-real-time, vision-based localization algorithm. Since the users are visually impaired, the images captured with Google Glass can exhibit severe blur, motion blur, low illumination, and crowd obstruction, and our algorithm accounts for these effects. We tested the algorithm's performance in a 12,000 ft² open indoor environment. With pristine query images, our algorithm achieves mean location accuracy within 5 ft, mean orientation accuracy below 2 degrees, and reliability above 88%. After applying deformation effects to the query images, such as blur, motion blur, and illumination changes, the reliability remains above 75%.
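
As a toy illustration of the landmark-description step, the sketch below (a hypothetical example; the landmark map and user pose are assumed, and the paper derives the pose itself from vision-based localization) computes the relative distance and bearing that would be reported for each landmark:

```python
# Toy computation of relative distance and bearing to known landmarks,
# given an estimated user pose. The landmark map and pose are assumptions;
# the system described above obtains the pose from vision-based localization.
import math

landmarks = {"elevator": (30.0, 12.0), "restroom": (5.0, 40.0)}  # feet
user_xy, user_heading_deg = (10.0, 10.0), 90.0   # estimated position/heading

for name, (lx, ly) in landmarks.items():
    dx, dy = lx - user_xy[0], ly - user_xy[1]
    distance = math.hypot(dx, dy)
    bearing = (math.degrees(math.atan2(dy, dx)) - user_heading_deg) % 360.0
    if bearing > 180.0:
        bearing -= 360.0            # signed offset from heading (convention assumed)
    print(f"{name}: {distance:.1f} ft, {bearing:+.0f} deg relative")
```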


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6238
Author(s):  
Payal Mahida ◽  
Seyed Shahrestani ◽  
Hon Cheung

Wayfinding and navigation can present substantial challenges to visually impaired (VI) people. Some of the significant aspects of these challenges arise from the difficulty of knowing the location of a moving person with enough accuracy. Positioning and localization in indoor environments require unique solutions. Furthermore, positioning is one of the critical aspects of any navigation system that can assist a VI person with their independent movement. The other essential features of a typical indoor navigation system include pathfinding, obstacle avoidance, and capabilities for user interaction. This work focuses on positioning a VI person with enough precision for use in indoor navigation. We aim to achieve this by utilizing only the capabilities of a typical smartphone; more specifically, our proposed approach is based on the smartphone's accelerometer, gyroscope, and magnetometer. We consider the indoor environment to be divided into microcells, with the vertex of each microcell assigned two-dimensional local coordinates. A regression-based analysis is used to train a multilayer perceptron neural network that maps the inertial sensor measurements to the coordinates of the microcell vertex corresponding to the smartphone's position. To test our proposed solution, we used IPIN2016, a publicly available multivariate dataset that divides the indoor environment into cells tagged with smartphone inertial sensor data, to generate the training and validation sets. Our experiments show that our proposed approach can achieve a prediction accuracy of more than 94%, with a 0.65 m positioning error.
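
As a rough illustration of the regression step, the sketch below trains scikit-learn's MLPRegressor to map a window of inertial readings to 2-D microcell vertex coordinates. The feature layout, window length, microcell size, and network shape are assumptions, not the paper's setup:

```python
# Hypothetical sketch: inertial readings -> 2-D microcell coordinates.
# Feature layout and network size are assumptions, not the paper's setup.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in for IPIN2016-style training data: each row flattens a window of
# accelerometer, gyroscope, and magnetometer samples (9 channels); each
# target is the (x, y) vertex of the microcell the phone occupied.
X = rng.normal(size=(2000, 9 * 10))      # 10-sample window, 9 channels
y = rng.uniform(0, 20, size=(2000, 2))   # local coordinates in metres

scaler = StandardScaler().fit(X)
mlp = MLPRegressor(hidden_layer_sizes=(128, 64), activation="relu",
                   max_iter=500, random_state=0)
mlp.fit(scaler.transform(X), y)

# Predict a position for a new sensor window; snapping to the nearest
# microcell vertex recovers the discrete cell label.
pred = mlp.predict(scaler.transform(X[:1]))
cell_size = 0.5                          # assumed microcell edge, metres
snapped = np.round(pred / cell_size) * cell_size
print(pred, snapped)
```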


2019 ◽  
Vol 9 (21) ◽  
pp. 4656 ◽  
Author(s):  
Haikel Alhichri ◽  
Yakoub Bazi ◽  
Naif Alajlan ◽  
Bilel Bin Jdira

This work presents a deep learning method for scene description. (1) Background: This method is part of a larger system, called BlindSys, that assists the visually impaired in an indoor environment. The method detects the presence of certain objects, regardless of their position in the scene; this problem is also known as image multi-labeling. (2) Methods: Our proposed deep learning solution is based on a lightweight pre-trained CNN called SqueezeNet. We improved the SqueezeNet architecture by resetting the last convolutional layer to freely trainable weights, replacing its activation function, a rectified linear unit (ReLU), with a LeakyReLU, and adding a batch normalization layer thereafter. We also replaced the softmax activation functions at the output layer with linear functions. These adjustments constitute the main contributions of this work. (3) Results: The proposed solution is tested on four image multi-labeling datasets representing different indoor environments. It achieves better results than state-of-the-art solutions in terms of both accuracy and processing time. (4) Conclusions: The proposed deep CNN is an effective solution for predicting the presence of objects in a scene and can be successfully used as a module within BlindSys.
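
A minimal sketch of the described tweaks, applied to torchvision's SqueezeNet as an assumed stand-in for the authors' exact model (the label count and detection threshold are also assumptions):

```python
# Hypothetical sketch of the described tweaks on torchvision's SqueezeNet.
# Not the authors' code; squeezenet1_0 and its classifier layout are assumed.
import torch
import torch.nn as nn
from torchvision.models import squeezenet1_0

num_labels = 15                      # objects to detect; dataset-dependent
model = squeezenet1_0(weights="DEFAULT")

# torchvision's classifier head is Dropout -> Conv2d(512, 1000, 1) -> ReLU
# -> AdaptiveAvgPool2d. Reset the last conv to freshly initialised weights
# sized for our labels, swap ReLU for LeakyReLU, add BatchNorm thereafter.
model.classifier = nn.Sequential(
    nn.Dropout(p=0.5),
    nn.Conv2d(512, num_labels, kernel_size=1),   # re-initialised free weights
    nn.LeakyReLU(inplace=True),
    nn.BatchNorm2d(num_labels),
    nn.AdaptiveAvgPool2d((1, 1)),
)

x = torch.randn(1, 3, 224, 224)
scores = model(x).squeeze()          # linear outputs, no softmax
present = scores > 0.5               # per-label threshold (assumed value)
print(scores.shape, present)
```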


2020 ◽  
Vol 24 (03) ◽  
pp. 515-520
Author(s):  
Vattumilli Komal Venugopal ◽  
Alampally Naveen ◽  
Rajkumar R ◽  
Govinda K ◽  
Jolly Masih

2021 ◽  
Vol 9 (3) ◽  
pp. 277
Author(s):  
Isaac Segovia Ramírez ◽  
Pedro José Bernalte Sánchez ◽  
Mayorkinos Papaelias ◽  
Fausto Pedro García Márquez

Submarine inspections and surveys require underwater vehicles that operate in deep waters efficiently, safely, and reliably. Autonomous Underwater Vehicles employing advanced navigation and control systems present several advantages, and robust control algorithms and novel improvements in positioning and navigation are needed to optimize underwater operations. This paper proposes a new general formulation of the positioning problem together with a basic approach for the management of deep underwater operations. This approach treats the field of view and the operational requirements as fundamental inputs to trajectory development in the autonomous guidance system. The constraints and involved variables are also defined, providing more accurate modelling than traditional formulations of the positioning system. Different case studies are presented based on commercial underwater cameras and sonars, analysing the influence of the main variables in the measurement process to obtain optimal resolution. The application of this approach in autonomous underwater operations ensures suitable data acquisition according to the payload installed onboard.
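
As an illustration of how the field of view constrains achievable resolution, the toy model below (illustrative camera parameters, not the paper's case studies) relates stand-off distance to swath width and per-pixel resolution:

```python
# Toy model relating camera field of view, stand-off distance, and resolution.
# Camera parameters are illustrative assumptions, not the paper's cases.
import math

def footprint_and_resolution(distance_m, fov_deg, pixels):
    """Width of the imaged swath and the size of one pixel on the target."""
    footprint = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return footprint, footprint / pixels

# Example: a 60-degree, 1920-pixel-wide camera at several stand-off distances.
for d in (2.0, 5.0, 10.0):
    swath, res = footprint_and_resolution(d, fov_deg=60.0, pixels=1920)
    print(f"{d:4.1f} m stand-off: swath {swath:5.2f} m, "
          f"resolution {res * 1000:5.2f} mm/pixel")
```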


Author(s):  
Laurentiu Predescu ◽  
Daniel Dunea

Optical monitors have proven their versatility in studies of air quality in workplace and indoor environments. The current study aimed to screen the indoor environment for various fractions of particulate matter (PM) and to characterize the specific thermal microclimate in a classroom occupied by students in March 2019 (before the COVID-19 pandemic) and in March 2021 (during the pandemic) at the Valahia University Campus, Targoviste, Romania. The objectives were to assess the potential exposure of students and academic personnel to PM and to evaluate the performance of various sensors and monitors (a particle counter, PM monitors, and indoor microclimate sensors). PM1 ranged between 29 and 41 μg m⁻³ and PM10 ranged between 30 and 42 μg m⁻³. The particles belonged mostly to the fine and submicrometric fractions, in thermal environments rated acceptable according to the predicted percentage of dissatisfied (PPD) and predicted mean vote (PMV) indices. The particle counter predominantly recorded the 0.3, 0.5, and 1.0 µm size categories. The average acute dose rate was estimated at 6.58 × 10⁻⁴ mg/kg-day (CV = 14.3%) for the 20–40-year age range. Wearing masks may influence the indoor microclimate and PM levels, but additional experiments should be performed at a finer scale.
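
Acute dose rates of this kind are commonly estimated with a generic inhalation-exposure formula; the sketch below uses that generic form with purely illustrative parameter values, not the study's inputs:

```python
# Generic inhalation dose-rate form: ADR = C * IR * ET / BW  (mg/kg-day).
# Every value below is an illustrative assumption, not the study's input.
pm_concentration = 0.035   # mg/m^3 (~35 ug/m^3, mid-range of reported PM levels)
inhalation_rate = 0.83     # m^3/h, light activity (assumed)
exposure_time = 6.0        # h/day spent in the classroom (assumed)
body_weight = 70.0         # kg, adult in the 20-40 year range (assumed)

adr = pm_concentration * inhalation_rate * exposure_time / body_weight
print(f"acute dose rate ~ {adr:.2e} mg/kg-day")   # ~2.5e-3 with these inputs
```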


2017 ◽  
Vol 14 (5) ◽  
pp. 172988141773275 ◽  
Author(s):  
Francisco J Perez-Grau ◽  
Fernando Caballero ◽  
Antidio Viguria ◽  
Anibal Ollero

This article presents an enhanced version of the Monte Carlo localization algorithm, commonly used for robot navigation in indoor environments, which is suitable for aerial robots moving in a three-dimensional environment. It combines measurements from an RGB-D (red, green, blue, depth) sensor, distances to several radio tags placed in the environment, and an inertial measurement unit. The approach is demonstrated with an unmanned aerial vehicle flying indoors for 10 min and validated against a very precise motion tracking system. It has been implemented using the Robot Operating System (ROS) framework and runs smoothly on a regular i7 computer, leaving plenty of computational capacity for other navigation tasks such as motion planning or control.
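
A compressed sketch of the radio-tag range update such a particle filter performs (the state layout, noise model, and tag positions are assumptions; the paper's full filter also fuses RGB-D and IMU measurements):

```python
# Minimal particle-filter range update for radio tags (illustrative only;
# the state layout, noise model, and tag positions are assumptions).
import numpy as np

rng = np.random.default_rng(1)
tags = np.array([[0.0, 0.0, 2.5], [8.0, 0.0, 2.5], [4.0, 6.0, 2.5]])  # x, y, z

n = 500
particles = rng.uniform([0, 0, 0], [8, 6, 3], size=(n, 3))   # 3-D positions
weights = np.full(n, 1.0 / n)

def range_update(particles, weights, measured, tag, sigma=0.3):
    """Reweight particles by the likelihood of one measured tag distance."""
    expected = np.linalg.norm(particles - tag, axis=1)
    weights = weights * np.exp(-0.5 * ((measured - expected) / sigma) ** 2)
    return weights / weights.sum()

true_pos = np.array([3.0, 2.0, 1.5])
for tag in tags:
    z = np.linalg.norm(true_pos - tag) + rng.normal(0, 0.3)
    weights = range_update(particles, weights, z, tag)

estimate = weights @ particles       # weighted mean as the position estimate
print(estimate)

# Systematic (low-variance) resampling keeps the particle set healthy.
idx = np.searchsorted(np.cumsum(weights), (rng.random() + np.arange(n)) / n)
particles = particles[idx]
```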


2014 ◽  
Vol 989-994 ◽  
pp. 2232-2236 ◽  
Author(s):  
Jia Zhi Dong ◽  
Yu Wen Wang ◽  
Feng Wei ◽  
Jiang Yu

Currently, there is an urgent need for indoor positioning technology. Considering the complexity of indoor environments, this paper proposes a new positioning algorithm (N-CHAN) based on an analysis of time-of-arrival (TOA) positioning error in Saleh-Valenzuela (S-V) channel models. It overcomes an obvious shortcoming of the traditional Chan algorithm, whose accuracy is degraded by non-line-of-sight (NLOS) propagation. Through MATLAB simulation, we demonstrate N-CHAN's superior performance under NLOS conditions in the S-V channel model, where it achieves centimeter-level positioning accuracy and effectively eliminates the influence of NLOS error. Moreover, N-CHAN can effectively improve the positioning accuracy of the system, especially under large NLOS errors.
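
For orientation, the sketch below implements the classical linearized TOA least-squares baseline that Chan-style algorithms refine; it is not the paper's N-CHAN, and the anchor geometry and noise level are assumptions:

```python
# Generic linearised TOA multilateration (baseline, not the paper's N-CHAN).
# Anchor layout and noise level are illustrative assumptions.
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
rng = np.random.default_rng(2)

# TOA ranges with small Gaussian error; NLOS would add a positive bias,
# which is exactly the effect N-CHAN is designed to suppress.
d = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.05, 4)

# Subtracting the first range equation removes the quadratic terms,
# leaving a linear system A @ p = b solvable by least squares.
A = 2 * (anchors[1:] - anchors[0])
k = np.sum(anchors**2, axis=1)
b = (d[0]**2 - d[1:]**2) + (k[1:] - k[0])
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(est)   # close to true_pos under line-of-sight noise
```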

