An Obstacle Detection and Guidance System for Mobility of Visually Impaired in Unfamiliar Indoor Environments

2014 ◽  
Vol 6 (4) ◽  
pp. 337-341 ◽  
Author(s):  
Monther M. Al-Shehabi ◽  
Mustahsan Mir ◽  
Abdullah M. Ali ◽  
Ahmed M. Ali
2021 ◽  
Author(s):  
Tamas Nemes

This work describes a new type of portable, self-regulating guidance system that learns to recognize obstacles with the help of a camera, artificial intelligence, and various sensors, and warns the wearer through audio signals. For obstacle detection, a MobileNetV2 model with an SSD detection head is used, trained on a custom dataset. The system additionally fuses data from motion and distance sensors to improve accuracy. Experimental results confirm that the system operates at 74.9% mAP with a reaction time of 0.15 seconds, meeting the performance standard for modern object detection applications. The paper also presents feedback from affected users on the device and discusses how the system could be developed into a marketable product.
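The abstract describes fusing the SSD detector's output with a distance-sensor reading before warning the wearer. The paper does not publish its code; the sketch below is only a minimal illustration of such a gating step, and the names and thresholds (`should_warn`, `WARN_DISTANCE_M`, `SCORE_THRESHOLD`) are assumptions, not taken from the paper:

```python
# Illustrative sensor-fusion gate: warn only when the detector is
# confident AND the distance sensor places the obstacle nearby.
WARN_DISTANCE_M = 2.0    # assumed warning radius in metres
SCORE_THRESHOLD = 0.5    # assumed SSD confidence cut-off

def should_warn(detections, distance_m):
    """detections: list of (label, confidence) pairs from the detector;
    distance_m: current reading from the distance sensor."""
    confident = [d for d in detections if d[1] >= SCORE_THRESHOLD]
    return bool(confident) and distance_m <= WARN_DISTANCE_M
```

With this rule, a confident detection far away, or a near reading with no confident detection, produces no audio warning, which keeps false alarms down.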


Author(s):  
Zhuorui Yang ◽  
Aura Ganz

In this paper, we introduce an egocentric landmark-based guidance system that enables visually impaired users to interact with indoor environments. The user, who wears Google Glass, captures his surroundings within his field of view. Using this information, we provide the user with an accurate landmark-based description of the environment, including his relative distance and orientation to each landmark. To achieve this functionality, we developed a near-real-time, accurate, vision-based localization algorithm. Since the users are visually impaired, our algorithm accounts for images captured with Google Glass that exhibit severe blurriness, motion blur, low illumination, and crowd obstruction. We tested the algorithm's performance in a 12,000 ft² open indoor environment. With pristine query images, our algorithm obtains a mean location accuracy within 5 ft, a mean orientation accuracy of less than 2 degrees, and reliability above 88%. After applying deformation effects such as blurriness, motion blur, and illumination changes to the query images, we observe that the reliability remains above 75%.
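Once the localization algorithm yields the user's position and heading, the "relative distance and orientation to each landmark" is plain plane geometry. A minimal sketch, assuming a 2-D floor-plan coordinate frame and a heading measured clockwise from the +y axis (both assumptions; the paper does not specify its conventions):

```python
import math

def landmark_description(user_xy, user_heading_deg, landmark_xy):
    """Return (distance, relative bearing) from the user to a landmark.
    Relative bearing is in degrees, -180..180, 0 = straight ahead,
    negative = to the user's left."""
    dx = landmark_xy[0] - user_xy[0]
    dy = landmark_xy[1] - user_xy[1]
    dist = math.hypot(dx, dy)
    absolute = math.degrees(math.atan2(dx, dy))   # 0 deg along +y axis
    rel = (absolute - user_heading_deg + 180) % 360 - 180
    return dist, rel
```

For example, a landmark 5 units straight "north" of a user facing east comes back at a relative bearing of -90 degrees, i.e. directly to the user's left.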


Author(s):  
SANG-WOONG LEE ◽  
SEONGHOON KANG ◽  
SEONG-WHAN LEE

In this paper, we present a walking guidance system for visually impaired pedestrians. The system has been designed to help the visually impaired by responding intelligently to various situations that can occur in unrestricted natural outdoor environments while walking and finding destinations. It provides the main functions of people detection, text recognition, and face recognition. In addition, it includes the more sophisticated functions of walking-path guidance using a Differential Global Positioning System, obstacle detection using a stereo camera, and a voice user interface. In order to operate all functions concurrently, we developed approaches for real situations and integrated them. Finally, we experimented with a prototype system in natural environments in order to verify our approaches. The results show that our approaches are applicable to real situations.
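Walking-path guidance from DGPS fixes ultimately reduces to telling the user which way the next waypoint lies. A sketch of the standard initial great-circle bearing formula (this is textbook geodesy, not code from the paper; the function name and interface are illustrative):

```python
import math

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2,
    in degrees clockwise from true north (0..360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360
```

Comparing this bearing against the pedestrian's current heading gives the turn instruction ("bear left", "bear right") that a voice interface would speak.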


2018 ◽  
pp. 1483-1499
Author(s):  
Zhuorui Yang ◽  
Aura Ganz


2020 ◽  
Vol 24 (03) ◽  
pp. 515-520
Author(s):  
Vattumilli Komal Venugopal ◽  
Alampally Naveen ◽  
Rajkumar R ◽  
Govinda K ◽  
Jolly Masih

Author(s):  
Shrugal Varde* ◽  
Dr. M.S. Panse

This paper introduces a novel travel aid for blind users that can assist them in detecting the location of doors in corridors and also give information about the location of stairs. The developed system uses a camera to capture images in front of the user. A feature extraction algorithm extracts key features that distinguish doors and stairs from other structures observed in indoor environments. This information is then conveyed to the user through simple auditory feedback. The mobility aid was validated on 50 visually impaired users. The subjects walked in a controlled test environment, and the accuracy of the device in helping the user detect doors and stairs was determined. The results obtained were satisfactory, and the device has the potential for standalone use in indoor navigation.
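The paper does not detail its feature extraction, but the spirit of distinguishing doors from stairs by geometric cues can be sketched with a toy rule: doors tend to appear as tall parallel vertical-edge pairs, stairs as a stack of repeated horizontal edges. Everything below (function name, inputs, thresholds) is an illustrative assumption, not the authors' algorithm:

```python
def classify_structure(vertical_edge_pairs, horizontal_edge_count):
    """Toy feature-based rule: map simple edge statistics of an image
    region to the auditory cue that would be spoken to the user.
    Returns the message string, or None if nothing is recognized."""
    if horizontal_edge_count >= 3:          # stacked treads suggest stairs
        return "stairs ahead"
    if vertical_edge_pairs >= 1:            # a tall parallel pair suggests a door frame
        return "door ahead"
    return None
```

A real system would compute these statistics from an edge map and add temporal smoothing across frames before speaking, so a single noisy frame does not trigger a false announcement.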


2015 ◽  
Vol 5 (3) ◽  
pp. 801-804
Author(s):  
M. Abdul-Niby ◽  
M. Alameen ◽  
O. Irscheid ◽  
M. Baidoun ◽  
H. Mourtada

In this paper, we present a low-cost, hands-free obstacle detection and avoidance system designed to provide mobility assistance for visually impaired people. An ultrasonic sensor attached to the user's jacket detects obstacles in front of the user. The information obtained is conveyed to the user through audio messages and vibration. The detection range is user-defined. A text-to-speech module is employed for the voice output. The proposed obstacle avoidance device is cost-effective, easy to use, and easily upgraded.
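An ultrasonic ranger works by timing the echo: the one-way distance is half the round-trip time multiplied by the speed of sound. A minimal sketch of that conversion plus the user-defined range check the abstract mentions (function names are illustrative, not from the paper):

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at roughly 20 degrees C

def echo_to_distance_m(echo_time_s):
    """Convert a round-trip ultrasonic echo time to one-way distance."""
    return SPEED_OF_SOUND_M_S * echo_time_s / 2.0

def obstacle_alert(echo_time_s, user_range_m):
    """True when the obstacle falls inside the user-defined range,
    i.e. when the device should speak and vibrate."""
    return echo_to_distance_m(echo_time_s) <= user_range_m
```

A 10 ms echo corresponds to about 1.7 m, so with a 2 m user-defined range the device would alert, while a 1.5 m setting would stay silent.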


Author(s):  
Fredy Martinez ◽  
Edwar Jacinto ◽  
Fernando Martinez

This paper presents a low-cost strategy for real-time estimation of the position of obstacles in an unknown environment for autonomous robots. The strategy is intended for use in autonomous service robots that navigate unknown and dynamic indoor environments. Besides involving human interaction, these environments are designed for human beings, which is why our development seeks morphological and functional similarity to the human model. We use a pair of cameras on our robot to achieve stereoscopic vision of the environment, and we analyze this information to determine the distance to obstacles using an algorithm that mimics bacterial behavior. The algorithm was evaluated on our robotic platform, demonstrating high performance in locating obstacles and real-time operation.
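Whatever matching strategy finds corresponding points in the two camera images, the final distance comes from the standard pinhole stereo relation Z = f·B/d (focal length times baseline over disparity). A sketch of that last step, independent of the paper's bacterial-behavior matcher (the function name and units are illustrative):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo model: depth Z = f * B / d, where f is the focal
    length in pixels, B the camera baseline in metres, and d the
    disparity in pixels between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For instance, with a 700 px focal length and a 10 cm baseline, a 35 px disparity places the obstacle 2 m away; note how depth resolution degrades quickly as disparity shrinks for distant objects.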

