Object Identification and Safe Route Recommendation Based on Human Flow for the Visually Impaired

Sensors ◽  
2019 ◽  
Vol 19 (24) ◽  
pp. 5343 ◽  
Author(s):  
Yusuke Kajiwara ◽  
Haruhiko Kimura

It is difficult for visually impaired people to move around indoors and outdoors. In 2018, the World Health Organization (WHO) reported that about 253 million people around the world had moderate distance-vision impairment. Navigation systems that combine positioning and obstacle detection have been actively researched and developed. However, the accuracy of these obstacle detection methods drops significantly in high-traffic passages, because the many pedestrians there create occlusions that hide the shape and color of obstacles. To solve this problem, we developed an application, “Follow me!”. The application recommends a safe route by applying machine learning to the gait and walking routes of many pedestrians captured in monocular smartphone camera images. In our experiments, pedestrians walking in the same direction as the visually impaired user, oncoming pedestrians, and steps were identified with an average accuracy of 0.92 from the gait and walking routes acquired from the monocular camera images. Furthermore, based on these identification results, visually impaired participants were guided to a safe route with 100% accuracy. By walking along the recommended route, they also avoided obstacles that required a detour, such as construction zones and signage.
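The paper's model is not reproduced here, but its core idea, distinguishing oncoming pedestrians from those walking in the same direction using only monocular video, can be illustrated with a toy heuristic. In this sketch (the function name, threshold, and logic are illustrative assumptions, not the authors' method), a tracked pedestrian's bounding-box height grows as they approach the camera and shrinks as they walk away:

```python
def classify_direction(box_heights, rel_threshold=0.05):
    """Classify a tracked pedestrian from bounding-box heights over frames.

    A box that grows by more than the relative threshold is treated as
    oncoming; one that shrinks is walking away in the same direction as
    the user; otherwise the pedestrian is considered static.
    """
    if len(box_heights) < 2:
        return "unknown"
    change = (box_heights[-1] - box_heights[0]) / box_heights[0]
    if change > rel_threshold:
        return "oncoming"
    if change < -rel_threshold:
        return "same_direction"
    return "static"
```

A real system would combine many such cues (gait period, trajectory) per the paper, but the thresholded scale change conveys the basic signal.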

Technologies ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 37 ◽  
Author(s):  
Mohamed Dhiaeddine Messaoudi ◽  
Bob-Antoine J. Menelas ◽  
Hamid Mcheick

According to statistics provided by the World Health Organization, approximately 1.3 billion people suffer from visual impairment. The number of blind and visually impaired people is expected to increase over the coming years and is estimated to triple by 2050, which is alarming. With the needs and problems of visually impaired people in mind, we propose a technological solution, a “Smart Cane” device, that helps people with sight impairment navigate with ease and avoid the risk factors surrounding them. Currently, the three main options available to blind people are the white cane, technological tools, and guide dogs. The solution proposed in this article combines various technological tools into a smart device that facilitates users’ lives. The designed system mainly aims to support indoor navigation using cloud computing and Internet of Things (IoT) wireless scanners, and is realized by integrating various hardware and software systems. The proposed Smart Cane aims to let visually impaired people move smoothly from one place to another and gives them a tool for communicating with their surrounding environment.


Author(s):  
Ramiz Salama ◽  
Ahmad Ayoub

Nowadays, blind or visually impaired people face many problems in their daily lives, since it is not easy for them to move around, which can be dangerous. According to the World Health Organization, there are about 37 million visually impaired people across the globe. People with these problems mostly depend on others, for example a friend or a trained guide dog, while moving outside. This motivated us to develop a smart stick to address the problem. The smart stick, integrated with an ultrasonic sensor, a buzzer and a vibrator, detects obstacles in the path of the blind person. The buzzer and vibration motor are activated when any obstacle is detected, alerting the user. This work proposes a low-cost ultrasonic smart blind stick so that blind people can move from one place to another in an easy, safe and independent manner. The system was designed and programmed using the C language. Keywords: Arduino Uno, Arduino IDE, ultrasonic sensor, buzzer, motor.
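The abstract does not include the stick's firmware, but the core calculation can be sketched. An HC-SR04-style ultrasonic sensor reports an echo pulse whose width is the sound's round-trip travel time, so distance is half the travel distance; the function names and the 50 cm alert threshold below are assumptions for illustration, not the authors' code:

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # speed of sound in air at ~20 °C

def echo_to_distance_cm(echo_us):
    """Convert an ultrasonic echo pulse width (microseconds) to distance (cm).

    The pulse covers the round trip to the obstacle and back,
    so the travel distance is halved.
    """
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2

def should_alert(echo_us, threshold_cm=50):
    """Decide whether the buzzer and vibration motor should fire."""
    return echo_to_distance_cm(echo_us) < threshold_cm
```

On the actual Arduino, the same arithmetic would run in C inside `loop()`, with the echo width read via `pulseIn()`.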


2016 ◽  
Vol 10 (1) ◽  
pp. 11-26 ◽  
Author(s):  
Wai Lun Khoo ◽  
Zhigang Zhu

Purpose – The purpose of this paper is to provide an overview of navigational assistive technologies with various sensor modalities and alternative perception approaches for visually impaired people. It also examines the input and output of each technology, and provides a comparison between systems. Design/methodology/approach – The contributing authors, along with their students, thoroughly read and reviewed the referenced papers under the guidance of domain experts and users, evaluating each paper/technology against a set of metrics adapted from universal and system design. Findings – After analyzing 13 multimodal assistive technologies, the authors found that the most popular sensors are optical, infrared, and ultrasonic. Similarly, the most popular actuators are audio and haptic. Furthermore, most systems use a combination of these sensors and actuators. Some systems are niche, while others strive to be universal. Research limitations/implications – This paper serves as a starting point for further research in benchmarking multimodal assistive technologies for the visually impaired and for eventually cultivating better assistive technologies for all. Social implications – According to 2012 World Health Organization data, there are 39 million blind people. This paper gives insight into the kinds of assistive technologies available to visually impaired people, whether on the market or in research labs. Originality/value – This paper provides a comparison across diverse visual assistive technologies. This is valuable to those who are developing assistive technologies and want to be aware of what is available as well as its pros and cons, and to those studying human-computer interfaces.


Author(s):  
Nachirat Rachburee ◽  
Wattana Punlumjeak

The World Health Organization (WHO) reported in 2019 that at least 2.2 billion people had vision impairment or blindness. A main problem for visually impaired people is the difficulty of moving around, whether indoors or outdoors, which puts their safety at risk. In this paper, we propose an assistive application model based on deep learning, YOLOv3 with a Darknet-53 base network, for visually impaired people on a smartphone. The Pascal VOC2007 and VOC2012 datasets were used for training, and the Pascal VOC2007 test set was used for validation. The assistive model was installed on a smartphone with an eSpeak synthesizer, which generates audio output for the user. Experimental results showed high speed as well as high detection accuracy. The proposed application, with the help of technology, will be an effective way to assist visually impaired people in interacting with their surrounding environment in daily life.
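The paper's YOLOv3 pipeline is not reproduced here, but the final step, turning raw detections into a phrase for a synthesizer such as eSpeak, can be sketched. This toy post-processing function (the name, threshold, and phrasing are illustrative assumptions, not the authors' implementation) keeps only confident detections and de-duplicates labels so the audio stays short:

```python
def detections_to_speech(detections, conf_threshold=0.5):
    """Turn (label, confidence) detections into a short announcement.

    Drops detections below the confidence threshold and repeats of the
    same label, mimicking what an app might feed to a text-to-speech
    engine after each processed frame.
    """
    seen, labels = set(), []
    for label, conf in detections:
        if conf >= conf_threshold and label not in seen:
            seen.add(label)
            labels.append(label)
    if not labels:
        return "nothing detected"
    return "detected " + ", ".join(labels)
```

For example, `detections_to_speech([("person", 0.9), ("chair", 0.4), ("person", 0.8), ("dog", 0.7)])` yields a single concise phrase naming the person and the dog.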


Author(s):  
Sriraksha Nayak ◽  
Chandrakala C B

According to World Health Organization estimates, the number of people with some visual impairment globally is about 285 million, of whom 39 million are blind. The inability to use features such as sending and reading email, schedule management, pathfinding or outdoor navigation, and reading SMS puts blind people at a disadvantage in many professional and educational situations. Speech and text analysis can help improve support for visually impaired people: a user can speak a command to perform a task, and the spoken command is interpreted by a Speech Recognition Engine (SRE), which converts it into text or performs the appropriate action. In this paper, an application that supports schedule management, emailing, and SMS reading entirely by voice command is proposed, implemented, and validated. The system lets blind users simply speak the desired functionality and be guided by the system’s audio instructions. The app is implemented to support three languages: English, Hindi, and Kannada.
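The dispatch step such a voice interface needs, mapping a recognized utterance to one of the supported features, can be sketched with simple keyword matching. The handler table and responses below are hypothetical placeholders, not the paper's implementation:

```python
def dispatch(command, handlers):
    """Route a recognized utterance to the first handler whose
    keyword appears in the (lower-cased) command text."""
    text = command.lower()
    for keyword, action in handlers.items():
        if keyword in text:
            return action(text)
    return "Sorry, command not recognized"

# Hypothetical feature handlers; real ones would open the email
# client, calendar, or SMS reader and then speak a confirmation.
handlers = {
    "email": lambda text: "opening email",
    "schedule": lambda text: "opening schedule",
    "read sms": lambda text: "reading messages",
}
```

In a multilingual app like the one described, a separate keyword table per language (English, Hindi, Kannada) could feed the same dispatcher.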


Author(s):  
Evania Joycelin Anthony ◽  
Regina Anastasia Kusnadi

Globally in 2010, the number of visually impaired people of all ages was estimated at 285 million, of whom 39 million were blind, according to a World Health Organization study (Global Data on Visual Impairments, 2010). Visual impairment has a significant impact on individuals’ quality of life, including their ability to work and to develop personal relationships. Almost half (48%) of the visually impaired feel “moderately” or “completely” cut off from the people and things around them (Hakobyan, Lumsden, O’Sullivan, & Bartlett, 2013). We believe that technology has the potential to enhance individuals’ ability to participate fully in societal activities and to live independently. In this paper, we therefore present a comprehensive literature review of the different computer vision algorithms for supporting blind/visually impaired people, the devices used, and the tasks supported. From the 13 eligible papers, we found positive effects of the use of computer vision for supporting visually impaired people. These include the detection of obstacles, objects, doors, text, traffic lights and signs, as well as navigation. The biggest challenge for developers now is to reduce processing time and improve accuracy, and we expect that in the future a complete package or solution will give blind or visually impaired people everything together (i.e., maps, indoor-outdoor navigation, object recognition, obstacle recognition, person recognition, human crowd behavior, crowd counting, study/reading, entertainment, etc.) in one piece of software on hand-held devices such as Android phones or other handy devices.


2015 ◽  
Vol 5 (3) ◽  
pp. 801-804
Author(s):  
M. Abdul-Niby ◽  
M. Alameen ◽  
O. Irscheid ◽  
M. Baidoun ◽  
H. Mourtada

In this paper, we present a low-cost, hands-free obstacle detection and avoidance system designed to provide mobility assistance to visually impaired people. An ultrasonic sensor attached to the user’s jacket detects obstacles ahead. The information obtained is conveyed to the user through audio messages and vibration. The detection range is user-defined. A text-to-speech module generates the voice signal. The proposed obstacle avoidance device is cost-effective, easy to use and easily upgraded.


Entropy ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. 941
Author(s):  
Rakesh Chandra Joshi ◽  
Saumya Yadav ◽  
Malay Kishore Dutta ◽  
Carlos M. Travieso-Gonzalez

Visually impaired people face numerous difficulties in their daily life, and technological interventions may help them meet these challenges. This paper proposes an artificial intelligence-based, fully automatic assistive technology that recognizes different objects and provides auditory feedback to the user in real time, giving the visually impaired person a better understanding of their surroundings. A deep-learning model is trained on multiple images of objects that are highly relevant to visually impaired people. Training images are augmented and manually annotated to make the trained model more robust. In addition to the computer vision-based object recognition, a distance-measuring sensor is integrated to make the device more comprehensive by recognizing obstacles while the user navigates from one place to another. The auditory information conveyed to the user after scene segmentation and obstacle identification is optimized to deliver more information in less time, allowing faster processing of video frames. The average accuracy of the proposed method is 95.19% for object detection and 99.69% for object recognition. The time complexity is low, allowing the user to perceive the surrounding scene in real time.
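One way the vision and distance-sensor outputs could be fused into a short audio message, in the spirit of the optimization step described above, is sketched below. All names and the two-item limit are illustrative assumptions, not the authors' implementation:

```python
def prioritize_announcements(objects, max_items=2):
    """Rank recognized objects by proximity and keep the closest few.

    `objects` is a list of (label, distance_cm) pairs, e.g. from pairing
    vision detections with distance-sensor readings. Capping the list
    keeps the spoken message short enough for real-time use.
    """
    ranked = sorted(objects, key=lambda obj: obj[1])
    return "; ".join(f"{label} at {dist} centimeters"
                     for label, dist in ranked[:max_items])
```

Announcing only the nearest obstacles first is one simple instance of the "more information in less time" trade-off the abstract mentions.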

