Computer Vision for Supporting Visually Impaired People: A Systematic Review

Author(s):  
Evania Joycelin Anthony ◽  
Regina Anastasia Kusnadi

In 2010, an estimated 285 million people of all ages worldwide were visually impaired, of whom 39 million were blind, according to the World Health Organization (Global Data on Visual Impairments, 2010). Visual impairment has a significant impact on individuals’ quality of life, including their ability to work and to develop personal relationships. Almost half (48%) of visually impaired people feel “moderately” or “completely” cut off from the people and things around them (Hakobyan, Lumsden, O’Sullivan, & Bartlett, 2013). We believe that technology has the potential to enhance individuals’ ability to participate fully in societal activities and to live independently. This paper therefore presents a comprehensive literature review of computer vision algorithms for supporting blind and visually impaired people, the devices used, and the tasks supported. From the 13 eligible papers, we found positive effects of the use of computer vision for supporting visually impaired people, including the detection of obstacles, objects, doors, text, traffic lights, and signs, as well as navigation. The biggest challenges for developers now are reducing processing time and improving accuracy. We expect that in the future, blind and visually impaired people will get a complete package in which all of these capabilities (maps, indoor and outdoor navigation, object recognition, obstacle recognition, person recognition, crowd behavior analysis, crowd counting, reading, entertainment, etc.) come together in a single piece of software on hand-held devices such as Android phones.

Sensors ◽  
2019 ◽  
Vol 19 (24) ◽  
pp. 5343 ◽  
Author(s):  
Yusuke Kajiwara ◽  
Haruhiko Kimura

It is difficult for visually impaired people to move around, both indoors and outdoors. In 2018, the World Health Organization (WHO) reported that about 253 million people around the world were moderately visually impaired in distance vision. Navigation systems that combine positioning and obstacle detection have been actively researched and developed. However, when these obstacle detection methods are used in high-traffic passages, their accuracy drops significantly because crowds of pedestrians cause occlusions that hide the shape and color of obstacles. To solve this problem, we developed an application, “Follow me!”, which recommends a safe route by applying machine learning to the gait and walking routes of many pedestrians captured in monocular smartphone camera images. In our experiment, pedestrians walking in the same direction as the visually impaired user, oncoming pedestrians, and steps were identified with an average accuracy of 0.92 based on the gait and walking routes of pedestrians acquired from monocular camera images. Furthermore, routes recommended on the basis of these identification results guided visually impaired people to a safe path with 100% accuracy. In addition, by walking along the recommended route, visually impaired people avoided obstacles that required a detour, such as construction sites and signage.


Technologies ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 37 ◽  
Author(s):  
Mohamed Dhiaeddine Messaoudi ◽  
Bob-Antoine J. Menelas ◽  
Hamid Mcheick

According to statistics provided by the World Health Organization, approximately 1.3 billion people suffer from visual impairment. The number of blind and visually impaired people is expected to increase over the coming years and is estimated to triple by 2050, which is alarming. With the needs and problems of visually impaired people in mind, we have developed a technological solution, a “Smart Cane” device, that helps people with sight impairment navigate with ease and avoid the risks surrounding them. Currently, the three main options available to blind people are a white cane, technological tools, and guide dogs. The solution proposed in this article combines various technological tools into a smart solution that makes users’ lives easier. The designed system mainly aims to facilitate indoor navigation using cloud computing and Internet of Things (IoT) wireless scanners. The Smart Cane is realized by integrating various hardware and software systems. The proposed device aims to let visually impaired people move smoothly from one place to another and gives them a tool for communicating with their surrounding environment.


Author(s):  
Ramiz Salama ◽  
Ahmad Ayoub

Nowadays, blind and visually impaired people face many problems in daily life, since it is not easy for them to move around, which can be dangerous. There are about 37 million visually impaired people across the globe according to the World Health Organization. People with these problems mostly depend on others, for example a friend or a trained dog, while moving outside. This motivated us to develop a smart stick to address the problem. The smart stick, integrated with an ultrasonic sensor, a buzzer, and a vibrator, can detect obstacles in the path of a blind person. The buzzer and vibration motor are activated when an obstacle is detected, alerting the user. This work proposes a low-cost ultrasonic smart blind stick that lets blind people move from one place to another in an easy, safe, and independent manner. The system was designed and programmed using the C language. Keywords: Arduino Uno, Arduino IDE, ultrasonic sensor, buzzer, motor.
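The obstacle-detection logic behind such a stick reduces to a small amount of arithmetic: an ultrasonic sensor reports the round-trip echo time of a pulse, and distance follows from the speed of sound. The sketch below illustrates that logic in Python (the paper's actual firmware is written in C for the Arduino Uno); the HC-SR04-style echo timing and the 100 cm alert threshold are assumptions for illustration, not values from the paper.

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s at 20 °C, in cm per microsecond
ALERT_THRESHOLD_CM = 100.0         # hypothetical alert distance, not from the paper

def echo_to_cm(echo_us):
    """Convert a round-trip ultrasonic echo time (microseconds) to one-way distance (cm)."""
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2.0

def obstacle_alert(echo_us):
    """True when the buzzer and vibration motor should fire."""
    return echo_to_cm(echo_us) < ALERT_THRESHOLD_CM

# An echo of ~2915 µs corresponds to an obstacle roughly 50 cm away.
print(echo_to_cm(2915.0), obstacle_alert(2915.0))
```

On real hardware the echo time comes from timing the sensor's echo pin; everything after that measurement is the arithmetic above.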


2016 ◽  
Vol 10 (1) ◽  
pp. 11-26 ◽  
Author(s):  
Wai Lun Khoo ◽  
Zhigang Zhu

Purpose – The purpose of this paper is to provide an overview of navigational assistive technologies with various sensor modalities and alternative perception approaches for visually impaired people. It also examines the input and output of each technology and provides a comparison between systems. Design/methodology/approach – The contributing authors, along with their students, thoroughly read and reviewed the referenced papers under the guidance of domain experts and users, evaluating each paper/technology against a set of metrics adapted from universal and system design. Findings – After analyzing 13 multimodal assistive technologies, the authors found that the most popular sensors are optical, infrared, and ultrasonic, and the most popular actuators are audio and haptic. Most systems use a combination of these sensors and actuators. Some systems are niche, while others strive to be universal. Research limitations/implications – This paper serves as a starting point for further research in benchmarking multimodal assistive technologies for the visually impaired and, eventually, for cultivating better assistive technologies for all. Social implications – According to 2012 World Health Organization data, there are 39 million blind people. This paper gives insight into the kinds of assistive technologies available to visually impaired people, whether on the market or in research labs. Originality/value – This paper provides a comparison across diverse visual assistive technologies. This is valuable to those who are developing assistive technologies and want to know what is available, along with its pros and cons, and to the study of human-computer interfaces.


Author(s):  
Sriraksha Nayak ◽  
Chandrakala C B

According to World Health Organization estimates, the number of people worldwide with some visual impairment is 285 million, of whom 39 million are blind. The inability to use features such as sending and reading email, schedule management, pathfinding or outdoor navigation, and reading SMS puts blind people at a disadvantage in many professional and educational situations. Speech and text analysis can help improve support for visually impaired people: users can speak a command to perform a task. The spoken command is interpreted by a Speech Recognition Engine (SRE) and converted into text or into suitable actions. In this paper, an application that allows schedule management, emailing, and SMS reading entirely by voice command is proposed, implemented, and validated. The system lets blind users simply speak the desired functionality and then guides them with audio instructions. The app is implemented to support three languages: English, Hindi, and Kannada.
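The command-to-action step this abstract describes, SRE output routed to emailing, SMS reading, or schedule management, can be sketched as a simple dispatch table. The phrase strings and handler names below are hypothetical, assumed for illustration; the paper does not list its actual command vocabulary or how it handles Hindi and Kannada phrases.

```python
def read_sms():
    return "reading latest SMS aloud"

def send_email():
    return "starting email dictation"

def add_event():
    return "adding calendar event"

# Hypothetical phrase-to-action table; real systems would hold one table
# per supported language (here, English, Hindi, and Kannada).
COMMANDS = {
    "read sms": read_sms,
    "send email": send_email,
    "add event": add_event,
}

def dispatch(recognized_text):
    """Route SRE output to an action; unknown input yields an audio re-prompt."""
    action = COMMANDS.get(recognized_text.strip().lower())
    return action() if action else "command not recognized, please repeat"

print(dispatch("Read SMS"))
```

Normalizing the recognized text before lookup keeps the table small; the fallback branch is where the system's guiding audio instructions would be spoken.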


Author(s):  
Yoshiharu Soeta ◽  
Ayaka Ariki

Birdsong is used to communicate the position of stairwells to visually impaired people in train stations in Japan. However, more than 40% of visually impaired people reported that such sounds were difficult to identify. Train companies seek to present the sounds at a sound pressure level that is loud enough to be detected, but not so loud as to be annoying. Therefore, salient birdsongs with relatively low sound pressure levels are required. In the current study, we examined the salience of different types of birdsong and insect song, and determined the dominant physical parameters related to salience. We considered insect songs because both birdsongs and insect songs have been found to have positive effects on soundscapes. We evaluated the subjective salience of birdsongs and insect songs using paired-comparison methods, and examined the relationships between subjective salience and physical parameters. In total, 62 participants evaluated 18 types of birdsong and 16 types of insect song. The results indicated that the following features significantly influenced subjective salience: the maximum peak amplitude of the autocorrelation function, which signifies pitch strength; the interaural cross-correlation coefficient, which signifies apparent source width; the amplitude fluctuation component; and spectral content, such as flux and skewness.
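The first of those parameters, the maximum peak amplitude of the autocorrelation function (ACF), can be computed directly from a signal. The sketch below is a minimal illustration, not the authors' analysis code: it finds the normalized ACF peak and its lag for a pure tone (strong pitch, peak near 1) and for Gaussian noise (no pitch, peak near 0). The sampling rate and lag search range are assumptions chosen for the example.

```python
import math
import random

def acf_peak(x, min_lag, max_lag):
    """Return (lag, height) of the maximum normalized ACF peak over the lag
    range; the peak height is a standard pitch-strength measure."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    c0 = sum(v * v for v in x)  # zero-lag energy, used for normalization
    best_lag, best = min_lag, -1.0
    for k in range(min_lag, max_lag + 1):
        c = sum(x[i] * x[i + k] for i in range(n - k)) / c0
        if c > best:
            best, best_lag = c, k
    return best_lag, best

fs = 8000  # assumed sampling rate (Hz)
tone = [math.sin(2 * math.pi * 440 * i / fs) for i in range(fs)]  # 1 s, 440 Hz
rng = random.Random(0)
noise = [rng.gauss(0.0, 1.0) for _ in range(fs)]                  # 1 s of noise

lag_t, phi_t = acf_peak(tone, 5, 200)   # peak lands near fs/440 ≈ 18 samples
lag_n, phi_n = acf_peak(noise, 5, 200)  # no periodic peak to find
print(f"tone: lag={lag_t}, peak={phi_t:.3f}; noise: peak={phi_n:.3f}")
```

The peak lag estimates the period (hence perceived pitch), while the peak height separates strongly pitched sounds like birdsong from noise-like insect songs.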


Author(s):  
Ali Hojjat

Indoor navigation systems must deal with the absence of GPS signals, which are only available in outdoor environments; indoor systems therefore have to rely on other positioning techniques. Recently, various indoor navigation systems have been designed and developed to help visually impaired people. In this paper, an overview of some existing indoor navigation systems for visually impaired people is presented, and the systems are compared from different perspectives. The evaluated techniques are ultrasonic systems, RFID-based solutions, computer-vision-aided navigation systems, and smartphone-based applications.

