Multimodal and alternative perception for the visually impaired: a survey

2016 ◽  
Vol 10 (1) ◽  
pp. 11-26 ◽  
Author(s):  
Wai Lun Khoo ◽  
Zhigang Zhu

Purpose – The purpose of this paper is to provide an overview of navigational assistive technologies with various sensor modalities and alternative perception approaches for visually impaired people. It also examines the input and output of each technology and provides a comparison between systems. Design/methodology/approach – The contributing authors, along with their students and under the guidance of domain experts and users, thoroughly read and reviewed the referenced papers, evaluating each paper/technology against a set of metrics adapted from universal and system design. Findings – After analyzing 13 multimodal assistive technologies, the authors found that the most popular sensors are optical, infrared, and ultrasonic. Similarly, the most popular actuators are audio and haptic. Furthermore, most systems use a combination of these sensors and actuators. Some systems are niche, while others strive to be universal. Research limitations/implications – This paper serves as a starting point for further research in benchmarking multimodal assistive technologies for the visually impaired and for eventually cultivating better assistive technologies for all. Social implications – According to 2012 World Health Organization figures, there are 39 million blind people. This paper offers insight into the kinds of assistive technologies available to visually impaired people, whether on the market or in research labs. Originality/value – This paper provides a comparison across diverse visual assistive technologies. This is valuable to those who are developing assistive technologies and want to know what is available, along with the pros and cons of each, as well as to the study of human-computer interfaces.

Sensors ◽  
2019 ◽  
Vol 19 (24) ◽  
pp. 5343 ◽  
Author(s):  
Yusuke Kajiwara ◽  
Haruhiko Kimura

It is difficult for visually impaired people to move around, both indoors and outdoors. In 2018, the World Health Organization (WHO) reported that about 253 million people around the world were moderately visually impaired in distance vision. Navigation systems that combine positioning and obstacle detection have been actively researched and developed. However, when these obstacle detection methods are used in high-traffic passages, their accuracy drops significantly because the many pedestrians cause occlusions that hide the shape and color of obstacles. To solve this problem, we developed an application, "Follow me!". The application recommends a safe route by applying machine learning to the gaits and walking routes of many pedestrians, obtained from a smartphone's monocular camera images. In our experiment, pedestrians walking in the same direction as the visually impaired user, oncoming pedestrians, and steps were identified with an average accuracy of 0.92 based on the gaits and walking routes acquired from the monocular camera images. Furthermore, the routes recommended on the basis of these identification results guided the visually impaired participants to a safe route with 100% accuracy. In addition, by walking along the recommended route, visually impaired people avoided obstacles that required detours, such as construction areas and signage.
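The abstract describes a machine-learning pipeline over pedestrians' gaits and walking routes extracted from monocular camera images; the full pipeline is beyond a short example, but the toy C sketch below illustrates one plausible cue, telling same-direction pedestrians (candidates to follow) from oncoming ones by comparing displacement vectors. The vectors and thresholds are invented for illustration and are not taken from the paper.

```c
/* Toy illustration (not the paper's pipeline): classify a tracked
 * pedestrian as "same direction" or "oncoming" by comparing the
 * pedestrian's displacement with the user's heading. The real system
 * learns this from gait and walking-route features. */
#include <stdio.h>
#include <math.h>

typedef struct { double x, y; } Vec2;

/* Cosine of the angle between two displacement vectors. */
static double heading_cosine(Vec2 a, Vec2 b) {
    double dot = a.x * b.x + a.y * b.y;
    double na = hypot(a.x, a.y), nb = hypot(b.x, b.y);
    if (na == 0.0 || nb == 0.0) return 0.0;   /* no motion: e.g. a step or sign */
    return dot / (na * nb);
}

int main(void) {
    Vec2 user_motion = {0.0, 1.0};          /* user walks "up" the corridor */
    Vec2 pedestrian_motion = {0.1, -0.9};   /* hypothetical tracked displacement */

    double c = heading_cosine(user_motion, pedestrian_motion);
    if (c > 0.5)
        puts("same direction: candidate to follow");
    else if (c < -0.5)
        puts("oncoming pedestrian: avoid this lane");
    else
        puts("crossing pedestrian or stationary object");
    return 0;
}
```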


Technologies ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 37 ◽  
Author(s):  
Mohamed Dhiaeddine Messaoudi ◽  
Bob-Antoine J. Menelas ◽  
Hamid Mcheick

According to statistics provided by the World Health Organization, approximately 1.3 billion people suffer from some form of visual impairment. The number of blind and visually impaired people is expected to increase over the coming years and is estimated to triple by 2050, which is alarming. With the needs and problems faced by visually impaired people in mind, we have developed a technological solution, a "Smart Cane" device, that can help people with sight impairment navigate with ease and avoid the risks surrounding them. Currently, the three main options available to blind people are a white cane, technological tools, and guide dogs. The solution proposed in this article combines various technological tools into a smart system that facilitates users' lives. The designed system mainly aims to facilitate indoor navigation using cloud computing and Internet of Things (IoT) wireless scanners, and it is realized by integrating various hardware and software components. The proposed Smart Cane device aims to let visually impaired people move smoothly from one place to another and gives them a tool for interacting with their surrounding environment.
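The article builds indoor navigation on cloud computing and IoT wireless scanners; one common way such scanner readings become a position estimate is RSSI fingerprinting, sketched minimally below in C. The beacon count, fingerprint values, and zone names are hypothetical, and the actual Smart Cane architecture may differ.

```c
/* Minimal RSSI-fingerprinting sketch (hypothetical data; not the
 * article's implementation). A live scan of the wireless beacons is
 * matched against stored fingerprints to name the nearest zone. */
#include <stdio.h>
#include <float.h>

#define NUM_BEACONS 3
#define NUM_ZONES   3

typedef struct {
    const char *zone;
    double rssi[NUM_BEACONS];   /* expected signal strength per beacon, dBm */
} Fingerprint;

static const Fingerprint fingerprints[NUM_ZONES] = {
    { "entrance hall", { -45.0, -70.0, -82.0 } },
    { "corridor A",    { -63.0, -52.0, -75.0 } },
    { "elevator bay",  { -80.0, -66.0, -50.0 } },
};

/* Squared Euclidean distance in RSSI space. */
static double rssi_distance(const double *a, const double *b) {
    double d = 0.0;
    for (int i = 0; i < NUM_BEACONS; i++) {
        double diff = a[i] - b[i];
        d += diff * diff;
    }
    return d;
}

int main(void) {
    double live_scan[NUM_BEACONS] = { -61.0, -55.0, -77.0 };  /* from the cane's scanner */
    const char *best_zone = "unknown";
    double best = DBL_MAX;

    for (int z = 0; z < NUM_ZONES; z++) {
        double d = rssi_distance(live_scan, fingerprints[z].rssi);
        if (d < best) { best = d; best_zone = fingerprints[z].zone; }
    }
    printf("Estimated location: %s\n", best_zone);   /* would be spoken to the user */
    return 0;
}
```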


Author(s):  
Ramiz Salama ◽  
Ahmad Ayoub

Blind or visually impaired people face many problems in their daily lives, since moving about is not easy for them and can be dangerous. According to the World Health Organization, there are about 37 million visually impaired people across the globe. People with these problems mostly depend on others, for example a friend or a trained guide dog, while moving outside. This motivated us to develop a smart stick to address the problem. The smart stick, integrated with an ultrasonic sensor, a buzzer, and a vibration motor, can detect obstacles in the path of blind people. The buzzer and vibration motor are activated when an obstacle is detected, alerting the blind person. This work proposes a low-cost ultrasonic smart blind stick so that blind people can move from one place to another in an easy, safe, and independent manner. The system was designed and programmed using the C language. Keywords: Arduino Uno, Arduino IDE, ultrasonic sensor, buzzer, motor.
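As a rough illustration of the sensing loop such a stick might run, the Arduino-style C sketch below reads an HC-SR04-type ultrasonic sensor and switches on a buzzer and a vibration motor when an obstacle comes within a threshold distance. The pin assignments, sensor model, and 100 cm threshold are assumptions made for illustration, not details taken from the paper.

```c
/* Illustrative Arduino sketch (pins, sensor model and threshold are
 * assumptions, not from the paper): alert with buzzer + vibration
 * when the ultrasonic sensor reports an obstacle within ~100 cm. */
const int TRIG_PIN   = 9;    /* HC-SR04 trigger */
const int ECHO_PIN   = 10;   /* HC-SR04 echo */
const int BUZZER_PIN = 11;   /* active buzzer */
const int MOTOR_PIN  = 12;   /* vibration motor via transistor driver */
const long ALERT_CM  = 100;

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(BUZZER_PIN, OUTPUT);
  pinMode(MOTOR_PIN, OUTPUT);
}

long readDistanceCm() {
  /* 10 us trigger pulse, then time the echo; ~58 us per cm round trip. */
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000UL);  /* timeout -> 0: no echo */
  return (duration > 0) ? duration / 58 : -1;
}

void loop() {
  long cm = readDistanceCm();
  int obstacle = (cm > 0 && cm < ALERT_CM);
  digitalWrite(BUZZER_PIN, obstacle ? HIGH : LOW);
  digitalWrite(MOTOR_PIN,  obstacle ? HIGH : LOW);
  delay(100);   /* re-sample roughly ten times per second */
}
```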


Author(s):  
Sriraksha Nayak ◽  
Chandrakala C B

According to World Health Organization estimates, 285 million people worldwide have some visual impairment, of whom 39 million are blind. The inability to use features such as sending and reading email, managing schedules, pathfinding or outdoor navigation, and reading SMS is a disadvantage for blind people in many professional and educational situations. Speech and text analysis can help improve support for visually impaired people. Users can speak a command to perform a task: the spoken command is interpreted by a Speech Recognition Engine (SRE), converted into text, and mapped to a suitable action. In this paper, an application that allows schedule management, emailing, and SMS reading entirely by voice command is proposed, implemented, and validated. The system lets blind people simply speak the desired functionality and be guided by the system's audio instructions. The app is implemented to support three languages: English, Hindi, and Kannada.
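The speech recognition engine itself is too large for a short example, but the dispatch step described above, mapping a recognized transcript to an application action, can be sketched as follows in C. The command phrases and handler functions are hypothetical and do not reflect the authors' actual API.

```c
/* Hypothetical sketch of the dispatch step only: the speech recognition
 * engine (not shown) is assumed to have already produced a normalized,
 * lowercase transcript, which is mapped to an application action. */
#include <stdio.h>
#include <string.h>

static void read_sms(void)        { puts("Reading unread SMS aloud..."); }
static void compose_email(void)   { puts("Starting voice-guided email..."); }
static void manage_schedule(void) { puts("Opening today's schedule..."); }

typedef struct {
    const char *command;      /* recognized phrase, already normalized */
    void (*action)(void);
} CommandEntry;

static const CommandEntry commands[] = {
    { "read sms",      read_sms },
    { "send email",    compose_email },
    { "open schedule", manage_schedule },
};

static void dispatch(const char *transcript) {
    for (size_t i = 0; i < sizeof commands / sizeof commands[0]; i++) {
        if (strcmp(transcript, commands[i].command) == 0) {
            commands[i].action();
            return;
        }
    }
    puts("Command not recognized; please repeat.");   /* spoken back to the user */
}

int main(void) {
    dispatch("read sms");      /* transcript as produced by the SRE */
    dispatch("send email");
    return 0;
}
```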


Author(s):  
Evania Joycelin Anthony ◽  
Regina Anastasia Kusnadi

In 2010, the number of visually impaired people of all ages worldwide was estimated at 285 million, of whom 39 million were blind, according to the World Health Organization study Global Data on Visual Impairments (2010). Visual impairment has a significant impact on individuals' quality of life, including their ability to work and to develop personal relationships. Almost half (48%) of visually impaired people feel "moderately" or "completely" cut off from people and things around them (Hakobyan, Lumsden, O'Sullivan, & Bartlett, 2013). We believe that technology has the potential to enhance individuals' ability to participate fully in societal activities and to live independently. In this paper we therefore present a comprehensive literature review of computer vision algorithms for supporting blind or visually impaired people, the devices used, and the tasks supported. From the 13 eligible papers, we found positive effects of the use of computer vision for supporting visually impaired people. These effects included the detection of obstacles, objects, doors, text, traffic lights, and signs, as well as navigation. The biggest challenge for developers now is to reduce processing time and improve accuracy, and we expect that future work will offer a complete package in which blind or visually impaired people get everything together (i.e., maps, indoor and outdoor navigation, object recognition, obstacle recognition, person recognition, crowd behavior analysis, crowd counting, study/reading support, entertainment, etc.) in a single piece of software on hand-held devices such as Android phones or other handy devices.


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4767
Author(s):  
Karla Miriam Reyes Leiva ◽  
Milagros Jaén-Vargas ◽  
Benito Codina ◽  
José Javier Serrano Olmedo

A diverse array of assistive technologies has been developed to help Visually Impaired People (VIP) face many basic challenges to daily autonomy. Inertial measurement unit sensors, meanwhile, have been used for navigation, guidance, and localization, but especially for full-body motion tracking, because their low cost and miniaturization allow the estimation of kinematic parameters and biomechanical analysis in different fields of application. The aim of this work was to present a comprehensive review of assistive technologies for VIP that take inertial sensors as input, covering the technical characteristics of the inertial sensors, the methodologies applied, and the specific role of the sensors in each developed system. The results show that there are only a few inertial sensor-based systems; however, these sensors provide essential information when combined with optical sensors and radio signals for navigation and for special application fields. The discussion includes new avenues of research, missing elements, and usability analysis, since a limitation evident in the selected articles is the lack of user-centered designs. Finally, regarding application fields, a gap exists in the literature on aids for the rehabilitation and biomechanical analysis of VIP: most of the findings focus on navigation and obstacle detection, and this should be considered in future applications.
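As a minimal illustration of how raw inertial readings become a kinematic parameter, the C sketch below fuses gyroscope and accelerometer samples into a pitch estimate with a textbook complementary filter. It is not drawn from any of the surveyed systems, and the sample rate, blending constant, and readings are invented.

```c
/* Textbook complementary filter (not from any surveyed system): blend
 * integrated gyro rate with accelerometer tilt to estimate pitch. */
#include <stdio.h>
#include <math.h>

#define ALPHA 0.98   /* trust in the gyro over one time step */
#define DT    0.01   /* sample period in seconds (100 Hz IMU assumed) */

/* One filter update: gyro_rate in rad/s, accelerations in m/s^2. */
static double update_pitch(double pitch, double gyro_rate,
                           double acc_x, double acc_z) {
    double acc_pitch  = atan2(acc_x, acc_z);     /* tilt inferred from gravity */
    double gyro_pitch = pitch + gyro_rate * DT;  /* integrate angular rate */
    return ALPHA * gyro_pitch + (1.0 - ALPHA) * acc_pitch;
}

int main(void) {
    double pitch = 0.0;
    /* Invented readings: a slow forward lean with slight drift. */
    for (int k = 0; k < 5; k++) {
        double gyro  = 0.05;              /* rad/s */
        double acc_x = 0.3 + 0.01 * k;    /* m/s^2 */
        double acc_z = 9.8;               /* m/s^2, roughly gravity */
        pitch = update_pitch(pitch, gyro, acc_x, acc_z);
        printf("step %d: pitch = %.3f rad\n", k, pitch);
    }
    return 0;
}
```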


2014 ◽  
Vol 32 (2) ◽  
pp. 368-379 ◽  
Author(s):  
Stefanus Andreas Kleynhans ◽  
Ina Fourie

Purpose – The paper addresses the importance of clarifying terminology such as visually impaired and related terms before embarking on accessibility studies of electronic information resources in library contexts. Apart from briefly defining accessibility, the paper attempts to address the lack of in-depth definitions of terms such as visually impaired, blind, partially sighted, etc. that has been noted in the literature indexed by two major Library and Information Science (LIS) databases. The purpose of this paper is to offer a basis for selecting participants in studies of the accessibility of electronic information resources in library contexts and to put discussions of such studies in context. Design/methodology/approach – Concepts concerning visual impairment are clarified following a literature survey based on searches of two major LIS databases. To put the discussion in context, accessibility is also briefly defined. Findings – Although visually impaired and a variety of related terms such as blind, partially sighted, visually disabled, etc. are used in the LIS literature, hardly any attempt is made to define these terms in depth. This can be a serious limitation in web and electronic accessibility evaluations and in the selection of participants. Practical implications – Clearly distinguishing between categories of visually impaired people and participants' degree of sight is important when selecting participants for studies on accessibility for visually impaired people, e.g. the accessibility evaluation of websites, digital libraries and other electronic information resources. Originality/value – The paper can contribute to the clarification of terminology essential for the selection of participants in accessibility studies, as well as enriching the literature on accessibility for visually impaired people in the context of LIS.


Author(s):  
Najd Al-Mouh ◽  
Hend S. Al-Khalifa

Purpose – This paper aims to investigate the accessibility and usage of mobile smartphones by Arabic-speaking visually impaired people in Saudi Arabia. Design/methodology/approach – In total, 104 participants with visual impairments were interviewed about their use of mobile phones with the following questions: What is the most commonly used mobile phone? For which domains do they most often use mobile phones? What are their favorite applications? What accessibility challenges do they usually face while using mobile phones? How often do they use the Internet via mobile phones, and what are the reasons behind that? Findings – This research is the first study of such magnitude to investigate smartphone usage by Arabic-speaking visually impaired people. The survey revealed that Arabic-speaking visually impaired people use mobile phones in a variety of ways and with different strategies. Getting assistance in performing daily tasks and navigating independently are two of the most common uses of mobile phones. Originality/value – Based on the findings, the authors will propose guidelines to help developers improve smartphone accessibility, application design, and Internet usage for visually impaired people.

