Wearable Travel Aid for Environment Perception and Navigation of Visually Impaired People

Electronics ◽  
2019 ◽  
Vol 8 (6) ◽  
pp. 697 ◽  
Author(s):  
Jinqiang Bai ◽  
Zhaoxiang Liu ◽  
Yimin Lin ◽  
Ye Li ◽  
Shiguo Lian ◽  
...  

Assistive devices for visually impaired people (VIP) which support daily traveling and improve social inclusion are developing fast. Most of them try to solve the problem of navigation or obstacle avoidance, while other works focus on helping VIP recognize their surrounding objects. However, very few of them couple both capabilities (i.e., navigation and recognition). Aiming at the above needs, this paper presents a wearable assistive device that allows VIP to (i) navigate safely and quickly in unfamiliar environments, and (ii) recognize objects in both indoor and outdoor environments. The device consists of a consumer Red, Green, Blue and Depth (RGB-D) camera and an Inertial Measurement Unit (IMU), which are mounted on a pair of eyeglasses, and a smartphone. The device leverages the continuity of the ground height among adjacent image frames to segment the ground accurately and rapidly, and then searches for a walkable moving direction over the segmented ground. A lightweight Convolutional Neural Network (CNN)-based object recognition system is developed and deployed on the smartphone to increase the perception ability of VIP and support the navigation system. It can provide semantic information about the surroundings, such as the categories, locations, and orientations of objects. Human–machine interaction is performed through an audio module (a beeping sound for obstacle alerts, speech recognition for understanding user commands, and speech synthesis for expressing the semantic information of the surroundings). We evaluated the performance of the proposed system through many experiments conducted in both indoor and outdoor scenarios, demonstrating the efficiency and safety of the proposed assistive system.
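As a rough illustration of the ground-segmentation idea described above (a minimal sketch, not the authors' implementation), the following Python snippet back-projects a depth frame with known camera intrinsics, uses the IMU pitch to align heights with gravity, and keeps pixels whose height stays close to the ground height estimated in the previous frame; all parameter names are illustrative:

```python
import numpy as np

def segment_ground(depth, fy, cy, pitch_rad, cam_height_m,
                   prev_ground_height=0.0, tol_m=0.05):
    """Label pixels whose back-projected height lies near the ground plane.

    depth: HxW array in meters (0 marks invalid pixels).
    prev_ground_height: ground height estimated from the previous frame,
    which expresses the frame-to-frame height-continuity assumption.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project pixels with the pinhole model (the lateral coordinate
    # is not needed for a height test, so only y and z are computed).
    y = (vs - cy) * depth / fy
    z = depth
    # Rotate by the IMU pitch so "down" aligns with gravity; exact signs
    # depend on the camera/IMU mounting convention.
    y_down = y * np.cos(pitch_rad) + z * np.sin(pitch_rad)
    # Height relative to the ground plane, camera cam_height_m above it.
    height = cam_height_m - y_down
    ground = (depth > 0) & (np.abs(height - prev_ground_height) < tol_m)
    new_ground_height = (float(np.median(height[ground]))
                         if ground.any() else prev_ground_height)
    return ground, new_ground_height
```

A moving direction can then be chosen, for example, as the image column with the longest run of ground pixels ahead of the user.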

Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4767
Author(s):  
Karla Miriam Reyes Leiva ◽  
Milagros Jaén-Vargas ◽  
Benito Codina ◽  
José Javier Serrano Olmedo

A diverse array of assistive technologies has been developed to help Visually Impaired People (VIP) face many basic daily autonomy challenges. Inertial measurement unit (IMU) sensors, on the other hand, have been used for navigation, guidance, and localization, but especially for full-body motion tracking due to their low cost and miniaturization, which have allowed the estimation of kinematic parameters and biomechanical analysis for different fields of application. The aim of this work was to present a comprehensive review of assistive technologies for VIP that include inertial sensors as input, reporting on the technical characteristics of the inertial sensors, the methodologies applied, and their specific role in each developed system. The results show that there are only a few inertial sensor-based systems. However, these sensors provide essential information when combined with optical sensors and radio signals for navigation and for special application fields. The discussion covers new avenues of research, missing elements, and usability analysis, since a limitation evidenced in the selected articles is the lack of user-centered designs. Finally, regarding application fields, a gap exists in the literature on aids for rehabilitation and biomechanical analysis of VIP. Most of the findings focus on navigation and obstacle detection, and this should be considered for future applications.
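As one concrete example of the kind of kinematic-parameter estimation the review discusses (a minimal sketch, not taken from any specific surveyed system), a complementary filter fuses gyroscope integration with an accelerometer tilt estimate to track a pitch angle:

```python
import math

def complementary_pitch(pitch_prev, gyro_y_rad_s, accel_x, accel_z,
                        dt, alpha=0.98):
    """One complementary-filter step: fuse gyro and accelerometer pitch."""
    pitch_gyro = pitch_prev + gyro_y_rad_s * dt      # integrate angular rate
    pitch_accel = math.atan2(-accel_x, accel_z)      # tilt from gravity
    # Trust the gyro short-term and the accelerometer long-term.
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_accel
```

The same pattern generalizes to full-body motion tracking when applied to each joint-mounted IMU.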


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3061
Author(s):  
Alice Lo Valvo ◽  
Daniele Croce ◽  
Domenico Garlisi ◽  
Fabrizio Giuliano ◽  
Laura Giarré ◽  
...  

In recent years, we have witnessed impressive advances in augmented reality systems and computer vision algorithms, based on image processing and artificial intelligence. Thanks to these technologies, mainstream smartphones are able to estimate their own motion in 3D space with high accuracy. In this paper, we exploit such technologies to support the autonomous mobility of people with visual disabilities, identifying pre-defined virtual paths and providing context information, thus reducing the distance between the digital and real worlds. In particular, we present ARIANNA+, an extension of ARIANNA, a system explicitly designed for visually impaired people for indoor and outdoor localization and navigation. While ARIANNA is based on the assumption that landmarks, such as QR codes, and physical paths (composed of colored tapes, painted lines, or tactile pavings) are deployed in the environment and recognized by the camera of a common smartphone, ARIANNA+ eliminates the need for any physical support thanks to the ARKit library, which we exploit to build a completely virtual path. Moreover, ARIANNA+ allows users to interact more richly with the surrounding environment, through convolutional neural networks (CNNs) trained to recognize objects or buildings, enabling access to the content associated with them. By using a common smartphone as a mediation instrument with the environment, ARIANNA+ leverages augmented reality and machine learning to enhance physical accessibility. The proposed system allows visually impaired people to easily navigate in indoor and outdoor scenarios simply by loading a previously recorded virtual path, and provides automatic guidance along the route through haptic, speech, and sound feedback.
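To make the guidance step concrete (a minimal sketch, not the ARIANNA+ code), the following function follows a previously recorded virtual path: given the current 2D pose reported by the AR tracker, it consumes waypoints as they are reached and returns the signed turn that the haptic, speech, or sound feedback should convey; names and the reach radius are illustrative:

```python
import math

def guidance(path, x, y, heading_rad, reach_radius=0.5):
    """path: list of (x, y) waypoints in meters, consumed in place.

    Returns (done, turn_rad); with a counterclockwise-positive heading,
    turn_rad > 0 means turn left.
    """
    # Drop waypoints the user has already reached.
    while path and math.hypot(path[0][0] - x, path[0][1] - y) < reach_radius:
        path.pop(0)
    if not path:
        return True, 0.0
    tx, ty = path[0]
    bearing = math.atan2(ty - y, tx - x)
    # Signed turn toward the next waypoint, wrapped to (-pi, pi].
    turn = (bearing - heading_rad + math.pi) % (2 * math.pi) - math.pi
    return False, turn
```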


The evolution of technology takes education to the next level, making the learning process more interesting and attractive. Virtual reality plays an important role in this evolution. The main aim of this work is to enhance students' learning ability in a virtual environment by developing an education-based game. In this work, the virtual reality device, the Wii Remote, is used for the learning process and for answering questions at the different levels of the game. The learning process also involves speech synthesis, which helps visually impaired people learn without help from others and motivates even average students to participate more actively in the learning process. The game is further divided into easy, medium, and difficult levels, so the learning ability of each student can be easily tested and further steps can be taken to motivate them and optimize their learning skills. Thus, this work motivates students to learn and elevates their learning ability.
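As a rough sketch of how such a quiz level might pair speech synthesis with scored questions (the paper does not name its speech engine; pyttsx3 and the sample questions below are assumptions for illustration):

```python
import pyttsx3  # off-the-shelf offline text-to-speech library

LEVELS = {
    "easy": [("What is 2 plus 2?", "4")],
    "medium": [("What is 12 times 12?", "144")],
    "difficult": [("What is the square root of 169?", "13")],
}

def play_level(level, engine):
    """Read each question aloud, score typed answers, speak feedback."""
    score = 0
    for question, answer in LEVELS[level]:
        engine.say(question)
        engine.runAndWait()
        if input("Answer: ").strip() == answer:
            engine.say("Correct!")
            score += 1
        else:
            engine.say(f"The answer was {answer}.")
        engine.runAndWait()
    return score

engine = pyttsx3.init()
print("Score:", play_level("easy", engine))
```

In the actual system, the typed answer would be replaced by Wii Remote button or pointer input.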


Author(s):  
Karine Lan HingTing ◽  
Ines Di Loreto

This article describes the participatory design (PD) approach adopted in systematically involving visually impaired people in the design of an art exhibition adapted to their needs. This exhibition will be the outcome of a publicly funded research project aimed at making visual art accessible to everyone: specifically (but not exclusively) to visually impaired people, with an objective of social inclusion. This article presents the research done to elicit, capture, and analyse the needs of visually impaired people, who are the active actors of this research. The aim of the article is to trigger discussion about both the necessity and the difficulty of elaborating relevant techniques in this empirical and open-ended approach, and about what is meant by participation in this particular setting.


Author(s):  
Puru Malhotra ◽  
Vinay Kumar Saini

The paper is aimed at the design of a mobility assistive device to help the visually impaired. The traditional use of a walking stick presents its own drawbacks and limitations. Our research is motivated by the mobility difficulties of visually impaired people, and we have made an attempt to restore their independence and spare them the trouble of carrying a stick around. We offer hands-free wearable glasses that find their utility in real-time navigation. The design of the smart glasses integrates various sensors with a Raspberry Pi. The paper presents a detailed account of the various components and the structural design of the glasses. The novelty of our work lies in providing a complete pipeline for real-time analysis of the surroundings, and hence a better solution for navigating day-to-day activities using audio instructions as output.
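For illustration, here is a minimal sketch of one plausible sensor integration (the paper says only "various sensors"; the HC-SR04 ultrasonic ranger and pin numbers below are assumptions) reading obstacle distance on a Raspberry Pi:

```python
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24  # illustrative BCM pin numbers

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm():
    """Return the distance to the nearest obstacle in centimeters."""
    # A 10-microsecond pulse on TRIG starts one measurement.
    GPIO.output(TRIG, True)
    time.sleep(1e-5)
    GPIO.output(TRIG, False)
    start = end = time.time()
    # ECHO stays high for the ultrasonic burst's time of flight.
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()
    return (end - start) * 34300 / 2  # speed of sound, out and back

if distance_cm() < 100:
    print("Obstacle within one meter")  # would trigger an audio cue
```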


Author(s):  
Syed Tehzeeb Alam ◽  
Sonal Shrivastava ◽  
Syed Tanzim Alam ◽  
R. Sasikala ◽  

2019 ◽  
Vol 8 (4) ◽  
pp. 1436-1440

There is increasing demand for smart widgets that make people's lives more comfortable. Although much research has been done, the existing devices/systems do not provide visually impaired people with enough facilities. Visually impaired people can read only Braille-scripted books, so we develop a new device that assists them and also provides a reading facility in their desired language. This smart assistive device will help visually impaired people gain increased independence and freedom in society. The device has an obstacle detection sensor to announce obstacles to the visually impaired person, and a camera connected to a Raspberry Pi to convert images to text using Optical Character Recognition (OCR). The read data is converted to speech using a text-to-speech synthesizer. This is useful for visually impaired people both for getting around in outdoor environments and for reading books printed in normal script. The read data can be stored in a database for further reading and retrieved by giving a command.
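The image-to-text-to-speech pipeline described above can be sketched in a few lines (assuming the open-source Tesseract engine via pytesseract and the pyttsx3 speech library; the paper names only "OCR" and a "text to speech synthesizer"):

```python
import pytesseract
import pyttsx3
from PIL import Image

def read_aloud(image_path, lang="eng"):
    """OCR a captured page image and speak the recognized text."""
    text = pytesseract.image_to_string(Image.open(image_path), lang=lang)
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()
    return text  # could also be stored in a database for later re-reading

read_aloud("page.jpg")  # "page.jpg" stands in for the Pi camera capture
```

Swapping the lang parameter gives the desired-language reading facility mentioned above.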

