Electronic Guidance Cane for Users Having Partial Vision Loss Disability

2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Asad Khan ◽  
Muhammad Awais Ashraf ◽  
Muhammad Awais Javeed ◽  
Muhammad Shahzad Sarfraz ◽  
Asad Ullah ◽  
...  

Vision is, no doubt, one of the most important and precious gifts to humans; however, a fraction of the population is visually impaired and cannot see properly. These visually impaired people face many challenges in their lives, from routine activities such as shopping and walking to traveling to known and unknown places out of necessity, and hence they require an attendant. Most of the time, affording an attendant is neither easy nor inexpensive, especially when almost 2.5% of the population of Pakistan is visually impaired. There exist some ways of helping these people, for example, navigation devices with speech output; however, these are either inaccurate, costly, or heavy. Moreover, none of them have shown reliable results in both indoor and outdoor use, and the problem becomes even more severe when the user is partially deaf as well. In this paper, we present a proof of concept of an embedded prototype which not only navigates but also detects hurdles and gives alerts along the way, using speech alarm output and/or vibration for the partially deaf. The designed embedded system includes a cane, a microcontroller, a Global System for Mobile Communication (GSM) module, a Global Positioning System (GPS) module, an Arduino, a speech output speaker, a Light-Dependent Resistor (LDR), and ultrasonic sensors for hurdle detection with voice and vibrational feedback. Using our developed system, visually impaired people can reach their destination safely and independently.
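
The abstract gives no implementation details, but the core hurdle-detection loop it describes (ultrasonic ranging plus a speech or vibration alert) can be sketched as follows. This is a minimal, hardware-agnostic Python sketch; the sensor stub, the 100 cm alert threshold, and the polling rate are assumptions, not values from the paper.

```python
import time

ALERT_DISTANCE_CM = 100  # assumed threshold; the paper does not state one

def read_ultrasonic_cm() -> float:
    """Stub for an HC-SR04-style ultrasonic reading.

    On the real device, the microcontroller would time the echo pulse
    and convert it to centimeters (roughly distance = pulse_us / 58).
    """
    raise NotImplementedError("replace with the actual sensor driver")

def speak(message: str) -> None:
    """Stub for the speech output module."""
    print(f"[SPEECH] {message}")

def vibrate(duration_s: float) -> None:
    """Stub for the vibration motor, used when the user is partially deaf."""
    print(f"[VIBRATE] {duration_s:.1f} s")

def guidance_loop() -> None:
    """Poll the ultrasonic sensor and alert on nearby hurdles."""
    while True:
        distance = read_ultrasonic_cm()
        if distance < ALERT_DISTANCE_CM:
            speak("Obstacle ahead")  # speech alarm output
            vibrate(0.5)             # vibrational feedback
        time.sleep(0.1)              # ~10 Hz polling, an assumed rate
```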

Author(s):  
Miguel Angel Brand Narvaez ◽  
Miguel A. Mora Gómez ◽  
Brayan A. Tabares Jaramillo ◽  
Alejandro A. Osorio Ospina ◽  
Juan David Hurtado Arrechea

The chapter focuses on the development of tools that convey the basics of the Braille system, together with the methods a person needs to master this type of language, enhancing the strength of collective thinking at the educational level. One of the main challenges in learning Braille, however, is engaging the individual in literacy processes, which is why the creation of learning cards is proposed to enable people to learn Braille.
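
As an illustration of what such learning cards might encode, here is a small Python sketch that maps letters to their standard six-dot Braille patterns and renders them as Unicode Braille characters. The card format and the letter subset are illustrative choices; only the dot numbering (dots 1-6) and the Unicode Braille block (U+2800) follow the standard.

```python
# Dots are numbered 1-3 down the left column and 4-6 down the right.
# Unicode Braille patterns start at U+2800; each dot sets one bit.
DOT_BITS = {1: 0x01, 2: 0x02, 3: 0x04, 4: 0x08, 5: 0x10, 6: 0x20}

# Standard patterns for the first ten letters (a-j); the rest of the
# alphabet reuses these shapes with dots 3 and 6 added.
LETTER_DOTS = {
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
    "f": (1, 2, 4), "g": (1, 2, 4, 5), "h": (1, 2, 5),
    "i": (2, 4), "j": (2, 4, 5),
}

def braille_char(dots: tuple[int, ...]) -> str:
    """Render a dot pattern as a single Unicode Braille character."""
    return chr(0x2800 + sum(DOT_BITS[d] for d in dots))

def learning_card(letter: str) -> str:
    """Produce a simple one-line text 'learning card' for a letter."""
    dots = LETTER_DOTS[letter]
    return f"{letter.upper()}  {braille_char(dots)}  dots {', '.join(map(str, dots))}"

for letter in "abcdef":
    print(learning_card(letter))
```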


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3061
Author(s):  
Alice Lo Valvo ◽  
Daniele Croce ◽  
Domenico Garlisi ◽  
Fabrizio Giuliano ◽  
Laura Giarré ◽  
...  

In recent years, we have witnessed impressive advances in augmented reality systems and computer vision algorithms, based on image processing and artificial intelligence. Thanks to these technologies, mainstream smartphones are able to estimate their own motion in 3D space with high accuracy. In this paper, we exploit such technologies to support the autonomous mobility of people with visual disabilities, identifying pre-defined virtual paths and providing context information, reducing the distance between the digital and real worlds. In particular, we present ARIANNA+, an extension of ARIANNA, a system explicitly designed for visually impaired people for indoor and outdoor localization and navigation. While ARIANNA is based on the assumption that landmarks, such as QR codes, and physical paths (composed of colored tapes, painted lines, or tactile pavings) are deployed in the environment and recognized by the camera of a common smartphone, ARIANNA+ eliminates the need for any physical support thanks to the ARKit library, which we exploit to build a completely virtual path. Moreover, ARIANNA+ adds the possibility for users to have enhanced interactions with the surrounding environment, through convolutional neural networks (CNNs) trained to recognize objects or buildings, enabling access to the contents associated with them. By using a common smartphone as a mediation instrument with the environment, ARIANNA+ leverages augmented reality and machine learning to enhance physical accessibility. The proposed system allows visually impaired people to easily navigate indoor and outdoor scenarios simply by loading a previously recorded virtual path, providing automatic guidance along the route through haptic, speech, and sound feedback.
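
The recognition component described above can be approximated with an off-the-shelf pretrained CNN. The Python sketch below uses torchvision's MobileNetV3 as a stand-in classifier; ARIANNA+ runs its own trained networks on iOS alongside ARKit, so the model choice, the confidence threshold, and the file name here are assumptions for illustration only.

```python
import torch
from PIL import Image
from torchvision import models

# Pretrained ImageNet classifier as a stand-in for ARIANNA+'s custom CNNs.
weights = models.MobileNet_V3_Small_Weights.DEFAULT
model = models.mobilenet_v3_small(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

def recognize(image_path: str, threshold: float = 0.4) -> str | None:
    """Return a label for a camera frame, or None if too uncertain."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    confidence, index = probs.max(dim=0)
    if confidence.item() < threshold:
        return None  # do not announce low-confidence guesses to the user
    return labels[index]

label = recognize("frame.jpg")  # hypothetical frame captured along the path
if label is not None:
    print(f"Announcing: {label}")  # would trigger the associated content
```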


2021 ◽  
Vol 2089 (1) ◽  
pp. 012056
Author(s):  
K.A. Sunitha ◽  
Ganti Sri Giri Sai Suraj ◽  
G Atchyut Sriram ◽  
N Savitha Sai

The proposed robot aims to serve as a personal assistant for visually impaired people: avoiding obstacles, identifying the people (known or unknown) with whom they interact, and navigating. The robot has a special feature of accurately locating the subject using GPS. Its novel feature is identifying the people with whom the subject interacts. Real-time face detection and identification, a long-standing challenge, is achieved with accurate image processing using the Viola-Jones and SURF algorithms. An obstacle avoidance design has been implemented in the system with multiple sensors to guide the user along the correct path. Hence, the robot is a fusion that provides the best of comfort and safety at minimal cost.
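
Viola-Jones detection is available off the shelf in OpenCV as a Haar cascade classifier, so the face-detection stage can be sketched as below. The SURF-based identification step is omitted here (SURF lives in opencv-contrib and is patent-encumbered); the cascade file shown ships with standard OpenCV installs, while the camera index is an assumption.

```python
import cv2

# Haar cascade implementing the Viola-Jones detector; the XML file
# ships with OpenCV under cv2.data.haarcascades.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_faces(frame):
    """Return bounding boxes (x, y, w, h) of faces in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

capture = cv2.VideoCapture(0)  # assumed index of the robot's camera
ok, frame = capture.read()
if ok:
    for (x, y, w, h) in detect_faces(frame):
        print(f"Face at ({x}, {y}), size {w}x{h}")
        # An identification stage (e.g., SURF feature matching against
        # enrolled faces) would decide here whether the person is known.
capture.release()
```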


Electronics ◽  
2019 ◽  
Vol 8 (6) ◽  
pp. 697
Author(s):  
Jinqiang Bai ◽  
Zhaoxiang Liu ◽  
Yimin Lin ◽  
Ye Li ◽  
Shiguo Lian ◽  
...  

Assistive devices for visually impaired people (VIP) which support daily traveling and improve social inclusion are developing fast. Most of them try to solve the problem of navigation or obstacle avoidance, and other works focus on helping VIP to recognize their surrounding objects. However, very few of them couple both capabilities (i.e., navigation and recognition). Aiming at the above needs, this paper presents a wearable assistive device that allows VIP to (i) navigate safely and quickly in unfamiliar environments, and (ii) recognize objects in both indoor and outdoor environments. The device consists of a consumer Red, Green, Blue and Depth (RGB-D) camera and an Inertial Measurement Unit (IMU), which are mounted on a pair of eyeglasses, and a smartphone. The device leverages the ground height continuity among adjacent image frames to segment the ground accurately and rapidly, and then searches for the moving direction according to the ground. A lightweight Convolutional Neural Network (CNN)-based object recognition system is developed and deployed on the smartphone to increase the perception ability of VIP and support the navigation system. It provides semantic information about the surroundings, such as the categories, locations, and orientations of objects. Human-machine interaction is performed through an audio module (a beeping sound for obstacle alerts, speech recognition for understanding user commands, and speech synthesis for expressing semantic information about the surroundings). We evaluated the performance of the proposed system through many experiments conducted in both indoor and outdoor scenarios, demonstrating the efficiency and safety of the proposed assistive system.
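
A much-simplified version of the ground-segmentation idea (ground pixels form a surface of roughly constant height under the camera) can be sketched with a depth image and pinhole camera intrinsics. This NumPy sketch only thresholds per-pixel height against an assumed camera height for a single frame; the paper exploits height continuity across adjacent frames and is considerably more involved.

```python
import numpy as np

def segment_ground(depth: np.ndarray, fy: float, cy: float,
                   camera_height_m: float = 1.5,
                   tolerance_m: float = 0.1) -> np.ndarray:
    """Mark pixels whose back-projected height is near the floor plane.

    depth: HxW depth map in meters from the RGB-D camera.
    fy, cy: vertical focal length and principal point (pixels).
    camera_height_m: assumed height of the eyeglass-mounted camera.
    Returns a boolean HxW ground mask.
    """
    h, _ = depth.shape
    v = np.arange(h, dtype=np.float32)[:, None]  # pixel row indices
    y = (v - cy) / fy * depth                    # height below optical axis
    return np.abs(y - camera_height_m) < tolerance_m

def pick_direction(ground_mask: np.ndarray, strips: int = 3) -> int:
    """Choose the image strip (0=left .. strips-1=right) with most ground."""
    parts = np.array_split(ground_mask, strips, axis=1)
    return int(np.argmax([p.mean() for p in parts]))
```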


2016 ◽  
Vol 2 (1) ◽  
pp. 727-730
Author(s):  
Nora Loepthien ◽  
Tanja Jehnichen ◽  
Josephine Hauser ◽  
Benjamin Schullcke ◽  
Knut Möller

The aim of the project is the development of an aid for blind or visually impaired people, considering economic aspects as well as easy adaptability to various daily situations. Distance sensors were attached to a walking frame (rollator) to detect the distance to obstacles. The information from the sensors is transmitted to the user via tactile feedback, realized with a number of vibration motors located at the upper belly area of the subject. To test the functionality of the aid, a testing track with obstacles was traversed by a number of volunteers. While passing the track five times, the time needed to pass through, as well as the number of collisions, was recorded. The results showed a decline in the average time needed to pass through the testing track, indicating that the operator learns to interpret the signals given by the tactile feedback.
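
The distance-to-vibration mapping described above might look like the following sketch; the number of motors, the sensor range, and the linear intensity ramp are assumptions, since the abstract does not specify them.

```python
def vibration_intensities(distances_cm, max_range_cm=150.0):
    """Map one distance reading per motor to a 0..1 vibration intensity.

    Closer obstacles produce stronger vibration; obstacles beyond
    max_range_cm produce none. One motor per sensor, worn at the
    upper belly area as in the paper.
    """
    intensities = []
    for d in distances_cm:
        if d >= max_range_cm:
            intensities.append(0.0)
        else:
            intensities.append(1.0 - d / max_range_cm)  # linear ramp
    return intensities

# Example: left sensor clear, center obstacle at 30 cm, right at 90 cm.
print(vibration_intensities([200.0, 30.0, 90.0]))  # [0.0, 0.8, 0.4]
```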


This research aims to create an assistive device for people who suffer from vision loss or impairment. The device is designed to help blind people overcome daily challenges that may seem trivial to sighted people. It is built using advanced computer science technologies such as deep learning, computer vision, and the Internet of Things. The device can detect and classify everyday objects and give voice feedback to the blind user.
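
One plausible realization of "detect and classify daily objects, then give voice feedback" combines a pretrained detector with a text-to-speech engine. This sketch uses torchvision's COCO-trained Faster R-CNN and the pyttsx3 TTS library as stand-ins; the device's actual models and IoT wiring are not described in the abstract.

```python
import torch
import pyttsx3
from PIL import Image
from torchvision import models

# Pretrained COCO detector as a stand-in for the device's model.
weights = models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT
detector = models.detection.fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]
to_tensor = weights.transforms()

def announce_objects(image_path: str, min_score: float = 0.7) -> None:
    """Detect everyday objects in a frame and speak their names."""
    image = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        result = detector([to_tensor(image)])[0]
    names = {
        categories[label]
        for label, score in zip(result["labels"], result["scores"])
        if score >= min_score
    }
    if names:
        engine = pyttsx3.init()
        engine.say("I can see " + ", ".join(sorted(names)))
        engine.runAndWait()

announce_objects("frame.jpg")  # hypothetical frame from the device camera
```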


Email is one of the most widely used forms of communication, and a great deal of confidential and urgent information is exchanged over email today. There are about 253 million visually impaired people worldwide, and they face a communication problem: as technology grows day by day, visually challenged people can feel left even further behind. The authors therefore propose a voice-based email system using AI that makes email easily accessible to visually challenged people and thereby helps society. Accessibility is the most important feature considered while developing this system; a system is called accessible only if both able-bodied and disabled people can use it easily.
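
A minimal sketch of the voice-driven flow (speech in, email out) using the SpeechRecognition library, pyttsx3, and Python's standard smtplib. The prompts, the use of Google's free transcription API, and the server settings are placeholder assumptions, not details of the proposed system.

```python
import smtplib
from email.message import EmailMessage

import pyttsx3
import speech_recognition as sr

tts = pyttsx3.init()
recognizer = sr.Recognizer()

def speak(text: str) -> None:
    tts.say(text)
    tts.runAndWait()

def listen(prompt: str) -> str:
    """Prompt the user by voice and transcribe their spoken reply."""
    speak(prompt)
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)  # free web API; needs network

def send_voice_email(smtp_host: str, sender: str, password: str) -> None:
    """Compose an email entirely by voice and send it over SMTP."""
    message = EmailMessage()
    message["From"] = sender
    message["To"] = listen("Who is the recipient?")
    message["Subject"] = listen("What is the subject?")
    message.set_content(listen("Dictate your message."))
    with smtplib.SMTP_SSL(smtp_host, 465) as server:
        server.login(sender, password)
        server.send_message(message)
    speak("Your email has been sent.")
```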


Author(s):  
Shin’ichiro Uno ◽  
Yasuo Suzuki ◽  
Takashi Watanabe ◽  
Miku Matsumoto ◽  
Yan Wang

We developed software called SIPReS, which describes two-dimensional images with sound. With this system, visually impaired people can tell the location of a certain point in an image just by hearing notes whose frequencies are assigned according to the brightness of the point the user touches. It can run on Android smartphones and tablets. We conducted a small-scale experiment to see whether a visually impaired person can recognize images with SIPReS. In the experiment, the subject successfully recognized whether an object was present or not. He also recognized the location information. The experiment suggests the application's potential as image recognition software.
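
The core mapping SIPReS describes (touch-point brightness to note frequency) can be sketched in a few lines of NumPy. The frequency range and the linear brightness-to-pitch mapping are assumptions; the paper does not specify its scale.

```python
import numpy as np
from PIL import Image

LOW_HZ, HIGH_HZ = 200.0, 2000.0  # assumed audible range for the mapping

def tone_for_pixel(image_path: str, x: int, y: int,
                   duration_s: float = 0.3,
                   sample_rate: int = 44100) -> np.ndarray:
    """Return a sine tone whose pitch encodes brightness at (x, y).

    Brighter pixels map to higher frequencies, so a user sweeping a
    finger across the touch screen 'hears' the image.
    """
    gray = np.asarray(Image.open(image_path).convert("L"), dtype=np.float32)
    brightness = gray[y, x] / 255.0                      # 0 (dark) .. 1 (bright)
    frequency = LOW_HZ + brightness * (HIGH_HZ - LOW_HZ)
    t = np.linspace(0.0, duration_s,
                    int(sample_rate * duration_s), endpoint=False)
    return np.sin(2.0 * np.pi * frequency * t)           # mono PCM samples
```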


2021 ◽  
Vol 11 (21) ◽  
pp. 10026
Author(s):  
I-Hsuan Hsieh ◽  
Hsiao-Chu Cheng ◽  
Hao-Hsiang Ke ◽  
Hsiang-Chieh Chen ◽  
Wen-June Wang

In this study, we propose an assistive system for helping visually impaired people walk outdoors. This assistive system contains an embedded system, the Jetson AGX Xavier (manufactured by Nvidia in Santa Clara, CA, USA), and a binocular depth camera, the ZED 2 (manufactured by Stereolabs in San Francisco, CA, USA). Based on the convolutional neural network FAST-SCNN and the depth map obtained by the ZED 2, the image of the environment in front of the visually impaired user is split into seven equal divisions. A walkability confidence value for each division is computed, and a voice prompt is played to guide the user toward the most appropriate direction, so that the visually impaired user can navigate a safe path on the sidewalk, avoid any obstacles, or walk on the crosswalk safely. Furthermore, obstacles in front of the user are identified by the YOLOv5s network proposed by Jocher, G. et al. Finally, we provided the proposed assistive system to a visually impaired person and ran experiments around an MRT station in Taiwan. The visually impaired person indicated that the proposed system indeed helped him feel safer when walking outdoors. The experiment also verified that the system could effectively guide the visually impaired person to walk safely on sidewalks and crosswalks.
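
The direction-selection step (seven equal divisions, one walkability confidence each) can be sketched as below. Here the confidence is simply the fraction of pixels the segmentation network labels walkable within each division; the walkable class IDs and the coarse voice prompts are assumptions, not the paper's exact scheme.

```python
import numpy as np

def walkability_confidences(seg_mask: np.ndarray,
                            walkable_classes=(0,),  # assumed class IDs
                            divisions: int = 7) -> np.ndarray:
    """Fraction of walkable pixels in each of seven vertical divisions.

    seg_mask: HxW array of class labels from a semantic segmentation
    network such as FAST-SCNN.
    """
    walkable = np.isin(seg_mask, walkable_classes)
    strips = np.array_split(walkable, divisions, axis=1)
    return np.array([strip.mean() for strip in strips])

def direction_prompt(confidences: np.ndarray) -> str:
    """Turn the most walkable division into a coarse voice prompt."""
    best = int(np.argmax(confidences))
    center = (len(confidences) - 1) / 2
    if best < center:
        return "veer left"
    if best > center:
        return "veer right"
    return "go straight"
```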

