Hardware Design for Smart Walking Stick Supporting Neural Networks

This paper presents an efficient hardware system design for a walking stick used by visually impaired people, intended to support cutting-edge software technologies that assist their mobility. The stick is designed to be convenient to handle and to run computationally heavy programs without any degradation in accuracy. The hardware design uses a Raspberry Pi 3 Model B to detect obstacles and measure their distance. A Pi camera captures video and feeds each frame for processing. For real-time object detection, the proposed system trains a neural network on the captured images.
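The obstacle-distance step the abstract describes could look like the following sketch. This is a minimal illustration, not the paper's implementation: the sensor type (an HC-SR04-style ultrasonic module), the conversion constant, and the 100 cm threshold are all assumptions.

```python
# Hedged sketch: converting an ultrasonic sensor's round-trip echo time
# into an obstacle distance on a Raspberry Pi. Sensor model and threshold
# are illustrative assumptions, not taken from the paper.

SPEED_OF_SOUND_CM_S = 34300  # approximate speed of sound in air at 20 degrees C

def echo_to_distance_cm(echo_seconds: float) -> float:
    """Convert a round-trip echo time into a one-way distance in cm."""
    return echo_seconds * SPEED_OF_SOUND_CM_S / 2

def is_obstacle(echo_seconds: float, threshold_cm: float = 100.0) -> bool:
    """Flag an obstacle when it is closer than the (assumed) threshold."""
    return echo_to_distance_cm(echo_seconds) < threshold_cm
```

On real hardware the echo time would come from timing a GPIO pin; here it is passed in directly so the conversion logic stands alone.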

Author(s):  
Fereshteh S. Bashiri ◽  
Eric LaRose ◽  
Jonathan C. Badger ◽  
Roshan M. D’Souza ◽  
Zeyun Yu ◽  
...  

2018 ◽  
Vol 7 (3.12) ◽  
pp. 116
Author(s):  
N Vignesh ◽  
Meghachandra Srinivas Reddy.P ◽  
Nirmal Raja.G ◽  
Elamaram E ◽  
B Sudhakar

Eyes play an important role in our day-to-day lives and are perhaps the most valuable gift we have; the world is visible to us because we are blessed with eyesight. However, some people lack this ability to visualize their surroundings, and because of this they face considerable trouble moving comfortably in public places. Hence, wearable devices should be designed for such visually impaired people. The smart shoe is a wearable system designed to provide directional information to visually impaired users. The system has great potential to provide smart and sensible navigation guidance, especially when integrated with visual processing units. During operation, the user wears the shoes; when the sensors detect an obstacle, the user is informed through the Android system they carry. The smart shoes, together with the application on the Android system, help the user move around independently.
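The decision step implied by the abstract, reducing shoe sensor readings to a short directional message for the Android device, could be sketched as follows. The left/right sensor layout and the 80 cm threshold are illustrative assumptions; the paper does not specify them.

```python
# Hedged sketch: turning left/right obstacle distances from the shoe's
# sensors into a short guidance message for the user's Android device.
# Sensor layout and threshold are assumptions for illustration.

def guidance_message(left_cm: float, right_cm: float,
                     threshold_cm: float = 80.0) -> str:
    """Pick a spoken warning from left/right obstacle distances."""
    left_blocked = left_cm < threshold_cm
    right_blocked = right_cm < threshold_cm
    if left_blocked and right_blocked:
        return "stop"
    if left_blocked:
        return "move right"
    if right_blocked:
        return "move left"
    return "path clear"
```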


Author(s):  
G. Touya ◽  
F. Brisebard ◽  
F. Quinton ◽  
A. Courtial

Abstract. Visually impaired people cannot use classical maps but can learn to use tactile relief maps. These tactile maps are crucial at school for learning geography and history just as the other students do. They are produced manually by professional transcriptors in a very long and costly process. A platform able to generate tactile maps from maps scanned from geography textbooks could be extremely useful to these transcriptors to speed up their production. As a first step towards such a platform, this paper proposes a method to infer the scale and the content of a map from its image. We used convolutional neural networks trained with a few hundred maps from French geography textbooks, and the results are promising for inferring labels about the content of the map (e.g. "there are roads, cities and administrative boundaries") and for inferring the extent of the map (e.g. a map of France or of Europe).
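Inferring several content labels at once ("there are roads, cities and administrative boundaries") is a multi-label classification problem: one sigmoid score per label, thresholded independently. The output stage could be sketched as below; the label set, logits, and 0.5 threshold are invented for illustration, since the paper's actual network head is not described here.

```python
# Hedged sketch of a multi-label output stage for map-content labels:
# one sigmoid score per label, kept if it exceeds a threshold. Labels
# and threshold are illustrative assumptions.
import math

LABELS = ["roads", "cities", "administrative boundaries"]

def predict_labels(logits, labels=LABELS, threshold=0.5):
    """Return the labels whose sigmoid score exceeds the threshold."""
    scores = [1 / (1 + math.exp(-z)) for z in logits]
    return [name for name, s in zip(labels, scores) if s > threshold]
```

Using independent sigmoids rather than a softmax is the standard choice here, since a map may contain several kinds of content at the same time.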


2018 ◽  
Author(s):  
Shravan Mohite ◽  
Abhishek Patel ◽  
Milan Patel ◽  
Vaishali Gaikwad (Mohite)

Entropy ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. 941
Author(s):  
Rakesh Chandra Joshi ◽  
Saumya Yadav ◽  
Malay Kishore Dutta ◽  
Carlos M. Travieso-Gonzalez

Visually impaired people face numerous difficulties in their daily life, and technological interventions may assist them in meeting these challenges. This paper proposes an artificial-intelligence-based, fully automatic assistive technology that recognizes different objects and provides auditory feedback to the user in real time, giving the visually impaired person a better understanding of their surroundings. A deep-learning model is trained with multiple images of objects that are highly relevant to the visually impaired person. Training images are augmented and manually annotated to make the trained model more robust. In addition to computer-vision-based techniques for object recognition, a distance-measuring sensor is integrated to make the device more comprehensive by recognizing obstacles while navigating from one place to another. The auditory information conveyed to the user after scene segmentation and obstacle identification is optimized so that more information is delivered in less time, allowing faster processing of video frames. The average accuracy of the proposed method is 95.19% for object detection and 99.69% for object recognition. The time complexity is low, allowing a user to perceive the surrounding scene in real time.
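The "optimized auditory information" idea, conveying the most urgent detections in the least time, could be sketched as a summarization step that sorts detections by distance and speaks only the nearest few. The message format and the two-item limit are assumptions; the paper does not specify them.

```python
# Hedged sketch: condensing a frame's detections into one short phrase,
# nearest object first, so the user hears the most urgent item quickly.
# Phrase format and item limit are illustrative assumptions.

def summarize_scene(detections, max_items=2):
    """detections: list of (object_name, distance_m) pairs."""
    nearest = sorted(detections, key=lambda d: d[1])[:max_items]
    return ", ".join(f"{name} {dist:.1f} m" for name, dist in nearest)
```

The returned string would then be handed to a text-to-speech engine; keeping it short is what bounds the per-frame audio time.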

