The people sensor: a mobility aid for the visually impaired

Author(s):  
S. Ram ◽  
J. Sharf
Author(s):  
Shrugal Varde* ◽  
Dr. M.S. Panse

This paper introduces a novel travel aid for blind users that can assist them in detecting the location of doors in corridors and also provides information about the location of stairs. The developed system uses a camera to capture images in front of the user. A feature extraction algorithm extracts key features that distinguish doors and stairs from other structures observed in indoor environments. This information is then conveyed to the user through simple auditory feedback. The mobility aid was validated on 50 visually impaired users. The subjects walked in a controlled test environment, and the accuracy with which the device helped the user detect doors and stairs was determined. The results obtained were satisfactory, and the device has the potential for standalone use in indoor navigation.
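The abstract does not specify which features the algorithm extracts, but a common cue for door frames is a pair of long vertical edge runs at a plausible spacing. The sketch below illustrates that idea on a binarised edge map; the function names and thresholds are assumptions for illustration, not the authors' code.

```python
# Door-frame candidates from a binary edge map: long vertical edge runs
# mark frame sides; two such columns a plausible width apart form a candidate.

def longest_vertical_run(edges, col):
    """Length of the longest run of edge pixels (1s) down one column."""
    best = run = 0
    for row in edges:
        run = run + 1 if row[col] else 0
        best = max(best, run)
    return best

def door_candidates(edges, min_run=4, min_gap=2, max_gap=6):
    """Pairs of edge columns whose spacing is plausible for a door frame."""
    cols = [c for c in range(len(edges[0]))
            if longest_vertical_run(edges, c) >= min_run]
    return [(a, b) for i, a in enumerate(cols) for b in cols[i + 1:]
            if min_gap <= b - a <= max_gap]

# Toy 6x8 edge map with two long vertical runs at columns 1 and 5.
edge_map = [[0] * 8 for _ in range(6)]
for r in range(6):
    edge_map[r][1] = 1
    edge_map[r][5] = 1
print(door_candidates(edge_map))  # → [(1, 5)]
```

In a real pipeline the edge map would come from an edge detector run on the camera frame, and each candidate pair would be reported to the user via the auditory feedback described above.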


2021 ◽  
Vol 10 (1) ◽  
pp. 8-12
Author(s):  
N. Amutha Priya ◽  
S. Sahaya Sathesh Raj ◽  
K. Siva ◽  
R. Vetha Jebarson ◽  
P. Vignesh

Visual impairment burdens people heavily in today’s fast-moving culture. Although there are several remedies for this impairment, many people forgo them because of their economic condition. To help these people, a sophisticated stick is designed using components such as an ultrasonic sensor, an IR sensor and Arduino ICs. The electronically designed walking stick helps trace locations using a GPS module and avoids collisions by detecting objects within a certain range of the person in all directions. A sensor placed at the bottom of the stick enables identification of pits and hindrances on the ground. The visually impaired person is alerted with a voice message generated by a voice recorder that records the responses of all the sensors operating in different tracks. In this paper, GSM and sensors together establish the role of the smart walking stick in the lives of many people.
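The ranging step behind such sticks is straightforward: distance follows from the ultrasonic echo's round-trip time at the speed of sound. A minimal sketch, assuming an HC-SR04-style sensor; the threshold and helper names are illustrative, not from the paper.

```python
# Ultrasonic ranging: one-way distance from the round-trip echo time.
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s at room temperature

def echo_to_distance_cm(echo_us):
    """Round-trip echo time (microseconds) -> one-way distance in cm."""
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2

def should_alert(echo_us, threshold_cm=100):
    """Trigger the stick's voice alert when an object is inside the safe range."""
    return echo_to_distance_cm(echo_us) < threshold_cm

print(round(echo_to_distance_cm(5830), 1))  # → 100.0
```

On the actual stick, the microcontroller would time the echo pulse itself and play the recorded voice message whenever `should_alert` fires.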


Author(s):  
Baswaraju Swathi ◽  
Joshua Dani M ◽  
Sraddha Bhattacharjee ◽  
Zeeshan Yousuf

In today’s growing world, where technology is advancing into every aspect of our lives, it has changed the way we go about our day. With all this technology at hand, improvements can be made in various ways to help society. The transportation industry in particular has seen exponential technological growth with the introduction of applications and services that give people an easy travel option by booking cabs that arrive at their doorstep. Some of the leading apps in this category are Ola, Uber and many more. However, these apps cater only to the common demographic of users. India is a country with over 13 million visually impaired individuals, of whom the state of Karnataka has 264,170 according to the Karnataka Census of 2011. With all these apps and technologies available to us, we can provide a means of easy and safe transport to these individuals. E-Car Savegalu is an attempt at providing these transportation services to the visually impaired in the form of an application that is intuitive and easy to use for booking cabs. This concept has a significant role to play in society by helping a good number of visually impaired people travel.


This research aims to create an assistive device for people who suffer from vision loss or impairment. The device is designed to help blind people overcome daily challenges that may seem trivial to sighted people. It is built using advanced computer science technologies such as deep learning, computer vision and the Internet of Things. The device can detect and classify everyday objects and give voice feedback to the blind user.


2021 ◽  
pp. 1-9
Author(s):  
Abimbola M. Jubril ◽  
Segun J. Samuel

BACKGROUND: This paper considers the development of a wearable electronic mobility aid. METHODS: The developed system is based on a multisensor fusion approach to detection that combines three techniques: a laser light source, a camera and an ultrasonic sensor. A red line-generating laser source projects a straight line, which is captured by the camera. The red line deforms differently on contact with holes or standing obstacles, and the pattern of deformation is extracted for obstacle and pothole recognition. Because the visibility of the laser light is greatly reduced when the scene is brightly illuminated, this is complemented with edge detection, which uses edges to identify holes and obstacles. This is combined with ultrasonic sensing so that the presence of obstacles can be differentiated from that of holes. The outcome of detection and the distance of obstacles from the user are relayed via an audio cue. RESULTS: Its evaluation showed better performance than the guide cane, with an 83.25% reduction in collision rate and an 84.62% reduction in falling rate. The device received good acceptability from the users.
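The fusion step in the methods above can be caricatured as a small decision rule: the projected line deforms one way over standing obstacles and the other way into holes, while the ultrasonic channel returns an echo from obstacles but not from holes. The rule set and labels below are assumptions for illustration, not the authors' classifier.

```python
# Schematic laser + ultrasonic fusion: obstacles push the laser line up and
# return a near echo; holes pull the line down and return no near echo.

def classify(laser_shift, ultrasonic_cm, echo_threshold_cm=150):
    """laser_shift: pixel displacement of the projected line
    (+ over standing obstacles, - into holes, ~0 when the ground is flat).
    ultrasonic_cm: measured range in cm, or None when no echo returns."""
    echo_near = ultrasonic_cm is not None and ultrasonic_cm < echo_threshold_cm
    if laser_shift > 0 and echo_near:
        return "obstacle"
    if laser_shift < 0 and not echo_near:
        return "hole"
    return "clear"

print(classify(+12, 80))   # → obstacle
print(classify(-9, None))  # → hole
print(classify(0, 400))    # → clear
```

Agreement between the two channels is what lets the device tell obstacles from holes before issuing the audio cue.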


Email is one of the most widely used forms of communication, and a lot of confidential and urgent information is exchanged over it today. There are about 253 million visually impaired people worldwide, and they face a problem of communication. As technology grows day by day, visually challenged people can feel even more challenged. The authors therefore propose a voice-based email system using AI that makes email easily accessible to visually challenged people and also helps society. Accessibility is the most important consideration in developing this system: a system is called accessible only if both able and disabled people can use it easily.


2020 ◽  
Vol 8 (6) ◽  
pp. 2924-2927

Applications of science and technology have made human life much easier. Vision plays a very important role in one’s life, but disease, accidents or other causes may make people lose their vision. Navigation becomes a major problem for people with complete or partial blindness. This paper aims to provide navigation guidance for the visually impaired. We have designed a model that provides instructions letting visionless people navigate freely. A NoIR camera captures the scene around the person and the objects in it are identified; voice output naming the objects is provided through earphones. The model includes a Raspberry Pi 3 processor, which collects the objects in the surroundings and converts them into a voice message; the NoIR camera, used to detect objects; a power bank, which provides power; and earphones, used to hear the output message. TensorFlow API, an open-source software library, is used for object detection and classification; with it, multiple objects are obtained in a single frame. eSpeak, a text-to-speech (TTS) synthesizer, converts the text of the detected objects to speech. Hence, the video captured by the NoIR camera is converted into voice output that guides the user toward detected objects. Using the COCO model, 90 commonly used objects such as person, table and book are identified.
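The glue between detection and speech in such a pipeline is simple: filter the per-frame (label, confidence) pairs and compose the sentence handed to the TTS engine. A minimal sketch; the threshold and phrasing are illustrative assumptions, not the paper's code.

```python
# Turn one frame's detections into the sentence spoken to the user.

def detections_to_message(detections, min_score=0.5):
    """detections: list of (COCO label, confidence score) pairs."""
    labels = [label for label, score in detections if score >= min_score]
    if not labels:
        return "no objects detected"
    return "detected " + ", ".join(sorted(set(labels)))

frame = [("person", 0.92), ("book", 0.81), ("table", 0.34)]
print(detections_to_message(frame))  # → detected book, person
```

On the Raspberry Pi, the returned string would then be passed to eSpeak (e.g. via its command-line interface) to produce the earphone output.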

