Intelligent Assistive System for Visually Disabled Persons

2019
Vol 8 (4)
pp. 1436-1440

There is increasing demand for smart devices that make people's lives more comfortable, yet despite extensive research, existing devices and systems for visually impaired people do not serve them adequately. Visually impaired readers are limited to Braille-scripted books, so we develop a new device that assists them and provides reading in their desired language. This smart assistive device will help visually impaired people gain greater independence and freedom in society. It has an obstacle detection sensor to alert the user to obstacles, and a camera connected to a Raspberry Pi to convert images to text using Optical Character Recognition (OCR). The recognized text is converted to speech using a text-to-speech synthesizer. The device is useful both for getting around in outdoor environments and for reading books printed in ordinary script. The recognized text can be stored in a database for later reading and retrieved by giving a command.
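The pipeline this abstract describes (camera capture, OCR, then speech synthesis) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the choice of OpenCV for capture, pytesseract for OCR, and pyttsx3 for speech are assumptions, since the abstract does not name its software stack.

import cv2          # camera capture (assumed; the abstract names no library)
import pytesseract  # OCR wrapper; requires the Tesseract engine installed
import pyttsx3      # offline text-to-speech engine

def read_page_aloud(camera_index: int = 0) -> str:
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("camera capture failed")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # OCR prefers grayscale
    text = pytesseract.image_to_string(gray)        # image -> text
    engine = pyttsx3.init()
    engine.say(text)                                # text -> speech
    engine.runAndWait()
    return text  # can also be stored in a database for later re-reading

if __name__ == "__main__":
    print(read_page_aloud())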

Author(s):  
Puru Malhotra and Vinay Kumar Saini

The paper is aimed at the design of a mobility assistive device to help the visually impaired. The traditional walking stick has its own drawbacks and limitations. Our research is motivated by the difficulty visually impaired people face in getting around, and we attempt to restore their independence and spare them the trouble of carrying a stick. We offer hands-free wearable glasses that find their utility in real-time navigation. The design of the smart glasses integrates various sensors with a Raspberry Pi. The paper presents a detailed account of the components and the structural design of the glasses. The novelty of our work lies in providing a complete pipeline for analyzing the surroundings in real time, and hence a better solution for navigating day-to-day activities, with audio instructions as output.
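As a rough illustration of the real-time loop such glasses could run, the sketch below polls a distance sensor and speaks instructions. Everything here is assumed for illustration: the sensor driver is a stand-in stub that returns simulated readings, and espeak is just one possible speech back end, since the paper specifies neither.

import random
import subprocess
import time

def read_distance_cm() -> float:
    # Stand-in for the glasses' real sensor driver (hypothetical):
    # returns a simulated reading so the loop can run as-is.
    return random.uniform(20.0, 300.0)

def speak(message: str) -> None:
    # espeak is one possible offline speech back end on a Raspberry Pi.
    subprocess.run(["espeak", message], check=False)

def navigation_loop(warn_below_cm: float = 100.0) -> None:
    while True:
        distance = read_distance_cm()
        if distance < warn_below_cm:
            speak(f"Obstacle {int(distance)} centimeters ahead")
        time.sleep(0.2)  # poll at roughly 5 Hz so instructions stay timely

if __name__ == "__main__":
    navigation_loop()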


Author(s):  
Mrs. Ritika Dhabliya

Independent travel is a well-known challenge for visually impaired people. At the same time, the growing availability of cost-efficient, high-performance, and portable digital imaging devices has created an enormous opportunity to improve on conventional document-image acquisition. We propose a camera-based visual assistance system using a Raspberry Pi for text reading and object motion detection. Visually impaired people face various challenges in performing their everyday tasks: they are totally or partially dependent on someone for help, and this dependence can make them lose their hope of living in a competitive society, as they must seek guidance from others throughout the day. This paper aims to make the visually impaired person fully independent in all respects. The proposed system is based on a virtual eye that communicates with the outside surroundings through a camera, which acts as a constant source of information for the system. The signals received from the input devices are analyzed using image processing in LabVIEW, and the system responds to the user through speech-processing units. The processed information about the surroundings is announced through the speaker (the output unit), by which visually impaired people can move about and carry out their work easily on their own. In addition, the user can automatically control some home appliances, such as a fan, through a wireless communication system.
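For the appliance-control feature mentioned at the end, a common Raspberry Pi approach is to switch a relay from a GPIO pin. The sketch below is a minimal example under that assumption; the pin number and the use of the RPi.GPIO library are illustrative choices, not details from the paper.

import RPi.GPIO as GPIO

FAN_RELAY_PIN = 17  # hypothetical BCM pin wired to the fan's relay

GPIO.setmode(GPIO.BCM)
GPIO.setup(FAN_RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

def set_fan(on: bool) -> None:
    """Energize or release the relay that switches the fan."""
    GPIO.output(FAN_RELAY_PIN, GPIO.HIGH if on else GPIO.LOW)

set_fan(True)  # e.g. invoked when the speech front end hears "fan on"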


Author(s):  
Sushmitha M

Communication is a basic human need, and it relies on text and speech, yet visually impaired people cannot read printed text unaided. This project helps them read images of documents. It is an automatic document reader for visually impaired people, developed on the Raspberry Pi processor board, which controls peripherals such as a camera and a speaker that act as the interface between the system and the user. A Raspberry Pi camera captures the image, which is scanned using ImageMagick software. The scanned image is then passed to Optical Character Recognition (OCR) software, which converts the typed or printed text in the image into machine-encoded text. Finally, Text-to-Speech (TTS) is used to convert the recognized text into speech. Experimental results show the system is genuinely helpful to blind people across a range of different documents.
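The chain described here (ImageMagick scan, then OCR, then TTS) can be approximated by shelling out to command-line tools. The sketch below assumes the ImageMagick convert binary named in the abstract is installed, and adds Tesseract and espeak as plausible stand-ins for the unnamed OCR and TTS components; the file names and cleanup flags are illustrative.

import subprocess

def document_to_speech(photo_path: str) -> None:
    # ImageMagick: deskew and grayscale the captured photo to help the OCR.
    subprocess.run(["convert", photo_path, "-deskew", "40%",
                    "-colorspace", "Gray", "scan.png"], check=True)
    # Tesseract (one possible OCR engine): writes recognized text to out.txt.
    subprocess.run(["tesseract", "scan.png", "out"], check=True)
    with open("out.txt", encoding="utf-8") as f:
        text = f.read()
    # espeak reads the recognized text aloud (text -> speech).
    subprocess.run(["espeak", text], check=True)

document_to_speech("capture.jpg")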


This paper deals with the creation of a robot-assisted navigation system for visually impaired people. The ability to see is an important gift that helps human beings live their everyday lives, but not all people are blessed with it. Typical blind-navigation devices are on-body devices that the user must carry; this paper instead describes a small rover that the user holds by a handle. The device carries a Raspberry Pi module that takes data from an on-board camera to observe the environment and perform object and obstacle detection. The four-wheel-drive rover is controlled by an Arduino, which connects to the user's mobile phone via a Bluetooth module. The user simply opens an application and says the name of the destination; the rest of the routing and transport is handled by the Raspberry Pi and Arduino.
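A Bluetooth link between a Raspberry Pi and an Arduino typically appears to the Pi as a serial port. The sketch below shows how drive commands might be forwarded over such a link; the RFCOMM device path and the one-letter command protocol are assumptions for illustration, not details from the paper.

import serial  # pyserial

def send_drive_command(command: bytes, port: str = "/dev/rfcomm0") -> bytes:
    """Forward one drive command to the Arduino and return its reply."""
    with serial.Serial(port, baudrate=9600, timeout=1) as link:
        link.write(command)     # e.g. b"F" forward, b"L" left, b"S" stop
        return link.readline()  # the Arduino may acknowledge each command

print(send_drive_command(b"F"))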


Author(s):  
Tejal Adep
Rutuja Nikam
Sayali Wanewe
Dr. Ketaki B. Naik

Blind people face problems in daily life. They cannot even walk without an aid and often rely on others for help. Several technologies have been developed to assist visually impaired people. Among them, computer-vision-based solutions are emerging as one of the most promising options because of their affordability and accessibility. This paper proposes a system for visually impaired people: a wearable visual aid that accepts speech commands from the user. Its functionality covers the identification of objects and signboards, helping the visually impaired person manage day-to-day activities and navigate his or her surroundings. A Raspberry Pi is used to implement the artificial vision, using the Python language on the OpenCV platform.
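Object identification with Python on OpenCV, as the abstract describes, is often done with OpenCV's DNN module and a pretrained detector. The sketch below uses a MobileNet-SSD Caffe model as one plausible choice; the model files and class list are assumptions, since the paper does not name its network.

import cv2

CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
           "bus", "car", "cat", "chair", "cow", "diningtable", "dog",
           "horse", "motorbike", "person", "pottedplant", "sheep", "sofa",
           "train", "tvmonitor"]

net = cv2.dnn.readNetFromCaffe("MobileNetSSD.prototxt",   # hypothetical paths
                               "MobileNetSSD.caffemodel")

def detect_objects(frame, confidence_threshold: float = 0.5):
    """Return the class names detected in one camera frame."""
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()  # shape (1, 1, N, 7)
    labels = []
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > confidence_threshold:
            labels.append(CLASSES[int(detections[0, 0, i, 1])])
    return labels  # e.g. handed to a TTS engine to be spoken to the user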


2015
Vol 5 (3)
pp. 801-804
Author(s):  
M. Abdul-Niby
M. Alameen
O. Irscheid
M. Baidoun
H. Mourtada

In this paper, we present a low-cost, hands-free obstacle detection and avoidance system designed to provide mobility assistance for visually impaired people. An ultrasonic sensor attached to the user's jacket detects obstacles ahead. The information obtained is conveyed to the user through audio messages and vibration. The detection range is user-defined. A text-to-speech module generates the voice signal. The proposed obstacle avoidance device is cost-effective, easy to use, and easily upgraded.
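A common way to realize this kind of ultrasonic ranging is an HC-SR04-style trigger/echo measurement on a Raspberry Pi. The sketch below is a minimal example under that assumption; the pin numbers and the range value are illustrative, as the paper does not specify its wiring.

import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24  # hypothetical BCM pin numbers for the sensor

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT, initial=GPIO.LOW)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm() -> float:
    # A 10-microsecond pulse on TRIG starts one measurement.
    GPIO.output(TRIG, GPIO.HIGH)
    time.sleep(10e-6)
    GPIO.output(TRIG, GPIO.LOW)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:   # wait for the echo pulse to begin
        start = time.time()
    while GPIO.input(ECHO) == 1:   # time how long the echo pin stays high
        end = time.time()
    # Pulse width * speed of sound (34300 cm/s), halved for the round trip.
    return (end - start) * 34300 / 2

USER_RANGE_CM = 150  # the user-defined detection range from the abstract
if distance_cm() < USER_RANGE_CM:
    print("obstacle ahead")  # the real device vibrates and speaks here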


Electronics
2019
Vol 8 (6)
pp. 697
Author(s):  
Jinqiang Bai
Zhaoxiang Liu
Yimin Lin
Ye Li
Shiguo Lian
...  

Assistive devices for visually impaired people (VIP) that support daily traveling and improve social inclusion are developing fast. Most of them try to solve the problem of navigation or obstacle avoidance, while other works focus on helping VIP recognize their surrounding objects. However, very few couple both capabilities (i.e., navigation and recognition). Aiming at these needs, this paper presents a wearable assistive device that allows VIP to (i) navigate safely and quickly in unfamiliar environments and (ii) recognize objects in both indoor and outdoor environments. The device consists of a consumer Red, Green, Blue and Depth (RGB-D) camera and an Inertial Measurement Unit (IMU), both mounted on a pair of eyeglasses, plus a smartphone. The device leverages the continuity of ground height across adjacent image frames to segment the ground accurately and rapidly, and then searches for the moving direction along the segmented ground. A lightweight Convolutional Neural Network (CNN)-based object recognition system is developed and deployed on the smartphone to increase the perception ability of VIP and support the navigation system; it provides semantic information about the surroundings, such as the categories, locations, and orientations of objects. Human–machine interaction is performed through an audio module (a beeping sound for obstacle alerts, speech recognition for understanding user commands, and speech synthesis for expressing the semantic information of the surroundings). We evaluated the performance of the proposed system through extensive experiments in both indoor and outdoor scenarios, demonstrating the efficiency and safety of the proposed assistive system.
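The ground-segmentation idea can be sketched as follows: back-project each depth pixel to a height using the IMU pitch, seed the ground at the bottom image rows, and grow it upward wherever the height changes little between adjacent rows. This is a simplified reconstruction from the abstract, not the authors' algorithm; the camera intrinsics, camera height, pitch sign convention, and thresholds are assumed values.

import numpy as np

def segment_ground(depth: np.ndarray, pitch_rad: float,
                   fy: float = 525.0, cam_height_m: float = 1.6,
                   continuity_m: float = 0.03) -> np.ndarray:
    """Label ground pixels in a metric depth image (H x W, meters)."""
    h, w = depth.shape
    v = np.arange(h).reshape(-1, 1) - h / 2.0  # row offset from optical center
    y_cam = depth * v / fy                     # vertical camera coord (down +)
    z_cam = depth                              # forward camera coord
    # Height of each point above the ground plane, rotated by the IMU pitch.
    height = cam_height_m - (np.cos(pitch_rad) * y_cam
                             + np.sin(pitch_rad) * z_cam)
    ground = np.zeros((h, w), dtype=bool)
    ground[-1] = np.abs(height[-1]) < 5 * continuity_m  # seed: bottom row ~0 m
    for row in range(h - 2, -1, -1):           # grow upward, row by row
        step = np.abs(height[row] - height[row + 1])
        ground[row] = ground[row + 1] & (step < continuity_m)
    return ground  # True where the pixel is (estimated) walkable ground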

