A Vision Module for Visually Impaired People by Using Raspberry PI Platform

Author(s):  
Laviniu Tepelea ◽  
Ioan Buciu ◽  
Cristian Grava ◽  
Ioan Gavrilut ◽  
Alexandru Gacsadi


Author(s):  
Tejal Adep ◽  
Rutuja Nikam ◽  
Sayali Wanewe ◽  
Dr. Ketaki B. Naik

Blind people face problems in daily life; they often cannot walk without an aid and must rely on others for help. Several technologies have been developed to assist visually impaired people. Among these, computer-vision-based solutions are emerging as one of the most promising options due to their affordability and accessibility. This paper proposes a wearable visual aid for visually impaired people that accepts speech commands from the user. Its functionality addresses the identification of objects and signboards, helping the visually impaired person manage day-to-day activities and navigate his/her surroundings. A Raspberry Pi implements the artificial vision in Python on the OpenCV platform.
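A minimal sketch of how such speech-command routing might look. The function names, command keywords, and placeholder detectors below are illustrative assumptions, not details from the paper; a real system would plug in a speech-to-text engine and OpenCV-based detection in place of the stubs.

```python
# Hypothetical command router for a wearable vision aid.
# detect_objects and read_signboard are stand-ins for real
# OpenCV-based detection and OCR running on the Raspberry Pi.

def detect_objects(frame):
    # Placeholder: a real implementation would run an object detector here.
    return ["chair", "door"]

def read_signboard(frame):
    # Placeholder: a real implementation would run OCR on the frame.
    return "EXIT"

COMMANDS = {
    "object": lambda frame: ", ".join(detect_objects(frame)),
    "signboard": read_signboard,
}

def handle_command(spoken_text, frame):
    """Route a spoken command to the matching vision task."""
    for keyword, action in COMMANDS.items():
        if keyword in spoken_text.lower():
            return action(frame)
    return "command not recognized"
```

The dispatcher keeps the speech front end decoupled from the vision back end, so either side can be swapped independently.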


Author(s):  
Puru Malhotra and Vinay Kumar Saini

The paper is aimed at the design of a mobility assistive device to help the visually impaired. The traditional walking stick has its own drawbacks and limitations. Our research is motivated by the difficulty visually impaired people have in ambulating, and we attempt to restore their independence and spare them the trouble of carrying a stick around. We offer hands-free wearable glasses that find their utility in real-time navigation. The design of the smart glasses integrates various sensors with a Raspberry Pi. The paper presents a detailed account of the components and the structural design of the glasses. The novelty of our work lies in providing a complete pipeline for real-time analysis of the surroundings, and hence a better solution for navigating day-to-day activities, using audio instructions as output.


Technology is best when it brings people together. Today technology plays a vital role in humanity, and applied science can make the impossible possible. The proposed project aims to let visually impaired people navigate as safely as sighted people. The system supports the user in reaching a destination, guiding them along the way and alerting them to obstacles expected in their path through vibration and simulated speech output over a headset, thereby keeping them from striking obstacles. It adds value to conventional canes by predicting obstacles, preventing accidents and reducing navigation difficulties. An ultrasonic sensor measures the distance of obstacles from the person, and a Raspberry Pi-based platform alerts the person to impending obstacles; the Raspberry Pi also hosts the other components and runs the control code. A vibration motor warns the person of a possible collision. In addition to guidance, the system includes an emergency-assistance feature: a GPS module finds the person's location and sends it to the person's family as a notification through the Blynk app. Accordingly, the project lets visually impaired people travel alone without fear of accidents.
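The distance computation behind such an ultrasonic sensor (e.g. an HC-SR04) can be sketched as below. The 50 cm vibration threshold is an assumption for illustration; the paper does not specify one, and the GPIO timing code that measures the echo pulse is omitted since it requires the hardware.

```python
SPEED_OF_SOUND_CM_S = 34300  # speed of sound in air at room temperature, cm/s

def echo_to_distance_cm(pulse_duration_s):
    """Convert an ultrasonic echo pulse width to obstacle distance.

    The pulse travels to the obstacle and back, so the one-way
    distance is half the round-trip distance.
    """
    return (pulse_duration_s * SPEED_OF_SOUND_CM_S) / 2

def should_vibrate(distance_cm, threshold_cm=50):
    """Trigger the vibration motor when an obstacle is closer than the threshold."""
    return distance_cm < threshold_cm
```

On the Raspberry Pi, the pulse duration would come from timing the sensor's echo pin; the rest of the alert logic is pure arithmetic and can be tested off-device.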


Object detection is used in almost every real-world application, such as autonomous traversal, visual systems, face detection and more. This paper aims at applying object detection to assist visually impaired people: it helps them know about the objects around them so they can walk freely. A prototype has been implemented on a Raspberry Pi 3 using OpenCV libraries, and satisfactory performance is achieved. The paper presents a detailed review of object detection using region-based convolutional neural network (R-CNN) learning systems for a real-world application, explores the various processes of detecting objects using different object detection methods, and walks through detection with a deep neural network for SSD implemented using a Caffe model.
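After an SSD network (e.g. one loaded through OpenCV's DNN module from a Caffe model) produces raw detections, a confidence-filtering step keeps only trustworthy boxes. The sketch below shows just that post-processing step; the detection tuple layout and the 0.5 threshold are assumptions for illustration, not values from the paper.

```python
def filter_detections(detections, conf_threshold=0.5):
    """Keep only detections whose confidence meets the threshold.

    Each detection is assumed to be a (class_label, confidence,
    bounding_box) tuple, as typically decoded from an SSD output blob.
    """
    return [d for d in detections if d[1] >= conf_threshold]
```

Lowering the threshold trades more false positives for fewer missed objects, a tuning knob that matters when audio feedback to the user must stay uncluttered.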


2019 ◽  
Vol 8 (4) ◽  
pp. 1436-1440

There is increasing demand for smart widgets that make people more comfortable. Although much research has been done, currently existing devices/systems for visually impaired people do not provide them with enough facilities. Visually impaired people can read only Braille-scripted books, so here we develop a new device that assists them and also provides reading in their desired language. This smart assistive device will help visually impaired people gain increased independence and freedom in society. The device has an obstacle detection sensor to notify the visually impaired person of obstacles, and a camera connected to a Raspberry Pi to convert images to text using Optical Character Recognition (OCR). The read data is converted to speech using a text-to-speech synthesizer. This is useful for visually impaired people both for getting around in outdoor environments and for reading books printed in normal script. The read data can be stored in a database for further reading and retrieved by giving a command.
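The store-and-recall step of such a reading device can be sketched with a small SQLite table, since SQLite runs comfortably on a Raspberry Pi. The schema and function names here are assumptions for illustration; the paper does not specify its database.

```python
import sqlite3

def open_store(path=":memory:"):
    """Open the database and create the table that holds read passages."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS readings (id INTEGER PRIMARY KEY, text TEXT)"
    )
    return conn

def save_reading(conn, text):
    """Store text recognized by the OCR stage for later playback."""
    conn.execute("INSERT INTO readings (text) VALUES (?)", (text,))
    conn.commit()

def recall_latest(conn):
    """Retrieve the most recently stored passage on a 'read again' command."""
    row = conn.execute(
        "SELECT text FROM readings ORDER BY id DESC LIMIT 1"
    ).fetchone()
    return row[0] if row else None
```

The recalled text would be handed back to the same text-to-speech synthesizer used for live reading.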


Author(s):  
Mrs. Ritika Dhabliya

Autonomous travel is a notable challenge for visually impaired people, and the increasing availability of cost-efficient, high-performance and portable digital imaging devices has created a tremendous opportunity to improve on conventional scanning for document image acquisition. We propose a camera-based visual assistance framework using a Raspberry Pi for reading text and tracking the movement of objects. Visually impaired people face various challenges in performing their daily tasks; they are totally or partially dependent on someone for help, and this dependence can make them lose hope of living in this competitive society, as they must seek help from others to guide them throughout the day. This paper aims to make the visually impaired person fully independent in all respects. The proposed framework relies on a virtual eye that connects to the outside surroundings through a camera, which acts as a constant source of information to the system. The signals received from the input devices are analyzed using image processing in LabVIEW, and the system responds to the visually impaired person through speech processing units. The processed information about the surroundings is conveyed through a speaker (output unit), by which visually impaired people can move about and do their work easily on their own. The visually impaired person can also automatically control some home appliances, such as a fan, using a wireless communication system.
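The appliance-control feature can be sketched as a mapping from recognized speech to appliance state. The appliance names and command phrasing below are illustrative assumptions; the paper only states that appliances such as a fan are controlled over a wireless link.

```python
# Hypothetical speech-to-appliance dispatcher.
# In the real system the resulting state change would be sent
# over the wireless communication link to the appliance.

APPLIANCE_STATE = {"fan": False}

def apply_command(spoken_text):
    """Toggle an appliance on/off based on a recognized phrase.

    Returns the new state (True = on), or None if no known
    appliance was mentioned.
    """
    text = spoken_text.lower()
    for name in APPLIANCE_STATE:
        if name in text:
            if "on" in text.split():
                APPLIANCE_STATE[name] = True
            elif "off" in text.split():
                APPLIANCE_STATE[name] = False
            return APPLIANCE_STATE[name]
    return None
```

Splitting on whole words when matching "on"/"off" avoids false hits inside longer words (e.g. "front").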

