Indoor Low Cost Assistive Device using 2D SLAM Based on LiDAR for Visually Impaired People

2019 ◽  
Vol 15 (2) ◽  
pp. 115-121
Author(s):  
Heba Hakim ◽  
Ali Marhoon

Many assistive devices have been developed in recent years to solve the problems that visually impaired (VI) people face in their daily movement. Most research addresses the obstacle avoidance or navigation problem, while other work focuses on helping VI persons recognize the objects in their surrounding environment. However, few systems integrate both navigation and recognition capabilities. To meet these needs, this paper presents an assistive device that combines both capabilities to help a VI person (1) navigate safely from his/her current location (pose) to a desired destination in an unknown environment, and (2) recognize his/her surrounding objects. The proposed system consists of low-cost sensors (a Neato XV-11 LiDAR, an ultrasonic sensor, and a Raspberry Pi camera), which are mounted on a white cane. Hector SLAM based on the 2D LiDAR is used to construct a 2D map of the unfamiliar environment, while the A* path-planning algorithm generates an optimal path on the resulting Hector map. Temporary obstacles in front of the VI person are detected by the ultrasonic sensor. A recognition system based on Convolutional Neural Networks (CNNs) is implemented to predict object classes and enhance the navigation system. Interaction between the VI person and the assistive system is performed through an audio module (speech recognition and speech synthesis). The performance of the proposed system has been evaluated in various real-time experiments conducted in indoor scenarios, demonstrating its efficiency.
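The A* step described above can be sketched on a 2D occupancy grid such as the one Hector SLAM produces. The grid encoding (0 = free, 1 = occupied), 4-connectivity, and Manhattan heuristic below are illustrative assumptions, not details taken from the paper:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = obstacle).
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # (f = g + h, g, cell)
    came_from = {}
    g = {start: 0}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:                   # reconstruct path back to start
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cost > g.get(cur, float("inf")):
            continue                      # stale heap entry, skip
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[cur] + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cur
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal unreachable
```

On the cane, the grid would come from the Hector map and the path would be narrated through the audio module.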

2022 ◽  
pp. 240-271
Author(s):  
Dmytro Zubov

Smart assistive devices for blind and visually impaired (B&VI) people are of high interest today, since wearable IoT hardware has become available to a wide range of users. In the first project, a Raspberry Pi 3 B board measures the distance to the nearest obstacle via an HC-SR04 ultrasonic sensor and recognizes human faces using a Pi camera, the OpenCV library, and Adam Geitgey's module. Objects are found by Bluetooth devices of classes 1-3 and iBeacons. Intelligent eHealth agents cooperate with one another in a smart-city mesh network via the MQTT and BLE protocols. The second project helps B&VI people play golf: golf flagsticks carry sound-marking devices with a buzzer, a NodeMcu Lua ESP8266 ESP-12 WiFi board, and WiFi remote control. In the third project, an assistive device supports the orientation of B&VI people by measuring the distance to obstacles via an Arduino Uno and an HC-SR04; the distance is pronounced through headphones. In the fourth project, a soft-/hardware complex uses a Raspberry Pi 3 B and Bytereal iBeacon fingerprinting to uniquely identify the B&VI user's location at industrial facilities.
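The fourth project's iBeacon fingerprinting can be sketched as a nearest-fingerprint lookup over received signal strengths: the current RSSI scan is compared against stored per-location fingerprints. The dictionary layout and Euclidean matching rule are assumptions for illustration; the chapter does not specify the matching algorithm:

```python
import math

def locate(fingerprints, rssi):
    """Pick the stored location whose beacon RSSI fingerprint is closest
    to the current scan.
    fingerprints: {location: {beacon_id: dBm}}; rssi: {beacon_id: dBm}."""
    def dist(location):
        ref = fingerprints[location]
        shared = set(ref) & set(rssi)      # compare only beacons seen in both
        if not shared:
            return float("inf")
        return math.sqrt(sum((ref[b] - rssi[b]) ** 2 for b in shared))
    return min(fingerprints, key=dist)
```

A real deployment would average several scans per beacon before matching, since raw RSSI is noisy.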


Electronics ◽  
2019 ◽  
Vol 8 (6) ◽  
pp. 697 ◽  
Author(s):  
Jinqiang Bai ◽  
Zhaoxiang Liu ◽  
Yimin Lin ◽  
Ye Li ◽  
Shiguo Lian ◽  
...  

Assistive devices for visually impaired people (VIP), which support daily traveling and improve social inclusion, are developing fast. Most of them try to solve the problem of navigation or obstacle avoidance, while other works focus on helping VIP recognize their surrounding objects. However, very few couple both capabilities (i.e., navigation and recognition). Aiming at the above needs, this paper presents a wearable assistive device that allows VIP to (i) navigate safely and quickly in unfamiliar environments, and (ii) recognize objects in both indoor and outdoor environments. The device consists of a consumer Red, Green, Blue and Depth (RGB-D) camera and an Inertial Measurement Unit (IMU), which are mounted on a pair of eyeglasses, and a smartphone. The device leverages ground-height continuity among adjacent image frames to segment the ground accurately and rapidly, and then searches for the moving direction according to the ground. A lightweight Convolutional Neural Network (CNN)-based object recognition system is developed and deployed on the smartphone to increase the perception ability of VIP and support the navigation system. It can provide semantic information about the surroundings, such as the categories, locations, and orientations of objects. Human–machine interaction is performed through an audio module (a beeping sound for obstacle alerts, speech recognition for understanding user commands, and speech synthesis for expressing semantic information about the surroundings). We evaluated the performance of the proposed system in many experiments conducted in both indoor and outdoor scenarios, demonstrating the efficiency and safety of the proposed assistive system.
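One plausible form for the beeping obstacle alert is to map obstacle distance to beep rate, beeping faster as the obstacle gets closer. The thresholds and linear mapping below are purely hypothetical; the paper only states that a beeping sound is used:

```python
def beep_interval_ms(distance_m, min_d=0.3, max_d=3.0):
    """Map obstacle distance to a beep period in milliseconds:
    closer obstacle -> shorter period -> faster beeps.
    The 0.3 m / 3.0 m bounds are illustrative, not from the paper."""
    if distance_m <= min_d:
        return 100                       # continuous rapid alert
    if distance_m >= max_d:
        return 0                         # silent: nothing nearby
    # linear interpolation between 100 ms (near) and 1000 ms (far)
    frac = (distance_m - min_d) / (max_d - min_d)
    return round(100 + frac * 900)
```

The period would drive a timer on the audio module; 0 would be interpreted as "do not beep".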


Author(s):  
Puru Malhotra and Vinay Kumar Saini

The paper is aimed at the design of a mobility assistive device to help the visually impaired. The traditional walking stick has its own drawbacks and limitations. Our research is motivated by the difficulty visually impaired people have in ambulating, and we attempt to restore their independence and spare them the trouble of carrying a stick around. We offer hands-free wearable glasses that find their utility in real-time navigation. The design of the smart glasses integrates various sensors with a Raspberry Pi. The paper presents a detailed account of the various components and the structural design of the glasses. The novelty of our work lies in providing a complete pipeline for real-time analysis of the surroundings, and hence a better solution for navigating day-to-day activities using audio instructions as output.


The majority of blind or visually impaired students in third-world countries still use the mechanical brailler for their education. With advancements in technology and electronic communication, relying on a paper-based brailler is neither efficient nor productive. The "LCE Brailler" is a low-cost electronic brailler whose main features are to vocalize, braille, save, and convert Braille characters typed by a blind student into alphabetical ones, which are then displayed on a computer's monitor. To promote an interactive educational experience among students, teachers, and parents, the proposed brailler has an affordable price with advanced capabilities. The device's design is simple and its keyboard is familiar to the blind user. It is based on Raspberry Pi technology. The LCE device was tested by visually impaired students and proved to provide accurate mechanical functionality, Braille-to-text and text-to-audio assistance, and a user-friendly graphical user interface.
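The Braille-to-alphabet conversion at the heart of such a brailler reduces to a lookup from a six-key chord (the dots pressed simultaneously) to a letter. The `chord_to_char` helper and its table, shown here for a–j only, are illustrative; the paper does not publish its mapping code:

```python
# Grade-1 Braille dot patterns for letters a-j (dots numbered 1-6);
# the remaining letters follow the same decade pattern with dots 3 and 6.
BRAILLE = {
    frozenset({1}): "a",          frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",       frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",       frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g", frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",       frozenset({2, 4, 5}): "j",
}

def chord_to_char(pressed_keys):
    """Translate one simultaneous six-key chord into its letter;
    return '?' for a pattern outside the table."""
    return BRAILLE.get(frozenset(pressed_keys), "?")
```

On the device, the recognized letter would then be sent to both the display and the text-to-audio stage.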


2017 ◽  
Author(s):  
Rohit Takhar ◽  
Tushar Sharma ◽  
Udit Arora ◽  
Sohit Verma

In recent years, with improvements in imaging technology, the quality of small cameras has significantly improved. Coupled with the introduction of credit-card-sized single-board computers such as the Raspberry Pi, it is now possible to integrate a small camera with a wearable computer. This paper aims to develop a low-cost product, using a webcam and a Raspberry Pi, that can assist visually impaired people in detecting and recognizing pedestrian crosswalks and staircases. Two steps are involved in the detection and recognition of these obstacles, i.e., pedestrian crosswalks and staircases. In the detection algorithm, we extract Haar features from the video frames and feed these features to our Haar classifier. In the recognition algorithm, we first convert the RGB image to HSV and apply histogram equalization to make the pixel intensities uniform. This is followed by image segmentation and contour detection. The detected contours are passed through a pre-processor that extracts the regions of interest (ROI). We apply different statistical methods to these ROI to differentiate between staircases and pedestrian crosswalks. The detection and recognition results on our datasets demonstrate the effectiveness of our system.
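The first two recognition steps, RGB-to-HSV conversion and histogram equalization, can be sketched in pure Python (in practice OpenCV would do both per frame). The `rgb_to_hsv8` and `equalize` helpers below are illustrative, not the authors' code:

```python
import colorsys

def rgb_to_hsv8(r, g, b):
    """Convert one 8-bit RGB pixel to (hue in degrees, saturation 0-1,
    value 0-255) using the standard library's colorsys."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h * 360, s, v * 255

def equalize(gray):
    """Histogram-equalize a flat list of 8-bit intensities using the
    classic CDF remapping, so the intensities spread over the full range."""
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    cdf, total = [], 0
    for count in hist:                    # cumulative distribution
        total += count
        cdf.append(total)
    n = len(gray)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:
        return list(gray)                 # constant image: nothing to equalize
    return [round((cdf[v] - cdf_min) * 255 / (n - cdf_min)) for v in gray]
```

With OpenCV, the same steps are `cv2.cvtColor(img, cv2.COLOR_BGR2HSV)` and `cv2.equalizeHist` on the value channel.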


2019 ◽  
Vol 8 (4) ◽  
pp. 4803-4807

One of the most difficult tasks faced by visually impaired students is the identification of people. The rise of image processing and the development of algorithms such as face detection and face recognition motivate the development of devices that can assist the visually impaired. In this research, we present the design and implementation of a facial recognition system for the visually impaired using image processing. The device consists of programmed Raspberry Pi hardware. Data is fed into the device in the form of images. The images are preprocessed, and the captured input image is then processed inside the Raspberry Pi module using the KNN algorithm. The face is recognized, and the name is fed into a text-to-speech conversion module, so the visually impaired student can easily recognize the person in front of him or her. Experimental results show high face detection accuracy and promising face recognition accuracy under suitable conditions. The device is built to improve the cognition, interaction, and communication of visually impaired students in schools and colleges. The system eliminates the need for a bulky computer, since it employs a handy device with high processing power and reduced cost.
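The KNN classification step can be sketched as a majority vote over the nearest stored face feature vectors. The `knn_predict` helper, its Euclidean metric, and k=3 are assumptions for illustration; the paper does not state its parameters:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify a query feature vector by majority vote of its k nearest
    training samples. train: list of (vector, label) pairs."""
    dists = sorted(
        (math.dist(vec, query), label) for vec, label in train
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]     # label with the most votes
```

On the device, `train` would hold one embedding per stored photo and the winning label would be passed to the text-to-speech module.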


Author(s):  
Ramiz Salama ◽  
Ahmad Ayoub

Nowadays, blind or visually impaired people face many problems in their daily lives, since it is not easy for them to move about, which is very dangerous. According to the World Health Organization, there are about 37 million visually impaired people across the globe. People with these problems mostly depend on others, for example a friend or a trained dog, while moving outside. This motivated us to develop a smart stick to solve the problem. The smart stick, integrated with an ultrasonic sensor, buzzer, and vibrator, can detect obstacles in the path of a blind person. The buzzer and vibration motor are activated when any obstacle is detected, to alert the blind person. This work proposes a low-cost ultrasonic smart blind stick so that blind people can move from one place to another in an easy, safe, and independent manner. The system was designed and programmed using the C language. Keywords: Arduino Uno, Arduino IDE, ultrasonic sensor, buzzer, motor.
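The stick's sensing loop reduces to two small calculations: converting the ultrasonic echo pulse width to a distance, and deciding whether to fire the buzzer and vibrator. The abstract's firmware is written in C; this Python sketch and its 100 cm threshold are illustrative only:

```python
def echo_to_cm(pulse_s, speed_cm_s=34300):
    """Convert an HC-SR04-style echo pulse width (seconds) to distance (cm).
    The pulse times the round trip, so the travel distance is halved."""
    return pulse_s * speed_cm_s / 2

def alert(distance_cm, threshold_cm=100):
    """Return True when the buzzer and vibration motor should fire.
    The 100 cm threshold is an assumption, not stated in the paper."""
    return 0 < distance_cm < threshold_cm
```

On the Arduino, the same arithmetic runs on the microseconds returned by `pulseIn()` on the echo pin.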


2019 ◽  
Vol 8 (4) ◽  
pp. 1436-1440

There is increasing demand for smart widgets that make people's lives more comfortable. Although much research has been done, current devices/systems for visually impaired people do not provide them with sufficient facilities. Visually impaired people can read only Braille-scripted books, so a new device is developed here that assists them and also provides reading in the desired language. This smart assistive device will help visually impaired people gain increased independence and freedom in society. It has an obstacle-detection sensor to announce obstacles to the visually impaired person, and a camera connected to a Raspberry Pi to convert images to text using Optical Character Recognition (OCR). The read data is converted to speech using a text-to-speech synthesizer. This is useful for visually impaired people both for getting about in outdoor environments and for reading books printed in normal script. The read data can be stored in a database for later reading and retrieved by giving a command.
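The store-and-retrieve feature can be sketched with an SQLite table holding recognized page text. The schema and helper names below are assumptions, since the paper does not describe its database:

```python
import sqlite3

def make_store():
    """Open an in-memory store for OCR-recognized page text.
    On the device this would be a file on the SD card instead."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE pages (id INTEGER PRIMARY KEY, text TEXT)")
    return db

def save_page(db, text):
    """Persist one recognized page of text."""
    db.execute("INSERT INTO pages (text) VALUES (?)", (text,))
    db.commit()

def read_back(db, page_id):
    """Fetch a stored page by id for re-reading, or None if absent."""
    row = db.execute(
        "SELECT text FROM pages WHERE id = ?", (page_id,)
    ).fetchone()
    return row[0] if row else None
```

The retrieved string would be handed to the text-to-speech synthesizer on a voice command.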


2021 ◽  
pp. 457-467
Author(s):  
Shaik Asif Hussain ◽  
◽  
Shaik Javeed Hussain ◽  
Raza Hasan ◽  
Salman Mahmood

Though the traditional method of teaching Braille script to the blind is simple, it has some potential drawbacks. Handling the marbles and the slate for the first time makes learning very difficult. In most cases the teacher is also blind, so for each representation the teacher must reach each student's slate and change the arrangement of the marbles, which is a hard and time-consuming job. This project focuses on the design and development of an embedded-system-based electronic assistive device that eases the problem of teaching visually challenged beginners. The project is implemented using an ordinary Braille slate with IR sensors and a Raspberry Pi 2 Model B board, which is cost-effective and simple. The software is implemented in Simulink of MATLAB R2020. The placing of the marbles in the slate is sensed by IR proximity sensors. If the combination of marbles placed is correct, the Raspberry Pi's text-to-speech converter produces the audio output of the corresponding letter. This method provides an easy way of teaching Braille script with less effort for the teacher.
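The slate-checking logic reduces to comparing the IR-sensed marble positions against the expected Braille pattern for the letter being taught. The `check_slate` helper and its three-letter table are illustrative assumptions, not the project's Simulink model:

```python
# Expected grade-1 Braille patterns (dots numbered 1-6);
# only the first three letters are shown for brevity.
EXPECTED = {
    "a": {1},
    "b": {1, 2},
    "c": {1, 4},
}

def check_slate(letter, sensed_dots):
    """Compare the IR-sensed marble positions with the expected Braille
    pattern and return the phrase the text-to-speech stage would speak."""
    if set(sensed_dots) == EXPECTED[letter]:
        return f"correct: {letter}"
    return "try again"
```

Each IR proximity sensor contributes one dot number to `sensed_dots` when a marble sits in front of it.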

