Assistive guidance system for the visually impaired

Author(s):  
Rohit Takhar ◽  
Tushar Sharma ◽  
Udit Arora ◽  
Sohit Verma

In recent years, advances in imaging technology have significantly improved the quality of small cameras. Coupled with the introduction of credit-card-sized single-board computers such as the Raspberry Pi, it is now possible to integrate a small camera into a wearable computer. This paper aims to develop a low-cost product, using a webcam and a Raspberry Pi, that can assist visually impaired people in detecting and recognising pedestrian crosswalks and staircases. Detection and recognition of these obstacles, i.e., pedestrian crosswalks and staircases, involve two steps. In the detection step, we extract Haar features from the video frames and pass them to a Haar classifier. In the recognition step, we first convert the RGB image to HSV and apply histogram equalization to make the pixel intensities uniform. This is followed by image segmentation and contour detection. The detected contours are passed through a pre-processor that extracts regions of interest (ROI). We apply different statistical methods to these ROI to differentiate between staircases and pedestrian crosswalks. Detection and recognition results on our datasets demonstrate the effectiveness of the system.
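The histogram-equalization step mentioned above can be sketched as follows. This is an illustrative pure-Python version of the classic algorithm (not the authors' code, which would typically use an image library), applied to a flat list of 8-bit intensities such as the V channel after the RGB-to-HSV conversion:

```python
def equalize_histogram(pixels, levels=256):
    """Map each intensity through the normalized CDF of the histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function of the intensities.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:          # all pixels share one intensity: nothing to spread
        return list(pixels)
    # Classic formula: round((cdf(p) - cdf_min) / (n - cdf_min) * (L - 1))
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]
```

For example, `equalize_histogram([10, 10, 10, 200])` stretches the two occupied intensity levels to the full `[0, 255]` range, which is what makes the subsequent segmentation less sensitive to lighting.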

2017 ◽  


2020 ◽  
Vol 24 (03) ◽  
pp. 515-520
Author(s):  
Vattumilli Komal Venugopal ◽  
Alampally Naveen ◽  
Rajkumar R ◽  
Govinda K ◽  
Jolly Masih

Author(s):  
Gautham G ◽  
Deepika Venkatesh ◽  
A. Kalaiselvi

In recent years, with traffic density increasing every year, maintaining lane position and speed on the road has become a hassle for drivers in metropolitan cities. Drivers waste time and effort idling their cars in traffic and are easily frustrated trying to hold their lane amid the chaos. A Transportation Institute study found that the odds of a crash (or near-crash) more than doubled when the driver took his or her eyes off the road for more than two seconds, and failure to follow lane paths causes about 23% of accidents. In the worst case, fuel economy drops, wasting fuel and increasing pollution by about 28% to 36% per vehicle annually. Owing to this problem, we propose a method by which lane detection can be made affordable and applicable to existing automobiles. The proposed lane-detection prototype is a temporary autonomous bot built around a Raspberry Pi processor loaded with the lane-detection algorithm. The bot captures live video, which is then processed by the algorithm. The preliminary setup is designed to be easily implemented and accessible at low cost with good efficiency, offering a promising impact on future automobiles.
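A minimal sketch of the decision step that would follow such lane detection on the bot: given the x-positions (in pixels) of the detected left and right lane lines in a frame, compute the offset from the lane centre and pick a steering correction. The function name, sign convention and dead-band threshold are our own illustrative assumptions, not the paper's algorithm:

```python
def steering_command(left_x, right_x, frame_width, dead_band=20):
    """Return 'left', 'right', or 'straight' from lane-line x-positions."""
    lane_centre = (left_x + right_x) / 2
    frame_centre = frame_width / 2
    offset = frame_centre - lane_centre  # positive: bot drifted right of centre
    if offset > dead_band:
        return "left"    # steer back toward the lane centre
    if offset < -dead_band:
        return "right"
    return "straight"
```

The dead band keeps the bot from oscillating on small, noisy offsets between frames.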


2019 ◽  
Vol 9 (3) ◽  
pp. 224 ◽  
Author(s):  
Dimitrios Loukatos ◽  
Konstantinos G. Arvanitis

Inspired by the boom in the mobile phone market, several low-cost credit-card-sized computers have appeared that can support educational applications with artificial intelligence features for students at various levels. This paper describes the learning experience and highlights the technologies used to improve the function of DIY robots; it also reports the students' perceptions of this experience. The students participating in this problem-based learning activity, despite a weak programming background and a confined time schedule, sought efficient ways to improve the DIY robotic vehicle construction and interact with it more effectively. The scenarios investigated, mainly via smartphones or tablets, ranged from touch-button input to gesture and voice recognition methods exploiting modern AI techniques. The robotic platform used generic hardware, namely Arduino and Raspberry Pi units, and incorporated basic automatic control functionality. Several programming environments, from MIT App Inventor to C and Python, were used. Apart from cloud-based methods for voice recognition, locally running software alternatives were assessed to provide better autonomy. Typically, scenarios were performed over Wi-Fi interfaces, while the functionality was extended with LoRa interfaces to increase the robot's control range. Through experimentation, students were able to apply cutting-edge technologies to construct, integrate, evaluate and improve interaction with custom robotic vehicle solutions. The activity involved technologies similar to those emerging in the modern agriculture era, with which students, as future professionals, need to be familiar.
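Once voice or gesture recognition (cloud-based or local) has produced a text command, such a robot still needs a dispatch layer that maps utterances to motor actions. A hedged sketch of that glue code, with command names and wheel-direction encoding as illustrative assumptions rather than the authors' implementation:

```python
# (left wheel, right wheel) direction per command: 1 = forward, -1 = reverse.
MOTOR_ACTIONS = {
    "forward": (1, 1),
    "back":    (-1, -1),
    "left":    (-1, 1),   # spin in place toward the left
    "right":   (1, -1),
    "stop":    (0, 0),
}

def dispatch(recognized_text):
    """Map a recognized utterance to a wheel-direction pair; unknown -> stop."""
    word = recognized_text.strip().lower()
    return MOTOR_ACTIONS.get(word, MOTOR_ACTIONS["stop"])
```

Defaulting unknown utterances to `stop` is a deliberately conservative choice for a physical robot: a misrecognized word should halt the vehicle rather than move it.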


The majority of blind or visually impaired students in third-world countries still use the mechanical brailler for their education. With advances in technology and electronic communication, relying on a paper-based brailler is neither efficient nor productive. The "LCE Brailler" is a low-cost electronic brailler whose main features are to vocalize, braille, save and convert Braille characters typed by a blind student into alphabetical ones, which are then displayed on a computer's monitor. To promote an interactive educational experience among students, teachers and parents, the proposed brailler combines an affordable price with advanced capabilities. The device's design is simple and its keyboard is familiar to the blind user. It is based on Raspberry Pi technology. The LCE device was tested by visually impaired students and proved to provide accurate mechanical functionality, braille-to-text conversion and text-to-audio blind assistance with a user-friendly graphical user interface.
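The braille-to-text conversion at the heart of such a device can be sketched as a lookup from a six-key chord to a letter. The dot patterns below are standard Grade-1 braille (only a few cells shown); the function and table names are our own illustrative choices, not the LCE Brailler's actual code:

```python
# Standard Grade-1 braille: each letter is a set of raised dots numbered 1-6.
BRAILLE_TO_CHAR = {
    frozenset({1}):       "a",
    frozenset({1, 2}):    "b",
    frozenset({1, 4}):    "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}):    "e",
}

def decode_cell(pressed_dots):
    """Return the letter for a chord of pressed dot keys, or '?' if unknown."""
    return BRAILLE_TO_CHAR.get(frozenset(pressed_dots), "?")
```

The decoded character can then be appended to the display buffer and handed to a text-to-speech engine for vocalization.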


Author(s):  
Maoyue Li ◽  
Yonghao Xu ◽  
Zengtao Chen ◽  
Kangsheng Ma ◽  
Lifei Liu

To address shortcomings of existing non-contact two-dimensional high-precision measurement methods, this paper proposes a two-dimensional high-precision non-contact automatic measurement method based on image corner coordinates. First, the paper presents a simple image acquisition device and explains the advantages of the Canny operator used in the image contour detection algorithm. It then proposes a dimension calibration algorithm based on image corner coordinates, which converts pixel sizes to actual sizes through hierarchical, multi-step processing of the image. Finally, to realize intelligent positioning and selection of standard-size workpieces, an automatic measurement and positioning system is designed that converts the actual size signal into a pulse-time control signal. The experimental results show that the proposed measurement method offers fast measurement speed, high robustness, low cost and a high degree of automation. Using black-and-white checkerboard paper with an accuracy of 0.1 mm, the measurement accuracy reaches the micron level.
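The core calibration idea, converting pixel size to actual size from checkerboard corners, reduces to computing a pixels-per-millimetre scale from corners a known distance apart. A minimal sketch under our own assumptions (collinear corner x-coordinates already extracted; the paper's corner detection and hierarchical processing are omitted):

```python
def pixels_per_mm(corner_xs, square_size_mm):
    """Mean spacing of collinear corner coordinates -> scale (px per mm)."""
    gaps = [b - a for a, b in zip(corner_xs, corner_xs[1:])]
    return (sum(gaps) / len(gaps)) / square_size_mm

def pixel_length_to_mm(length_px, scale_px_per_mm):
    """Convert a measured pixel length to a physical length in mm."""
    return length_px / scale_px_per_mm
```

Averaging over many corner gaps is what makes the scale factor robust to localization noise at any single corner.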


Author(s):  
Raghad Raied Mahmood et al.

It is relatively simple for a sighted person to interpret and recognize any banknote, but money recognition, especially of paper currency, is one of the major problems for visually impaired people. Since money plays such an important role in everyday life and is required for every business transaction, real-time detection and recognition of banknotes is a necessity for blind or visually impaired people. For that purpose, we propose a real-time object detection system to help visually impaired people in their daily business transactions. Images of Iraqi banknotes are first collected under different conditions and then augmented with different geometric transformations to make the system robust. The augmented images are annotated manually using the "LabelImg" program, from which training and validation image sets are prepared. We use the YOLOv3 real-time object detection algorithm, trained on the custom Iraqi banknote dataset, to detect and recognize banknotes. The label of each banknote is identified and converted into audio using Google Text-to-Speech (gTTS), which is the expected output. The performance of the trained model is evaluated on a test dataset and on real-time live video. The test results demonstrate that the proposed method can detect and recognize Iraqi paper money with a high mAP of 97.405% in a short time.
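YOLO-style detectors such as YOLOv3 emit many overlapping candidate boxes per object, so their post-processing relies on intersection-over-union (IoU) and non-maximum suppression so that one banknote yields one label to speak aloud. An illustrative pure-Python sketch of that standard post-processing (not the paper's specific code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(detections, iou_threshold=0.5):
    """detections: list of (score, box). Greedily keep the highest-scoring
    box and drop any later box overlapping a kept one too much."""
    kept = []
    for score, box in sorted(detections, reverse=True):
        if all(iou(box, k[1]) < iou_threshold for k in kept):
            kept.append((score, box))
    return kept
```

The surviving boxes' class labels are what would then be handed to the text-to-speech stage.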


2021 ◽  
Vol 7 ◽  
pp. e402
Author(s):  
Zaid Saeb Sabri ◽  
Zhiyong Li

Smart surveillance systems are used to monitor specific areas, such as homes, buildings, and borders, and can effectively detect threats. In this work, we investigate the design of low-cost multiunit surveillance systems that control numerous surveillance cameras to track multiple objects (i.e., people, cars, and guns) and promptly detect human activity in real time using low-computational systems, such as compact or single-board computers. Deep learning techniques are employed to detect certain objects for surveilling homes/buildings and to recognize suspicious and vital events, so that the system can alert officers to relevant events such as stranger intrusions, the presence of guns, suspicious movements, and identified fugitives. The proposed model is tested on two computational systems: a single-board computer (Raspberry Pi) running the Raspbian OS and a compact computer (Intel NUC) running the Windows OS. Both systems employ a camera to stream real-time video and an ultrasonic sensor to alert personnel of threats when movement is detected in restricted areas or near walls. The system program is coded in Python, and a convolutional neural network (CNN) performs recognition. The program is optimized with a foreground object detection algorithm to improve recognition in both accuracy and speed. A saliency algorithm is used to slice required objects, such as humans, cars, and airplanes, from scenes. In this regard, two saliency algorithms, based on local and global patch saliency detection, are considered; we develop a system that combines the two approaches and recognizes the features they extract with the convolutional neural network. The field results demonstrate a significant improvement in detection, ranging between 34% and 99.9% for different situations. The low percentages correspond to unclear objects or activities different from those involving humans. Even in cases of low detection accuracy, however, recognition and threat identification are performed with an accuracy of 100% in approximately 0.7 s, even on computer systems with relatively weak hardware specifications, such as a single-board computer (Raspberry Pi). These results show that the proposed system can practically serve as a low-cost, intelligent security and tracking system.


2020 ◽  
Vol 27 (1) ◽  
Author(s):  
JS Igwe ◽  
C Chukwuemeka ◽  
C Ituma ◽  
NH Ogbu

Guiding visually impaired persons is always very tasking. Smart canes previously designed for the blind are relatively costly, and most available smart cane systems use only remote-control and buzzing methods for locating a misplaced cane; no research work has addressed how to locate the cane should the remote control itself be misplaced. This work designed an Indigenous Smart Cane System for the Visually Impaired with Sound-Clap Location Ability. It is relatively less costly, easy to learn, and can be located by natural means if misplaced. The system uses one ATMEGA328P microcontroller programmed to control both input and output signals. It was designed with three low-cost HC-SR04 ultrasonic sensors of 0.3 cm resolution to detect obstacles within a range of 2 cm to 400 cm in the front, left and right directions of the user, sending the current obstacle-distance signal to the controller for processing. On receiving this signal, the controller determines which of the output devices (piezo speaker, earphone or vibrator motor) to use and communicates the object distance to the user. If the cane is misplaced, the user makes a sound clap, which triggers the sound sensor to signal the controller for audio feedback from the piezo speaker. The system was programmed with a very simple object-detection algorithm to help the user learn the cane easily. The system has been tested and found to be relatively less costly and easy to learn, and it can be located with a clap sound. Keywords: Sound-Clap Location Ability; Visually Impaired; Indigenous Smart Cane; Sensors; Audio Feedback; Object Distance; Microcontroller.
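The distance computation behind the HC-SR04 sensors used above is simple enough to sketch: the sensor's echo pulse width is proportional to the sound wave's round trip, so distance is pulse time times the speed of sound, halved. Pin handling and triggering are omitted; this is an illustrative calculation, not the cane's firmware:

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # speed of sound in air at about 20 deg C

def echo_to_distance_cm(pulse_width_us):
    """Convert an HC-SR04 echo pulse width (microseconds) to distance (cm).
    Halved because the pulse covers the round trip to the obstacle and back."""
    return pulse_width_us * SPEED_OF_SOUND_CM_PER_US / 2

def in_range(distance_cm, min_cm=2, max_cm=400):
    """HC-SR04 readings are only trustworthy within its rated 2-400 cm window."""
    return min_cm <= distance_cm <= max_cm
```

Readings outside the rated window would be discarded before the controller picks an output device to announce the distance.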


2020 ◽  
Vol 17 (4) ◽  
pp. 1863-1866
Author(s):  
M. C. Jobin Christ ◽  
A. Lakshmi Narayanan ◽  
S. Jothiraj

This paper explains an IoT-controlled remote monitoring system implemented on a Raspberry Pi, a credit-card-sized single-board computer with an ARM11 microprocessor. The system continuously monitors the electrocardiogram (ECG) and other vital parameters. The measured data are stored in a database and can be displayed on a website accessible only to authorized personnel such as physicians and caretakers. The primary objective of this setup is to update the data in the database and alert physicians to any aberrancy. The MySQLdb module links the Raspberry Pi to the database, and alert messages are sent by the combination of the Raspberry Pi and a GSM module. The system has considerable future scope, as the monitored data are very valuable and can be used for scientific research by the medical community: by determining patterns in the observed parameters, the nature of arrhythmias can be predicted. The paper mainly focuses on the system design and the algorithm used to accomplish the task.
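A hypothetical sketch of the alerting rule such a monitor might run before pushing data to the database: flag any vital sign outside a configured normal range and build the message text the GSM module would send. The thresholds and field names below are our own assumptions for illustration, not values from the paper:

```python
# Assumed normal ranges; a real deployment would make these per-patient.
NORMAL_RANGES = {"heart_rate_bpm": (60, 100), "spo2_pct": (95, 100)}

def check_vitals(sample):
    """Return human-readable alert strings for any out-of-range vitals."""
    alerts = []
    for name, (low, high) in NORMAL_RANGES.items():
        value = sample.get(name)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{name}={value} outside [{low}, {high}]")
    return alerts
```

Each returned string could be sent as an SMS via the GSM module while the raw sample is inserted into the MySQL database regardless, so researchers keep the full record.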

