Text recognition and face detection aid for visually impaired person using Raspberry PI

Author(s):  
M. Rajesh ◽  
Bindhu K. Rajan ◽  
Ajay Roy ◽  
K. Almaria Thomas ◽  
Ancy Thomas ◽  
...  
Author(s):  
Mrs. Ritika Dhabliya

Autonomous travel is a significant challenge for visually impaired people, and the growing availability of cost-efficient, high-performance, portable digital imaging devices has created a great opportunity for improving traditional document image acquisition. We propose a camera-based visual assistance system using Raspberry Pi for text reading and object movement detection. Visually impaired people face various challenges in performing their daily tasks and are totally or partially dependent on others for help; their difficulties can cause them to lose hope of living independently in this competitive society, and they must seek guidance from others throughout the day. This paper aims to make the visually impaired person fully independent in all respects. The proposed system is based on a virtual eye that communicates with the external surroundings through a camera, which acts as a constant source of information to the system. The signals received from the input devices are analysed using image processing in LabVIEW, and the system responds to the visually impaired person through speech processing units. The processed information about the surroundings is conveyed through a speaker (output unit), by which visually impaired people can move about and carry out their work easily on their own. In addition, the visually impaired person can automatically control some home appliances, such as a fan, using a wireless communication system.


Author(s):  
Tejal Adep ◽  
Rutuja Nikam ◽  
Sayali Wanewe ◽  
Dr. Ketaki B. Naik

Blind people face problems in daily life. They cannot even walk without an aid and often rely on others for help. Several technologies for the assistance of visually impaired people have been developed. Among the various technologies being utilized to assist the blind, computer vision-based solutions are emerging as one of the most promising options due to their affordability and accessibility. This paper proposes a system for visually impaired people. The proposed system aims to create a wearable visual aid for visually impaired people in which speech commands are accepted from the user. Its functionality addresses the identification of objects and signboards. This will help the visually impaired person to manage day-to-day activities and navigate through his/her surroundings. A Raspberry Pi is used to implement artificial vision using the Python language on the OpenCV platform.
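The speech-command front end described above can be organized as a small dispatch table mapping recognized phrases to handler routines. The following is a minimal sketch of that glue layer only; the command phrases and handler names are illustrative assumptions, not the authors' published code:

```python
# Minimal command-dispatch sketch for a speech-driven visual aid.
# Handler names and command phrases are illustrative assumptions.

def identify_object():
    return "object identification started"

def read_signboard():
    return "signboard reading started"

COMMANDS = {
    "identify object": identify_object,
    "read signboard": read_signboard,
}

def dispatch(utterance: str) -> str:
    """Normalize a recognized utterance and route it to its handler."""
    key = utterance.strip().lower()
    handler = COMMANDS.get(key)
    if handler is None:
        return "command not recognized"
    return handler()
```

For example, `dispatch("Identify Object")` returns "object identification started", while an unknown phrase falls through to the "command not recognized" response that a real device would speak back to the user.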


2019 ◽  
Vol 8 (4) ◽  
pp. 4803-4807

One of the most difficult tasks faced by visually impaired students is the identification of people. Advances in image processing and the development of algorithms such as the face detection and face recognition algorithms motivate the development of devices that can assist the visually impaired. In this research, we present the design and implementation of a facial recognition system for the visually impaired using image processing. The device developed consists of programmed Raspberry Pi hardware. The data is fed into the device in the form of images, which are preprocessed; the captured input image is then processed inside the Raspberry Pi module using the KNN algorithm. The face is recognized and the name is fed into a text-to-speech conversion module, so the visually impaired student can easily recognize the person in front of them using the device. Experimental results show high face detection accuracy and promising face recognition accuracy under suitable conditions. The device is built to improve the cognition, interaction, and communication of visually impaired students in schools and colleges. This system eliminates the need for a bulky computer, since it employs a handy device with high processing power at reduced cost.
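The KNN classification step above can be sketched in plain Python over flattened face-feature vectors. This is a generic k-nearest-neighbours classifier, not the authors' code; the feature extraction that would precede it on the Raspberry Pi is omitted, and the 2-D toy vectors stand in for real face embeddings:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training vectors (Euclidean distance).
    `train` is a list of (feature_vector, label) pairs."""
    dists = [(math.dist(vec, query), label) for vec, label in train]
    dists.sort(key=lambda t: t[0])
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy example: two known faces represented by 2-D feature vectors.
train = [((0.0, 0.0), "alice"), ((0.1, 0.1), "alice"),
         ((1.0, 1.0), "bob"), ((0.9, 1.1), "bob")]
print(knn_predict(train, (0.05, 0.0)))  # → alice
```

The predicted label is what a device like the one described would hand to the text-to-speech module.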


This dissertation presents a system that can assist a person with a visual impairment in both navigation and mobility. A number of solutions are currently available, some of which we describe later in the paper. To date, however, no reliable and cost-effective solution has been put forward to replace the legacy devices that people with a visual impairment currently use for daily mobility. This report first examines the problem at hand and the motivation behind addressing it. It then explores current technologies and research in the assistive-technologies industry. Finally, it proposes a system design and implementation for the assistance of visually impaired people. The proposed device is equipped with hardware including a Raspberry Pi processor, camera, battery, goggles, earphone, power bank, and connector. Objects are captured with the help of the camera; image processing and detection are done on the device itself with the help of deep learning modules such as R-CNN, and the final output is delivered through the earphone into the visually impaired person's ear. The research work contains the methodology and the solutions to the above-mentioned problem, and can be applied in practical use cases for visually impaired persons. The system proposed in this project uses a region-based convolutional neural network together with a Raspberry Pi for processing the image data, and uses the Tesseract library in the Python programming language for OCR to give output to the user. The detailed methodology and results are elaborated later in this paper.
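Region-based detectors of the R-CNN family score many overlapping box proposals and keep only the best box per object via non-maximum suppression. A minimal pure-Python sketch of that standard post-processing step (illustrative of the technique, not the dissertation's code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)          # highest-scoring remaining box
        keep.append(best)
        # drop every box that overlaps the kept one too much
        order = [i for i in order if iou(boxes[best], boxes[i]) < thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # → [0, 2]
```

The two heavily overlapping proposals collapse to the single higher-scoring box, while the distant box survives; the surviving boxes are what the pipeline would announce to the user.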


Author(s):  
Sagor Saha ◽  
Farhan Hossain Shakal ◽  
Mufrath Mahmood

The loss of vision restrains visually impaired people from performing their daily tasks. This issue has impeded their free movement and turned them into dependent persons, and for a long time people in this situation did not see technologies revamping their circumstances. With the advent of computer vision and artificial intelligence, the situation has improved to a great extent. The propounded design is an implementation of a wearable device capable of performing many functions. It is employed to provide visual assistance by recognizing objects and identifying chosen faces. The device runs a pre-trained model to classify common objects ranging from household items to automobiles. Optical character recognition and Google Translate are used to read text from images and to convert the user's speech to text, respectively. Besides this, the user can search for a topic of interest by spoken command. Additionally, ultrasonic sensors fixed at three positions sense obstacles during navigation. An attached display helps communication with deaf persons, and GPS and GSM modules aid in tracing the user. All these features are driven by voice commands passed through the microphone of any earphone. The visual input is received through the camera, and the computation is processed on the Raspberry Pi board. The device proved effective during testing and validation.
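The obstacle sensing described above typically relies on HC-SR04-style ultrasonic sensors, which report a round-trip echo pulse whose duration is converted to distance. A small sketch of that conversion, assuming such a sensor and an illustrative alert threshold (the paper does not specify either):

```python
SPEED_OF_SOUND_CM_S = 34300  # ≈343 m/s in air at room temperature

def pulse_to_distance_cm(echo_duration_s: float) -> float:
    """Convert a round-trip echo pulse duration (seconds) to cm.
    Divide by 2 because the pulse travels out and back."""
    return echo_duration_s * SPEED_OF_SOUND_CM_S / 2

def obstacle_alert(echo_duration_s: float, threshold_cm: float = 100.0) -> bool:
    """True if the echo indicates an obstacle closer than the threshold."""
    return pulse_to_distance_cm(echo_duration_s) < threshold_cm

# A 2 ms round-trip echo corresponds to ~34.3 cm.
print(pulse_to_distance_cm(0.002))  # → 34.3
```

On the actual device the pulse duration would come from timing the sensor's echo pin via the Raspberry Pi's GPIO; with three sensors, the same conversion runs per sensor to cover multiple directions.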


2018 ◽  
Vol 7 (3.1) ◽  
pp. 82
Author(s):  
Jaichandran R ◽  
Somasundaram K ◽  
Bhagyashree Basfore ◽  
Menaka I.S ◽  
Uma S

This paper presents a prototype to help visually impaired persons read printed learning materials using a Raspberry Pi. Tesseract, an open-source optical character recognition engine, is used to extract text from printed images, which is then converted to audio output using text-to-speech conversion software. The prototype is experimentally evaluated on printed text pages with various font sizes and line spacings as test cases. Results show that the prototype performs well in converting printed text to speech; however, image quality, font size, and line spacing affect its performance.
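The OCR-to-speech pipeline above can be sketched as two pluggable stages. In a prototype like this one the OCR stage would be Tesseract (for instance via the pytesseract wrapper) and the speech stage a TTS engine such as espeak; here both are passed in as callables so only the glue logic is shown, with stubs standing in for the real engines:

```python
def read_aloud(image, ocr, speak):
    """Run OCR on `image`, drop empty lines and stray whitespace,
    and hand each remaining line to the speech engine.
    Returns the list of lines that were spoken."""
    text = ocr(image)
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    for line in lines:
        speak(line)
    return lines

# Stub stages standing in for Tesseract and a TTS engine.
spoken = []
result = read_aloud(
    image=None,
    ocr=lambda img: "Hello world\n\n  Chapter 1  \n",
    speak=spoken.append,
)
print(result)  # → ['Hello world', 'Chapter 1']
```

Keeping the stages pluggable makes it easy to test the pipeline off-device and to swap in different OCR or TTS engines on the Raspberry Pi.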


2012 ◽  
Vol 2 (8) ◽  
pp. 181-182
Author(s):  
Tanika Gupta ◽  
Sakshi Jain ◽  
Rajat Bhatia ◽  
Ms. Anuradha

2020 ◽  
Vol 13 (2) ◽  
pp. 55-60
Author(s):  
Fariz Fadhlillah

Trains are among the ideal public transportation options for the visually impaired in daily activities. To be used to the fullest, communicative media are needed to support independent orientation and mobility for the visually impaired in the train station. Such media play a role in helping visually impaired individuals know where they are, where to go, and how to reach their destination. Previous results regarding visually impaired people's ability to identify pictorial forms designed with Primadi Tabrani's ancient visual language semiotic approach show a great opportunity for a pictogram to be the solution. The challenge, however, is how to make the visually impaired person understand, by touch, the meaning description that has been designed into a tactile pictogram. The basic consideration in the design process is the clarity of the visual form when touched, which is influenced by the way the shape is drawn and by the tactile height.


Author(s):  
Deepti Patole ◽  
Sunayana Jadhava ◽  
Khyatee Thakkar ◽  
Saket Gupta ◽  
Shardul Aeer
