This dissertation presents a system that assists people with
visual impairments in navigation and mobility. A number of
solutions are currently available, and some of them are described
later in this paper. To date, however, no reliable and
cost-effective solution has been put forward to replace the legacy
devices that visually impaired people currently rely on for daily
mobility.
This report first examines the problem at hand and the
motivation behind addressing it. It then explores relevant current
technologies and research in the assistive-technology industry.
Finally, it proposes a system design and implementation for
assisting visually impaired people. The proposed device is
equipped with hardware including a Raspberry Pi processor, a
camera, a battery, goggles, an earphone, a power bank, and a
connector. Objects are captured by the camera, and image
processing and object detection are performed on the device itself
using deep-learning modules such as an R-CNN. The final output
is delivered to the visually impaired person through the earphone.
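The capture, detect, and announce cycle described above can be sketched as follows. This is a minimal sketch, not the authors' implementation: the camera, the R-CNN detector, and the audio interface are passed in as callables, since the concrete model and hardware APIs are not specified in this abstract, and the `Detection` record and `compose_announcement` helper are illustrative names introduced here.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detection:
    label: str          # object class name, e.g. "chair"
    confidence: float   # detector score in [0, 1]

def compose_announcement(detections: List[Detection],
                         threshold: float = 0.5) -> str:
    """Turn detector output into the sentence spoken through the earphone."""
    labels = [d.label for d in detections if d.confidence >= threshold]
    if not labels:
        return "No objects detected ahead."
    return "Detected: " + ", ".join(labels) + "."

def run_pipeline(capture: Callable[[], bytes],
                 detect: Callable[[bytes], List[Detection]],
                 speak: Callable[[str], None]) -> None:
    """One capture -> detect -> announce cycle of the proposed device."""
    frame = capture()                       # camera frame from the goggles
    detections = detect(frame)              # on-device R-CNN inference
    speak(compose_announcement(detections)) # audio output via earphone
```

Keeping the three stages behind plain callables makes each stage testable in isolation, which matters on a resource-constrained board like the Raspberry Pi.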
This work presents the methodology and solutions for the
problem described above, which can be applied in practical use
cases for visually impaired people. The system proposed in this
project uses a region-based convolutional neural network together
with a Raspberry Pi for processing the image data. It also uses the
Tesseract OCR library through the Python programming language
to recognize text and deliver it to the user. The detailed
methodology and results are elaborated later in this paper.
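As a sketch of the OCR path just mentioned, the snippet below assumes the `pytesseract` Python wrapper around the Tesseract engine (an assumption; the abstract names only the Tesseract library) and adds an illustrative `clean_ocr_text` helper, since raw OCR output is usually tidied before being read aloud:

```python
def clean_ocr_text(raw: str) -> str:
    """Collapse runs of whitespace and drop empty lines from raw OCR
    output so the spoken result stays concise."""
    lines = (" ".join(line.split()) for line in raw.splitlines())
    return " ".join(line for line in lines if line)

def read_text_aloud(image_path: str, speak) -> None:
    """Run OCR on one captured image and speak the cleaned result."""
    # Assumption: pytesseract/PIL are installed; the paper specifies only
    # that Tesseract is invoked from Python.
    import pytesseract
    from PIL import Image
    raw = pytesseract.image_to_string(Image.open(image_path))
    speak(clean_ocr_text(raw) or "No text found.")
```

Cleaning is kept separate from the engine call so it can be tested without an OCR model or camera attached.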