Development of a wearable guide device based on convolutional neural network for blind or visually impaired persons

2020 ◽ Vol 79 (39-40) ◽ pp. 29473-29491
Author(s): Yi-Zeng Hsieh, Shih-Syun Lin, Fu-Xiong Xu
Author(s): Melchiezhedhieck J. Bongao, Arvin F. Almadin, Christian L. Falla, Juan Carlo F. Greganda, ...

This study presents a Raspberry Pi single-board-computer-based wearable device for real-time object and text recognition. Built with a convolutional neural network through TensorFlow deep learning, the Python and C++ programming languages, and an SQLite database, the device detects stationary objects, road signs, and Philippine (PHP) money bills, recognizes printed text through a camera, and translates the results into audible English or Filipino output. The system also reports battery status through an Arduino microcontroller unit and provides a switch to select object detection mode, text recognition mode, or battery status report mode. It addresses the difficulty visually impaired persons face in identifying objects and reading printed text, and reduces the assistance they require. Descriptive quantitative research, the Waterfall system development life cycle, and evolutionary prototyping were used as the methodologies of this study. Visually impaired persons and the Persons with Disability Affairs Office of the City Government of Biñan, Laguna, Philippines served as the main respondents of the survey conducted. The results indicate that the object detection, text recognition, and their attributes were accurate and reliable, a significant improvement over existing means of detecting objects and recognizing printed text for visually impaired people.
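The abstract does not include the authors' code; the following is a minimal Python sketch of how such a pipeline could be wired together on a Raspberry Pi: a TensorFlow Lite detection model runs on camera frames and each detected label is spoken aloud. The model file, label map, output-tensor ordering, confidence threshold, and the use of espeak for speech output are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: camera-based object detection with spoken output on a Raspberry Pi.
import subprocess
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

# Load a quantized SSD-style detection model (placeholder file names).
interpreter = Interpreter(model_path="detect.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
_, height, width, _ = input_details[0]["shape"]

# Label map: one class name per line (placeholder file).
with open("labelmap.txt") as f:
    labels = [line.strip() for line in f]

cap = cv2.VideoCapture(0)  # Raspberry Pi camera exposed through V4L2
while True:
    ok, frame = cap.read()
    if not ok:
        break
    resized = cv2.resize(frame, (int(width), int(height)))
    interpreter.set_tensor(input_details[0]["index"],
                           np.expand_dims(resized, axis=0).astype(np.uint8))
    interpreter.invoke()
    # Output ordering below assumes the standard TFLite SSD postprocess:
    # 0 = boxes, 1 = class indices, 2 = scores, 3 = number of detections.
    classes = interpreter.get_tensor(output_details[1]["index"])[0]
    scores = interpreter.get_tensor(output_details[2]["index"])[0]
    for cls, score in zip(classes, scores):
        if score > 0.6:  # assumed confidence threshold
            subprocess.run(["espeak", labels[int(cls)]])  # speak the detected label
cap.release()
```

A real device would add the text recognition mode, Filipino speech output, and the Arduino battery report described in the abstract; the loop above only shows the detection-to-speech path.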


2020
Author(s): S Kashin, D Zavyalov, A Rusakov, V Khryashchev, A Lebedev

2018 ◽ Vol 2018 (9) ◽ pp. 202-1-202-6
Author(s): Edward T. Scott, Sheila S. Hemami

2020 ◽ Vol 2020 (10) ◽ pp. 181-1-181-7
Author(s): Takahiro Kudo, Takanori Fujisawa, Takuro Yamaguchi, Masaaki Ikehara

Image deconvolution has recently become an important issue. It has two kinds of approaches: non-blind and blind. Non-blind deconvolution is the classic image-deblurring problem, which assumes that the PSF is known and spatially uniform. Recently, convolutional neural networks (CNNs) have been used for non-blind deconvolution. Although CNNs can deal with complex changes in unknown images, some conventional CNN-based methods can only handle small PSFs and do not consider the large PSFs that occur in the real world. In this paper we propose a non-blind deconvolution framework based on a CNN that can remove large-scale ringing from a deblurred image. Our method has three key points. The first is that our network architecture preserves both large and small features in the image. The second is that the training dataset is created so as to preserve details. The third is that we extend the images to minimize the effects of large ringing at the image borders. In our experiments, we used three kinds of large PSFs and observed high-precision results from our method both quantitatively and qualitatively.
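As a concrete illustration of the third point (border extension), here is a minimal Python/NumPy sketch, assuming reflection padding by the PSF radius as a stand-in for the paper's exact extension scheme and a grayscale input; the deconvolution step itself (the CNN described in the abstract) is left as a placeholder.

```python
# Hypothetical border-extension sketch: pad before deconvolution, crop after,
# so that boundary ringing falls outside the region that is finally kept.
import numpy as np

def extend_for_deconvolution(blurred, psf):
    """Reflect-pad a grayscale image by the PSF radius on every side (assumed rule)."""
    pad_y, pad_x = psf.shape[0] // 2, psf.shape[1] // 2
    return np.pad(blurred, ((pad_y, pad_y), (pad_x, pad_x)), mode="reflect")

def crop_after_deconvolution(deblurred, psf):
    """Discard the padded margin after deconvolution."""
    pad_y, pad_x = psf.shape[0] // 2, psf.shape[1] // 2
    return deblurred[pad_y:deblurred.shape[0] - pad_y,
                     pad_x:deblurred.shape[1] - pad_x]

# Usage (run_deconvolution is a placeholder for the CNN or any other deblurring step):
# extended = extend_for_deconvolution(blurred, psf)
# restored = crop_after_deconvolution(run_deconvolution(extended, psf), psf)
```

The padded margin grows with the PSF radius, which is why border handling matters most for the large PSFs the paper targets.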

