Image Processing Strategies Based on Deep Neural Network for Simulated Prosthetic Vision

Author(s):  
Ying Zhao ◽  
Qi Li ◽  
Donghui Wang ◽  
Aiping Yu

2021 ◽  
Vol 2074 (1) ◽  
pp. 012083

Author(s):  
Xiangli Lin

Abstract With the vigorous development of electronic and computer technology, as well as continuing advances in neurophysiology, bionics and medicine, the artificial visual prosthesis has brought the blind hope of restoring their vision. Visual prosthesis research has confirmed that prosthetic vision can restore part of the visual function of patients with non-congenital blindness, but the mechanism of early prosthetic image processing still needs to be clarified through neurophysiological research. The purpose of this article is to study neurophysiology based on deep neural networks under simulated prosthetic vision. The article uses neurophysiological experiments and mathematical statistics to study simulated prosthetic vision, and to test and improve the image processing strategies used in prosthetic vision design. Based on low-pixel image recognition with a simulated irregular phosphene point array, a deep neural network is applied in the image processing strategy for prosthetic vision, and the effect of each image processing method on object recognition is evaluated by recognition rate. The experimental results show that the two proposed methods, low-pixel segmentation and low-pixel background reduction, achieve a recognition rate of about 70% under simulated prosthetic vision, which significantly improves object recognition and thereby the overall recognition ability for visual guidance.
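The abstract describes two preprocessing ideas: rendering the scene on a coarse "low-pixel" phosphene grid, and suppressing the background so the object stands out. A minimal sketch of both steps is below; the grid size, threshold, and function names are illustrative assumptions, not taken from the paper, and a real phosphene simulation would also model dot shape and irregular electrode placement.

```python
import numpy as np

def simulate_phosphene_view(image, grid=(32, 32)):
    """Block-average a grayscale image down to a coarse grid,
    approximating the low-pixel view of a simulated prosthesis.
    (Illustrative: real simulations render irregular phosphene dots.)"""
    gh, gw = grid
    h, w = image.shape
    # Crop so the image divides evenly into grid blocks.
    image = image[: h - h % gh, : w - w % gw]
    bh, bw = image.shape[0] // gh, image.shape[1] // gw
    blocks = image.reshape(gh, bh, gw, bw)
    return blocks.mean(axis=(1, 3))

def background_reduction(low_pixel, threshold=0.5):
    """Suppress dim background pixels, keeping the brighter object.
    (Threshold value is an assumption for this sketch.)"""
    return np.where(low_pixel >= threshold, low_pixel, 0.0)

# Example: a bright square object on a dark background.
img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0
view = simulate_phosphene_view(img, grid=(32, 32))
reduced = background_reduction(view, threshold=0.5)
```

In this toy case the 128x128 image collapses to a 32x32 phosphene map, and thresholding zeroes the dark background while the square survives intact.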


2021 ◽  
Vol 28 ◽  
pp. 344-348
Author(s):  
Guanzhong Tian ◽  
Jun Chen ◽  
Xianfang Zeng ◽  
Yong Liu

Author(s):  
Abhishek Das ◽  
Mihir Narayan Mohanty

In this chapter, the authors review optical character recognition (OCR). The study covers both typed and handwritten character recognition, and examines online and offline recognition, the two modes of data acquisition in OCR. As deep learning is the emerging machine learning method in image processing, the authors describe the method and its application in earlier works. Building on the recurrent neural network (RNN), a special class of deep neural network is proposed for recognition. Further, a convolutional neural network (CNN) is combined with the RNN to evaluate its performance. For this work, Odia numerals and characters are taken as input and are recognized well. The efficacy of the proposed method is reported in the results section.
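The CNN+RNN combination the chapter describes can be sketched as a pipeline: a convolutional layer extracts a feature map from the character image, and an RNN scans the feature columns left to right, with its final state fed to a classifier. The tiny numpy forward pass below illustrates that data flow only; the layer sizes, random weights, and 10-class head (e.g. for numerals) are assumptions for the sketch, not the chapter's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(image, kernel):
    """Naive valid 2-D cross-correlation: the CNN feature-extraction step."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def rnn_over_columns(features, Wx, Wh, b):
    """Simple tanh RNN scanning feature-map columns left to right,
    as in CNN+RNN character recognizers; returns the final hidden state."""
    hidden = np.zeros(Wh.shape[0])
    for t in range(features.shape[1]):
        hidden = np.tanh(Wx @ features[:, t] + Wh @ hidden + b)
    return hidden

# Toy 12x12 "character" image and random (untrained) weights.
img = rng.random((12, 12))
kernel = rng.standard_normal((3, 3)) * 0.1
feat = np.maximum(conv2d_valid(img, kernel), 0.0)   # ReLU feature map, 10x10
Wx = rng.standard_normal((8, feat.shape[0])) * 0.1  # input-to-hidden
Wh = rng.standard_normal((8, 8)) * 0.1              # hidden-to-hidden
b = np.zeros(8)
state = rnn_over_columns(feat, Wx, Wh, b)
logits = rng.standard_normal((10, 8)) @ state       # hypothetical 10-class head
pred = int(np.argmax(logits))
```

A trained model would learn the kernel, RNN weights, and classifier jointly by backpropagation; this sketch only shows how the CNN output becomes a sequence for the RNN.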


Author(s):  
Xiaoli Sun ◽  
Yang Hai ◽  
Xiujun Zhang ◽  
Chen Xu ◽  
Min Li

Defocus blur detection aims at separating in-focus regions from out-of-focus ones for image processing. With today’s popularity of mobile phones with portrait mode, accurate defocus blur detection has received more and more attention. Several challenges remain, such as the blurred boundaries of defocus regions, interference from cluttered backgrounds, and the identification of large flat regions. To address these issues, in this paper, we propose a new deep neural network with both global and local pathways for defocus blur detection. In the global pathway, we locate in-focus objects by semantic search. In the local pathway, we refine the predicted blur regions via multi-scale supervision. The refined results of the local pathway are then fused with the search results of the global pathway by a simple concatenation operation. The structure of the new network is simple, and it proves effective and efficient, making it suitable for deployment on mobile devices: it takes about 0.2 s per image on a regular personal laptop. Experiments on both the CUHK dataset and our newly proposed Defocus400 dataset show that our model outperforms existing state-of-the-art methods.
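To make the task concrete, a classical (non-deep) baseline for defocus maps is the per-region variance of the image Laplacian: focused texture produces strong second derivatives, while defocused regions are smooth. The sketch below is that baseline only, not the paper's network; the window size and threshold are assumptions for illustration.

```python
import numpy as np

def laplacian(image):
    """Discrete Laplacian via the standard 4-neighbour stencil (edge-padded)."""
    p = np.pad(image, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1]
            + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * image)

def sharpness_map(image, win=8):
    """Per-block variance of the Laplacian: high in focused regions,
    near zero in defocused (smooth) ones."""
    lap = laplacian(image)
    h, w = lap.shape
    lap = lap[: h - h % win, : w - w % win]   # crop to whole blocks
    hh, ww = lap.shape
    blocks = lap.reshape(hh // win, win, ww // win, win)
    return blocks.var(axis=(1, 3))

# Synthetic example: sharp noise texture on the left half ("in focus"),
# a smooth intensity ramp on the right half ("defocused").
rng = np.random.default_rng(1)
img = np.zeros((64, 64))
img[:, :32] = rng.random((64, 32))
img[:, 32:] = np.linspace(0.0, 1.0, 32)[None, :]
smap = sharpness_map(img, win=8)
mask = smap > smap.mean()   # crude in-focus mask
```

Such hand-crafted sharpness measures fail exactly where the abstract says: large flat regions that are in focus still score low, which is one motivation for learned global/local pathways.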

