Use of Deep Neural Network for Optical Character Recognition

Author(s):  
Abhishek Das ◽  
Mihir Narayan Mohanty

In this chapter, the authors present a detailed review of optical character recognition. Various methods are used in this field, with different levels of accuracy, yet recognizing handwritten characters remains difficult because individuals write differently, even within a single language. A comparative study is given to distinguish the different types of optical character recognition and the methods used in each type. Neural networks, in one form or another, appear in most of the surveyed works, and recent research combines OCR with CNNs, RNNs, and hybrid CNN-RNN architectures.

Author(s):  
Abhishek Das ◽  
Mihir Narayan Mohanty

In this chapter, the authors review optical character recognition, covering both typed and handwritten character recognition. Online and offline character recognition, the two modes of data acquisition in OCR, are also studied. As deep learning is the emerging machine learning approach in image processing, the authors describe the method and its application in earlier works. Building on the study of the recurrent neural network (RNN), a special class of deep neural network is proposed for recognition. Further, a convolutional neural network (CNN) is combined with the RNN to assess its performance. Odia numerals and characters are taken as input and are recognized well. The efficacy of the proposed method is presented in the results section.
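
The CNN-plus-RNN combination described above can be illustrated with a short sketch. The following is a minimal, hypothetical Keras model, not the authors' exact architecture: the 32x32 grayscale input, the layer sizes, and the ten-class output (e.g. the ten Odia numerals) are all illustrative assumptions. A convolutional stack extracts features, whose rows are then fed as a sequence to a bidirectional LSTM before classification.

```python
# Minimal sketch of a CNN + RNN hybrid classifier for isolated character images.
# Input shape, layer sizes and class count are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

num_classes = 10                              # e.g. the ten Odia numerals (assumption)
inputs = layers.Input(shape=(32, 32, 1))      # 32x32 grayscale character image

# Convolutional feature extractor
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.MaxPooling2D(2)(x)                 # feature map of shape (8, 8, 64)

# Treat each row of the feature map as one time step for the recurrent part
x = layers.Reshape((8, 8 * 64))(x)
x = layers.Bidirectional(layers.LSTM(64))(x)

outputs = layers.Dense(num_classes, activation="softmax")(x)
model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```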


2016 ◽  
Vol 7 (4) ◽  
pp. 77-93 ◽  
Author(s):  
K.G. Srinivasa ◽  
B.J. Sowmya ◽  
D. Pradeep Kumar ◽  
Chetan Shetty

Vast reserves of information are found in ancient texts, scripts, stone tablets, etc. However, because new physical copies of such texts are difficult to create, the knowledge they hold is limited to the few who have access to them. With the advent of Optical Character Recognition (OCR), efforts have been made to digitize this information, increasing its availability by making it easier to share, search, and edit. Many documents, however, are held back because they are damaged, which raises the interesting problem of removing noise from such documents so that OCR can be applied more easily. Here the authors aim to develop a model that denoises images of such documents while retaining the text. The primary goal of their project is to ease document digitization. They study the effects of combining image processing techniques with neural networks: thresholding, filtering, edge detection, morphological operations, and similar techniques are applied to pre-process the images and yield higher accuracy from the neural network models.
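
As a rough illustration of such a pre-processing stage, the sketch below chains a median filter, adaptive thresholding, and a morphological opening with OpenCV. The file names and parameter values are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of an image pre-processing pipeline for noisy scanned documents:
# median filtering, adaptive thresholding and a morphological opening before OCR.
import cv2
import numpy as np

img = cv2.imread("scanned_page.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input file

denoised = cv2.medianBlur(img, 3)                             # suppress salt-and-pepper noise
binary = cv2.adaptiveThreshold(denoised, 255,
                               cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 31, 15)     # local thresholding
kernel = np.ones((2, 2), np.uint8)
cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # remove small specks

cv2.imwrite("cleaned_page.png", cleaned)                      # image ready for OCR
```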


2021 ◽  
Vol 9 (2) ◽  
pp. 73-84
Author(s):  
Md. Shahadat Hossain ◽  
Md. Anwar Hossain ◽  
AFM Zainul Abadin ◽  
Md. Manik Ahmed

Recognition of handwritten Bangla digits has seen significant progress in optical character recognition (OCR). It remains a critical task because of the similar patterns and alignment of handwritten digits. Although modern OCR research has reduced the complexity of the classification task with several methods, a few problems encountered during recognition still await simpler solutions. The deep neural network, an emerging field of artificial intelligence, promises a solid solution to these remaining handwritten-recognition problems. This paper proposes a fine regulated deep neural network (FRDNN) for handwritten numeric character recognition that uses convolutional neural network (CNN) models with regularization parameters, which keeps the model general by preventing overfitting. The paper applies a Traditional Deep Neural Network (TDNN) and the FRDNN, with similar layer configurations, to the BanglaLekha-Isolated database; the classification accuracies of the two models were 96.25% and 96.99%, respectively, over 100 epochs. In these experiments, the FRDNN model was more robust and accurate on the BanglaLekha-Isolated digit dataset than the TDNN model, and the proposed method achieves good recognition accuracy compared with other existing methods.
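
The idea of regularizing a CNN to prevent overfitting can be sketched as follows. This is a minimal, hypothetical Keras model using L2 weight decay and dropout; it is not the published FRDNN architecture, and the 28x28 input size and layer widths are assumptions.

```python
# Minimal sketch of a regularised CNN digit classifier (L2 weight decay + dropout).
# Layer sizes and input shape are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),                     # grayscale digit image
    layers.Conv2D(32, 3, activation="relu",
                  kernel_regularizer=regularizers.l2(1e-4)),
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 3, activation="relu",
                  kernel_regularizer=regularizers.l2(1e-4)),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dropout(0.5),                                 # regularisation against overfitting
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),              # ten digit classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```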


Author(s):  
Farhana Ahmad Poad ◽  
Noor Shuraya Othman ◽  
Roshayati Yahya Atan ◽  
Jusrorizal Fadly Jusoh ◽  
Mumtaz Anwar Hussin

The aim of this project is to design an Automated Detection of License Plate (ADLP) system based on image processing techniques. Two techniques are commonly used to detect the target: Optical Character Recognition (OCR) and split and merge segmentation. The OCR technique operates on the individual alphanumeric characters of the license plate, while the split and merge segmentation technique splits the captured plate image into regions of interest. Both techniques are implemented in MATLAB, their detection performance is tested on the images, and the two techniques are compared. The results show that both can perform well on license plates, with some errors.
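
The two approaches can be contrasted in a short sketch. The code below uses Python with OpenCV and pytesseract rather than MATLAB, and a simple contour-based split stands in for the split and merge segmentation; the file name and size filters are illustrative assumptions.

```python
# Minimal sketch contrasting whole-plate OCR with per-character splitting before OCR.
import cv2
import pytesseract

plate = cv2.imread("plate.png", cv2.IMREAD_GRAYSCALE)        # hypothetical plate image
_, binary = cv2.threshold(plate, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Approach 1: OCR on the whole plate image (single text line)
whole_text = pytesseract.image_to_string(plate, config="--psm 7")

# Approach 2: split the plate into candidate character regions, then OCR each one
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
boxes = sorted((cv2.boundingRect(c) for c in contours), key=lambda b: b[0])
chars = []
for x, y, w, h in boxes:
    if w > 5 and h > 15:                                      # crude size filter (assumption)
        roi = plate[y:y + h, x:x + w]
        chars.append(pytesseract.image_to_string(roi, config="--psm 10").strip())

print(whole_text.strip(), "".join(chars))
```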


2020 ◽  
Vol 32 (2) ◽  
Author(s):  
Gideon Jozua Kotzé ◽  
Friedel Wolff

As more natural language processing (NLP) applications benefit from neural-network-based approaches, it makes sense to re-evaluate existing work in NLP. A complete digitisation pipeline includes several components handling the material in sequence, and image processing after scanning the document has been shown to be an important factor in final quality. Here we compare two approaches for visually enhancing documents before Optical Character Recognition (OCR): (1) a combination of ImageMagick and Unpaper and (2) OCRopus. We also compare Calamari, a new line-based OCR package using neural networks, with the well-known Tesseract 3 as the OCR component. Our evaluation on a set of Setswana documents reveals that the combination of ImageMagick/Unpaper and Calamari improves on a current baseline based on Tesseract 3 and ImageMagick/Unpaper by over 30%, achieving a mean character error rate of 1.69 across all combined test data.
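
The character error rate (CER) used in this evaluation is the Levenshtein (edit) distance between the OCR output and the reference transcription, normalised by the reference length. Below is a minimal sketch of the metric, expressed as a percentage; the sample strings are illustrative only.

```python
# Minimal sketch of the character error rate (CER) metric.
def levenshtein(a: str, b: str) -> int:
    """Edit distance between a and b (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def cer(hypothesis: str, reference: str) -> float:
    """Character error rate in percent, relative to the reference length."""
    return 100.0 * levenshtein(hypothesis, reference) / max(len(reference), 1)

# Toy example: one substitution and one deletion over 13 reference characters.
print(cer("Setswana tekst", "Setswana text"))
```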


2018 ◽  
pp. 1091-1108
Author(s):  
K.G. Srinivasa ◽  
B.J. Sowmya ◽  
D. Pradeep Kumar ◽  
Chetan Shetty

Vast reserves of information are found in ancient texts, scripts, stone tablets, etc. However, because new physical copies of such texts are difficult to create, the knowledge they hold is limited to the few who have access to them. With the advent of Optical Character Recognition (OCR), efforts have been made to digitize this information, increasing its availability by making it easier to share, search, and edit. Many documents, however, are held back because they are damaged, which raises the interesting problem of removing noise from such documents so that OCR can be applied more easily. Here the authors aim to develop a model that denoises images of such documents while retaining the text. The primary goal of their project is to ease document digitization. They study the effects of combining image processing techniques with neural networks: thresholding, filtering, edge detection, morphological operations, and similar techniques are applied to pre-process the images and yield higher accuracy from the neural network models.

