Optical Character Recognition of Postmark Date Based on Machine Vision

2012 ◽  
Vol 424-425 ◽  
pp. 1107-1111
Author(s):  
Fu Cheng You ◽  
Ying Jie Liu

For the purpose of managing postmark information by date, this paper puts forward a method of postmark date recognition based on machine vision, which can meet the needs of personal postmark collectors. On the basis of relevant theories of machine vision, image processing and pattern recognition, the overall process is introduced, from postmark image acquisition to date recognition. First, a thresholding method is used to generate a binary image from the smoothed postmark image, so that the regions of the date digits can be extracted from the binary image according to different region features. Then, regions of date digits that are connected or broken are processed through the mathematical morphology of binary images, and individual digit regions are obtained for recognition. Finally, classification and pattern recognition based on a support vector machine classify the date digits, and date recognition is implemented correctly.
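A minimal sketch of the pipeline described above, assuming OpenCV for smoothing, thresholding and morphology and scikit-learn for the SVM stage. Function names, kernel sizes and the area thresholds are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def extract_digit_regions(postmark_bgr):
    """Smooth, binarize and segment a postmark image into per-digit feature vectors."""
    gray = cv2.cvtColor(postmark_bgr, cv2.COLOR_BGR2GRAY)
    smoothed = cv2.GaussianBlur(gray, (5, 5), 0)                 # smooth the postmark image
    _, binary = cv2.threshold(smoothed, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)  # threshold to binary
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # clean connected/broken strokes
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    digits = []
    for i in range(1, n):                                        # label 0 is the background
        x, y, w, h, area = stats[i]
        if 50 < area < 2000:                                     # keep only digit-sized regions
            digit = cv2.resize(binary[y:y + h, x:x + w], (16, 16))
            digits.append((x, digit.flatten() / 255.0))
    digits.sort(key=lambda d: d[0])                              # left-to-right reading order
    return np.array([d[1] for d in digits])

# SVM classification of the segmented digits; X_train and y_train are assumed to be
# labelled digit samples prepared the same way as above.
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# date_digits = clf.predict(extract_digit_regions(cv2.imread("postmark.png")))
```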

2010 ◽  
Vol 171-172 ◽  
pp. 73-77
Author(s):  
Ying Jie Liu ◽  
Fu Cheng You

Touching or broken characters are difficult to process in practical applications of optical character recognition. For touching or broken characters, a method based on the mathematical morphology of binary images is put forward in this paper. On the basis of relevant theories of digital image processing, the overall process is introduced, including the separation of touching characters and the connection of broken characters. First of all, the character image is pre-processed through smoothing and threshold segmentation in order to generate a binary character image. Then, character regions that are touching or broken are processed with different mathematical morphology operators on the binary image, using different structuring elements; thus the touching characters are separated and the broken characters are connected. For a higher recognition rate, further processing is done to obtain normalized, individual character regions.
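A short sketch of the binary-morphology idea described above: opening with a small structuring element to cut the thin bridges between touching characters, and closing with a slightly larger element to reconnect broken strokes. The kernel shapes and sizes are illustrative assumptions that would be tuned to the font and image resolution.

```python
import cv2

char_img = cv2.imread("characters.png", cv2.IMREAD_GRAYSCALE)
char_img = cv2.GaussianBlur(char_img, (3, 3), 0)                    # pre-processing: smoothing
_, binary = cv2.threshold(char_img, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)  # threshold segmentation

# Opening with a narrow vertical element removes thin bridges between touching characters.
open_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 3))
separated = cv2.morphologyEx(binary, cv2.MORPH_OPEN, open_kernel)

# Closing with a small elliptical element bridges the gaps inside broken characters.
close_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
connected = cv2.morphologyEx(separated, cv2.MORPH_CLOSE, close_kernel)
```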


A license plate recognition system plays a very important role in various security applications, including entry monitoring of a particular vehicle in a commercial complex, traffic monitoring, identification of threats, and more. In the past few years many different methods have been adopted for license plate recognition, but there is still scope to address the real-time difficulties encountered in license plate recognition, such as the speed of the vehicle, the angle of the license plate in the picture, the background or color contrast of the image, reflections on the license plate, and so on. A combination of object detection, image processing, and pattern recognition is used to fulfill this application. In the proposed architecture, the system captures a short video and, using Google's OCR (Optical Character Recognition), recognizes the license number; if that number is found in the database, the gate is opened with the help of an Arduino Uno.
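A hedged sketch of the described gate-control flow: read frames from the captured video, OCR the plate text, check it against a plate database, and signal an Arduino Uno over serial. Tesseract (via pytesseract) stands in here for "Google's OCR"; the serial device path, baud rate and the OPEN command are assumptions about the Arduino side.

```python
import cv2
import pytesseract
import serial

known_plates = {"MH12AB1234", "KA01CD5678"}          # stand-in for the plate database
arduino = serial.Serial("/dev/ttyACM0", 9600)        # Arduino Uno over USB serial (assumed path)

cap = cv2.VideoCapture("gate_clip.mp4")              # the short video captured by the system
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(gray, config="--psm 7")   # treat frame as one text line
    plate = "".join(ch for ch in text if ch.isalnum()).upper()
    if plate in known_plates:
        arduino.write(b"OPEN\n")                     # the Arduino sketch is assumed to open the gate
        break
cap.release()
```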


1994 ◽  
Vol 04 (01) ◽  
pp. 193-207 ◽  
Author(s):  
VADIM BIKTASHEV ◽  
VALENTIN KRINSKY ◽  
HERMANN HAKEN

The possibility of using nonlinear media as a highly parallel computation tool is discussed, specifically for image classification and recognition. Some approaches of this type are known that are based on stationary dissipative structures which can "measure" scalar products of images. In this paper, we exploit the analogy between binary images and point sets, and use the Hausdorff metric for comparing images. It does not require a measure at all, and is based only on the metric of the space whose subsets we consider. In addition to the Hausdorff distance, we suggest a new "nonlinear" version of this distance for the comparison of images, called the "autowave" distance. This distance can be calculated very easily and yields some additional advantages for pattern recognition (e.g. noise tolerance). The method was illustrated on the problem of machine reading (Optical Character Recognition) and compared with several well-known OCR programs for the PC. On a medium-quality photocopy of a journal page, under the same conditions of learning and recognition, the autowave approach resulted in far fewer mistakes. The method can be realized using only one chip with a simple uniform connection of the elements; in this case, it yields an increase in computation speed of several orders of magnitude.
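A small sketch of the point-set view of binary images used above: each image is treated as the set of its foreground pixel coordinates, and the symmetric Hausdorff distance between the two sets is computed with SciPy. The paper's "autowave" variant is not reproduced here; this only illustrates the classical metric and a nearest-template use of it.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(img_a, img_b):
    """img_a, img_b: 2-D 0/1 arrays representing binary glyph images."""
    pts_a = np.argwhere(img_a > 0)          # foreground pixels as (row, col) points
    pts_b = np.argwhere(img_b > 0)
    d_ab = directed_hausdorff(pts_a, pts_b)[0]
    d_ba = directed_hausdorff(pts_b, pts_a)[0]
    return max(d_ab, d_ba)                  # symmetric Hausdorff distance

# Nearest-template classification: assign a glyph to the template at minimum distance.
# templates = {"a": tmpl_a, "b": tmpl_b, ...}
# label = min(templates, key=lambda k: hausdorff_distance(glyph, templates[k]))
```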


Theoretical—This paper presents a camera-based assistive text reading of product labels from objects to support visually impaired people. The camera serves as the main source of input. To recognize an object, the user moves the object in front of the camera, and this moving object is detected by a Background Subtraction (BGS) method. The text region is automatically localized as the Region of Interest (ROI). Text is extracted from the ROI by combining both rule-based and learning-based techniques. A novel rule-based text localization algorithm is used that detects geometric features such as pixel value, color intensity and character size, while features such as gradient magnitude, gradient width and stroke width are learned using an SVM classifier, and a model is built to separate text and non-text regions. This framework is integrated with OCR (Optical Character Recognition) to extract the text, and the extracted text is given as voice output to the user. The system is evaluated using the ICDAR-2011 dataset, which consists of 509 natural scene images with ground truth.
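A hedged sketch of the assistive-reading loop described above: background subtraction to detect the held object, a text ROI cut from the detected region, OCR on the ROI, and spoken output. The SVM-based text/non-text filtering stage is omitted, and the library choices (MOG2 background subtractor, pytesseract, pyttsx3) are assumptions rather than the paper's implementation.

```python
import cv2
import pytesseract
import pyttsx3

bgs = cv2.createBackgroundSubtractorMOG2()           # Background Subtraction (BGS)
tts = pyttsx3.init()                                 # text-to-speech for the voice output
cap = cv2.VideoCapture(0)                            # camera as the main input source

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bgs.apply(frame)                          # foreground mask = the moving object
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        roi = frame[y:y + h, x:x + w]                # Region of Interest around the object
        text = pytesseract.image_to_string(roi).strip()
        if text:
            tts.say(text)                            # read the recognized label aloud
            tts.runAndWait()
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) == 27:                         # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```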


2019 ◽  
Vol 8 (3) ◽  
pp. 6873-6880

Palm leaf manuscripts are one of the ancient writing media, but the content of palm leaf manuscripts must periodically be inscribed onto a new set of leaves. This study provides a solution for preserving the content of palm leaf manuscripts by recognizing the handwritten Tamil characters in the manuscripts and storing them digitally. Character recognition is one of the most essential fields of pattern recognition and image processing. In general, optical character recognition is the electronic translation of images of typewritten or handwritten text into machine-editable text. Handwritten Tamil character recognition has been one of the challenging and active areas of research in the field of pattern recognition and image processing. In this study an attempt is made to identify handwritten Tamil characters without feature extraction, using convolutional neural networks. The study uses convolutional neural networks to recognize and classify Tamil palm leaf manuscript characters from segmented character images. The convolutional neural network is a deep learning approach that does not require hand-crafted feature extraction and is also a fast approach for character recognition. In the proposed system every character image is rescaled to a predetermined number of pixels, and these pixels are taken as the features for training the neural network. The trained network is then employed for recognition and classification. The convolutional network model consists of convolution, ReLU, pooling and fully connected layers. An ancient Tamil character dataset of 60 distinct classes has been created. The results reveal that the proposed approach achieves better recognition rates than schemes based on feature extraction for handwritten character recognition. The accuracy of the proposed approach is 97%, which shows that it is effective for the recognition of ancient characters.
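A minimal sketch of the kind of CNN described above (convolution, ReLU, pooling and fully connected layers with 60 output classes), written with Keras. The input size, filter counts and optimizer are illustrative assumptions, not values reported in the study.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_tamil_cnn(input_shape=(64, 64, 1), num_classes=60):
    model = models.Sequential([
        layers.Input(shape=input_shape),                 # character image rescaled to fixed pixels
        layers.Conv2D(32, (3, 3), activation="relu"),    # convolution + ReLU
        layers.MaxPooling2D((2, 2)),                     # pooling
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),            # fully connected layer
        layers.Dense(num_classes, activation="softmax"), # one score per character class
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_tamil_cnn()
# model.fit(train_images, train_labels, epochs=20, validation_split=0.1)
```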


Author(s):  
Abhishek Das ◽  
Mihir Narayan Mohanty

In this chapter, the authors review optical character recognition. The study covers both typed-character and handwritten-character recognition. Online and offline character recognition, the two modes of data acquisition in the field of OCR, are also studied. As deep learning is the emerging machine learning method in the field of image processing, the authors describe the method and its application in earlier works. Building on the study of the recurrent neural network (RNN), a special class of deep neural network is proposed for the recognition task. Further, a convolutional neural network (CNN) is combined with the RNN to check its performance. For this piece of work, Odia numerals and characters are taken as input and are recognized well. The efficacy of the proposed method is explained in the results section.
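A hedged sketch of one way to combine a CNN with an RNN for character images, in the spirit of the chapter: convolutional feature extraction, the feature map read row by row as a sequence into an LSTM, then a softmax over the character classes. Layer sizes and the class count are illustrative assumptions, not the chapter's architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_rnn(input_shape=(32, 32, 1), num_classes=10):
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, (3, 3), padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D((2, 2))(x)                       # feature map: 16 x 16 x 32
    x = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(x)
    x = layers.MaxPooling2D((2, 2))(x)                       # feature map: 8 x 8 x 64
    x = layers.Reshape((8, 8 * 64))(x)                       # 8 time steps of CNN features
    x = layers.LSTM(128)(x)                                  # recurrent layer over the rows
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```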

