Photometric Ligature Extraction Technique for Urdu Optical Character Recognition

2021 ◽  
Vol 11 (6) ◽  
pp. 7968-7973
Author(s):  
M. Kazmi ◽  
F. Yasir ◽  
S. Habib ◽  
M. S. Hayat ◽  
S. A. Qazi

Urdu Optical Character Recognition (OCR) based on character-level recognition (the analytical approach) is less popular than ligature-level recognition (the holistic approach) due to its added complexity and the overlapping of characters and strokes. This paper presents a holistic Urdu ligature extraction technique. The proposed Photometric Ligature Extraction (PLE) technique is independent of font size and column layout and can handle both non-overlapping ligatures and all inter- and intra-overlapping ligatures. It uses a customized photometric filter along with X-shearing, padding, and connected component analysis to extract complete ligatures, instead of extracting primary and secondary ligatures separately. A total of approximately 267,800 ligatures were extracted from scanned Urdu Nastaliq printed text images with an accuracy of 99.4%. Thus, the proposed framework outperforms the existing Urdu Nastaliq text extraction and segmentation algorithms. The proposed PLE framework can also be applied to other languages that use the Nastaliq script style, such as Arabic, Persian, Pashto, and Sindhi.
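As a rough illustration of two of the steps the abstract names, X-shearing and connected-component analysis, here is a minimal numpy/scipy sketch; the shear factor, the padding rule, and the toy image are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def x_shear(binary, shear=0.5):
    """Shear a binary image along the x-axis: row r is shifted right by shear*r."""
    h, w = binary.shape
    pad = int(np.ceil(shear * h))          # padding so shifted pixels stay in-frame
    out = np.zeros((h, w + pad), dtype=binary.dtype)
    for r in range(h):
        shift = int(round(shear * r))
        out[r, shift:shift + w] = binary[r]
    return out

def extract_ligatures(binary, shear=0.5):
    """Label connected components after shearing; each label is a candidate ligature."""
    sheared = x_shear(binary, shear)
    labels, count = ndimage.label(sheared)
    return labels, count

# Toy image with two separate marks (stand-ins for ligature components):
img = np.zeros((4, 6), dtype=np.uint8)
img[0, 0:3] = 1
img[3, 0:3] = 1
labels, n = extract_ligatures(img, shear=1.0)
```

The shear straightens the diagonal stacking typical of Nastaliq so that component labeling can separate marks that would otherwise overlap column-wise.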

2017 ◽  
Vol 11 (1) ◽  
pp. 193-200
Author(s):  
Brahim Sabir ◽  
Yassine Khazri ◽  
Mohamed Moussetad ◽  
Bouzekri Touri

Background: Optical Character Recognition (OCR) is a technique that converts scanned or printed text images into editable text. Many OCR solutions have been proposed and used for the Latin and Chinese alphabets. However, not much can be found about OCRs for handwritten Arabic script, especially ones intended for blind and visually impaired persons. This paper is an attempt towards the development of an OCR for the Arabic alphabet dedicated to blind and visually impaired persons. Method: The proposed Optical Arabic Alphabet Recognition algorithm includes binarization of the input image, segmentation, feature extraction, and a classification stage based on neural networks that matches the read Arabic letters against trained patterns. The proposed algorithm has been developed using Matlab; the solution was designed to be implemented on a hardware platform and can be customized for mobile phones. Conclusion: The presented method has the benefit that its recognition accuracy is comparable to other OCR algorithms.
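The Method section describes a four-stage pipeline: binarization, segmentation, feature extraction, and classification. A minimal Python sketch of those stages follows, with a nearest-pattern matcher standing in for the trained neural network; the threshold value, the projection-based segmenter, and the `classify` helper are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def binarize(gray, thresh=128):
    """Binarization: ink pixels become 1 (assumes dark text on a light background)."""
    return (gray < thresh).astype(np.uint8)

def segment_columns(binary):
    """Segmentation: split the word on empty columns (vertical projection)."""
    cols = binary.sum(axis=0)
    segments, start = [], None
    for i, c in enumerate(cols):
        if c > 0 and start is None:
            start = i
        elif c == 0 and start is not None:
            segments.append(binary[:, start:i])
            start = None
    if start is not None:
        segments.append(binary[:, start:])
    return segments

def classify(glyph, patterns):
    """Nearest trained pattern by pixel distance; stands in for the neural network."""
    flat = glyph.astype(float).ravel()
    return min(patterns, key=lambda k: np.sum((patterns[k].astype(float).ravel() - flat) ** 2))

# Toy word image: two glyphs separated by a blank column.
gray = np.full((5, 7), 255)
gray[:, 1:3] = 0      # first glyph
gray[:, 5] = 0        # second glyph
segments = segment_columns(binarize(gray))
patterns = {"left": segments[0], "inverted": 1 - segments[0]}
predicted = classify(segments[0], patterns)
```

In the paper's design the classifier is a trained neural network; the distance-based matcher here only marks where that stage plugs in.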


2015 ◽  
Vol 15 (01) ◽  
pp. 1550002
Author(s):  
Brij Mohan Singh ◽  
Rahul Sharma ◽  
Debashis Ghosh ◽  
Ankush Mittal

In many documents, such as maps, engineering drawings, and artistic documents, there exist printed as well as handwritten materials in which text regions and text-lines are not parallel to each other, are curved in nature, and contain various types of text: different font sizes, text and non-text areas lying close to each other, and non-straight, skewed, and warped text-lines. Commercially available optical character recognition (OCR) systems, such as ABBYY FineReader and Free OCR, are not capable of handling the full range of stylistic document images containing curved, multi-oriented, and stylish-font text-lines. Extraction of individual text-lines and words from these documents is generally not straightforward. Most of the segmentation work reported is on simple documents, and it remains a highly challenging task to implement an OCR that works under all possible conditions and gives highly accurate results, especially in the case of stylistic documents. This paper presents an approach based on dilation and flood-fill morphological operations that extracts multi-oriented text-lines and then words from complex-layout or stylistic document images in subsequent stages. The segmentation results obtained with our method prove to be superior to the standard profiling-based method.
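A hedged sketch of the dilation-then-flood-fill idea, using scipy's binary dilation and connected-component labeling in place of an explicit flood fill (the structuring element, iteration count, and toy page below are assumptions, not the paper's parameters):

```python
import numpy as np
from scipy import ndimage

def extract_lines(binary, iters=1):
    """Merge nearby characters into one blob per text-line via dilation, then label
    the blobs; labeling connected regions plays the role of the paper's flood fill."""
    dilated = ndimage.binary_dilation(binary, iterations=iters)
    labels, count = ndimage.label(dilated)
    boxes = ndimage.find_objects(labels)   # bounding box per text-line blob
    return boxes, count

# Two "text-lines" of dotted characters; dots within a line merge after dilation.
page = np.zeros((10, 8), dtype=np.uint8)
page[2, [2, 4, 6]] = 1
page[7, [2, 4, 6]] = 1
boxes, n_lines = extract_lines(page)
```

Because dilation merges blobs regardless of orientation, the same labeling step handles curved and multi-oriented lines that defeat horizontal projection profiles.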


Author(s):  
Janarthanan A ◽  
Pandiyarajan C ◽  
Sabarinathan M ◽  
Sudhan M ◽  
Kala R

Optical character recognition (OCR) is a process of recognizing text in images, here one word at a time. The input images are taken from the dataset. The collected text images are first pre-processed. In pre-processing, we resize the images: resizing is necessary when the total number of pixels must be increased or decreased, and remapping occurs when zooming, which increases the pixel count so that the zoomed image shows its content clearly. After that, we apply segmentation, separating each character within a word. We then extract feature values from the image, i.e. the test features. In the classification stage, the text is classified from the image: image classification is performed in order to identify which images contain text, and a classifier is used to identify them. The experimental results show the accuracy of the approach.
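The resizing/remapping step described above can be illustrated with a nearest-neighbour resize, in which zooming repeats existing pixels rather than inventing new intensities. This sketch is our assumption about what such a pre-processing stage might contain, not the authors' code.

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour remapping: every output pixel copies the closest input
    pixel, so zooming in duplicates pixels instead of creating new intensities."""
    h, w = img.shape
    rows = np.arange(new_h) * h // new_h   # source row for each output row
    cols = np.arange(new_w) * w // new_w   # source column for each output column
    return img[rows[:, None], cols]

small = np.array([[1, 2],
                  [3, 4]])
zoomed = resize_nearest(small, 4, 4)       # 2x zoom: each pixel becomes a 2x2 block
```

Interpolating resizers (bilinear, bicubic) would blend neighbouring intensities instead; nearest-neighbour is the simplest remapping that keeps the example short.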


2021 ◽  
Vol 4 (1) ◽  
pp. 57-70
Author(s):  
Marina V. Polyakova ◽  
Alexandr G. Nesteryuk

Optical character recognition systems are used to convert books and documents into electronic form, to automate accounting systems in business, to recognize markers in augmented reality technologies, etc. The quality of optical character recognition, when binarization is applied, is largely determined by how well the foreground pixels are separated from the background. Existing methods of text image binarization are analyzed and their insufficient quality is noted. As the research approach, a minimum-distance classifier is used to improve the existing method of binarization of color text images. To improve the quality of the binarization of color text images, it is advisable to divide image pixels into the two classes "Foreground" and "Background" using classification methods, namely a minimum-distance classifier, instead of heuristic threshold selection. To reduce the amount of processed information before applying the classifier, it is advisable to select blocks of pixels for subsequent processing; this was done by analyzing the connected components of the original image. An improved method of color text image binarization using connected-component analysis and a minimum-distance classifier has been elaborated. Research on the elaborated method showed that it is better than existing binarization methods in terms of robustness of binarization, but worse in terms of the error in determining the boundaries of objects. Among the recognition errors, pixels from the class labeled "Foreground" were more often mistaken for the class labeled "Background". With a single prototype per class, the proposed binarization method is recommended for processing color images of printed text, for which the error in determining character boundaries as a result of binarization is compensated by the thickness of the letters.
With multiple prototypes per class, the proposed binarization method is recommended for processing color images of handwritten text, if high performance is not required. The improved binarization method has shown its efficiency in cases of slow changes in the color and illumination of the text and background; however, abrupt changes in color and illumination, as well as a textured background, do not allow the binarization quality required for practical problems to be achieved.
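A minimal sketch of the minimum-distance classifier at the heart of the method, assuming one RGB prototype per class and Euclidean distance (the prototype values and pixels below are illustrative, not taken from the paper):

```python
import numpy as np

def binarize_min_distance(rgb, fg_proto, bg_proto):
    """Assign every pixel to the nearer class prototype in RGB space:
    1 = "Foreground" (text), 0 = "Background"."""
    d_fg = np.linalg.norm(rgb - fg_proto, axis=-1)
    d_bg = np.linalg.norm(rgb - bg_proto, axis=-1)
    return (d_fg < d_bg).astype(np.uint8)

# One dark pixel and one light pixel against black/white prototypes.
pixels = np.array([[[20, 15, 25], [230, 240, 235]]], dtype=float)
mask = binarize_min_distance(pixels,
                             fg_proto=np.array([0.0, 0.0, 0.0]),
                             bg_proto=np.array([255.0, 255.0, 255.0]))
```

With multiple prototypes per class, as recommended for handwritten text, each pixel would instead be compared against every prototype and assigned to the class of the nearest one.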


Optical Character Recognition, or Optical Character Reader (OCR), is a pattern-recognition-based method that performs the electronic conversion of images of handwritten or printed text into machine-encoded text. The equipment used for this purpose includes cameras and flatbed scanners. Handwritten text is scanned using a scanner, and the image of the scanned document is processed by the program. Identifying handwriting is difficult compared to printed text in Western languages. In our proposed work we accept the challenge of identifying such characters and work to achieve this. Image pre-processing techniques can effectively improve the accuracy of an OCR engine. The goal is to design and implement, using machine learning and Python, a system that works more accurately than pre-built OCR engines based on technologies such as Matlab, artificial intelligence, neural networks, etc.
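Since the abstract argues that pre-processing effectively improves OCR accuracy, here is one standard pre-processing step, Otsu's global threshold, sketched in Python as an example of what such a stage might contain. Otsu's method is our choice of example, not necessarily the authors'.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold that maximizes the between-class
    variance of the two resulting intensity classes."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    total_sum = float(np.dot(np.arange(256), hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += hist[t]                      # pixels at or below t
        if w0 == 0:
            continue
        w1 = total - w0                    # pixels above t
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0                    # mean of the low class
        mu1 = (total_sum - sum0) / w1      # mean of the high class
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance (scaled)
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal toy image: half ink (10), half paper (200).
gray = np.concatenate([np.full(50, 10), np.full(50, 200)])
t = otsu_threshold(gray)
```

Binarizing with `gray > t` then yields a clean text/background mask that a downstream OCR engine can consume.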


2019 ◽  
Vol 16 (2) ◽  
pp. 0409
Author(s):  
Ali et al.

Human Interactive Proofs (HIPs) are automated reverse Turing tests intended to differentiate between people and malicious computer programs. Building a good HIP system is a challenging task, since the resulting HIP must be secure against attacks and at the same time practical for humans. Text-based HIPs are one of the most popular HIP types. They exploit the ability of humans to read text images better than Optical Character Recognition (OCR) software, but current text-based HIPs have not kept pace with the rapid development of computer vision techniques: they are either very easily passed or very hard to solve. This motivates continued efforts to improve text-based HIPs. In this paper, a new scheme is designed for an animated text-based HIP; it exploits the gap between ordinary human perception and the ability of a computer to mimic this perception, in order to achieve a more secure and more usable HIP. The scheme can prevent attacks, since it is hard for a machine to distinguish characters within the animated environment displayed as digital video, yet it remains easy and practical for humans, who are attuned to perceiving motion. The proposed scheme has been tested against many Optical Character Recognition applications; it passes all these tests successfully and achieves a high usability rate of 95%.
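One way the motion idea could be sketched: scatter the ink pixels of a text image across video frames so that no single frame contains the whole text, while the union of the frames, which human temporal integration approximates, restores it. The random-partition scheme below is our illustrative assumption, not the paper's algorithm.

```python
import numpy as np

def animate_text(binary, n_frames=8, seed=0):
    """Split the ink pixels of a binary text image across n_frames video frames.
    Each ink pixel appears in exactly one frame, so any single frame shows only
    a fragment of the text, while OR-ing all frames recovers the full image."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(binary)
    assign = rng.integers(0, n_frames, size=len(ys))   # frame index per ink pixel
    frames = np.zeros((n_frames,) + binary.shape, dtype=binary.dtype)
    frames[assign, ys, xs] = 1
    return frames

# Toy "text": a solid 8x8 ink block scattered over 8 frames.
text = np.ones((8, 8), dtype=np.uint8)
frames = animate_text(text)
```

An OCR engine analyzing any individual frame sees only a sparse fragment, while a human watching the flickering sequence perceives the intact text.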




Author(s):  
Md. Anwar Hossain ◽  
Sadia Afrin

This paper presents a design for Optical Character Recognition (OCR) from text images using the Template Matching method. OCR is an important research area and one of the most successful applications of technology in the field of pattern recognition and artificial intelligence. OCR provides full alphanumeric visualization of printed and handwritten characters by scanning text images and converting them into a corresponding editable text document. The main objective of this work is to develop a prototype of the OCR system and to implement the Template Matching algorithm to drive that prototype. In this paper, we took the alphabet (A-Z and a-z) and numbers (0-9) as grayscale images in bitmap format, and recognized the letters and numbers by comparing pairs of images. Besides, we checked the accuracy for different fonts of letters and numbers. We used Matlab R2018a software for the implementation of the system.
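A minimal sketch of the Template Matching step: score each stored template by normalized correlation with the input glyph and return the best-matching label. Same-size images are assumed, and the 5x5 toy glyphs are our illustrative examples, not the paper's templates.

```python
import numpy as np

def match_template(glyph, templates):
    """Return the label of the template with the highest normalized correlation
    against the glyph (glyph and templates must share one shape)."""
    g = glyph.astype(float).ravel()
    g = (g - g.mean()) / (g.std() + 1e-9)
    best_label, best_score = None, -np.inf
    for label, tmpl in templates.items():
        t = tmpl.astype(float).ravel()
        t = (t - t.mean()) / (t.std() + 1e-9)
        score = float(np.dot(g, t)) / len(g)   # 1.0 for a perfect match
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy templates: a vertical bar "I" and a ring "O".
I = np.zeros((5, 5)); I[:, 2] = 1
O = np.zeros((5, 5)); O[[0, -1], :] = 1; O[:, [0, -1]] = 1
label = match_template(I, {"I": I, "O": O})
```

Normalizing each image before correlating makes the score insensitive to overall brightness, which is why template matching tolerates moderate scan-quality variation.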

