Pashtu Numerals Recognition through Convolutional Neural Networks

In this paper we introduce a new dataset of scanned handwritten Pashtu numerals and make it publicly available for scientific and research use. The Pashtu language is used by more than fifty million people for both oral and written communication, yet no effort has so far been devoted to an Optical Character Recognition (OCR) system for Pashtu. We introduce a new method for recognizing handwritten Pashtu numerals with deep learning models, using convolutional neural networks (CNNs) for both feature extraction and classification. We assess the performance of the proposed CNN-based model and obtain a recognition accuracy of 91.45%.
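
As a minimal illustration of the kind of model the abstract describes, the Python/Keras sketch below builds a small CNN that performs both feature extraction (convolution and pooling layers) and classification (a fully connected softmax layer) over ten numeral classes. The input size, layer widths and training settings are assumptions for illustration, not the paper's reported configuration, and loading of the Pashtu numerals dataset is omitted.

# Minimal CNN sketch for ten-class numeral recognition (illustrative assumptions,
# not the paper's exact architecture or hyperparameters).
import tensorflow as tf
from tensorflow.keras import layers, models

num_classes = 10                       # ten Pashtu numerals
model = models.Sequential([
    layers.Input(shape=(32, 32, 1)),   # assumed grayscale input size
    layers.Conv2D(32, 3, activation="relu"),   # feature extraction
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),   # classification
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=10, validation_split=0.1)  # dataset loading omitted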

2019, Vol 8 (3), pp. 6873-6880

Palm leaf manuscripts are one of the oldest writing media, but their contents must periodically be re-inscribed on a new set of leaves to survive. This study offers an alternative: preserving the contents of palm leaf manuscripts by recognizing the handwritten Tamil characters they contain and storing them digitally. Character recognition is one of the most essential fields of pattern recognition and image processing; optical character recognition, in general, is the electronic translation of typewritten text or handwritten images into machine-editable text. Handwritten Tamil character recognition remains a challenging and active area of research in pattern recognition and image processing. In this study, Tamil handwritten characters are identified without explicit feature extraction by using convolutional neural networks, which recognize and classify characters from segmented character images of Tamil palm leaf manuscripts. A convolutional neural network is a deep learning approach that does not require hand-crafted features and offers fast character recognition. In the proposed system, every character is expanded to a predetermined number of pixels, and these pixels are used directly as the inputs for network training; the trained network is then employed for recognition and classification. The model consists of convolution, ReLU, pooling, and fully connected layers. A dataset of ancient Tamil characters covering 60 classes has been created. The results show that the proposed approach yields better recognition rates than feature-extraction-based schemes for handwritten character recognition, reaching an accuracy of 97%, which demonstrates its effectiveness for recognizing ancient characters.
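
The sketch below (Python/Keras with OpenCV) illustrates the pipeline described above: each segmented character image is expanded to a predetermined pixel grid and fed to a network built from convolution, ReLU, pooling and fully connected layers, with 60 output classes. The 64x64 target size and the layer widths are assumptions, not the study's exact settings.

# Sketch of the described pipeline: fixed-size pixel inputs, then
# convolution + ReLU + pooling + fully connected layers (assumed sizes).
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def to_fixed_pixels(char_img, size=64):
    """Expand a segmented character image to a predetermined pixel grid."""
    img = cv2.resize(char_img, (size, size), interpolation=cv2.INTER_AREA)
    return img.astype(np.float32) / 255.0

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, activation="relu"),   # convolution + ReLU
    layers.MaxPooling2D(),                     # pooling
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),      # fully connected
    layers.Dense(60, activation="softmax"),    # 60 ancient Tamil character classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])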


Sensors, 2020, Vol 20 (12), pp. 3344
Author(s): Savita Ahlawat, Amit Choudhary, Anand Nayyar, Saurabh Singh, Byungun Yoon

Traditional handwriting recognition systems have relied on handcrafted features and a large amount of prior knowledge, and training an Optical Character Recognition (OCR) system on these prerequisites is a challenging task. Research in handwriting recognition now centers on deep learning techniques, which have achieved breakthrough performance in the last few years. Still, the rapid growth in the amount of handwritten data and the availability of massive processing power demand further improvements in recognition accuracy and deserve further investigation. Convolutional neural networks (CNNs) are very effective at perceiving the structure of handwritten characters and words in ways that support automatic extraction of distinctive features, which makes CNNs the most suitable approach for handwriting recognition problems. Our aim in the proposed work is to explore design options such as the number of layers, stride, receptive field, kernel size, padding and dilation for CNN-based handwritten digit recognition, and to evaluate various SGD optimization algorithms for improving recognition performance. A network's recognition accuracy can be increased by adopting an ensemble architecture, but ensembles introduce higher computational cost and testing complexity; our objective is therefore to achieve comparable, or even better, accuracy with a pure CNN architecture, along with reduced operational complexity and cost. We also present an appropriate combination of learning parameters for designing a CNN that leads us to a new absolute record in classifying MNIST handwritten digits. We carried out extensive experiments and achieved a recognition accuracy of 99.87% on the MNIST dataset.
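
A hedged sketch of a single (non-ensemble) CNN for MNIST follows, with the design options discussed above (kernel size, stride, padding, dilation) exposed as parameters and training driven by an SGD optimizer. The concrete layer sizes, dropout rate and learning rate are illustrative assumptions, not the configuration that yielded the reported 99.87% accuracy.

# Pure (non-ensemble) CNN for MNIST with the discussed design knobs as parameters.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(kernel_size=3, strides=1, padding="same", dilation_rate=1):
    return models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, kernel_size, strides=strides, padding=padding,
                      dilation_rate=dilation_rate, activation="relu"),
        layers.Conv2D(64, kernel_size, strides=strides, padding=padding,
                      dilation_rate=dilation_rate, activation="relu"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])

# Load and normalize MNIST.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

model = build_cnn()
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=128,
          validation_data=(x_test, y_test))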


2020, Vol 8 (6), pp. 5126-5132

Credit cards and online payment techniques have become extremely popular because they make money handling easy and safe, but using an ATM remains a problem for visually challenged people. Although some features exist for visually challenged users, such as spoken instructions, there is no confirmation of the amount entered or of the amount actually transacted; as a result, these users have little security, ease or comfort during ATM transactions. There is therefore a need for a method that lets visually challenged people perform ATM transactions effortlessly and with better security. The proposed system is a device that acts as an aid for the visually challenged when transacting at an ATM. It recognizes the amount entered on the screen using Optical Character Recognition (OCR) and conveys it to the user via speech. After the transaction, the banknotes are recognized through image recognition based on the extraction of key banknote features, and verification is provided that the amount transacted matches the intended amount.
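
The sketch below illustrates the two feedback steps described above: reading the amount shown on the ATM screen with OCR and announcing it by speech. The choice of pytesseract and pyttsx3, the digit-only OCR configuration, and the file name atm_screen.png are all illustrative assumptions; the paper's embedded device and its banknote-recognition stage are not reproduced here.

# Illustrative OCR-plus-speech feedback loop (library choices are assumptions).
import cv2
import pytesseract
import pyttsx3

def read_amount(screen_image_path):
    """OCR the digits of the entered amount from a captured screen image."""
    img = cv2.imread(screen_image_path, cv2.IMREAD_GRAYSCALE)
    _, thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Restrict OCR to a single line of digits so only the amount is returned.
    return pytesseract.image_to_string(
        thresh, config="--psm 7 -c tessedit_char_whitelist=0123456789").strip()

def speak(text):
    """Convey the recognized amount to the user via speech."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

amount = read_amount("atm_screen.png")   # hypothetical captured screen image
speak(f"You entered {amount}")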


2021, pp. 894-911
Author(s): Bhavesh Kataria, Harikrishna B. Jethva

India's constitution recognizes 22 languages written in 17 different scripts. Manuscripts in these scripts have a limited lifespan; as generations pass they deteriorate and vital knowledge is lost. This work uses digital texts to convey that information to future generations. Optical Character Recognition (OCR) helps extract information from scanned manuscripts (printed text). This paper proposes a simple and effective solution for optical character recognition of Sanskrit characters in text document images using long short-term memory (LSTM) neural networks. Existing methods focus only on single touching characters. Our main focus is to design a robust method, based on a Bidirectional Long Short-Term Memory (BLSTM) architecture, that handles overlapping lines, characters touching in the middle and upper zones, and half characters, and thereby increases the accuracy of current OCR systems on poorly maintained Sanskrit literature.
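
The following sketch (Python/Keras) shows the general shape of a bidirectional LSTM recognizer for text-line images, in the spirit of the BLSTM architecture described above. The column-wise input features, normalized line height, vocabulary size and CTC-based training are assumptions for illustration rather than the paper's implementation.

# BLSTM text-line recognizer sketch (sizes and feature choice are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

img_height = 48          # assumed normalized text-line height
num_chars = 120          # assumed Sanskrit character/ligature vocabulary size

inputs = layers.Input(shape=(None, img_height))          # sequence of pixel columns
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(inputs)
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
outputs = layers.Dense(num_chars + 1, activation="softmax")(x)  # +1 for the CTC blank
model = models.Model(inputs, outputs)

# Training would use a CTC loss (e.g. tf.nn.ctc_loss) so that unsegmented,
# touching or overlapping characters can be aligned to the label sequence.
model.summary()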

