numeral recognition
Recently Published Documents


TOTAL DOCUMENTS: 324 (FIVE YEARS: 44)

H-INDEX: 24 (FIVE YEARS: 2)

Automatic character recognition for handwritten Indic scripts is regarded as one of the most challenging research areas in the field of pattern recognition. Although a great amount of research work has been reported, the state-of-the-art methods remain limited by suboptimal features. This article proposes a well-defined recognition model for handwritten Odia characters and numerals that applies a novel decomposition based on a third-level Fast Discrete Curvelet Transform (FDCT) to obtain a high-dimensional feature vector. Kernel Principal Component Analysis (K-PCA) is then applied to extract optimal features from the FDCT output. Finally, classification is performed with a Probabilistic Neural Network (PNN) on handwritten Odia character and numeral datasets from both NIT Rourkela and IIT Bhubaneswar. With its optimized Gaussian-kernel-based feature set, the proposed scheme outperforms existing models.
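The K-PCA step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the FDCT coefficients are stood in for by random features, and the kernel parameters and dimensions are assumed values for demonstration only.

```python
# Hypothetical sketch of the K-PCA feature-reduction step.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
# Stand-in for high-dimensional FDCT coefficients (100 samples, 512 features).
features = rng.normal(size=(100, 512))

# Gaussian (RBF) kernel PCA, matching the "Gaussian kernel-based" feature
# set mentioned in the abstract; n_components and gamma are assumed.
kpca = KernelPCA(n_components=64, kernel="rbf", gamma=1e-3)
reduced = kpca.fit_transform(features)
print(reduced.shape)  # compact feature vectors fed to the PNN classifier
```

The reduced vectors would then be passed to the PNN classifier in place of the raw curvelet coefficients.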


2021, Vol 40 (3), pp. 181-191
Author(s): Gopal Dadarao Upadhye, Uday V. Kulkarni, Deepak T. Mane

Handwritten numeral recognition has been an important area in the domain of pattern classification. The task becomes even more daunting when working with non-Roman numerals. While convolutional neural networks are the preferred choice for modeling image data, devising techniques that achieve faster convergence and accurate results still poses a challenge to researchers. In this paper, we present new methods for the initialization and optimization of the traditional convolutional neural network architecture to obtain better results on Kannada numeral images. Specifically, we propose two different methods: an encoder-decoder setup for unsupervised training and weight initialization, and a particle swarm optimization strategy for choosing the ideal architecture configuration of the CNN. Unsupervised pre-training of the architecture yields faster convergence, since the weights are better suited to the task than random initialization, while the optimization strategy reduces the time required by the manual, iterative approach to architecture selection. The proposed setup is trained on varied handwritten Kannada numerals and evaluated on two different datasets: the standard Dig-MNIST dataset and a custom-built dataset. Significant improvements across multiple performance metrics are observed in our proposed system over the traditional CNN training setup. These improvements make a strong case for relying on such methods for faster and more accurate training and inference in digit classification, especially when transfer learning is unavailable.
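The particle swarm optimization strategy mentioned above can be sketched in a few lines. This is a generic PSO loop, not the authors' implementation: the objective here is a toy function standing in for validation loss as a function of architecture hyperparameters, and all swarm constants are assumed.

```python
# Minimal particle swarm optimization sketch (hypothetical stand-in for
# the CNN architecture search described in the abstract).
import numpy as np

def pso(objective, dim, n_particles=20, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, (n_particles, dim))   # candidate configs
    vel = np.zeros_like(pos)
    pbest = pos.copy()                                  # per-particle best
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()            # swarm-wide best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia plus cognitive and social pulls (coefficients assumed).
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy objective standing in for "validation loss of a candidate CNN config".
best, val = pso(lambda x: float(np.sum(x ** 2)), dim=3)
```

In the paper's setting, each particle position would encode an architecture configuration (e.g. filter counts or layer sizes) and the objective would be the validation error of the corresponding CNN.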


Author(s): Mamta Bisht, Richa Gupta

Script recognition is a necessary preliminary step for text recognition. In the deep learning era, this task has two essential requirements: a large labeled dataset for training and the computational resources to train models. When these requirements cannot be met, alternative methods are needed. This motivates the field of transfer learning, in which knowledge from a model previously trained on a benchmark dataset is reused on another, smaller dataset for a different task, saving computational power since only a fraction of the model's parameters need to be trained. Here we study two pre-trained models and fine-tune them for script classification tasks. First, a pre-trained VGG-16 model is fine-tuned on the publicly available CVSI-15 and MLe2e datasets for script recognition. Second, a well-performing model trained on a Devanagari handwritten character dataset is adopted and fine-tuned on the Kaggle Devanagari numeral dataset for numeral recognition. The performance of the proposed fine-tuned models depends on how similar the target dataset is to the original dataset, and it has been analyzed with widely used optimizers.


Author(s): Harsha Halesh Patel, Hindu Shree C T, Jayanth S, Keerti D Kulkarni, Pushpa Mala S

Author(s): Aradhya Saini, Sandeep Daniel, Satyam Saini, Ankush Mittal
