TenzinNet for handwritten Tibetan numeral recognition

Author(s): Tenzin Kaldan, Adiyillam Vijayalakshmi
Author(s): Mamta Bisht, Richa Gupta

Script recognition is a necessary preliminary step for text recognition. In the deep learning era, this task has two essential requirements: a large labeled dataset for training and the computational resources to train models. When these requirements cannot be met, alternative methods are needed. This motivates transfer learning, in which knowledge from a model previously trained on a benchmark dataset is reused on a smaller dataset for another task, saving computational power because only a fraction of the model's parameters need to be trained. Here we study two pre-trained models and fine-tune them for script classification tasks. First, the pre-trained VGG-16 model is fine-tuned on the publicly available CVSI-15 and MLe2e datasets for script recognition. Second, a well-performing model trained on a Devanagari handwritten character dataset is adopted and fine-tuned on the Kaggle Devanagari numeral dataset for numeral recognition. The performance of the proposed fine-tuned models depends on whether the target dataset is similar to or dissimilar from the original dataset, and it is analyzed with widely used optimizers.
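
The following is a minimal sketch of the kind of fine-tuning described above: an ImageNet-pretrained VGG-16 whose convolutional base is frozen so that only a new classifier head is trained. It is not the authors' code; the framework (Keras/TensorFlow), input size, class count, and hyperparameters are illustrative assumptions.

```python
# Sketch: fine-tuning a pretrained VGG-16 for script classification.
# Only the new classifier head is trained; the convolutional base is frozen.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_SCRIPTS = 10          # assumed class count (e.g. the scripts in CVSI-2015)
INPUT_SHAPE = (224, 224, 3)

# Load VGG-16 without its ImageNet classifier head.
base = VGG16(weights="imagenet", include_top=False, input_shape=INPUT_SHAPE)
base.trainable = False    # reuse pretrained features; train only the new head

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_SCRIPTS, activation="softmax"),
])

# The abstract compares widely used optimizers; Adam is shown here as one choice.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_ds, validation_data=val_ds, epochs=...)  # data pipeline omitted
```

Because the base is frozen, the trainable parameters are only those of the small dense head, which is what makes fine-tuning feasible on limited data and hardware.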

