Decorated Character Recognition Employing Modified SOM Matching

2011 ◽  
Vol 103 ◽  
pp. 649-657
Author(s):  
Tsukasa Masuhara ◽  
Hideaki Kawano ◽  
Hideaki Orii ◽  
Hiroshi Maeda

Character recognition is a classical problem to which many researchers have devoted their efforts. Making character recognition systems more widely applicable to natural scene images would open up interesting possibilities, such as using characters as an input interface or as an annotation method for images. Nevertheless, it is still difficult to recognize all sorts of fonts, including decorated characters such as those depicted on signboards. Decorated characters are constructed using special techniques intended to attract viewers' attention, so it is hard to obtain good recognition results with existing OCR systems. In this paper, we propose a new character recognition system using a self-organizing map (SOM). The SOM is employed to extract the essential topological structure of a character. The topological structure extracted from each character is used for matching, and recognition is performed on the basis of this topological matching. Experimental results show the effectiveness of the proposed method on most forms of characters.
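
A minimal sketch of the topology-extraction idea described above, assuming the character is available as a binary image: a small SOM lattice is trained on the (x, y) coordinates of foreground pixels so that the node grid approximates the character's structure. All names, grid sizes, and parameters here are illustrative, not the authors' implementation.

```python
import numpy as np

def train_som(points, grid_w=8, grid_h=8, iters=2000,
              lr0=0.5, sigma0=3.0, seed=0):
    """points: (N, 2) array of foreground pixel coordinates."""
    rng = np.random.default_rng(seed)
    # Initialize node weights randomly inside the bounding box of the points.
    lo, hi = points.min(axis=0), points.max(axis=0)
    weights = rng.uniform(lo, hi, size=(grid_h, grid_w, 2))
    # Lattice coordinates of the nodes, used by the neighborhood function.
    gy, gx = np.mgrid[0:grid_h, 0:grid_w]
    grid = np.stack([gy, gx], axis=-1).astype(float)

    for t in range(iters):
        lr = lr0 * np.exp(-t / iters)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)    # shrinking neighborhood
        p = points[rng.integers(len(points))]  # random training sample
        # Best-matching unit: node whose weight is closest to the sample.
        d = np.linalg.norm(weights - p, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # Gaussian neighborhood around the BMU on the lattice.
        g = np.exp(-np.sum((grid - np.array(bmu))**2, axis=-1) / (2 * sigma**2))
        weights += lr * g[..., None] * (p - weights)
    return weights  # the trained lattice approximates the character's shape

# Usage sketch: points = np.argwhere(binary_char_image > 0); train_som(points)
```

The trained lattice, rather than raw pixels, would then serve as the structure compared during topological matching.
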

2021 ◽  
Vol 40 (1) ◽  
pp. 551-563
Author(s):  
Liqiong Lu ◽  
Dong Wu ◽  
Ziwei Tang ◽  
Yaohua Yi ◽  
Faliang Huang

This paper focuses on script identification in natural scene images. Traditional CNNs (convolutional neural networks) cannot solve this problem well for two reasons: first, scene images have arbitrary aspect ratios, which causes difficulty for traditional CNNs that require a fixed-size image as input; second, some scripts with only minor differences are easily confused because they share a subset of characters with the same shapes. We propose a novel approach combining a Score CNN, an Attention CNN, and image patches. The Attention CNN is used to determine whether a patch is discriminative and to compute the contribution weight of that patch to the script identification of the whole image. The Score CNN takes a discriminative patch as input and predicts a score for each script type. First, patches of the same size are extracted from the scene images. Second, these patches are used as inputs to the Score CNN and the Attention CNN to train two patch-level classifiers. Finally, the results of multiple discriminative patches extracted from the same image by the two classifiers are fused to obtain the script type of the image. Using fixed-size patches as CNN inputs avoids the problems caused by the arbitrary aspect ratios of scene images, and the trained classifiers can mine discriminative patches to accurately identify confusable scripts. Experimental results show the good performance of our approach on four public datasets.
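
A minimal sketch of the fusion step described above, assuming two already trained patch-level models: `score_cnn(patch)` returning per-script scores and `attention_cnn(patch)` returning a scalar contribution weight. The function names, patch size, and stride are assumptions for illustration only.

```python
import numpy as np

def extract_patches(image, patch_size=64, stride=32):
    """Slide a fixed-size window over an image of arbitrary aspect ratio."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, max(h - patch_size, 0) + 1, stride):
        for x in range(0, max(w - patch_size, 0) + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return patches

def identify_script(image, score_cnn, attention_cnn, n_scripts):
    """Fuse per-patch script scores weighted by their attention weights."""
    fused = np.zeros(n_scripts)
    total_w = 1e-8
    for patch in extract_patches(image):
        w = attention_cnn(patch)        # contribution weight of this patch
        s = score_cnn(patch)            # per-script scores for this patch
        fused += w * np.asarray(s)
        total_w += w
    return int(np.argmax(fused / total_w))  # predicted script index
```

Because every patch has the same size, the two CNNs never see the arbitrary aspect ratio of the full scene image; only the fusion step operates at image level.
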


Author(s):  
Sankirti Sandeep Shiravale ◽  
R. Jayadevan ◽  
Sanjeev S. Sannakki

Text present in camera-captured scene images is semantically rich and can be used for image understanding. Automatic detection, extraction, and recognition of text are therefore crucial in image-understanding applications. Text detection in natural scene images is a tedious task due to complex backgrounds, uneven lighting conditions, and multi-coloured, multi-sized fonts. In this paper, two techniques, namely edge detection and colour-based clustering, are combined to detect text in scene images, and region properties are used to eliminate falsely generated candidate regions. A dataset of 1250 images was created and used for experimentation. Experimental results show that the combined approach performs better than either individual approach.
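
A minimal sketch of how edge detection and colour-based clustering could be combined, with region-property filtering of candidates. The thresholds, cluster count, and filtering rules below are illustrative assumptions, not the values or criteria used in the paper.

```python
import cv2
import numpy as np

def detect_text_regions(bgr, k=3):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                      # edge evidence

    # Colour-based clustering: group pixels into k colour clusters.
    data = bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, _ = cv2.kmeans(data, k, None, criteria, 5,
                              cv2.KMEANS_RANDOM_CENTERS)
    labels = labels.reshape(gray.shape)

    boxes = []
    for c in range(k):
        mask = np.uint8(labels == c) * 255
        # Keep only cluster pixels supported by nearby edge evidence.
        support = cv2.dilate(edges, np.ones((5, 5), np.uint8))
        mask = cv2.bitwise_and(mask, support)
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
        for i in range(1, n):
            x, y, w, h, area = stats[i]
            # Region-property filtering of false candidates (size, aspect).
            if area > 50 and 0.1 < w / float(h) < 10:
                boxes.append((x, y, w, h))
    return boxes
```
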


Author(s):  
Youssef Ouadid ◽  
Abderrahmane Elbalaoui ◽  
Mehdi Boutaounte ◽  
Mohamed Fakir ◽  
Brahim Minaoui

In this paper, a graph-based handwritten Tifinagh character recognition system is presented. In preprocessing, the Zhang-Suen thinning algorithm is enhanced. For feature extraction, a novel key-point extraction algorithm is presented. Each image is then represented by an adjacency matrix defining a graph whose nodes are the extracted feature points, and these graphs are classified using a graph matching method. Experimental results on two databases demonstrate the effectiveness of the system, which achieves good recognition rates.
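
A minimal sketch of the graph construction described above, assuming a binary character image: scikit-image's `skeletonize` stands in for the enhanced Zhang-Suen thinning step, skeleton endpoints and branch points stand in for the paper's key points, and a simple distance test stands in for tracing skeleton branches between nodes. All of these simplifications are assumptions.

```python
import numpy as np
from skimage.morphology import skeletonize

def character_graph(binary_char, link_dist=15):
    skel = skeletonize(binary_char > 0)
    h, w = skel.shape
    # A skeleton pixel's 8-neighborhood count decides its role:
    # 1 neighbor -> endpoint, >= 3 neighbors -> branch point.
    keypoints = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not skel[y, x]:
                continue
            nbrs = skel[y-1:y+2, x-1:x+2].sum() - 1
            if nbrs == 1 or nbrs >= 3:
                keypoints.append((y, x))
    # Adjacency matrix over the key points (nodes of the character graph).
    n = len(keypoints)
    adj = np.zeros((n, n), dtype=int)
    pts = np.array(keypoints)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pts[i] - pts[j]) < link_dist:
                adj[i, j] = adj[j, i] = 1
    return keypoints, adj
```

The resulting adjacency matrices would then be compared by a graph matching method to classify the character.
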


2016 ◽  
Vol 6 (Special Issue) ◽  
pp. 109-113 ◽  
Author(s):  
Shivananda V. Seeri ◽  
J.D. Pujari ◽  
P.S. Hiremath

Author(s):  
O. Akbani ◽  
A. Gokrani ◽  
M. Quresh ◽  
Furqan M. Khan ◽  
Sadaf I. Behlim ◽  
...  

2014 ◽  
Vol 2014 ◽  
pp. 1-8
Author(s):  
Chunmei Liu

Degradation diagnosis plays an important role in degraded character processing, as it indicates how difficult a given degraded character will be to recognize. In this paper, we present a framework for an automated degraded character recognition system based on a statistical syntactic approach using 3D primitive symbols, integrated with degradation diagnosis to provide accurate and reliable recognition results. Our contribution is a framework that builds character recognition submodels corresponding to degradation caused by camera vibration or defocus. In each submodel, the statistical syntactic approach using 3D primitive symbols is employed to improve degraded character recognition performance. Experiments on a degraded character dataset show promising results, highlighting the efficiency of the system and the recognition performance of the statistical syntactic approach using 3D primitive symbols.
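
A minimal sketch of the framework's control flow, assuming a degradation-diagnosis function and one recognition submodel per degradation type (camera vibration, defocus). The function and key names below are illustrative placeholders, not the paper's implementation.

```python
from typing import Callable, Dict

def recognize(char_image,
              diagnose: Callable[[object], str],
              submodels: Dict[str, Callable[[object], str]]) -> str:
    """Route a degraded character to the submodel matching its degradation."""
    kind = diagnose(char_image)            # e.g. 'vibration' or 'defocus'
    model = submodels.get(kind, submodels['default'])
    return model(char_image)               # statistical-syntactic recognition

# Usage sketch:
# result = recognize(img, diagnose_degradation,
#                    {'vibration': vibration_model,
#                     'defocus': defocus_model,
#                     'default': generic_model})
```
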

