A new method for detection and prediction of occluded text in natural scene images

Author(s):  
Ayush Mittal ◽  
Palaiahnakote Shivakumara ◽  
Umapada Pal ◽  
Tong Lu ◽  
Michael Blumenstein
Author(s):  
Houda Gaddour ◽  
Slim Kanoun ◽  
Nicole Vincent

Text in scene images can provide useful and vital information for content-based image analysis. Therefore, text detection and script identification in images are important tasks. In this paper, we propose a new method for text detection in natural scene images, particularly for Arabic text, based on a bottom-up approach in which four principal steps can be highlighted. First, extremely stable and homogeneous regions of interest (ROIs) are detected using the proposed Color Stability and Homogeneity Regions (CSHR) technique. These regions are then labeled as textual or non-textual ROIs using a structural approach. Next, the textual ROIs are grouped into zones according to the spatial relations between them. Finally, the textual or non-textual nature of the constituted zones is refined using both handcrafted features and features learned by a Convolutional Neural Network (CNN). The proposed method was evaluated on databases used for text detection in natural scene images: the competitions organized at the 2017 edition of the International Conference on Document Analysis and Recognition (ICDAR2017), the Urdu-text database, and our Natural Scene Image Database for Arabic Text detection (NSIDAT). The experimental results obtained are promising.
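The zone-grouping step of such a bottom-up pipeline can be sketched as follows. This is a minimal illustration, not the paper's algorithm: textual ROIs (bounding boxes) are greedily merged into zones when they are horizontally close and vertically aligned, with thresholds chosen purely for illustration.

```python
# Hypothetical sketch of grouping textual ROIs into zones by spatial
# relations. Boxes are (x, y, w, h); thresholds are assumed values.

def vertical_overlap(a, b):
    """Fraction of vertical overlap between two boxes (x, y, w, h)."""
    top = max(a[1], b[1])
    bottom = min(a[1] + a[3], b[1] + b[3])
    return max(0, bottom - top) / min(a[3], b[3])

def group_rois(rois, max_gap=20, min_overlap=0.5):
    """Greedily merge ROIs, scanned left to right, into text zones."""
    zones = []
    for roi in sorted(rois, key=lambda r: r[0]):
        for zone in zones:
            last = zone[-1]
            gap = roi[0] - (last[0] + last[2])  # horizontal gap to zone end
            if gap <= max_gap and vertical_overlap(roi, last) >= min_overlap:
                zone.append(roi)
                break
        else:
            zones.append([roi])  # start a new zone
    return zones
```

Two adjacent, vertically aligned boxes fall into one zone; a distant box starts a new zone. A real system would refine each zone afterwards, as the abstract describes.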


2019 ◽  
pp. 30-33
Author(s):  
U. R. Khamdamov ◽  
M. N. Mukhiddinov ◽  
A. O. Mukhamedaminov ◽  
O. N. Djuraev

Author(s):  
Pushpendra Singh ◽  
P.N. Hrisheekesha ◽  
Vinai Kumar Singh

Content-based image retrieval (CBIR) is a field of information retrieval in which images similar to a query are retrieved from a database based on various image descriptive parameters. Machine-learning-based systems use the image descriptor vector for storage, learning, and template matching. These feature descriptor vectors describe the visual content of an image locally or globally using texture, color, shape, and other information. In the past, several algorithms were proposed to extract a variety of contents from an image, on the basis of which the image is retrieved from the database. However, the literature suggests that the precision and recall obtained using a single content descriptor are not significant. The main aim of this paper is to categorize and evaluate the algorithms proposed over the last 10 years. In addition, an experiment is performed using a hybrid content descriptor methodology that yields significant results compared with state-of-the-art algorithms. The hybrid methodology decreases the error rate and improves precision and recall on a large natural scene image dataset with more than 20 classes.
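The hybrid-descriptor idea can be illustrated with a toy sketch: a coarse color histogram and a simple grey-level "texture" statistic are concatenated into one feature vector, and retrieval ranks database images by Euclidean distance. The specific features and sizes here are assumptions for illustration, not the paper's descriptors.

```python
# Illustrative hybrid content descriptor: concatenated color + texture
# features, ranked by Euclidean distance. Toy features, not the paper's.
import math

def color_histogram(pixels, bins=4):
    """Coarse RGB histogram; pixels is a list of (r, g, b) in 0..255."""
    hist = [0.0] * (bins ** 3)
    for r, g, b in pixels:
        idx = ((r * bins // 256) * bins + (g * bins // 256)) * bins + (b * bins // 256)
        hist[idx] += 1
    total = len(pixels) or 1
    return [h / total for h in hist]

def hybrid_descriptor(pixels):
    """Concatenate the color histogram with a toy texture feature
    (normalized grey-level variance)."""
    grey = [(r + g + b) / 3 for r, g, b in pixels]
    mean = sum(grey) / len(grey)
    var = sum((v - mean) ** 2 for v in grey) / len(grey)
    return color_histogram(pixels) + [var / 255.0 ** 2]

def retrieve(query, database, k=3):
    """Return indices of the k database descriptors nearest to the query."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(range(len(database)), key=lambda i: dist(query, database[i]))
    return ranked[:k]
```

Combining complementary descriptors this way is what lets a hybrid method outperform any single-content descriptor, as the abstract argues.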


2021 ◽  
Vol 40 (1) ◽  
pp. 551-563
Author(s):  
Liqiong Lu ◽  
Dong Wu ◽  
Ziwei Tang ◽  
Yaohua Yi ◽  
Faliang Huang

This paper focuses on script identification in natural scene images. Traditional CNNs (Convolutional Neural Networks) cannot solve this problem perfectly for two reasons: one is the arbitrary aspect ratios of scene images, which create difficulty for traditional CNNs that require a fixed-size image as input; the other is that some scripts with minor differences are easily confused because they share a subset of characters with the same shapes. We propose a novel approach combining a Score CNN, an Attention CNN and patches. The Attention CNN determines whether a patch is discriminative and calculates the contribution weight of each discriminative patch to script identification of the whole image. The Score CNN takes a discriminative patch as input and predicts the score of each script type. First, patches of the same size are extracted from the scene images. Second, these patches are used as inputs to the Score CNN and the Attention CNN to train two patch-level classifiers. Finally, the results of multiple discriminative patches extracted from the same image via the above two classifiers are fused to obtain the script type of the image. Using patches of the same size as CNN inputs avoids the problems caused by arbitrary aspect ratios of scene images, and the trained classifiers can mine discriminative patches to accurately identify confusing scripts. The experimental results show the good performance of our approach on four public datasets.
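The fusion step described above can be sketched in a few lines. In this hedged illustration the two CNNs are stubbed out: each discriminative patch is assumed to yield a per-script score vector (standing in for the Score CNN output) and a scalar contribution weight (standing in for the Attention CNN output), and the image-level script is the argmax of the weighted sum.

```python
# Sketch of patch-level fusion: weighted sum of per-patch script scores.
# The score vectors and attention weights are assumed inputs; the CNNs
# that would produce them are not implemented here.

def fuse_patch_predictions(patch_scores, patch_weights):
    """Fuse patch-level results into one image-level script label.

    patch_scores: list of per-patch score vectors (one entry per script).
    patch_weights: attention weight per patch (higher = more discriminative).
    Returns the index of the winning script type.
    """
    n_scripts = len(patch_scores[0])
    fused = [0.0] * n_scripts
    for scores, w in zip(patch_scores, patch_weights):
        for i, s in enumerate(scores):
            fused[i] += w * s  # attention-weighted contribution
    return max(range(n_scripts), key=lambda i: fused[i])
```

With this weighting, a highly discriminative patch dominates the decision even if most patches are ambiguous, which is the intuition behind pairing the two classifiers.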


Author(s):  
Sankirti Sandeep Shiravale ◽  
R. Jayadevan ◽  
Sanjeev S. Sannakki

Text present in camera-captured scene images is semantically rich and can be used for image understanding. Automatic detection, extraction, and recognition of text are crucial in image-understanding applications. Text detection in natural scene images is a tedious task due to complex backgrounds, uneven lighting conditions, and multi-coloured, multi-sized fonts. Two techniques, namely ‘edge detection' and ‘colour-based clustering', are combined in this paper to detect text in scene images. Region properties are used to eliminate falsely generated annotations. A dataset of 1250 images was created and used for experimentation. Experimental results show that the combined approach performs better than either individual approach.
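The combination-and-filtering logic can be sketched as follows. This is an assumed illustration, not the paper's implementation: candidate boxes from an edge-based route and a colour-clustering route are merged, then filtered by region properties (area and aspect ratio, with made-up thresholds) to drop falsely generated annotations.

```python
# Hypothetical sketch: merge candidates from two detectors, then filter
# by region properties. Boxes are (x, y, w, h); thresholds are assumed.

def filter_by_region_properties(boxes, min_area=50, max_aspect=15.0):
    """Keep boxes with plausible text-like geometry."""
    kept = []
    for x, y, w, h in boxes:
        area = w * h
        aspect = max(w, h) / max(1, min(w, h))
        if area >= min_area and aspect <= max_aspect:
            kept.append((x, y, w, h))
    return kept

def combine_detections(edge_boxes, colour_boxes):
    """Union of both detectors' candidates, de-duplicated and filtered."""
    merged = list(dict.fromkeys(edge_boxes + colour_boxes))
    return filter_by_region_properties(merged)
```

Taking the union lets each route recover text the other misses, while the region-property filter removes the extra false positives that the union introduces.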


Author(s):  
Yash Patel ◽  
Lluis Gomez ◽  
Marçal Rusiñol ◽  
Dimosthenis Karatzas
