Devanagari and Bangla Text Extraction from Natural Scene Images

Author(s):  
Ujjwal Bhattacharya ◽  
Swapan Kumar Parui ◽  
Srikanta Mondal
2019 ◽  
Vol 2019 (8) ◽  
pp. 5397-5406
Author(s):  
Angia Venkatesan Karpagam ◽  
Mohan Manikandan

2021 ◽  
Vol 7 ◽  
pp. e717
Author(s):  
Hazrat Ali ◽  
Khalid Iqbal ◽  
Ghulam Mujtaba ◽  
Ahmad Fayyaz ◽  
Mohammad Farhad Bulbul ◽  
...  

Text detection in natural scene images for content analysis is an interesting task. The research community has seen great developments in English/Mandarin text detection; however, Urdu text extraction from natural scene images has not been well addressed. In this work, firstly, a new dataset is introduced for Urdu text in natural scene images. The dataset comprises 500 standalone images acquired from real scenes. Secondly, the channel-enhanced Maximally Stable Extremal Regions (MSER) method is applied to extract candidate Urdu text regions from an image. A two-stage filtering mechanism is then applied to eliminate non-text regions. In the first stage, text and noise are separated based on their geometric properties. In the second stage, a support vector machine (SVM) classifier is trained to discard non-text candidate regions. After this, text candidate regions are linked using centroid-based vertical and horizontal distances. The resulting text lines are further analyzed by a separate classifier based on HOG features to remove remaining non-text regions. Extensive experiments are performed on the newly developed dataset, and the results show good performance on the test set. The dataset will be made available for research use. To the best of our knowledge, this work is the first of its kind for the Urdu language; it provides a dataset for free research use and serves as a baseline for the task of Urdu text extraction.
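The stage-1 geometric filtering and the centroid-based linking described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the aspect-ratio, fill-ratio, and distance thresholds are assumed values chosen only for demonstration.

```python
# Sketch of geometric filtering and centroid-based linking of candidate
# text regions. All thresholds below are illustrative assumptions.

def geometric_filter(regions, min_aspect=0.1, max_aspect=10.0, min_fill=0.2):
    """Keep candidate regions whose bounding-box geometry looks text-like.

    Each region is a dict with 'x', 'y', 'w', 'h' (bounding box) and
    'area' (pixel count of the connected component).
    """
    kept = []
    for r in regions:
        aspect = r['w'] / r['h']
        fill = r['area'] / (r['w'] * r['h'])  # component area vs. box area
        if min_aspect <= aspect <= max_aspect and fill >= min_fill:
            kept.append(r)
    return kept


def link_by_centroid(regions, max_dx=50, max_dy=10):
    """Group regions into text lines by horizontal/vertical centroid distance.

    A region joins an existing line when its centroid lies within max_dx
    horizontally and max_dy vertically of the line's last member
    (assumed tolerances).
    """
    def centroid(r):
        return (r['x'] + r['w'] / 2, r['y'] + r['h'] / 2)

    lines = []
    for r in sorted(regions, key=lambda r: centroid(r)[0]):
        cx, cy = centroid(r)
        for line in lines:
            lx, ly = centroid(line[-1])
            if abs(cx - lx) <= max_dx and abs(cy - ly) <= max_dy:
                line.append(r)
                break
        else:
            lines.append([r])
    return lines
```

For example, two upright character-sized boxes sitting side by side pass the filter and merge into one line, while a long thin sliver (e.g. a horizontal edge of a building) is rejected by the aspect-ratio test.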


Author(s):  
Vaibhav Goel ◽  
Vaibhav Kumar ◽  
Amandeep Singh Jaggi ◽  
Preeti Nagrath

The rapid development of technology and multimedia capability in digital cameras and mobile devices has led to an ever-increasing volume of images and multimedia data in the digital world. In natural scene images in particular, the text content provides explicit information for understanding the semantics of the image. Therefore, a system that extracts and recognizes text accurately from natural scene images in real time has significant relevance to numerous applications, such as assistive technology for people with vision impairment, tourists facing language barriers, vehicle number-plate detection, street signs, advertisement billboards, and robotics. Extracting text from natural scene images is a formidable task due to large variations in character fonts, styles, sizes, and text orientations, the presence of complex backgrounds, and varying lighting conditions. The main focus of this paper is to propose a novel hybrid approach for automatic detection, localization, extraction, and recognition of text in natural scene images with cluttered backgrounds. Firstly, image regions containing text are detected using edge features (GLCM) extracted from the Contourlet-transformed image and an SVM (Support Vector Machine) classifier. Secondly, horizontal projection is applied to text regions to segment lines, and vertical projection is applied to each text line to segment characters. The proposed text extraction method produced precision, recall, F-score, and accuracy of 98.50%, 90.85%, 95.00%, and 89.90%, respectively, demonstrating its efficiency. The extracted characters are then processed for recognition using the Contourlet transform and a Probabilistic Neural Network (PNN) classifier; the computed features are moment invariants. Only English script is considered in the experiments.
The proposed character recognition method achieves an accuracy of 79.07%, higher than the 75.15% obtained by a KNN (K-Nearest Neighbors) classifier.
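The projection-based segmentation step above can be illustrated with a minimal sketch. The binary image is represented as a list of rows of 0/1 values, and text lines (or characters) are taken to be maximal runs of rows (or columns) with a non-zero projection; this zero-run splitting rule is an assumption about the segmentation criterion, not the paper's exact method.

```python
def horizontal_projection_segments(binary):
    """Split a binary image (list of 0/1 rows) into text-line bands.

    A row's projection value is its count of text pixels; consecutive
    rows with non-zero projection form one (start, end) segment,
    with end exclusive.
    """
    projection = [sum(row) for row in binary]
    segments, start = [], None
    for i, value in enumerate(projection):
        if value > 0 and start is None:
            start = i
        elif value == 0 and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(projection)))
    return segments


def vertical_projection_segments(binary):
    """Apply the same zero-run split column-wise to isolate characters."""
    transposed = list(zip(*binary))
    return horizontal_projection_segments(transposed)
```

Running the horizontal pass first yields line bands; running the vertical pass on each band then separates the characters within that line.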

