THE TECHNIQUE OF EXTRACTION TEXT AREAS ON SCANNED DOCUMENT IMAGE USING LINEAR FILTRATION

2019 ◽  
Vol 2 (3) ◽  
pp. 206-215
Author(s):  
Alesya Ishchenko ◽  
Alexandr Nesteryuk ◽  
Marina Polyakova

2020 ◽  
Vol 2020 (9) ◽  
pp. 323-1-323-8
Author(s):  
Litao Hu ◽  
Zhenhua Hu ◽  
Peter Bauer ◽  
Todd J. Harris ◽  
Jan P. Allebach

Image quality assessment has been a very active research area in image processing, and numerous methods have been proposed. However, most existing methods focus on digital images that only or mainly contain pictures or photos taken by digital cameras. Traditional approaches evaluate an input image as a whole and estimate a single quality score, in order to give viewers an idea of how “good” the image looks. In this paper, we focus on evaluating the quality of symbolic content such as text, bar codes, QR codes, lines, and handwriting in target images. A quality score for this kind of information can be based on whether it is readable by a human or recognizable by a decoder. Moreover, we mainly study the viewing quality of a scanned document produced from a printed image. For this purpose, we propose a novel image quality assessment algorithm that determines the readability of a scanned document or of regions within it. Experimental results on test images demonstrate the effectiveness of our method.
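As a rough illustration of the decoder-based readability criterion mentioned in the abstract (not the authors' actual algorithm), the sketch below treats a region as readable when an off-the-shelf decoder or OCR engine can recover its content. The file name, crop coordinates, and confidence scaling are hypothetical, and OpenCV plus pytesseract are assumed as dependencies.

```python
# Minimal sketch: readability-based quality scores for symbol regions in a
# scanned document, using decoder/OCR success as a proxy for readability.
import cv2
import pytesseract


def qr_readability(region_bgr):
    """Return 1.0 if the QR code in the region decodes, else 0.0."""
    data, _, _ = cv2.QRCodeDetector().detectAndDecode(region_bgr)
    return 1.0 if data else 0.0


def text_readability(region_bgr):
    """Mean OCR word confidence in [0, 1] as a proxy for human readability."""
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    ocr = pytesseract.image_to_data(gray, output_type=pytesseract.Output.DICT)
    confs = [float(c) for c in ocr["conf"] if float(c) >= 0]  # -1 marks non-word boxes
    return (sum(confs) / len(confs)) / 100.0 if confs else 0.0


if __name__ == "__main__":
    scan = cv2.imread("scanned_page.png")   # hypothetical input scan
    text_region = scan[100:300, 50:600]     # hypothetical text crop
    print("text readability:", text_readability(text_region))
```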


2020 ◽  
Vol 64 (3) ◽  
pp. 30401-1-30401-14 ◽  
Author(s):  
Chih-Hsien Hsia ◽  
Ting-Yu Lin ◽  
Jen-Shiun Chiang

Abstract: In recent years, the preservation of handwritten historical documents and scripts archived as digitized images has received growing attention. However, when thin paper is used for printing or writing, the content of the back page is likely to seep into the front page. To solve this, a cost-efficient document image system is proposed. In this system, the authors use the Adaptive Directional Lifting-Based Discrete Wavelet Transform (ADL-DWT) to transform the image data from the spatial domain to the frequency domain and process the high- and low-frequency subbands separately. For the low frequencies, they use a local threshold to remove most of the background information. For the high frequencies, they use a modified Least Mean Square (LMS) training algorithm to produce a unique weighted mask and convolve it with each subband. Afterward, the inverse ADL-DWT is performed to reconstruct the four subband images into a result image of the original size. Finally, a global binarization method, Otsu's method, is applied to transform the gray-scale image into a binary output image. The results show that the difference in operation time between a personal computer (PC) and a Raspberry Pi is small. Therefore, the proposed cost-efficient document image system running on the Raspberry Pi embedded platform achieves the same performance and results as on a PC.
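The pipeline described above can be outlined roughly as follows. This is a simplified sketch, not the authors' implementation: a standard 2-D Haar DWT (PyWavelets) stands in for the Adaptive Directional Lifting-Based DWT, a fixed sharpening kernel stands in for the LMS-trained mask, and an adaptive mean threshold approximates the local threshold on the low-frequency subband.

```python
# Simplified bleed-through removal sketch: DWT -> per-subband processing ->
# inverse DWT -> global Otsu binarization. Dependencies: OpenCV, NumPy, PyWavelets.
import cv2
import numpy as np
import pywt


def remove_bleed_through(gray):
    # Decompose into one low-frequency and three high-frequency subbands.
    LL, (LH, HL, HH) = pywt.dwt2(gray.astype(np.float32), "haar")

    # Low frequency: local (adaptive mean) threshold to suppress background
    # and back-page bleed-through.
    ll_u8 = cv2.normalize(LL, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    ll_clean = cv2.adaptiveThreshold(ll_u8, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                     cv2.THRESH_BINARY, 35, 10).astype(np.float32)

    # High frequencies: convolve each subband with a small weighted mask
    # (a fixed 3x3 kernel here, standing in for the LMS-trained one).
    mask = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], np.float32)
    LH, HL, HH = (cv2.filter2D(b, -1, mask) for b in (LH, HL, HH))

    # Reconstruct to the original size and binarize globally with Otsu's method.
    rec = pywt.idwt2((ll_clean, (LH, HL, HH)), "haar")
    rec = cv2.normalize(rec, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(rec, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary


if __name__ == "__main__":
    page = cv2.imread("historical_page.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    cv2.imwrite("binarized.png", remove_bleed_through(page))
```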


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Wei Xiong ◽  
Lei Zhou ◽  
Ling Yue ◽  
Lirong Li ◽  
Song Wang

Abstract: Binarization plays an important role in document analysis and recognition (DAR) systems. In this paper, we present our winning algorithm from the ICFHR 2018 competition on handwritten document image binarization (H-DIBCO 2018), which is based on background estimation and energy minimization. First, we adopt mathematical morphological operations to estimate and compensate for the document background, using a disk-shaped structuring element whose radius is computed by the minimum-entropy-based stroke width transform (SWT). Second, we perform Laplacian energy-based segmentation on the compensated document images. Finally, we apply post-processing to preserve text stroke connectivity and eliminate isolated noise. Experimental results indicate that the proposed method outperforms other state-of-the-art techniques on several publicly available benchmark datasets.
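The background-estimation and compensation step can be sketched as follows, under loose assumptions: a fixed disk radius replaces the SWT/entropy-based estimate, and Otsu's thresholding stands in for the Laplacian energy-based segmentation, so this approximates the structure of the method rather than the winning implementation.

```python
# Sketch of morphological background estimation, compensation, and binarization.
# Dependencies: OpenCV, NumPy.
import cv2


def binarize(gray, radius=15):
    # Estimate the background by morphological closing with a disk-shaped
    # structuring element larger than the typical text stroke width.
    disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                     (2 * radius + 1, 2 * radius + 1))
    background = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, disk)

    # Compensate the document: divide out the estimated background so uneven
    # illumination and stains are flattened before segmentation.
    compensated = cv2.divide(gray, background, scale=255)

    # Segment the compensated image (Otsu stands in for the Laplacian
    # energy-based segmentation), then suppress isolated salt-and-pepper noise.
    _, binary = cv2.threshold(compensated, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.medianBlur(binary, 3)


if __name__ == "__main__":
    page = cv2.imread("dibco_sample.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    cv2.imwrite("binarized.png", binarize(page))
```

In the paper itself, the disk radius is not fixed but derived from the minimum-entropy-based stroke width transform of the input page.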


2021 ◽  
Author(s):  
Li Liu ◽  
Zhiyu Wang ◽  
Taorong Qiu ◽  
Qiu Chen ◽  
Yue Lu ◽  
...  

2021 ◽  
pp. 108099
Author(s):  
Francisco J. Castellanos ◽  
Antonio-Javier Gallego ◽  
Jorge Calvo-Zaragoza

2019 ◽  
Vol 9 (2) ◽  
pp. 236 ◽  
Author(s):  
Saad Ahmed ◽  
Saeeda Naz ◽  
Muhammad Razzak ◽  
Rubiyah Yusof

This paper presents a comprehensive survey on Arabic cursive scene text recognition. Publications in recent years reflect a shift of interest among document image analysis researchers from recognition of optical characters to recognition of characters appearing in natural images. Scene text recognition is a challenging problem because the text varies in font style, size, alignment, orientation, reflection, illumination, blurriness, and background complexity. Among cursive scripts, Arabic scene text recognition is considered an even more challenging problem due to joined writing, variations of the same character, a large number of ligatures, multiple baselines, and so on. Surveys of Latin and Chinese script-based scene text recognition systems exist, but the Arabic-like scene text recognition problem has yet to be addressed in detail. In this manuscript, we highlight some of the latest techniques presented for text classification. The presented techniques, which follow deep learning architectures, are equally suitable for the development of Arabic cursive scene text recognition systems. Issues pertaining to text localization and feature extraction are also discussed. Moreover, this article emphasizes the importance of a benchmark cursive scene text dataset. Based on the discussion, future directions are outlined, some of which may provide researchers with insight into cursive scene text.

