An enhanced binarization framework for degraded historical document images

2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Wei Xiong ◽  
Lei Zhou ◽  
Ling Yue ◽  
Lirong Li ◽  
Song Wang

Abstract: Binarization plays an important role in document analysis and recognition (DAR) systems. In this paper, we present our winning algorithm from the ICFHR 2018 competition on handwritten document image binarization (H-DIBCO 2018), which is based on background estimation and energy minimization. First, we adopt mathematical morphological operations to estimate and compensate for the document background, using a disk-shaped structuring element whose radius is computed by a minimum entropy-based stroke width transform (SWT). Second, we perform Laplacian energy-based segmentation on the compensated document images. Finally, we apply post-processing to preserve text stroke connectivity and eliminate isolated noise. Experimental results indicate that the proposed method outperforms other state-of-the-art techniques on several publicly available benchmark datasets.
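The background estimation and compensation step described above can be sketched as follows. This is a minimal illustration, not the authors' exact pipeline: the SWT-based radius selection is replaced by a fixed, hand-picked radius, and the `disk` and `compensate_background` helpers are hypothetical names introduced here.

```python
import numpy as np
from scipy import ndimage

def disk(radius):
    """Boolean disk-shaped structuring element (footprint) of the given radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

def compensate_background(img, radius=5):
    """Estimate the background with a morphological closing (dark strokes on a
    light page vanish when the disk exceeds the stroke width), then subtract
    the estimate so the page background becomes approximately uniform."""
    background = ndimage.grey_closing(img.astype(np.float64), footprint=disk(radius))
    compensated = img.astype(np.float64) - background  # text goes negative
    return compensated - compensated.min()             # shift to a non-negative range

# Toy example: a light page with one thin dark stroke.
page = np.full((32, 32), 200.0)
page[10:22, 15] = 40.0            # vertical stroke, width 1
flat = compensate_background(page, radius=3)
```

After compensation the background sits at a roughly constant level and the strokes near zero, so a simple global threshold (or the Laplacian energy-based segmentation of the paper) separates ink from page.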

2021 ◽  
pp. 1-14
Author(s):  
R.L. Jyothi ◽  
M. Abdul Rahiman

Binarization is the most important stage in historical document image processing. The efficient working of character and word recognition algorithms depends on effective segmentation methods, and segmentation algorithms in turn depend on images free of noise and degradation. Most historical documents are illegible, with degradations such as bleed-through, faded ink or faint characters, uneven illumination, and contrast variation. For effective processing of these document images, efficient binarization algorithms must be devised. Here, a simple modified version of a Convolutional Neural Network (CNN) is proposed for historical document binarization. The AOD-Net architecture, which generates dehazed images from hazy images, is modified to create the proposed network. The new CNN model is created by incorporating a Difference of Concatenation (DOC) layer, an Enhancement (EN) layer, and a Thresholding layer into AOD-Net to make it suitable for binarizing highly degraded document images. The DOC and EN layers work effectively against degradation that exists in the form of low-pass noise. The computational complexity of the proposed model is reduced by decreasing the number of layers and by introducing filters in the convolution layers that work with low inter-pixel dependency. This modified CNN works effectively on a variety of highly degraded documents when tested on benchmark historical datasets. The main highlight of the proposed network is that it generalizes to any type of document image without further parameter tuning. Another important highlight is that it handles most of the degradation categories present in document images. In this work, the performance of the proposed model is compared with Otsu, Sauvola, and three recent Deep Learning-based models.
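Sauvola's method, one of the classical baselines this abstract compares against, computes a per-pixel threshold from local statistics. A minimal sketch follows; the window size and the `k` and `R` parameters are illustrative defaults, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def sauvola(img, window=15, k=0.2, R=128.0):
    """Sauvola local thresholding: T = m * (1 + k * (s / R - 1)), where m and s
    are the mean and standard deviation over a window x window neighbourhood."""
    img = img.astype(np.float64)
    mean = ndimage.uniform_filter(img, size=window)
    sq_mean = ndimage.uniform_filter(img * img, size=window)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    threshold = mean * (1.0 + k * (std / R - 1.0))
    return img > threshold  # True = background, False = ink

# Toy example: a light page with one thin dark stroke.
page = np.full((32, 32), 220.0)
page[8:24, 16] = 30.0
binary = sauvola(page)
```

Because the threshold adapts to local mean and contrast, Sauvola tolerates uneven illumination better than a single global threshold, which is why it remains a standard baseline for degraded documents.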


2005 ◽  
Vol 05 (02) ◽  
pp. 281-309
Author(s):  
ZHERU CHI ◽  
QING WANG

Binarization of gray-scale document images is one of the most important steps in automatic document image processing. In this paper, we present a two-stage document image binarization approach, with top-down region-based binarization at the first stage and a neural network-based binarization technique for the problematic blocks at the second stage, after a feedback check. Our two-stage approach is particularly effective for binarizing text images containing highlighted or marked text. The region-based binarization method is fast and suitable for processing large document images; however, block effects and regional edge noise are two unavoidable problems that result in poor character segmentation and recognition. The neural network-based classifier achieves good performance in two-class classification problems such as the binarization of gray-level document images, but it is computationally costly. In our two-stage approach, feedback criteria are employed to keep the well-binarized blocks from the first stage and to re-binarize the problematic blocks at the second stage using the neural network binarizer, improving character segmentation quality. Experimental results on a number of document images show that our two-stage approach outperforms the single-stage binarization techniques tested, in terms of both character segmentation quality and computational cost.
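The two-stage idea above can be sketched as follows. This is a hedged illustration, not the paper's implementation: stage one binarizes each block with Otsu's method, a simple contrast test stands in for the paper's feedback criteria, and the global Otsu threshold stands in for the neural network binarizer at stage two. The block size and contrast cutoff are invented for the example.

```python
import numpy as np

def otsu_threshold(values):
    """Exhaustive Otsu: pick the threshold maximizing between-class variance."""
    hist = np.bincount(values.astype(np.uint8).ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def two_stage_binarize(img, block=16, min_contrast=30):
    """Stage one: per-block Otsu. Feedback check: blocks with too little
    contrast are flagged as problematic. Stage two (stand-in for the paper's
    neural network): re-binarize flagged blocks with the global threshold."""
    out = np.zeros(img.shape, dtype=bool)
    global_t = otsu_threshold(img)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            blk = img[i:i + block, j:j + block]
            if blk.max() - blk.min() < min_contrast:
                t = global_t           # problematic block: fall back globally
            else:
                t = otsu_threshold(blk)  # well-binarized block: keep local result
            out[i:i + block, j:j + block] = blk > t
    return out

# Toy example: a light page with one thin dark stroke.
page = np.full((32, 32), 200, dtype=np.uint8)
page[8:24, 16] = 40
binary = two_stage_binarize(page)
```

The feedback check matters because per-block Otsu on a uniform, text-free block will still split it into two classes and hallucinate ink; routing such blocks through a second, more reliable stage avoids that failure mode.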

