Document Binarization
Recently Published Documents


TOTAL DOCUMENTS

40
(FIVE YEARS 12)

H-INDEX

10
(FIVE YEARS 2)

2021 ◽  
pp. 1-14
Author(s):  
R.L. Jyothi ◽  
M. Abdul Rahiman

Binarization is the most important stage in historical document image processing. The performance of character and word recognition algorithms depends on effective segmentation, and segmentation algorithms in turn depend on images free of noise and degradation. Most historical documents are barely legible because of degradations such as bleed-through, faded ink or faint characters, uneven illumination, and contrast variation. Effective processing of these document images therefore requires efficient binarization algorithms. Here, a simple modified version of a Convolutional Neural Network (CNN) is proposed for historical document binarization. The AOD-Net architecture, originally designed to generate dehazed images from hazy images, is modified to create the proposed network. The new CNN model is built by incorporating a Difference of Concatenation (DOC) layer, an Enhancement (EN) layer, and a Thresholding layer into AOD-Net to make it suitable for binarization of highly degraded document images. The DOC and EN layers are effective at removing degradation that appears in the form of low-pass noise. The computational complexity of the proposed model is reduced by decreasing the number of layers and by introducing convolutional filters with low inter-pixel dependency. This modified CNN works effectively on a variety of highly degraded documents when tested on benchmark historical datasets. The main highlight of the proposed network is that it generalizes to any type of document image without further parameter tuning. Another important highlight is that it handles most of the degradation categories present in document images. In this work, the performance of the proposed model is compared with Otsu, Sauvola, and three recent deep-learning-based models.
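As a rough illustration only, a minimal PyTorch sketch of a small fully convolutional binarizer in this spirit is shown below. The abstract does not give the exact definitions of the DOC, EN, and Thresholding layers, so the feature-difference, 1x1-convolution, and sigmoid stand-ins used here are assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class TinyBinarizationCNN(nn.Module):
    """Illustrative fully convolutional binarizer (not the paper's network).

    The DOC (Difference of Concatenation) and EN (Enhancement) layers are
    approximated by a feature-map difference and a 1x1 convolution; the
    Thresholding layer is approximated by a sigmoid over per-pixel logits.
    """
    def __init__(self, channels=8):
        super().__init__()
        self.conv1 = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.enhance = nn.Conv2d(channels, channels, kernel_size=1)  # stand-in for the EN layer
        self.head = nn.Conv2d(channels, 1, kernel_size=1)            # per-pixel ink/background logit
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        f1 = self.act(self.conv1(x))
        f2 = self.act(self.conv2(f1))
        doc = f2 - f1                          # stand-in for the DOC layer: difference of feature maps
        out = self.act(self.enhance(doc))
        return torch.sigmoid(self.head(out))   # soft threshold; binarize with > 0.5 at inference

# Usage sketch: probs = TinyBinarizationCNN()(gray_batch)  # gray_batch: (N, 1, H, W) in [0, 1]
#               mask = (probs > 0.5)
```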


Author(s):  
Amandeep Kumar ◽  
Shuvozit Ghose ◽  
Pinaki Nath Chowdhury ◽  
Partha Pratim Roy ◽  
Umapada Pal

Author(s):  
Elena E. Limonova ◽  
Dmitry P. Nikolaev ◽  
Vladimir V. Arlazarov

This paper addresses the issues of uneven lighting and complex backgrounds in camera-based document images using a new hybrid split-and-merge method. The input image is divided into four uniformly sized sub-images, the CLAHE enhancement technique is applied to each sub-image to limit noise amplification in each part of the input image, and adaptive document binarization is then performed on the four sub-images. Finally, the four sub-images are merged to obtain a binarized image free of noise and uneven illumination. The method yields encouraging results compared with the existing methods of Sauvola, Feng, and Kasar reported in the literature. The comparative analysis is performed on collected datasets, as standard datasets are not available.
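A minimal sketch of this split/enhance/binarize/merge pipeline is shown below, assuming OpenCV's CLAHE and Gaussian adaptive thresholding as stand-ins for the paper's enhancement and binarization steps; the parameter values are placeholders, not the paper's settings.

```python
import cv2
import numpy as np

def split_clahe_binarize(gray, clip_limit=2.0, tile=(8, 8), block=31, C=10):
    """Split the page into four quadrants, apply CLAHE and adaptive
    thresholding per quadrant, then stitch the results back together."""
    h, w = gray.shape
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    out = np.zeros_like(gray)
    for ys, ye in ((0, h // 2), (h // 2, h)):
        for xs, xe in ((0, w // 2), (w // 2, w)):
            sub = clahe.apply(gray[ys:ye, xs:xe])        # local contrast enhancement per quadrant
            out[ys:ye, xs:xe] = cv2.adaptiveThreshold(   # local (adaptive) binarization per quadrant
                sub, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                cv2.THRESH_BINARY, block, C)
    return out

# Usage sketch: binary = split_clahe_binarize(cv2.imread("page.jpg", cv2.IMREAD_GRAYSCALE))
```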


2019 ◽  
Vol 43 (5) ◽  
pp. 825-832 ◽  
Author(s):  
P.V. Bezmaternykh ◽  
D.A. Ilin ◽  
D.P. Nikolaev

Image binarization remains a challenging task in a variety of applications. In particular, the Document Image Binarization Contest (DIBCO) is organized regularly to track the state of the art in historical document binarization. In this work, we present the binarization method that ranked first in the DIBCO'17 contest. It is a convolutional neural network (CNN) based method that uses the U-Net architecture, originally designed for biomedical image segmentation. We describe our approach to training data preparation and our examination of the contest ground truth, and provide several insights on its construction (so-called hacking). This led to a more accurate statement of the historical document binarization problem with respect to the challenges one can face in open-access datasets. A Docker container with the final network, along with all the supplementary data used in the training process, has been published on GitHub.
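For readers unfamiliar with the architecture family, a deliberately tiny one-level U-Net sketch in PyTorch is given below. The DIBCO'17 network is deeper and its exact configuration ships in the published container; this sketch only illustrates the encoder/decoder-with-skip-connection idea applied to per-pixel binarization.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class MiniUNet(nn.Module):
    """One-level U-Net: encoder, bottleneck, decoder with a skip connection.
    Illustrative only; not the DIBCO'17 configuration."""
    def __init__(self, c=16):
        super().__init__()
        self.enc = conv_block(1, c)
        self.down = nn.MaxPool2d(2)
        self.mid = conv_block(c, 2 * c)
        self.up = nn.ConvTranspose2d(2 * c, c, 2, stride=2)
        self.dec = conv_block(2 * c, c)     # takes upsampled features concatenated with the skip
        self.head = nn.Conv2d(c, 1, 1)      # per-pixel ink/background logit

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))  # skip connection from the encoder
        return torch.sigmoid(self.head(d))

# Usage sketch: probs = MiniUNet()(patches)  # patches: (N, 1, H, W) with even H and W
#               mask = (probs > 0.5)
```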


2019 ◽  
Vol 5 (4) ◽  
pp. 48 ◽  
Author(s):  
Sulaiman ◽  
Omar ◽  
Nasrudin

In this era of digitization, most hard-copy documents are being transformed into digital formats. In the process of transformation, large quantities of documents are stored and preserved through electronic scanning. These documents come from various sources, such as ancient documentation, old legal records, medical reports, music scores, palm leaves, and reports on security-related issues. Ancient and historical documents in particular are hard to read because of degradations such as low contrast and corrupted artefacts. In recent years, degraded document binarization has been studied widely, and several approaches have been developed to deal with its issues and challenges. This paper presents a comprehensive review of the issues and challenges faced during the image binarization process, followed by insights into the various methods used for image binarization. It also discusses the advanced methods used to enhance degraded documents and thereby improve their quality during binarization. The effectiveness and robustness of existing methods are discussed further, and there remains scope to develop a hybrid approach that handles degraded document binarization more effectively.
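As a concrete reference point for the classical local-thresholding methods such reviews cover, a minimal NumPy/SciPy sketch of Sauvola's thresholding (with a global Otsu baseline via OpenCV) is given below. The window size, k, and R are the commonly cited defaults, not values taken from any particular reviewed paper.

```python
import cv2
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_binarize(gray, window=25, k=0.2, R=128.0):
    """Sauvola local thresholding: T = m * (1 + k * (s / R - 1)),
    where m and s are the local mean and standard deviation in the window."""
    g = gray.astype(np.float64)
    m = uniform_filter(g, window)                                        # local mean
    s = np.sqrt(np.maximum(uniform_filter(g ** 2, window) - m ** 2, 0))  # local standard deviation
    T = m * (1 + k * (s / R - 1))
    return (g > T).astype(np.uint8) * 255    # background stays white; ink below threshold becomes 0

def otsu_binarize(gray):
    """Global Otsu threshold for comparison."""
    _, out = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return out
```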

