Document Image Binarization using Image Segmentation Technique

Author(s):  
Mr. Aniket Pagare

Segmentation of text from badly degraded document images is an extremely difficult task because of the high inter/intra variation between the document background and the foreground text across different document images. Image processing and pattern recognition algorithms take considerable time to execute on a single-core processor. The Graphics Processing Unit (GPU) is increasingly popular because of its speed, programmability, low cost, and large number of built-in execution cores. The primary objective of this research work is to make binarization faster for the recognition of large numbers of degraded document images on the GPU. In this framework, we propose a new image segmentation algorithm in which every pixel in the image has its own threshold. We perform parallel work on a window of m*n size and extract the object pixels of the text stroke within that window. The document text is further segmented by a local threshold that is estimated based on the intensities of detected text stroke edge pixels within a local window.
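As a rough illustration of this kind of per-pixel local thresholding, the sketch below estimates each pixel's threshold from the mean intensity of detected stroke-edge pixels in its surrounding window. It is a CPU approximation under stated assumptions: Canny stands in for the abstract's stroke-edge detection, and the window size and minimum-edge-count rule are illustrative, not the authors' exact method. On a GPU, each m*n window could instead be assigned to its own thread block; the box filters here play that role serially.

```python
# Hedged sketch: per-pixel local thresholding driven by detected
# text-stroke edge pixels. Canny, the window size, and the
# minimum-edge-count rule are assumptions for illustration only.
import cv2
import numpy as np

def binarize_local(gray: np.ndarray, win: int = 15, min_edges: int = 10) -> np.ndarray:
    """Binarize a grayscale document image with a per-pixel threshold
    estimated from stroke-edge intensities inside a win x win window."""
    edges = cv2.Canny(gray, 50, 150) > 0          # candidate text-stroke edge pixels
    g = gray.astype(np.float32)

    kernel = np.ones((win, win), np.float32)
    # Per-window sums via box filtering: count of edge pixels and the
    # sum of gray values at those edge pixels.
    edge_cnt = cv2.filter2D(edges.astype(np.float32), -1, kernel)
    edge_sum = cv2.filter2D(g * edges, -1, kernel)

    # Local threshold = mean intensity of edge pixels in the window.
    thresh = edge_sum / np.maximum(edge_cnt, 1.0)

    # A pixel is text if its window contains enough edges and its gray
    # value is at most the local edge-mean threshold.
    text = (edge_cnt >= min_edges) & (g <= thresh)
    return np.where(text, 0, 255).astype(np.uint8)  # text black, background white
```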

2021, pp. 1-14
Author(s):  
R.L. Jyothi ◽  
M. Abdul Rahiman

Binarization is the most important stage in historical document image processing. Efficient working of character and word recognition algorithms depends on effective segmentation methods. Segmentation algorithms in turn depend on images free of noise and degradation. Most historical documents are illegible due to degradations such as bleed-through, faded ink or faint characters, uneven illumination, contrast variation, etc. For effective processing of these document images, efficient binarization algorithms should be devised. Here, a simple modified version of the Convolutional Neural Network (CNN) is proposed for historical document binarization. The AOD-Net architecture for generating dehazed images from hazed images is modified to create the proposed network. The new CNN model is created by incorporating a Difference of Concatenation (DOC) layer, an Enhancement (EN) layer, and a Thresholding layer into AOD-Net to make it suitable for binarization of highly degraded document images. The DOC layer and EN layer work effectively in resolving degradation that exists in the form of low-pass noise. The complexity of the proposed model is reduced by decreasing the number of layers and by introducing filters in the convolution layers that work with low inter-pixel dependency. This modified version of CNN works effectively on a variety of highly degraded documents when tested on benchmark historical datasets. The main highlight of the proposed network is that it works efficiently in a generalized manner for any type of document image without further parameter tuning. Another important highlight of this method is that it can handle most of the degradation categories present in document images. In this work, the performance of the proposed model is compared with Otsu, Sauvola, and three recent Deep Learning-based models.
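To make the described architecture concrete, here is a hedged PyTorch sketch of an AOD-Net-style trunk extended with DOC, EN, and thresholding stages. The layer widths, kernel sizes, and the exact operations inside each stage are assumptions for illustration; the abstract does not specify them, so this should not be read as the authors' network.

```python
# Hedged sketch of a modified AOD-Net-style binarization network.
# Channel counts, kernel sizes, and the DOC/EN/threshold details are
# illustrative assumptions, not the paper's published architecture.
import torch
import torch.nn as nn

class BinarizeNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shallow AOD-Net-style trunk: small convolutions whose outputs
        # are concatenated, keeping inter-pixel dependency low.
        self.conv1 = nn.Conv2d(1, 3, kernel_size=1)
        self.conv2 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(6, 3, kernel_size=5, padding=2)
        # DOC stage (assumed form): fuse the concatenated feature maps.
        self.doc = nn.Conv2d(9, 3, kernel_size=3, padding=1)
        # Enhancement (EN) stage (assumed form): sharpen low-pass degradations.
        self.en = nn.Conv2d(3, 1, kernel_size=3, padding=1)

    def forward(self, x):
        f1 = torch.relu(self.conv1(x))
        f2 = torch.relu(self.conv2(f1))
        f3 = torch.relu(self.conv3(torch.cat([f1, f2], dim=1)))
        d = self.doc(torch.cat([f1, f2, f3], dim=1))
        e = self.en(torch.relu(d))
        # Thresholding stage: a sigmoid gives a soft text/background map;
        # a hard 0/1 mask can be taken at inference time.
        return torch.sigmoid(e)

# Usage sketch: model = BinarizeNet()
# mask = (model(gray_batch) > 0.5).float()  # hard binarization at inference
```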
