A Unified Algorithm for Identification of Various Tabular Structures from Document Images

Author(s):  
Sekhar Mandal ◽  
Amit K. Das ◽  
Partha Bhowmick ◽  
Bhabatosh Chanda

This paper presents a unified algorithm for segmentation and identification of various tabular structures from document page images. Such tabular structures include conventional tables and displayed math-zones, as well as Table of Contents (TOC) and Index pages. After analyzing the page composition, the algorithm first classifies the input set of document pages into tabular and non-tabular pages. A tabular page contains at least one of the tabular structures, whereas a non-tabular page contains none. The approach is unified in the sense that it identifies all tabular structures on a tabular page, which considerably simplifies document image segmentation in a novel manner. Such unification also speeds up the segmentation process, because existing methodologies treat different tabular structures as separate physical entities and therefore produce time-consuming solutions. Distinguishing features of the different kinds of tabular structures are used in stages to ensure the simplicity and efficiency of the algorithm, as demonstrated by exhaustive experimental results.

Author(s):  
Omar Boudraa ◽  
Walid Khaled Hidouci ◽  
Dominique Michelucci

Segmentation is one of the critical steps in historical document image analysis systems and determines the quality of the search, understanding, recognition and interpretation processes. It isolates the objects to be considered and separates the regions of interest (paragraphs, lines, words and characters) from other entities (figures, graphs, tables, etc.). This stage follows thresholding, which improves the quality of the document and separates its foreground from its background, and skew detection and correction, which straightens the document. Here, a hybrid method is proposed to locate words and characters in both handwritten and printed documents. Numerical results demonstrate the robustness and high precision of our approach on old, degraded document images across four common datasets, with Recall and Precision reaching approximately 97.7% and 97.9%, respectively.
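The thresholding stage mentioned above can be illustrated with a minimal sketch. The abstract does not name a specific binarization technique, so Otsu's classical method is used here purely as an illustrative stand-in, implemented with numpy only:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level maximizing between-class variance (Otsu)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    probs = hist / hist.sum()
    cum = np.cumsum(probs)                        # class-0 weight w0(t)
    cum_mean = np.cumsum(probs * np.arange(256))  # class-0 first moment
    global_mean = cum_mean[-1]
    # Between-class variance for every candidate threshold t.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (global_mean * cum - cum_mean) ** 2 / (cum * (1.0 - cum))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# Toy bimodal "document": dark ink (~30) on light paper (~220).
rng = np.random.default_rng(0)
img = np.where(rng.random((64, 64)) < 0.15, 30, 220).astype(np.uint8)
t = otsu_threshold(img)
binary = img > t  # True = background (paper), False = foreground (ink)
```

On a clearly bimodal page such as this toy image, the threshold lands between the two gray-level modes, separating ink from paper.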


Author(s):  
YAN ZHANG ◽  
BIN YU ◽  
HAI-MING GU

Document image segmentation is an important research area of document image analysis that classifies the contents of a document image into a set of text and non-text classes. Existing methods are often designed to classify only text and halftones, and therefore perform poorly on graphics, tables, circuits, etc. In this paper, we present a robust multi-level classification method using a multi-layer perceptron (MLP) and a support vector machine (SVM) to separate text from non-text and then classify the non-text components as tables, graphics or halftones. This method outperforms previously existing methods by overcoming various issues associated with the complexity of document images. Experimental results demonstrate the effectiveness of our proposed method. By virtue of our multi-level classification approach, text, halftone, graphic and table components are each classified accurately, which improves OCR accuracy by reducing garbage symbols and simultaneously increases the compression ratio.
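The two-stage cascade described above (text vs. non-text first, then non-text subdivided into table/graphic/halftone) can be sketched as follows. The paper's MLP and SVM stages are replaced here by a trivial nearest-centroid classifier on synthetic 2-D features, so the sketch shows only the cascade structure, not the actual classifiers or features:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Stand-in for the paper's MLP/SVM stages: a nearest-centroid model."""
    y = np.array(y)
    return {c: X[y == c].mean(axis=0) for c in sorted(set(y))}

def nearest_centroid_predict(centroids, x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

rng = np.random.default_rng(1)
def cloud(center, n=30):
    return rng.normal(center, 0.2, size=(n, 2))

# Stage 1: text vs. non-text (synthetic 2-D feature points).
X1 = np.vstack([cloud((0, 0)), cloud((3, 3))])
stage1 = nearest_centroid_fit(X1, ["text"] * 30 + ["non-text"] * 30)

# Stage 2: subdivide non-text into table / graphic / halftone.
X2 = np.vstack([cloud((3, 3)), cloud((5, 1)), cloud((1, 5))])
stage2 = nearest_centroid_fit(
    X2, ["table"] * 30 + ["graphic"] * 30 + ["halftone"] * 30)

def classify(x):
    """Cascade: route a component through stage 1, then stage 2 if non-text."""
    top = nearest_centroid_predict(stage1, x)
    return top if top == "text" else nearest_centroid_predict(stage2, x)
```

The multi-level design lets each stage specialize on a simpler decision, which is the property the abstract credits for the improved accuracy.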


Author(s):  
Yung-Kuan Chan ◽  
Tung-Shou Chen ◽  
Yu-An Ho

With the rapid progress of digital image technology, the management of duplicate document images has also received wide attention. This paper therefore proposes a duplicate Chinese document image retrieval (DCDIR) system, which uses the ratio of the number of black pixels to the number of white pixels on the scanned line segments of a character image block as the block's feature. Experimental results indicate that the system can indeed retrieve the desired duplicate Chinese document image from a database both effectively and quickly.
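The black-to-white pixel ratio feature can be sketched in a few lines. This is an illustrative reading of the abstract that takes the "scanned line segments" to be horizontal scanlines of a binary block (black = 0, white = 1); the actual system's scanning scheme may differ:

```python
import numpy as np

def scanline_bw_ratios(block):
    """Per-scanline ratio of black pixels to white pixels in a binary
    character image block (black = 0, white = 1).  A white count of
    zero is clamped to 1 to avoid division by zero."""
    black = (block == 0).sum(axis=1)
    white = (block == 1).sum(axis=1)
    return black / np.maximum(white, 1)

# Toy 4x4 "character": a vertical stroke in the second column,
# giving 1 black vs. 3 white pixels on every scanline.
block = np.ones((4, 4), dtype=np.uint8)
block[:, 1] = 0
feat = scanline_bw_ratios(block)
```

The resulting ratio vector serves as the compact per-block feature that the retrieval system matches against the database.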


2022 ◽  
pp. 811-822
Author(s):  
B.V. Dhandra ◽  
Satishkumar Mallappa ◽  
Gururaj Mukarambi

In this article, an exhaustive experiment is carried out to test the performance of Segmentation-based Fractal Texture Analysis (SFTA) features with nt = 4 pairs and nt = 8 pairs, geometric features, and their combinations. A unified algorithm is designed to identify the scripts of camera-captured bi-lingual document images containing the international language English together with one of the Hindi, Kannada, Telugu, Malayalam, Bengali, Oriya, Punjabi, and Urdu scripts. The SFTA algorithm decomposes the input image into a set of binary images, from which the fractal dimension of the resulting regions is computed to describe the segmented texture patterns. This motivates the use of SFTA features as texture features for identifying the scripts of camera-based document images, which are affected by non-homogeneous illumination (resolution). An experiment is carried out on eleven scripts, each with 1000 sample images of block sizes 128 × 128, 256 × 256, 512 × 512 and 1024 × 1024. It is observed that the block size 512 × 512 gives the maximum accuracy of 86.45% for the Gujarathi and English script combination and is therefore the optimal size. The novelty of this article is that a unified algorithm is developed for the script identification of bilingual document images.
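The SFTA pipeline described above (binary decomposition of the input image, then fractal dimension of the resulting regions) can be sketched with numpy. This is a simplification: SFTA proper uses two-threshold decomposition from multi-level Otsu thresholds, whereas the fixed threshold list and the box-counting estimator here are illustrative assumptions:

```python
import numpy as np

def box_count_dimension(binary):
    """Estimate the fractal dimension of a binary region by box counting:
    slope of log(box count) vs. log(1 / box size).  Assumes a square block."""
    n = binary.shape[0]
    sizes, counts = [], []
    s = n
    while s >= 2:
        m = n - n % s  # crop so boxes of side s tile the block exactly
        view = binary[:m, :m].reshape(m // s, s, -1, s)
        boxes = view.any(axis=(1, 3)).sum()  # boxes touching the region
        sizes.append(s)
        counts.append(max(boxes, 1))
        s //= 2
    slope = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]
    return slope

def sfta_like_features(gray, thresholds=(64, 128, 192)):
    """Decompose a grayscale block into binary images by thresholding and
    describe each with (fractal dimension, mean gray level, pixel count),
    mirroring the per-region descriptors used by SFTA."""
    feats = []
    for t in thresholds:
        b = gray > t
        feats.extend([box_count_dimension(b),
                      float(gray[b].mean()) if b.any() else 0.0,
                      int(b.sum())])
    return feats
```

As a sanity check, a completely filled square region has a box-counting dimension of 2, the expected value for a solid planar region.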


2000 ◽  
Author(s):  
Mohamed N. Ahmed ◽  
Brian E. Cooper ◽  
Shaun T. Love
