View: Visual Information Extraction Widget for improving chart images accessibility

Author(s):  
Jinglun Gao ◽  
Yin Zhou ◽  
Kenneth E. Barner
PLoS ONE ◽  
2020 ◽  
Vol 15 (9) ◽  
pp. e0239305
Author(s):  
Isabelle Charbonneau ◽  
Karolann Robinson ◽  
Caroline Blais ◽  
Daniel Fiset

Author(s):  
Guozhi Tang ◽  
Lele Xie ◽  
Lianwen Jin ◽  
Jiapeng Wang ◽  
Jingdong Chen ◽  
...  

The Visual Information Extraction (VIE) task aims to extract key information from multifarious document images (e.g., invoices and purchase receipts). Most previous methods treat the VIE task simply as a sequence labeling or classification problem, which requires models to carefully identify each kind of semantics by introducing multimodal features such as font, color, and layout. However, simply introducing multimodal features does not work well for numeric semantic categories or ambiguous texts. To address this issue, in this paper we propose a novel key-value matching model based on a graph neural network for VIE (MatchVIE). Through key-value matching based on relevancy evaluation, the proposed MatchVIE can bypass recognizing the various semantic categories and simply focus on the strong relevancy between entities. In addition, we introduce a simple but effective operation, Num2Vec, to tackle the instability of encoded values, which helps the model converge more smoothly. Comprehensive experiments demonstrate that the proposed MatchVIE significantly outperforms previous methods. Notably, to the best of our knowledge, MatchVIE may be the first attempt to tackle the VIE task by modeling the relevancy between keys and values, and it is a good complement to existing methods.
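The abstract does not spell out how Num2Vec encodes numbers, so the following is only a minimal PyTorch sketch of what a digit-wise numeric encoding in that spirit might look like; the class name DigitEncoder, the max_digits limit, and the mean pooling are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a digit-wise numeric encoding in the spirit of Num2Vec.
# The class name, max_digits, and mean pooling are illustrative assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn


class DigitEncoder(nn.Module):
    """Encode non-negative integers as fixed-length vectors, digit by digit.

    Embedding individual digits keeps every input symbol in the range 0-9,
    which avoids the instability of projecting raw numeric magnitudes.
    """

    def __init__(self, max_digits: int = 8, dim: int = 32):
        super().__init__()
        self.max_digits = max_digits
        self.digit_emb = nn.Embedding(10, dim)  # one embedding per digit symbol

    def forward(self, values: torch.Tensor) -> torch.Tensor:
        # values: (batch,) tensor of non-negative integers
        v = values.clone()
        digits = []
        for _ in range(self.max_digits):
            digits.append(v % 10)  # least-significant digit first
            v = v // 10
        digits = torch.stack(digits, dim=1)        # (batch, max_digits)
        return self.digit_emb(digits).mean(dim=1)  # (batch, dim)


encoder = DigitEncoder()
print(encoder(torch.tensor([2020, 42, 7])).shape)  # torch.Size([3, 32])
```

The point of the sketch is simply that every digit maps to a bounded symbol before embedding, so very large or very small numeric values no longer produce unstable encodings.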


2014 ◽  
Vol 14 (10) ◽  
pp. 1279-1279
Author(s):  
S. Lafortune ◽  
C. Blais ◽  
K. Robinson ◽  
J. Royer ◽  
J. Duncan ◽  
...  

Author(s):  
Jiapeng Wang ◽  
Tianwei Wang ◽  
Guozhi Tang ◽  
Lianwen Jin ◽  
Weihong Ma ◽  
...  

Visual information extraction (VIE) has attracted increasing attention in recent years. Existing methods usually first organize optical character recognition (OCR) results into plain text and then use token-level category annotations as supervision to train a sequence tagging model. However, this incurs high annotation costs and is prone to label confusion, and OCR errors also significantly affect the final performance. In this paper, we propose a unified weakly-supervised learning framework called TCPNet (Tag, Copy or Predict Network), which introduces 1) an efficient encoder that simultaneously models the semantic and layout information in 2D OCR results; 2) a weakly-supervised training method that uses only sequence-level supervision; and 3) a flexible and switchable decoder with two inference modes: one (Copy or Predict Mode) outputs key information sequences of different categories by copying a token from the input or predicting one at each time step, and the other (Tag Mode) directly tags the input sequence in a single forward pass. Our method achieves new state-of-the-art performance on several public benchmarks, which demonstrates its effectiveness.
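For readers unfamiliar with copy mechanisms, below is a minimal PyTorch sketch of a single "copy or predict" decoding step in the style of a pointer-generator mixture; the module name, dot-product attention, and sigmoid gate are illustrative assumptions, not TCPNet's exact decoder.

```python
# Minimal sketch of one "copy or predict" decoding step as a pointer-generator
# style mixture; names, shapes, and the gating scheme are illustrative
# assumptions, not TCPNet's exact decoder.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CopyOrPredictStep(nn.Module):
    def __init__(self, hidden: int, vocab_size: int):
        super().__init__()
        self.predict_head = nn.Linear(hidden, vocab_size)  # "predict" a vocabulary token
        self.copy_gate = nn.Linear(hidden, 1)               # scalar gate: copy vs. predict

    def forward(self, dec_state, enc_states, src_token_ids):
        # dec_state:     (batch, hidden)          current decoder state
        # enc_states:    (batch, src_len, hidden) encoded input (OCR) tokens
        # src_token_ids: (batch, src_len)         vocabulary ids of the input tokens
        attn = torch.softmax(
            torch.bmm(enc_states, dec_state.unsqueeze(-1)).squeeze(-1), dim=-1
        )                                                          # copy weights over input positions
        p_vocab = F.softmax(self.predict_head(dec_state), dim=-1)  # (batch, vocab)
        p_copy = torch.sigmoid(self.copy_gate(dec_state))          # (batch, 1)
        # Scatter the copy weights onto vocabulary ids and mix the two modes.
        copy_dist = torch.zeros_like(p_vocab).scatter_add(1, src_token_ids, attn)
        return p_copy * copy_dist + (1.0 - p_copy) * p_vocab       # (batch, vocab)


step = CopyOrPredictStep(hidden=64, vocab_size=100)
probs = step(torch.randn(2, 64), torch.randn(2, 5, 64), torch.randint(0, 100, (2, 5)))
print(probs.shape, probs.sum(dim=-1))  # (2, 100), each row sums to ~1
```

In such a mixture, the gate decides at each time step whether the output token is copied from the recognized input text or generated from the vocabulary, which matches the Copy or Predict Mode described in the abstract at a high level.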

