Scene text detection based on multi-scale SWT and edge filtering

Author(s): Yuanyuan Feng, Yonghong Song, Yuanlin Zhang
2020, Vol 98, pp. 107026
Author(s): Wenhao He, Xu-Yao Zhang, Fei Yin, Zhenbo Luo, Jean-Marc Ogier, ...

Author(s): Yuxin Wang, Hongtao Xie, Zilong Fu, Yongdong Zhang

Nowadays, scene text detection has become increasingly important and popular. However, the large variance of text scale remains the main challenge and limits detection performance in most previous methods. To address this problem, we propose an end-to-end architecture called the Deep Scale Relationship Network (DSRN), which maps multi-scale convolutional features onto a scale-invariant space to obtain uniform activations for text instances of different sizes. First, we develop a Scale-transfer module to transfer the multi-scale feature maps to a unified dimension. Because the features are heterogeneous, simply concatenating feature maps carrying multi-scale information would limit detection performance; we therefore propose a Scale Relationship module that aggregates the multi-scale information through bi-directional convolution operations. Finally, to further reduce missed detections, a novel Recall Loss is proposed that forces the network to focus on missed text instances by up-weighting poorly classified examples. Compared with previous approaches, DSRN efficiently handles the large scale-variance problem without complex hand-crafted hyperparameter settings (e.g., the scales of default boxes) or complicated post-processing. On standard datasets, including ICDAR2015 and MSRA-TD500, the proposed algorithm achieves state-of-the-art performance at impressive speed (8.8 FPS on ICDAR2015 and 13.3 FPS on MSRA-TD500).
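The abstract does not give the Recall Loss formula, so the following is only a minimal sketch of one plausible formulation in PyTorch, assuming a binary text/non-text probability map and a focal-style extra weight on poorly classified text pixels. The class name, the gamma exponent, and the weighting rule are assumptions for illustration, not the authors' definition.

import torch
import torch.nn as nn

class RecallLoss(nn.Module):
    # Hypothetical sketch: per-pixel binary cross-entropy on text/non-text
    # scores, with an extra weight on text pixels the model scores low
    # (likely missed instances). The exponent `gamma` is an assumption.
    def __init__(self, gamma=2.0):
        super().__init__()
        self.gamma = gamma

    def forward(self, scores, targets):
        # scores: predicted text probabilities in [0, 1], shape (N, H, W)
        # targets: ground-truth text mask in {0, 1}, same shape
        eps = 1e-6
        scores = scores.clamp(eps, 1.0 - eps)
        # Standard per-pixel binary cross-entropy.
        bce = -(targets * scores.log() + (1.0 - targets) * (1.0 - scores).log())
        # Up-weight text pixels in proportion to how badly they are
        # classified: weight is 1 for background and for confident text,
        # and approaches 2 as a text pixel's score drops toward 0.
        weight = 1.0 + targets * (1.0 - scores) ** self.gamma
        return (weight * bce).mean()

In use this would simply replace the classification term of the detection loss, e.g. loss = RecallLoss()(pred_prob_map, gt_text_mask).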


2016, Vol 214, pp. 1011-1025
Author(s): Hui Wu, Beiji Zou, Yu-qian Zhao, Zailiang Chen, Chengzhang Zhu, ...

2020, Vol 12 (11), pp. 200
Author(s): Haiyan Li, Hongtao Lu

Text detection is a prerequisite for text recognition in scene images. Previous segmentation-based methods for detecting scene text have achieved promising performance. However, such approaches may produce spurious text instances, as they often confuse the boundaries of dense text instances and then rely heavily on meticulous heuristic rules to infer word/text-line instances. We propose a novel method, Assembling Text Components (AT-text), that accurately detects dense text in scene images. AT-text localizes word/text-line instances in a bottom-up manner by assembling a parsimonious set of components. We employ a segmentation model that encodes multi-scale text features, considerably improving the classification accuracy of text/non-text pixels. Candidate text components are then finely classified and selected using the discriminative segmentation results, which allows AT-text to efficiently filter out false-positive candidates and assemble the remaining components into distinct text instances. AT-text works well on multi-oriented and multi-language text without complex post-processing or character-level annotation. Compared with existing works, it achieves satisfactory results with a good balance between precision and recall on the ICDAR2013 and MSRA-TD500 public benchmark datasets.
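The paper's exact component-filtering and assembly rules are not spelled out in this abstract; the sketch below only illustrates the general shape of such a bottom-up pipeline, assuming a pixel-wise text score map as input. The function names, thresholds, mean-confidence filter, and box-gap merging rule are all hypothetical, not the authors' method.

import numpy as np
from scipy import ndimage

def assemble_text_components(score_map, comp_thresh=0.7, keep_thresh=0.85,
                             link_dist=10):
    # Hypothetical bottom-up assembly: binarize a text/non-text score map,
    # keep only confidently classified components, then greedily merge
    # nearby component boxes into text instances.
    labels, n = ndimage.label(score_map > comp_thresh)
    boxes = []
    for i in range(1, n + 1):
        mask = labels == i
        # Filter out likely false positives by mean segmentation confidence.
        if score_map[mask].mean() < keep_thresh:
            continue
        ys, xs = np.nonzero(mask)
        boxes.append([xs.min(), ys.min(), xs.max(), ys.max()])
    # Greedily merge boxes whose gap is below link_dist pixels.
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if _gap(boxes[i], boxes[j]) < link_dist:
                    boxes[i] = _union(boxes[i], boxes[j])
                    boxes.pop(j)
                    merged = True
                    break
            if merged:
                break
    return boxes

def _gap(a, b):
    # Axis-aligned gap between two boxes [x0, y0, x1, y1]; 0 if they overlap.
    dx = max(0, max(a[0], b[0]) - min(a[2], b[2]))
    dy = max(0, max(a[1], b[1]) - min(a[3], b[3]))
    return max(dx, dy)

def _union(a, b):
    # Smallest box covering both input boxes.
    return [min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3])]

The fixed pixel thresholds here stand in for whatever learned classification and linking criteria the paper actually uses; the point is only that instances emerge from grouping filtered components rather than from direct box regression.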

