Research on Text Detection Method Based on Improved YOLOv3

Author(s):  
Huibai Wang ◽  
Hongqing Shi

Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 678
Author(s):  
Vladimir Tadic ◽  
Tatjana Loncar-Turukalo ◽  
Akos Odry ◽  
Zeljen Trpovski ◽  
Attila Toth ◽  
...  

This note presents a fuzzy optimization of Gabor filter-based object and text detection. The derivation of a 2D Gabor filter and the guidelines for the fuzzification of the filter parameters are described. The fuzzy Gabor filter proved to be a robust text and object detection method for low-quality input images, as extensively evaluated on the problem of license plate localization. The extended set of examples confirmed that the fuzzy-optimized Gabor filter with adequately fuzzified parameters detected the desired license plate texture components and substantially improved object detection compared with the classic Gabor filter. The robustness of the proposed approach was further demonstrated on other images of various origins containing text and different textures, captured using low-cost or modest-quality acquisition procedures. The possibility to fine-tune the fuzzification procedure to better suit certain applications offers the potential to further boost detection performance.
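
As a rough illustration of the general idea (not the authors' exact design), the sketch below builds a small Gabor filter bank with OpenCV and blends kernel responses using triangular fuzzy membership weights over the wavelength parameter. The kernel size, parameter ranges, membership breakpoints, and file names are assumptions introduced for this example only.

```python
# Minimal sketch of a "fuzzified" Gabor filter bank for text-texture detection,
# assuming the fuzzification amounts to weighting kernels drawn from a neighbourhood
# of the nominal parameters by triangular membership values (illustrative assumption,
# not the rules derived in the paper).
import cv2
import numpy as np

def triangular(x, a, b, c):
    """Triangular fuzzy membership that peaks at b and is zero outside [a, c]."""
    return float(np.clip(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0, 1.0))

def fuzzy_gabor_response(gray, lambd_nominal=10.0,
                         thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Blend Gabor responses whose wavelength varies around the nominal value,
    weighting each kernel by its fuzzy membership."""
    gray = gray.astype(np.float32)
    blended = np.zeros_like(gray)
    weight_sum = 0.0
    for lambd in np.linspace(0.7 * lambd_nominal, 1.3 * lambd_nominal, 5):
        w = triangular(lambd, 0.7 * lambd_nominal, lambd_nominal, 1.3 * lambd_nominal)
        if w == 0.0:
            continue
        for theta in thetas:
            kernel = cv2.getGaborKernel((31, 31), sigma=4.0, theta=theta,
                                        lambd=lambd, gamma=0.5, psi=0)
            blended += w * np.abs(cv2.filter2D(gray, cv2.CV_32F, kernel))
            weight_sum += w
    return blended / max(weight_sum, 1e-9)

if __name__ == "__main__":
    # "plate.jpg" is a placeholder for a low-quality license plate image.
    img = cv2.imread("plate.jpg", cv2.IMREAD_GRAYSCALE)
    response = fuzzy_gabor_response(img)
    mask = (response > 0.5 * response.max()).astype(np.uint8) * 255  # candidate text regions
    cv2.imwrite("text_mask.png", mask)
```

The soft weighting over a parameter neighbourhood is what distinguishes this sketch from a classic Gabor filter with a single crisp wavelength; the actual membership functions and parameter choices would come from the fuzzification guidelines in the paper.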


2015 ◽  
Vol 75 (13) ◽  
pp. 7715-7738 ◽  
Author(s):  
Hui Wu ◽  
Bei-ji Zou ◽  
Yu-qian Zhao ◽  
Hong-pu Fu

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 122685-122694
Author(s):  
Xiao Qin ◽  
Jianhui Jiang ◽  
Chang-An Yuan ◽  
Shaojie Qiao ◽  
Wei Fan

2015 ◽  
Vol 9 (4) ◽  
pp. 500-510 ◽  
Author(s):  
Gang Zhou ◽  
Yuehu Liu ◽  
Liang Xu ◽  
Zhenhong Jia

Author(s):  
Shuhua Liu ◽  
Hua Ban ◽  
Yu Song ◽  
Mengyu Zhang ◽  
Fengqin Yang

In this study, a natural scene text detection method based on an improved faster region-based convolutional neural network (Faster R-CNN) is proposed. The method extracts image features with the Inception-ResNet architecture, adopts a region proposal network (RPN) to generate region proposals for the extracted features, merges the fine-tuned features with the region proposals, and finally uses the Fast R-CNN head to classify and locate text. The proposed method addresses the problems of varying text sizes and text being obscured in the image. Compared with the original Faster R-CNN, the multilevel Inception-ResNet network model presented in this study can extract deeper text features. The extracted feature map is further sparsely represented by the Reduction-B, Inception-ResNet-C, and average-pooling blocks, and is then fused with the text regions obtained by the lower-layer text feature mapping network to acquire the exact text regions. The text detection method presented in this study is tested on the dataset of the ICDAR2017 Competition on Reading Chinese Text in the Wild (RCTW-17), which contains a large number of distorted, blurry texts of varying scales and sizes. An accuracy of 76.4% is achieved on this benchmark, demonstrating the effectiveness of the proposed method.
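
As a rough illustration of the overall pipeline (feature extractor, RPN, detection head), the sketch below wires up a Faster R-CNN text detector with torchvision. A ResNet-50 FPN backbone and wide anchor aspect ratios stand in for the paper's Inception-ResNet feature extractor and fusion layers; the model is untrained, and all names and thresholds here are assumptions rather than the authors' implementation.

```python
# Minimal sketch of a two-class (background/text) Faster R-CNN detector.
# The backbone and anchor settings are stand-ins, not the paper's architecture.
import torch
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

def build_text_detector(num_classes=2):  # background + "text" (assumed class layout)
    # Stand-in backbone: ResNet-50 + FPN instead of the paper's Inception-ResNet.
    backbone = resnet_fpn_backbone(backbone_name="resnet50", weights=None)
    # Wide, flat anchors suit elongated text lines better than square defaults (assumption).
    anchor_gen = AnchorGenerator(
        sizes=((16,), (32,), (64,), (128,), (256,)),       # one size per FPN level
        aspect_ratios=((0.2, 0.5, 1.0, 2.0, 5.0),) * 5,
    )
    return FasterRCNN(backbone, num_classes=num_classes, rpn_anchor_generator=anchor_gen)

if __name__ == "__main__":
    model = build_text_detector().eval()      # untrained weights, for wiring only
    image = torch.rand(3, 600, 800)           # stand-in for a scene image tensor
    with torch.no_grad():
        detections = model([image])[0]        # dict with "boxes", "labels", "scores"
    keep = detections["scores"] > 0.5         # keep confident text boxes
    print(detections["boxes"][keep])
```

The elongated anchor aspect ratios reflect the same motivation the abstract describes for handling text of varying sizes; in the paper this is addressed through the Inception-ResNet feature hierarchy and the fusion with lower-layer text feature maps rather than through anchor shapes alone.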

