region representation
Recently Published Documents


TOTAL DOCUMENTS: 47 (five years: 11)

H-INDEX: 10 (five years: 2)

2021 ◽  
pp. 108229
Author(s):  
Yang Zhao ◽  
Xiaohan Yu ◽  
Yongsheng Gao ◽  
Chunhua Shen

Author(s):  
Yatesh Manghate ◽  
Prabha Nair ◽  
Pranita Chaudhary ◽  
Mitali Mishra ◽  
Ninad Bhivgade ◽  
...  

In recent years, many real-world applications and institutions have generated huge amounts of unstructured data in the form of images containing text: receipts, invoices, forms, statements, contracts, etc. The rich, detailed information carried by this text is of great significance to computer-vision applications (driverless cars, assisting blind and visually impaired people, detecting labels and packages, automatic number plate recognition, etc.). Research effort and progress in this domain have surged recently because of its importance for data analysis and computer vision. Unstructured data poses a diversity of challenges, including image sensor noise, differing viewing angles, blur, lighting conditions, resolution, and non-planar objects. Our objectives in taking up this topic for research are: i) to detect and recognize text in the data; ii) to handle the diversity and variability of text in natural scenes; iii) to explore various datasets; and iv) to address various issues arising in scene text detection. To tackle this problem, we propose a robust scene text detection and recognition method with adaptive text region representation, using OpenCV with the EAST algorithm as the detection pipeline and Tesseract for recognition. A recurrent neural network-based adaptive text region representation is proposed for text region refinement, in which a pair of boundary points is predicted at each time step until no new points are found. In this way, text regions in an image are detected and represented with an adaptive number of boundary points.
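The adaptive refinement loop described above can be sketched as follows. This is a minimal illustration of the control flow only: the `predict_pair` callable stands in for the paper's RNN decoder step (its real inputs would be recurrent state and region features, not the partial boundary), and all names here are illustrative, not from the paper.

```python
def refine_text_region(predict_pair, max_steps=20):
    """Adaptive text region representation: at each time step a pair of
    boundary points (one on the top edge, one on the bottom edge) is
    predicted until a stop signal, so the polygon grows to fit the text."""
    boundary = []
    for _ in range(max_steps):
        pair = predict_pair(boundary)   # stand-in for the RNN decoder step
        if pair is None:                # stop symbol: no new points found
            break
        top_pt, bottom_pt = pair
        boundary.append(top_pt)
        boundary.append(bottom_pt)
    return boundary

# Toy predictor emitting three point pairs, then stopping.
pairs = [((0, 0), (0, 10)), ((5, 1), (5, 9)), ((10, 0), (10, 10))]

def fake_step(state):
    idx = len(state) // 2               # pairs emitted so far
    return pairs[idx] if idx < len(pairs) else None

poly = refine_text_region(fake_step)    # six boundary points
```

Because the loop stops when the predictor signals "no new points", short horizontal words and long curved lines naturally end up with different numbers of boundary points.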


2021 ◽  
Vol 13 (14) ◽  
pp. 2706
Author(s):  
Shenjin Huang ◽  
Wenting Han ◽  
Haipeng Chen ◽  
Guang Li ◽  
Jiandong Tang

An improved semantic segmentation method based on the object contextual representations network (OCRNet) is proposed to accurately identify zucchinis intercropped with sunflowers from unmanned aerial vehicle (UAV) visible images taken over Hetao Irrigation District, Inner Mongolia, China. The proposed method improves on the performance of OCRNet in two respects. First, based on the object region context extraction structure of OCRNet, a branch using a channel attention module was added in parallel to weight channel feature maps rationally and to reduce the noise of invalid channel features. Second, Lovász-Softmax loss was introduced to improve the accuracy of the object region representation in OCRNet and to optimize the final segmentation result at the object level. We compared the proposed method with advanced semantic segmentation methods (PSPNet, DeepLabV3+, DNLNet, and OCRNet) in two test areas to assess its effectiveness. The results showed that the proposed method achieved the best semantic segmentation performance in both test areas. More specifically, our method performed better in processing image details, segmenting field edges, and identifying intercropping fields. The proposed method has significant advantages for crop classification and intercropping recognition based on UAV visible images, and these advantages are more pronounced in object-level evaluation metrics (mIoU and intercropping IoU).
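The channel attention branch described above reweights channel feature maps so that informative channels are amplified and noisy ones suppressed. Below is a minimal NumPy sketch of a squeeze-and-excitation-style channel attention gate; it illustrates the general mechanism only, and all names, shapes, and the two-layer bottleneck are illustrative assumptions, not the paper's exact module.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """SE-style channel attention: squeeze each channel to a scalar by
    global average pooling, pass the descriptor through a two-layer
    bottleneck, and rescale the channels by the sigmoid gate weights.
    feat: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    squeeze = feat.mean(axis=(1, 2))                 # (C,) channel descriptor
    hidden = np.maximum(w1 @ squeeze, 0.0)           # ReLU bottleneck
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gate in (0, 1)
    return feat * weights[:, None, None]             # reweight channel maps

# Toy call: 4 channels, 3x3 maps, reduction ratio 2 (weights here are zeros
# just to keep the demo deterministic).
feat = np.ones((4, 3, 3))
out = channel_attention(feat, np.zeros((2, 4)), np.zeros((4, 2)))
```

With zero bottleneck weights every gate is sigmoid(0) = 0.5, so each channel map is uniformly halved; with learned weights, per-channel gates diverge and invalid channels are attenuated relative to useful ones.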


2021 ◽  
Vol 128 (3) ◽  
pp. 985-1006
Author(s):  
Chaowei Lin ◽  
Feifei Lee ◽  
Jiawei Cai ◽  
Hanqing Chen ◽  
Qiu Chen

Author(s):  
Zhongke Wu ◽  
Xingce Wang ◽  
Shaolong Liu ◽  
Quan Chen ◽  
Hock-Soon Seah ◽  
...  

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 102106-102118
Author(s):  
Xiufeng Jiang ◽  
Shugong Xu ◽  
Shunqing Zhang ◽  
Shan Cao

2019 ◽  
Vol 29 (8) ◽  
pp. 2453-2466 ◽  
Author(s):  
Jianjun Lei ◽  
Lijie Niu ◽  
Huazhu Fu ◽  
Bo Peng ◽  
Qingming Huang ◽  
...  
