Semantic-Guided Attention Refinement Network for Salient Object Detection in Optical Remote Sensing Images

2021 · Vol 13 (11) · pp. 2163
Author(s): Zhou Huang, Huaixin Chen, Biyuan Liu, Zhixi Wang

Although remarkable progress has been made in salient object detection (SOD) for natural scene images (NSI), SOD for optical remote sensing images (RSI) still faces significant challenges due to varying spatial resolutions, cluttered backgrounds, and complex imaging conditions. These challenges are mainly reflected in two aspects: (1) accurately locating salient objects; and (2) capturing their subtle boundaries. This paper explores the inherent properties of multi-level features to develop a novel semantic-guided attention refinement network (SARNet) for SOD of optical RSI. Specifically, the proposed semantic-guided decoder (SGD) coarsely yet accurately locates multi-scale objects by aggregating multiple high-level features, and this global semantic information then guides the integration of subsequent features in a step-by-step feedback manner, making full use of deep multi-level features. In parallel, the proposed parallel attention fusion (PAF) module combines cross-level features with the semantic-guided information to refine object boundaries and progressively highlight entire object regions. Finally, the proposed network is trained end-to-end in a fully supervised manner. Quantitative and qualitative evaluations on two public RSI datasets and additional NSI datasets across five metrics show that our SARNet is superior to 14 state-of-the-art (SOTA) methods without any post-processing.
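The aggregation-then-guidance flow and the attention-based refinement described in the abstract can be illustrated with a small PyTorch sketch. The module names, channel sizes, and fusion details below are assumptions for illustration only, not the authors' released SARNet implementation.

# Minimal sketch of semantic guidance and attention-based fusion (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticGuidedDecoder(nn.Module):
    """Aggregates several high-level features into a coarse semantic guidance map."""
    def __init__(self, high_channels=(512, 1024, 2048), mid_channels=64):
        super().__init__()
        self.reduce = nn.ModuleList(nn.Conv2d(c, mid_channels, 1) for c in high_channels)
        self.fuse = nn.Conv2d(mid_channels * len(high_channels), mid_channels, 3, padding=1)

    def forward(self, high_feats):
        # Resize all high-level features to the largest spatial size and fuse them.
        target = high_feats[0].shape[2:]
        feats = [F.interpolate(r(f), size=target, mode='bilinear', align_corners=False)
                 for r, f in zip(self.reduce, high_feats)]
        return self.fuse(torch.cat(feats, dim=1))  # global semantic guidance

class ParallelAttentionFusion(nn.Module):
    """Refines a cross-level feature with spatial and channel attention
    derived in parallel from the semantic guidance map."""
    def __init__(self, channels=64):
        super().__init__()
        self.spatial = nn.Conv2d(channels, 1, 7, padding=3)
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.out = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, low_feat, guidance):
        guidance = F.interpolate(guidance, size=low_feat.shape[2:],
                                 mode='bilinear', align_corners=False)
        sp = torch.sigmoid(self.spatial(guidance))   # where the object is
        ch = self.channel(guidance)                   # which channels matter
        return self.out(low_feat * sp * ch + guidance)

In this sketch, the decoder output plays the role of the global semantic information, and each fusion stage re-injects it so that shallow features are filtered before contributing boundary detail.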

2021 · Vol 30 · pp. 1305-1317
Author(s): Qijian Zhang, Runmin Cong, Chongyi Li, Ming-Ming Cheng, Yuming Fang, ...

2020 · Vol 415 · pp. 411-420
Author(s): Chongyi Li, Runmin Cong, Chunle Guo, Hua Li, Chunjie Zhang, ...

2018 · Vol 10 (9) · pp. 1470
Author(s): Yun Ren, Changren Zhu, Shunping Xiao

Region-based convolutional networks have shown remarkable ability for object detection in optical remote sensing images. However, standard CNNs are inherently limited in modeling geometric transformations due to the fixed geometric structures of their building modules. To address this, we integrate a deformable convolution module into the prevailing Faster R-CNN. By adding 2D offsets to the regular sampling grid of the standard convolution, the module learns to augment its spatial sampling locations from the target task without additional supervision. In our work, a deformable Faster R-CNN is constructed by substituting the standard convolution layers in the last network stage with deformable convolution layers. In addition, top-down and skip connections are adopted to produce a single high-resolution, high-level feature map on which predictions are made. To make the model robust to occlusion, a simple yet effective data augmentation technique is proposed for training the convolutional neural network. Experimental results show that our deformable Faster R-CNN improves the mean average precision by a large margin on the SORSI and HRRS datasets.
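The layer substitution described above can be sketched with torchvision's DeformConv2d, where a plain convolution predicts the 2D offsets for every kernel sampling position. The block below is a minimal illustration under assumed channel sizes and placement, not the paper's exact deformable Faster R-CNN.

# Minimal sketch of a 3x3 deformable convolution block (illustrative).
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableConvBlock(nn.Module):
    """A plain conv predicts (dx, dy) offsets per kernel position;
    DeformConv2d then samples the input at the offset locations."""
    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        # 2 offset values (dx, dy) for each of the kernel_size x kernel_size positions.
        self.offset = nn.Conv2d(in_channels, 2 * kernel_size * kernel_size,
                                kernel_size, padding=padding)
        nn.init.zeros_(self.offset.weight)  # start as a regular convolution
        nn.init.zeros_(self.offset.bias)
        self.deform = DeformConv2d(in_channels, out_channels,
                                   kernel_size, padding=padding)

    def forward(self, x):
        return self.deform(x, self.offset(x))

# Example: stand-in for a standard 3x3 conv in the last backbone stage.
if __name__ == "__main__":
    x = torch.randn(1, 512, 32, 32)
    block = DeformableConvBlock(512, 512)
    print(block(x).shape)  # torch.Size([1, 512, 32, 32])

Initializing the offset branch to zero means the block behaves like a standard convolution at the start of training and only deforms its sampling grid as the detection loss demands.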

