RANSP: Ranking Attention Network for Saliency Prediction on Omnidirectional Images

2021 ◽  
Author(s):  
Dandan Zhu ◽  
Yongqing Chen ◽  
Xiongkuo Min ◽  
Yucheng Zhu ◽  
Guokai Zhang ◽  
...  
Author(s):  
Dandan Zhu ◽  
Yongqing Chen ◽  
Defang Zhao ◽  
Qiangqiang Zhou ◽  
Xiaokang Yang

2020 ◽  
Vol 10 (17) ◽  
pp. 5806 ◽  
Author(s):  
Yuzhen Chen ◽  
Wujie Zhou

Depth information has been widely used to improve RGB-D salient object detection by extracting attention maps that encode the positions of objects in an image. However, non-salient objects may lie close to the depth sensor and therefore appear with high pixel intensities in the depth map. This inevitably causes non-salient regions to be emphasized and can degrade the saliency results. To mitigate this problem, we propose a hybrid attention neural network that fuses middle- and high-level RGB features with depth features to generate a hybrid attention map that suppresses background information. The proposed network extracts multilevel features from RGB images using the Res2Net architecture and then integrates high-level features from depth maps using the Inception-ResNet-v2 architecture. The fused high-level RGB and depth features generate the hybrid attention map, which is then multiplied with the low-level RGB features. After decoding through several convolution and upsampling layers, we obtain the final saliency prediction, achieving state-of-the-art performance on the NJUD and NLPR datasets. Moreover, the proposed network generalizes better than competing methods. An ablation study demonstrates that the proposed network performs saliency prediction effectively even when non-salient objects interfere with detection. Indeed, after removing the branch with high-level RGB features, the RGB attention map that guides the network's saliency prediction is lost, and all performance measures decline. The prediction map from this ablation exhibits the effect of non-salient objects close to the depth sensor, an effect that is absent when the complete hybrid attention network is used. Therefore, RGB information can correct and supplement depth information, and the corresponding hybrid attention map is more robust than a conventional attention map constructed from depth information alone.
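The fusion step described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the feature shapes, the channel-averaging fusion (standing in for learned convolutions), and the sigmoid gating are all illustrative assumptions. The key idea it captures is that high-level RGB and depth features jointly produce one attention map, which then gates the low-level RGB features elementwise.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hybrid_attention(rgb_high, depth_high, rgb_low):
    """Sketch of hybrid attention fusion (assumed form, not the paper's exact layers):
    fuse high-level RGB and depth features into a single-channel attention map,
    then multiply the low-level RGB features by it."""
    # Channel-wise averaging stands in for the learned fusion convolutions.
    fused = 0.5 * (rgb_high.mean(axis=0) + depth_high.mean(axis=0))  # (H, W)
    attn = sigmoid(fused)               # hybrid attention map, values in (0, 1)
    return rgb_low * attn[None, :, :]   # broadcast gate over low-level channels

# Toy example: 4-channel high-level features, 8-channel low-level features.
rng = np.random.default_rng(0)
rgb_high = rng.standard_normal((4, 16, 16))
depth_high = rng.standard_normal((4, 16, 16))
rgb_low = rng.standard_normal((8, 16, 16))
out = hybrid_attention(rgb_high, depth_high, rgb_low)
print(out.shape)  # (8, 16, 16)
```

Because the attention values lie in (0, 1), the gating can only attenuate low-level responses, never amplify them; regions the fused RGB-depth evidence marks as background are suppressed before decoding.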


2021 ◽  
pp. 1-1
Author(s):  
Ziqiang Wang ◽  
Zhi Liu ◽  
Gongyang Li ◽  
Yang Wang ◽  
Tianhong Zhang ◽  
...  

Author(s):  
Dandan Zhu ◽  
Yongqing Chen ◽  
Defang Zhao ◽  
Xiongkuo Min ◽  
Qiangqiang Zhou ◽  
...  
