RGB-D Image Saliency Detection Based on Cross-Modal Feature Fusion

2021 · Vol 33 (11) · pp. 1688-1697
Author(s): Zheng Chen, Xiaoli Zhao, Jiaying Zhang, Mingchen Yin, Hanchen Ye, ...
2022 · Vol 2022 · pp. 1-9
Author(s): Songshang Zou, Wenshu Chen, Hao Chen

Image salient object detection can rapidly extract useful information from image scenes for further analysis. At present, traditional saliency detection techniques still fail to preserve the edges of salient targets well. Convolutional neural networks (CNNs) can extract highly general deep features from images and effectively express their essential feature information. This paper designs a model that applies a CNN to deep salient object detection. Through multilayer continuous feature extraction, layered boundary refinement, and fusion of the initial saliency features, it can effectively sharpen the edges of foreground objects and achieve highly efficient image saliency detection. The experimental results show that the proposed method achieves more robust saliency detection and adapts to complex backgrounds.
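The abstract does not specify the network architecture, so the following is only a minimal PyTorch sketch of the general pattern it describes: features from several depths of a CNN backbone are extracted, upsampled to a common resolution, fused, and refined into a per-pixel saliency map. All names and layer choices here (SaliencyNet, the VGG-16 split points, the refinement head) are illustrative assumptions, not the authors' code.

# Minimal sketch, assuming a VGG-16 backbone: multi-level features are
# fused at a common resolution and refined into a saliency map.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class SaliencyNet(nn.Module):
    def __init__(self):
        super().__init__()
        vgg = torchvision.models.vgg16(weights=None).features
        # Split the backbone to tap features at three depths.
        self.stage1 = vgg[:9]     # 128 channels, fine edge detail
        self.stage2 = vgg[9:16]   # 256 channels, mid-level parts
        self.stage3 = vgg[16:23]  # 512 channels, semantic context
        self.fuse = nn.Conv2d(128 + 256 + 512, 64, kernel_size=3, padding=1)
        # Small refinement head standing in for the boundary-refinement step.
        self.refine = nn.Sequential(
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),
        )

    def forward(self, x):
        h, w = x.shape[2:]
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        # Bring every level to the shallow resolution, then fuse.
        f2 = F.interpolate(f2, size=f1.shape[2:], mode="bilinear", align_corners=False)
        f3 = F.interpolate(f3, size=f1.shape[2:], mode="bilinear", align_corners=False)
        fused = F.relu(self.fuse(torch.cat([f1, f2, f3], dim=1)))
        sal = self.refine(fused)
        # Restore input resolution; sigmoid yields per-pixel saliency scores.
        return torch.sigmoid(F.interpolate(sal, size=(h, w), mode="bilinear", align_corners=False))

if __name__ == "__main__":
    net = SaliencyNet()
    print(net(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1, 224, 224])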


2021 · Vol 33 (3) · pp. 376-384
Author(s): Yihan Zhang, Zhaohui Zhang, Lina Huo, Bin Xie, Xiuqing Wang

Author(s): Hengliang Zhu, Xin Tan, Zhiwen Shao, Yangyang Hao, Lizhuang Ma

2020 · Vol 57 (4) · pp. 041020
Author(s): Zhang Yingying (张莹莹), Ge Hongwei (葛洪伟)

Author(s): Bo Li, Zhengxing Sun, Yuqi Guo

Image saliency detection has recently witnessed rapid progress due to deep neural networks. However, many important problems remain in existing deep-learning-based methods. Pixel-wise convolutional neural network (CNN) methods suffer from blurry boundaries due to their convolutional and pooling operations, while region-based deep learning methods lack spatial consistency because they handle each region independently. In this paper, we propose a novel salient object detection framework using a superpixelwise variational autoencoder (SuperVAE) network. We first use the VAE to model the image background and then separate salient objects from the background through the reconstruction residuals. To better capture semantic and spatial context information, we also propose a perceptual loss that takes advantage of deep pre-trained CNNs to train our SuperVAE network. Without the supervision of mask-level annotated data, our method generates high-quality saliency results that better preserve object boundaries and maintain spatial consistency. Extensive experiments on five widely used benchmark datasets show that the proposed method achieves superior or competitive performance compared with other algorithms, including very recent state-of-the-art supervised methods.
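The abstract leaves the SuperVAE architecture to the paper, so the sketch below only illustrates, at the pixel level, the two core ideas it states: a VAE reconstructs the (mostly background) image under a perceptual loss computed on pre-trained VGG-16 features, and saliency is read off as the reconstruction residual. The superpixel machinery of the actual method is omitted, and every name here (TinyVAE, vae_loss, saliency_from_residual) is an assumption rather than the authors' code.

# Pixel-level sketch of VAE background modeling with a perceptual loss;
# salient regions show up as large reconstruction residuals.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class TinyVAE(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 56 * 56, latent_dim)      # assumes 224x224 input
        self.fc_logvar = nn.Linear(64 * 56 * 56, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 56 * 56)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        recon = self.dec(self.fc_dec(z).view(-1, 64, 56, 56))
        return recon, mu, logvar

# Perceptual loss: compare pre-trained VGG-16 features of input and reconstruction.
vgg_feat = torchvision.models.vgg16(weights="IMAGENET1K_V1").features[:9].eval()
for p in vgg_feat.parameters():
    p.requires_grad_(False)

def vae_loss(x, recon, mu, logvar):
    perceptual = F.mse_loss(vgg_feat(recon), vgg_feat(x))
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return perceptual + kl

def saliency_from_residual(x, recon):
    # Background the VAE models well reconstructs cleanly; salient
    # objects leave large per-pixel residuals.
    return (x - recon).abs().mean(dim=1, keepdim=True)

if __name__ == "__main__":
    x = torch.rand(1, 3, 224, 224)
    recon, mu, logvar = TinyVAE()(x)
    print(vae_loss(x, recon, mu, logvar).item(), saliency_from_residual(x, recon).shape)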


Author(s): Xiaoshan Yang, Jianbing Shen, Chao Liang, Yun Zhu
