saliency detection
Recently Published Documents

TOTAL DOCUMENTS: 1517 (FIVE YEARS 491)
H-INDEX: 64 (FIVE YEARS 13)

2022 ◽ Vol 2022 ◽ pp. 1-9
Author(s): Songshang Zou ◽ Wenshu Chen ◽ Hao Chen

Image saliency object detection can rapidly extract useful information from image scenes for further analysis. However, traditional saliency detection techniques still fail to preserve the edges of salient objects well. Convolutional neural networks (CNNs) can extract highly general deep features from images and effectively express their essential feature information. This paper designs a model that applies a CNN to deep saliency object detection: through multilayer continuous feature extraction, refinement of layered boundaries, and fusion of initial saliency features, it optimizes the edges of foreground objects and realizes highly efficient image saliency detection. Experimental results show that the proposed method achieves more robust saliency detection and adapts well to complex backgrounds.
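
As a rough illustration of the multilayer feature extraction and fusion pipeline this abstract describes, the PyTorch sketch below fuses side outputs from several encoder stages into one saliency map. The paper does not publish code; the layer sizes, side-output heads, and fusion scheme here are assumptions, not the authors' architecture.

```python
# Minimal sketch: multi-level feature extraction with side-output fusion.
# All layer sizes are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SaliencyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Three encoder stages extract increasingly abstract features.
        self.enc1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                  nn.MaxPool2d(2))
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                                  nn.MaxPool2d(2))
        self.enc3 = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
                                  nn.MaxPool2d(2))
        # One 1x1 "side" head per stage produces a coarse saliency map.
        self.side1 = nn.Conv2d(32, 1, 1)
        self.side2 = nn.Conv2d(64, 1, 1)
        self.side3 = nn.Conv2d(128, 1, 1)
        # Fusion layer combines the three upsampled side outputs.
        self.fuse = nn.Conv2d(3, 1, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        f1 = self.enc1(x)
        f2 = self.enc2(f1)
        f3 = self.enc3(f2)
        # Upsample each side output to input resolution, then fuse.
        sides = [F.interpolate(s, size=(h, w), mode='bilinear',
                               align_corners=False)
                 for s in (self.side1(f1), self.side2(f2), self.side3(f3))]
        return torch.sigmoid(self.fuse(torch.cat(sides, dim=1)))

model = SaliencyCNN()
saliency = model(torch.randn(1, 3, 224, 224))  # -> (1, 1, 224, 224)
```

Fusing upsampled side outputs is one common way to combine coarse semantic features with fine boundary detail, which matches the abstract's stated goal of preserving foreground edges.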


2022 ◽ Vol 2022 ◽ pp. 1-14
Author(s): Liangliang Duan

Deep encoder-decoder networks have been adopted for saliency detection and have achieved state-of-the-art performance. However, most existing saliency models fail to detect very small salient objects. In this paper, we propose a multitask architecture, M2Net, with a novel centerness-aware loss for salient object detection. The proposed M2Net solves saliency prediction and centerness prediction simultaneously. Specifically, the architecture is composed of a bottom-up encoder module, a top-down decoder module, and a centerness prediction module. In addition, unlike binary cross entropy, the proposed centerness-aware loss guides M2Net to uniformly highlight entire salient regions with well-defined object boundaries. Experimental results on five benchmark saliency datasets demonstrate that M2Net outperforms state-of-the-art methods on different evaluation metrics.
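
The abstract does not give the exact form of the centerness-aware loss. One plausible reading is binary cross entropy reweighted by a ground-truth centerness map (1.0 at object centers, falling toward 0 at boundaries), paired with a standard loss on the centerness branch; the PyTorch sketch below follows that assumption, and the weighting scheme and alpha parameter are illustrative, not the paper's definition.

```python
# Hedged sketch of a "centerness-aware" multitask loss.
# The reweighting scheme is an assumption, not the M2Net formulation.
import torch
import torch.nn.functional as F

def centerness_aware_loss(pred_sal, pred_cen, gt_sal, gt_cen, alpha=1.0):
    """pred_sal/pred_cen: raw logits (N, 1, H, W); gt maps in [0, 1]."""
    # Per-pixel BCE on the saliency branch, weighted so pixels near
    # object centers contribute more; this pushes the network to
    # highlight salient regions uniformly rather than only at edges.
    weights = 1.0 + alpha * gt_cen
    sal_loss = F.binary_cross_entropy_with_logits(
        pred_sal, gt_sal, weight=weights)
    # Standard BCE supervises the auxiliary centerness branch.
    cen_loss = F.binary_cross_entropy_with_logits(pred_cen, gt_cen)
    return sal_loss + cen_loss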


Author(s): Fengyun Wang ◽ Jinshan Pan ◽ Shoukun Xu ◽ Jinhui Tang

2022 ◽ pp. 1-14
Author(s): Xiaoli Sun ◽ Xiujun Zhang ◽ Chen Xu ◽ Mingqing Xiao ◽ Yuanyan Tang

2022 ◽ pp. 116425
Author(s): De-Huai He ◽ Kai-Fu Yang ◽ Xue-Mei Wan ◽ Fen Xiao ◽ Hong-Mei Yan ◽ ...

Author(s): Xiaoqiang Wang ◽ Lei Zhu ◽ Siliang Tang ◽ Huazhu Fu ◽ Ping Li ◽ ...

2021 ◽ Vol 13 (24) ◽ pp. 5144
Author(s): Baodi Liu ◽ Lifei Zhao ◽ Jiaoyue Li ◽ Hengle Zhao ◽ Weifeng Liu ◽ ...

Deep learning has recently attracted extensive attention and developed significantly in remote sensing image super-resolution. Although remote sensing images are composed of various scenes, most existing methods treat each part equally, ignoring salient objects (e.g., buildings, airplanes, and vehicles) that have more complex structures and require more attention during recovery. This paper proposes a saliency-guided remote sensing image super-resolution method (SG-GAN) to alleviate this issue while retaining the strength of GAN-based methods in generating perceptually pleasing details. More specifically, we exploit saliency maps to guide the recovery in two respects: on the one hand, the saliency detection network in SG-GAN learns high-resolution saliency maps that provide additional structural priors; on the other hand, a well-designed saliency loss imposes a second-order restriction on the super-resolution process, which helps SG-GAN concentrate on the salient objects in remote sensing images. Experimental results show that SG-GAN achieves PSNR and SSIM competitive with advanced super-resolution methods, and visual results demonstrate its superiority in restoring structures when generating super-resolution remote sensing images.
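
The "second-order restriction" imposed by the saliency loss can be read as matching the saliency maps of the super-resolved and ground-truth images rather than only their pixels. The sketch below encodes that reading in PyTorch; `saliency_net` is a hypothetical stand-in for the paper's saliency detection network, and the L1 distance and loss weight are assumptions.

```python
# Illustrative sketch of a saliency-guided loss term for super-resolution.
# The distance metric and weighting are assumptions, not SG-GAN's exact loss.
import torch
import torch.nn.functional as F

def saliency_loss(sr_img, hr_img, saliency_net):
    """Second-order restriction: match saliency maps, not just pixels."""
    s_sr = saliency_net(sr_img)          # saliency of the generated image
    with torch.no_grad():
        s_hr = saliency_net(hr_img)      # saliency of the ground truth
    return F.l1_loss(s_sr, s_hr)

# Typical use inside a generator update (the 0.1 weight is assumed):
# g_loss = adv_loss + pixel_loss + 0.1 * saliency_loss(sr, hr, sal_net)
```

Penalizing the saliency-map mismatch, in addition to pixel and adversarial terms, steers the generator's capacity toward the structurally complex salient regions the abstract highlights.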


Author(s): Haiyang He ◽ Xiaolin Li ◽ Zhihong Wang ◽ Shiguo Huang

Sensors ◽ 2021 ◽ Vol 21 (24) ◽ pp. 8374
Author(s): Yupei Zhang ◽ Kwok-Leung Chan

Detecting saliency in videos is a fundamental step in many computer vision systems, where saliency refers to the significant target(s) in the video; the object of interest is then analyzed further by high-level applications. Saliency can be segregated from the background when the two exhibit different visual cues, so saliency detection is often formulated as background subtraction. The task remains challenging, however: a dynamic background can produce false positive errors, camouflage can produce false negative errors, and scenes captured by moving cameras are more complicated still. We propose a new framework, saliency detection via background model completion (SD-BMC), that comprises a background modeler and a deep learning background/foreground segmentation network. The background modeler generates an initial clean background image from a short image sequence; based on the idea of video completion, a good background frame can be synthesized even when a changing background and moving objects co-exist. We adopt a background/foreground segmenter that, although pre-trained on a specific video dataset, can also detect saliency in unseen videos. The background modeler can adjust the background image dynamically when the segmenter's output deteriorates while processing a long video. To the best of our knowledge, this is the first framework to adopt video completion for background modeling and saliency detection in videos captured by moving cameras. F-measure results on pan-tilt-zoom (PTZ) videos show that the proposed framework outperforms some deep learning-based background subtraction models by 11% or more, and on more challenging videos it also outperforms many high-ranking background subtraction methods by more than 3%.
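
The control flow of SD-BMC (build a clean background, segment each frame against it, rebuild the background when the segmenter's output deteriorates) can be sketched with crude placeholders. In the NumPy sketch below, the video-completion background modeler is replaced by a temporal median and the deep segmenter by thresholded differencing; both substitutions, and the "mask deteriorates" heuristic, are assumptions for illustration only.

```python
# Minimal sketch of the SD-BMC control flow with placeholder components.
# Real SD-BMC uses video completion and a deep segmentation network.
import numpy as np

def build_background(frames):
    # Temporal median suppresses moving objects in a short clip.
    return np.median(np.stack(frames), axis=0)

def segment(frame, background, thresh=30):
    # Placeholder for the deep background/foreground segmenter.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff.max(axis=-1) > thresh).astype(np.uint8)

def run_sdbmc(video, window=30, max_fg_ratio=0.5):
    """video: iterable of HxWx3 uint8 frames; returns foreground masks."""
    history, background, masks = [], None, []
    for frame in video:
        history.append(frame)
        history = history[-window:]
        if background is None:
            background = build_background(history)
        mask = segment(frame, background)
        # If the mask degenerates (e.g. most pixels flagged foreground),
        # the background is assumed stale and is re-synthesized.
        if mask.mean() > max_fg_ratio:
            background = build_background(history)
            mask = segment(frame, background)
        masks.append(mask)
    return masks
```

The key design point carried over from the abstract is the feedback loop: segmentation quality is monitored, and a degraded mask triggers re-synthesis of the background model rather than being passed downstream.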

