2021 ◽  
pp. 1-13
Author(s):  
Yongri Piao ◽  
Yongyao Jiang ◽  
Miao Zhang ◽  
Jian Wang ◽  
Huchuan Lu

2021 ◽  
pp. 104352
Author(s):  
Yanhua Liang ◽  
Guihe Qin ◽  
Minghui Sun ◽  
Jun Qin ◽  
Jie Yan ◽  
...  

2017 ◽  
Vol 46 (3) ◽  
pp. 1083-1094 ◽  
Author(s):  
Anzhi Wang ◽  
Minghui Wang ◽  
Xiaoyan Li ◽  
Zetian Mi ◽  
Huan Zhou

Author(s):  
Qiudan Zhang ◽  
Shiqi Wang ◽  
Xu Wang ◽  
Zhenhao Sun ◽  
Sam Kwong ◽  
...  

Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 231
Author(s):  
Zikai Da ◽  
Yu Gao ◽  
Zihan Xue ◽  
Jing Cao ◽  
Peizhen Wang

With the rise of deep learning, salient object detection algorithms based on convolutional neural networks (CNNs) are gradually replacing traditional methods. Most existing studies, however, have focused on integrating multi-scale features while ignoring the characteristics of other significant features. To address this problem, we fully exploit the available features to reduce redundancy. In this paper, a novel CNN named the local and global feature aggregation-aware network (LGFAN) is proposed. It combines a visual geometry group (VGG) backbone for feature extraction, an attention module for high-quality feature filtering, and an aggregation module whose mechanism enriches salient features to ease their dilution along the top-down pathway. Experimental results on five public datasets demonstrate that the proposed method improves computational efficiency while maintaining favorable performance.
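The "attention module for high-quality feature filtering" mentioned above can be illustrated with a minimal channel-attention sketch in NumPy. This is not the authors' LGFAN implementation; it assumes a squeeze-and-excitation-style gate (global average pooling, a small bottleneck, a sigmoid) as a generic stand-in for attention-based feature filtering, with hypothetical weight matrices `w1` and `w2`:

```python
import numpy as np

def channel_attention(features: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Filter a (C, H, W) feature map with squeeze-and-excitation-style
    channel attention: global-average-pool each channel, pass the vector
    through a small two-layer bottleneck, squash with a sigmoid, and
    reweight every channel by its gate value."""
    squeezed = features.mean(axis=(1, 2))           # (C,) global average pool
    hidden = np.maximum(0.0, w1 @ squeezed)         # ReLU bottleneck, (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid gate in (0, 1), (C,)
    return features * gate[:, None, None]           # suppress low-quality channels

# Toy usage with random weights (reduction ratio 4 on 8 channels)
rng = np.random.default_rng(0)
feats = rng.random((8, 16, 16))
w1, w2 = rng.standard_normal((2, 8)), rng.standard_normal((8, 2))
filtered = channel_attention(feats, w1, w2)
```

In a trained network the gate learns to down-weight noisy or redundant channels before aggregation, which matches the redundancy-reduction goal stated in the abstract.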


2020 ◽  
Vol 29 ◽  
pp. 6276-6287 ◽  
Author(s):  
Miao Zhang ◽  
Wei Ji ◽  
Yongri Piao ◽  
Jingjing Li ◽  
Yu Zhang ◽  
...  

2020 ◽  
Vol 2020 (1) ◽  
Author(s):  
Yu Liu ◽  
Huaxin Xiao ◽  
Hanlin Tan ◽  
Ping Li

Abstract. Considering the significant progress made on RGB-based deep salient object detection (SOD) methods, this paper seeks to bridge the gap between those 2D methods and 4D light field data, instead of implementing specific 4D methods. We observe that the performance of 2D methods changes dramatically as the input is refocused on different depths. This paper attempts to make 2D methods applicable to light field SOD by learning to select the best single image from the 4D tensor. Given a 2D method, a deep model is proposed to explicitly compare pairs of SOD results on one light field sample. Moreover, a comparator module is designed to integrate the features from a pair, which provides more discriminative representations for classification. Experiments on 13 recent 2D methods and two datasets demonstrate that the proposed method brings average improvements of 24.0% in mean absolute error and 5.3% in F-measure, and outperforms state-of-the-art 4D methods by a large margin.
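The selection step described above can be sketched as a pairwise tournament over the focal slices of one light field sample. This is a simplified illustration, not the paper's pipeline: `compare(a, b)` stands in for the learned deep comparator (which operates on pairs of SOD results), and the all-pairs scheme is an assumption about how per-pair decisions are aggregated into a single winner:

```python
def select_best_slice(slices, compare):
    """Pick the single best focal slice from a light-field stack by
    pairwise comparison. `compare(a, b)` returns True when slice `a`
    is judged better than slice `b`; each slice accumulates one win
    per pair it beats, and the index of the slice with the most wins
    is returned."""
    wins = [0] * len(slices)
    for i in range(len(slices)):
        for j in range(i + 1, len(slices)):
            if compare(slices[i], slices[j]):
                wins[i] += 1
            else:
                wins[j] += 1
    return max(range(len(slices)), key=lambda i: wins[i])

# Toy usage: scalar "quality scores" in place of real SOD result pairs
best = select_best_slice([0.2, 0.9, 0.5], lambda a, b: a > b)  # -> 1
```

Once the best slice is chosen, any off-the-shelf 2D SOD method can be run on it unchanged, which is what lets the approach reuse existing 2D models on 4D data.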

