Dual guidance enhanced network for light field salient object detection

2021, pp. 104352. Author(s): Yanhua Liang, Guihe Qin, Minghui Sun, Jun Qin, Jie Yan, et al.

2021, pp. 1-13. Author(s): Yongri Piao, Yongyao Jiang, Miao Zhang, Jian Wang, Huchuan Lu

2017, Vol. 46 (3), pp. 1083-1094. Author(s): Anzhi Wang, Minghui Wang, Xiaoyan Li, Zetian Mi, Huan Zhou

2020, Vol. 29, pp. 6276-6287. Author(s): Miao Zhang, Wei Ji, Yongri Piao, Jingjing Li, Yu Zhang, et al.

2020, Vol. 2020 (1). Author(s): Yu Liu, Huaxin Xiao, Hanlin Tan, Ping Li

Abstract: Considering the significant progress made by RGB-based deep salient object detection (SOD) methods, this paper seeks to bridge the gap between those 2D methods and 4D light field data instead of designing dedicated 4D methods. We observe that the performance of 2D methods changes dramatically as the input is refocused at different depths. This paper therefore makes 2D methods usable for light field SOD by learning to select the best single image from the 4D tensor. Given a 2D method, a deep model is proposed that explicitly compares pairs of SOD results on one light field sample, and a comparator module is designed to integrate the features from each pair, providing more discriminative representations for classification. Experiments over 13 recent 2D methods and 2 datasets demonstrate that the proposed method brings average improvements of about 24.0% in mean absolute error and 5.3% in F-measure, and outperforms state-of-the-art 4D methods by a large margin.
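
The abstract only outlines the selection mechanism, so the sketch below is an illustrative PyTorch interpretation of the pairwise-comparison idea, not the authors' published architecture: a shared encoder processes each refocused slice together with the saliency map produced by the chosen 2D SOD method, and a hypothetical comparator head fuses the two feature vectors and classifies which slice gives the better result. All module names, channel sizes, and the fusion strategy are assumptions.

```python
# Minimal sketch of a pairwise comparator for picking the better of two
# refocused light-field slices for a fixed 2D SOD method.
# Illustrative assumption only; layer sizes and module names are hypothetical.
import torch
import torch.nn as nn


class PairwiseComparator(nn.Module):
    def __init__(self, in_channels: int = 4):
        super().__init__()
        # Shared encoder applied to each (RGB slice + its SOD map) input,
        # i.e. 3 colour channels + 1 saliency channel = 4 channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # "Comparator" head: integrates the two feature vectors and classifies
        # which member of the pair yields the better SOD result.
        self.comparator = nn.Sequential(
            nn.Linear(64 * 2, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 2),  # logits: [first slice better, second slice better]
        )

    def forward(self, pair_a: torch.Tensor, pair_b: torch.Tensor) -> torch.Tensor:
        fa, fb = self.encoder(pair_a), self.encoder(pair_b)
        return self.comparator(torch.cat([fa, fb], dim=1))


if __name__ == "__main__":
    model = PairwiseComparator()
    # Two refocused slices of one light-field sample, each stacked with the
    # saliency map produced by the chosen 2D SOD method.
    slice_a = torch.randn(1, 4, 128, 128)
    slice_b = torch.randn(1, 4, 128, 128)
    logits = model(slice_a, slice_b)
    print(logits.shape)  # torch.Size([1, 2])
```

Running such comparisons across all focal slices of a light field (for example in a round-robin or tournament fashion) would then select the single best image to feed to the 2D SOD method.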

