Revisiting Feature Fusion for RGB-T Salient Object Detection

Author(s): Qiang Zhang, Tonglin Xiao, Nianchang Huang, Dingwen Zhang, Jungong Han
2021, pp. 104337

Author(s): Jin Zhang, Yanjiao Shi, Qing Zhang, Liu Cui, Ying Chen, ...
2021, pp. 104243

Author(s): Zhenyu Wang, Yunzhou Zhang, Yan Liu, Shichang Liu, Sonya Coleman, ...
2020, Vol 29, pp. 9165-9175

Author(s): Xuelong Li, Dawei Song, Yongsheng Dong
2019, Vol 93, pp. 521-533

Author(s): Pingping Zhang, Wei Liu, Yinjie Lei, Huchuan Lu

2020, Vol 34 (07), pp. 12321-12328
Author(s): Jun Wei, Shuhui Wang, Qingming Huang

Most existing salient object detection models have achieved great progress by aggregating multi-level features extracted from convolutional neural networks. However, because different convolutional layers have different receptive fields, there are large differences between the features they generate. Common feature fusion strategies (addition or concatenation) ignore these differences and may lead to suboptimal solutions. In this paper, we propose F3Net to address this problem. It mainly consists of a cross feature module (CFM) and a cascaded feedback decoder (CFD), trained by minimizing a new pixel position aware (PPA) loss. Specifically, CFM aims to selectively aggregate multi-level features. Unlike addition and concatenation, CFM adaptively selects complementary components from the input features before fusion, which effectively avoids introducing redundant information that may corrupt the original features. In addition, CFD adopts a multi-stage feedback mechanism, in which features close to the supervision are fed back to the outputs of earlier layers to supplement them and reduce the differences between features. These refined features pass through several such iterations before the final saliency maps are generated. Furthermore, unlike binary cross entropy, the proposed PPA loss does not treat all pixels equally: it synthesizes the local structure information around each pixel to guide the network to focus more on local details, and hard pixels from boundaries or error-prone regions are given more attention to emphasize their importance. F3Net is able to segment salient object regions accurately and provide clear local details. Comprehensive experiments on five benchmark datasets demonstrate that F3Net outperforms state-of-the-art approaches on six evaluation metrics. Code will be released at https://github.com/weijun88/F3Net.
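To make the PPA idea concrete, the following PyTorch-style sketch weights each pixel's BCE and IoU terms by how much it differs from its local neighborhood in the ground-truth mask, so boundary and error-prone pixels contribute more to the loss. The 31x31 averaging window and the weighting factor of 5 are illustrative assumptions for this sketch, not values stated in the abstract.

import torch
import torch.nn.functional as F

def pixel_position_aware_loss(pred, mask):
    # pred: raw logits, shape (B, 1, H, W); mask: binary ground truth, same shape.
    # Pixels that differ from their local average (typically boundaries) get larger weights.
    weight = 1 + 5 * torch.abs(
        F.avg_pool2d(mask, kernel_size=31, stride=1, padding=15) - mask)

    # Weighted binary cross entropy, normalized by the total weight per image.
    wbce = F.binary_cross_entropy_with_logits(pred, mask, reduction='none')
    wbce = (weight * wbce).sum(dim=(2, 3)) / weight.sum(dim=(2, 3))

    # Weighted IoU term computed on probabilities.
    prob = torch.sigmoid(pred)
    inter = (prob * mask * weight).sum(dim=(2, 3))
    union = ((prob + mask) * weight).sum(dim=(2, 3))
    wiou = 1 - (inter + 1) / (union - inter + 1)

    return (wbce + wiou).mean()

Wherever the weight map is flat, this reduces to a plain BCE-plus-IoU objective; only the hard regions receive extra emphasis.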


Photonics, 2022, Vol 9 (1), pp. 44
Author(s): Zhehan Song, Zhihai Xu, Jing Wang, Huajun Feng, Qi Li

Proper features matter for salient object detection. Existing methods mainly focus on designing sophisticated structures to incorporate multi-level features and filter out cluttered ones. We present the dual-branch feature fusion network (DBFFNet), a simple yet effective framework mainly composed of three modules: a global information perception module, a local information concatenation module and a refinement fusion module. The local information of a salient object is extracted by the local information concatenation module. The global information perception module exploits a U-Net structure to transmit global information layer by layer. By employing the refinement fusion module, our approach is able to refine the features from the two branches and detect salient objects with fine details without any post-processing. Experiments on standard benchmarks demonstrate that our method outperforms almost all of the state-of-the-art methods in terms of accuracy, and achieves the best performance in terms of speed under fair settings. Moreover, we design a wide-field optical system and combine it with DBFFNet to achieve salient object detection with a large field of view.
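As a rough illustration of the two-branch fusion, the sketch below concatenates a feature map from the global (U-Net-style) branch with one from the local branch and refines them with a few convolutions before predicting saliency logits. The module name, channel counts and layer choices are placeholders assumed for this sketch, not DBFFNet's actual implementation.

import torch
import torch.nn as nn

class RefinementFusion(nn.Module):
    # Illustrative fusion of global- and local-branch features.
    # Assumes both branches output maps of the same spatial size; channel
    # counts and layers are placeholders, not DBFFNet's exact configuration.
    def __init__(self, global_ch=64, local_ch=64, out_ch=64):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(global_ch + local_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.predict = nn.Conv2d(out_ch, 1, kernel_size=1)  # single-channel saliency logits

    def forward(self, f_global, f_local):
        # Concatenate the two branches, refine jointly, and predict the map
        # directly, so no post-processing is required.
        fused = self.fuse(torch.cat([f_global, f_local], dim=1))
        return self.predict(fused)

A full model would upsample the logits to the input resolution and train them against the ground-truth mask with a pixel-wise loss.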

