Are RGB-based salient object detection methods unsuitable for light field data?

2020 ◽  
Vol 2020 (1) ◽  
Author(s):  
Yu Liu ◽  
Huaxin Xiao ◽  
Hanlin Tan ◽  
Ping Li

Abstract Considering the significant progress made by RGB-based deep salient object detection (SOD) methods, this paper seeks to bridge the gap between those 2D methods and 4D light field data, rather than implementing dedicated 4D methods. We observe that the performance of 2D methods changes dramatically as the input is refocused at different depths. This paper attempts to make 2D methods applicable to light field SOD by learning to select the best single image from the 4D tensor. Given a 2D method, a deep model is proposed to explicitly compare pairs of SOD results on one light field sample. Moreover, a comparator module is designed to integrate the features from a pair, providing more discriminative representations for classification. Experiments on 13 recent 2D methods and 2 datasets demonstrate that the proposed method yields average improvements of 24.0% in mean absolute error and 5.3% in F-measure, and outperforms state-of-the-art 4D methods by a large margin.
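The selection step described in this abstract can be outlined as a tournament over focal-slice saliency results. The following is an illustrative sketch, not the authors' implementation: `compare` stands in for the learned deep comparator, and here any caller-supplied pairwise judgment can be plugged in.

```python
import numpy as np

def select_best_slice(saliency_maps, compare):
    """Pick the best focal-slice saliency map by pairwise comparison.

    `compare(a, b)` returns True when map `a` is judged better than
    map `b`. In the paper this judgment comes from a learned deep
    comparator; the function here is an assumption for illustration.
    Returns the index of the winning map.
    """
    best = 0
    for i in range(1, len(saliency_maps)):
        if compare(saliency_maps[i], saliency_maps[best]):
            best = i
    return best

# Toy usage: judge maps by mean activation (a stand-in heuristic,
# not the paper's learned criterion).
maps = [np.full((2, 2), v) for v in (0.1, 0.9, 0.4)]
winner = select_best_slice(maps, lambda a, b: a.mean() > b.mean())  # index 1
```

Running all slices through one 2D SOD model and keeping only the winning result is what lets an unmodified 2D method serve as a light field detector.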

Author(s):  
M. N. Favorskaya ◽  
L. C. Jain

Introduction: Saliency detection is a fundamental task in computer vision. Its ultimate aim is to localize the objects of interest that grab human visual attention relative to the rest of the image. A great variety of saliency models based on different approaches has been developed since the 1990s. In recent years, saliency detection has become one of the most actively studied topics in the theory of Convolutional Neural Networks (CNNs). Many original solutions using CNNs have been proposed for salient object detection and even event detection.

Purpose: A detailed survey of saliency detection methods in the deep learning era makes it possible to understand the current capabilities of the CNN approach for visual analysis conducted via human eye tracking and digital image processing.

Results: The survey reflects recent advances in saliency detection using CNNs. Different models available in the literature, such as static and dynamic 2D CNNs for salient object detection and 3D CNNs for salient event detection, are discussed in chronological order. It is worth noting that automatic salient event detection in long videos became possible with recently developed 3D CNNs combined with 2D CNNs for salient audio detection. This article also presents a short description of public image and video datasets with annotated salient objects or events, as well as the metrics most often used to evaluate results.

Practical relevance: This survey is a contribution to the study of rapidly developing deep learning methods for saliency detection in images and videos.


2021 ◽  
pp. 1-13
Author(s):  
Yongri Piao ◽  
Yongyao Jiang ◽  
Miao Zhang ◽  
Jian Wang ◽  
Huchuan Lu

2021 ◽  
pp. 104352
Author(s):  
Yanhua Liang ◽  
Guihe Qin ◽  
Minghui Sun ◽  
Jun Qin ◽  
Jie Yan ◽  
...  

2017 ◽  
Vol 46 (3) ◽  
pp. 1083-1094 ◽  
Author(s):  
Anzhi Wang ◽  
Minghui Wang ◽  
Xiaoyan Li ◽  
Zetian Mi ◽  
Huan Zhou

2021 ◽  
Vol 7 (9) ◽  
pp. 187
Author(s):  
Seena Joseph ◽  
Oludayo O. Olugbara

Salient object detection is an important preprocessing stage in many practical image applications in computer vision. Saliency detection is generally a complex process that attempts to mimic the human visual system when processing color images. It is complicated by the countless properties inherent in color images that can hamper performance. Because color image properties are so diverse, a method that is appropriate for one category of images may not be suitable for others. The choice of image abstraction is a decisive preprocessing step in saliency computation, and region-based image abstraction has become popular because of its computational efficiency and robustness. However, the performance of existing region-based salient object detection methods depends heavily on the selection of an optimal region granularity. An incorrect choice of region granularity is prone to under- or over-segmentation of color images, which can lead to non-uniform highlighting of salient objects. In this study, color histogram clustering was used to automatically determine suitable homogeneous regions in an image. A region saliency score was computed as a function of color contrast, contrast ratio, spatial features, and center prior. Morphological operations were finally performed to eliminate undesirable artifacts that may remain after the saliency detection stage. We have thus introduced a novel, simple, robust, and computationally efficient color histogram clustering method that combines color contrast, contrast ratio, spatial features, and center prior for detecting salient objects in color images.
Experimental validation with different categories of images selected from eight benchmark corpora has indicated that the proposed method outperforms 30 bottom-up non-deep-learning and seven top-down deep learning salient object detection methods on the standard performance metrics.


Detecting and segmenting salient objects in natural scenes, often referred to as salient object detection, has attracted a lot of interest in computer vision, and various heuristic computational models have recently been designed. While many models have been proposed and several applications have emerged, a deep understanding of the achievements and open issues is still lacking. The aim of this review is to study the details of salient object detection methods. It not only covers methods for detecting salient objects, but also reviews work on spatio-temporal video attention detection in video sequences. It also discusses open issues concerning evaluation metrics and dataset bias in model performance, and suggests future research directions. The evaluation metrics considered are mean absolute error (MAE), accuracy, and run-time complexity.
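The two pixel-level metrics named in this review can be computed directly from a predicted saliency map and a binary ground-truth mask. A minimal sketch follows; the fixed binarization threshold of 0.5 is an assumption for illustration, not a value prescribed by the review.

```python
import numpy as np

def mae(saliency, gt):
    """Mean absolute error between a saliency map in [0, 1] and a
    binary ground-truth mask; lower is better."""
    return float(np.mean(np.abs(saliency - gt)))

def accuracy(saliency, gt, threshold=0.5):
    """Fraction of pixels labeled correctly after binarizing the
    saliency map at a fixed threshold (threshold is an assumption
    here; benchmarks often sweep it instead)."""
    pred = saliency >= threshold
    return float(np.mean(pred == gt.astype(bool)))

# Toy usage on a 2x2 map.
gt = np.array([[1.0, 0.0], [0.0, 1.0]])
sal = np.array([[0.75, 0.25], [0.25, 0.25]])
# mae(sal, gt) -> 0.375; accuracy(sal, gt) -> 0.75
```

Run-time complexity, the third criterion, is measured by wall-clock timing rather than a per-map formula, so it is not sketched here.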

