Fast visual saliency based on multi‐scale difference of Gaussians fusion in frequency domain

2020 ◽  
Vol 14 (16) ◽  
pp. 4039-4048
Author(s):  
Weipeng Li ◽  
Xiaogang Yang ◽  
Chuanxiang Li ◽  
Ruitao Lu ◽  
Xueli Xie
Author(s):  
Yanxiang Hu ◽  
Bo Zhang

A bio-inspired two-scale image complementarity evaluation method is proposed. This novel multi-scale method provides a promising alternative for the performance assessment of image fusion algorithms. Moreover, it can also be used to compare and analyze the multi-scale difference of raw images. Two metrics are presented and used to assess the complementarity of fusion images in the non-subsampled contourlet transform (NSCT) domain: visual saliency differences (VSDs) at the coarse scales and detail similarities (DSs) at the fine scales. Visual attention mechanism (VAM)-based saliency maps are combined with NSCT low-pass subbands to compute the VSDs, and linear correlation and contrast consistency-based DSs are compared in NSCT band-pass subbands. Five main multi-scale transform (MST)-based fusion algorithms were compared using 30 groups of raw images comprising four types of fusion images. The effects of NSCT filters and decomposition levels on the evaluation results are discussed in detail. Furthermore, a group of color multi-exposure fusion images was also used as an example to evaluate the complementarity of raw images. Experimental results demonstrate the effectiveness of the proposed method, especially for MST-based image fusion algorithms.
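The coarse-scale metric lends itself to a compact sketch. The fragment below is a simplified stand-in, not the paper's implementation: it replaces the NSCT low-pass subbands with a plain Gaussian low-pass and the VAM-based saliency maps with a crude center-surround (difference-of-Gaussians) map; the function name `visual_saliency_difference` and all sigma values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(img, sigma_c=1.0, sigma_s=8.0):
    """Crude center-surround (DoG) saliency map, standing in for the
    paper's VAM-based saliency maps. Normalized to [0, 1]."""
    s = np.abs(gaussian_filter(img, sigma_c) - gaussian_filter(img, sigma_s))
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

def visual_saliency_difference(src_a, src_b, sigma_low=4.0):
    """VSD-style score: mean absolute difference between the saliency
    maps of the two sources' low-pass bands (a Gaussian low-pass is
    used here in place of the NSCT low-pass subband)."""
    low_a = gaussian_filter(src_a.astype(float), sigma_low)
    low_b = gaussian_filter(src_b.astype(float), sigma_low)
    return float(np.mean(np.abs(saliency_map(low_a) - saliency_map(low_b))))
```

Under this reading, a larger score means the two source images carry more complementary coarse-scale content; identical sources score exactly zero.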


2021 ◽  
Author(s):  
Sai Phani Kumar Malladi ◽  
Jayanta Mukhopadhyay ◽  
Chaker Larabi ◽  
Santanu Chaudhury

Electronics ◽  
2021 ◽  
Vol 10 (23) ◽  
pp. 2892
Author(s):  
Kyungjun Lee ◽  
Seungwoo Wee ◽  
Jechang Jeong

Salient object detection is a method of finding an object within an image that a person determines to be important and is expected to focus on. Various features are used to compute the visual saliency, and in general, the color and luminance of the scene are widely used among the spatial features. However, humans perceive the same color and luminance differently depending on the influence of the surrounding environment. As the human visual system (HVS) operates through a very complex mechanism, both neurobiological and psychological aspects must be considered for the accurate detection of salient objects. To reflect this characteristic in the saliency detection process, we propose two pre-processing methods to apply to the input image. First, we apply a bilateral filter to smooth the image so that only its overall context remains while its important borders are preserved, which improves the segmentation results. Second, the same amount of light can be perceived with a difference in brightness owing to the influence of the surrounding environment. Therefore, we apply oriented difference-of-Gaussians (ODOG) and locally normalized ODOG (LODOG) filters that adjust the input image by predicting the brightness as perceived by humans. Experiments on five public benchmark datasets for which ground truth exists show that our proposed method further improves the performance of previous state-of-the-art methods.
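The ODOG stage can be illustrated with a reduced sketch. The code below is a deliberate simplification of the ODOG-style filtering the abstract refers to, not the authors' implementation: it uses only three orientations, a single spatial scale, and one global RMS normalization per orientation, whereas the full ODOG model uses a larger bank of orientations and scales; all function names and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def oriented_dog_kernel(size=31, sigma=2.0, elongation=3.0, theta=0.0):
    """Single oriented DoG kernel: an isotropic center Gaussian minus a
    surround Gaussian elongated along the direction theta."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    center = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    surround = np.exp(-(xr**2 / (elongation * sigma) ** 2 + yr**2 / sigma**2) / 2)
    # Normalizing each lobe separately makes the kernel zero-sum,
    # so flat regions produce no response.
    return center / center.sum() - surround / surround.sum()

def odog_response(img, thetas=(0.0, np.pi / 3, 2 * np.pi / 3)):
    """Sum of oriented DoG responses, each RMS-normalized before pooling
    (a rough sketch of the ODOG brightness-prediction stage)."""
    out = np.zeros_like(img, dtype=float)
    for t in thetas:
        r = convolve(img.astype(float), oriented_dog_kernel(theta=t), mode="reflect")
        rms = np.sqrt(np.mean(r**2))
        out += r / rms if rms > 0 else r
    return out
```

The per-orientation RMS normalization is what lets weakly responding orientations contribute as much as strongly responding ones, which is the mechanism ODOG uses to predict induced brightness.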


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 121330-121343
Author(s):  
Alessandro Bruno ◽  
Francesco Gugliuzza ◽  
Roberto Pirrone ◽  
Edoardo Ardizzone

2014 ◽  
Vol 602-605 ◽  
pp. 2238-2241
Author(s):  
Jian Kun Chen ◽  
Zhi Wei Kang

In this paper, we present a new visual saliency model based on the wavelet transform and simple priors. Firstly, we create multi-scale feature maps in the wavelet domain to represent different features, from edges to textures. Then we compute the local saliency at each location together with its global saliency, and combine the two to generate a new saliency map. Finally, the final saliency map is generated by combining the new saliency with two simple priors (a color prior and a location prior). Experimental evaluation shows that the proposed model achieves state-of-the-art results and outperforms the other models on a publicly available benchmark dataset.
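The multi-scale feature-map stage can be sketched with a one-level Haar transform standing in for the unspecified wavelet, pooling detail-band energy across scales; the local/global saliency modulation and the two priors from the abstract are omitted, all names are illustrative, and the helper assumes a square image with power-of-two side length.

```python
import numpy as np

def haar_decompose(img):
    """One-level 2-D Haar transform: approximation band plus the
    horizontal/vertical/diagonal detail bands used as feature maps."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0  # coarse approximation
    lh = (a + b - c - d) / 4.0  # horizontal-edge detail
    hl = (a - b + c - d) / 4.0  # vertical-edge detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh

def multiscale_saliency(img, levels=3):
    """Pool detail-band energy over several scales into one map,
    a sketch of the 'edge to texture' feature maps (priors omitted).
    Assumes a square, power-of-two image."""
    sal = np.zeros_like(img, dtype=float)
    cur = img.astype(float)
    for _ in range(levels):
        ll, lh, hl, hh = haar_decompose(cur)
        energy = lh**2 + hl**2 + hh**2
        # Upsample this scale's detail energy back to full resolution.
        rep = img.shape[0] // energy.shape[0]
        sal += np.kron(energy, np.ones((rep, rep)))
        cur = ll
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else sal
```

Coarse levels respond to broad structure and fine levels to texture, so summing the upsampled energies gives a first-cut saliency map before any prior weighting.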


2017 ◽  
Author(s):  
Soma Mitra ◽  
Deabasis Mazumdar ◽  
Kuntal Ghosh ◽  
Kamales Bhaumik

The perceived lightness of a stimulus depends on its background, a phenomenon known as lightness induction. For instance, the same gray stimulus can look light against one background and dark against another. Moreover, such induction can take place in two directions: in one case it occurs in the direction of the background lightness, known as lightness assimilation, while in the other it occurs opposite to that, known as lightness contrast. White's illusion is a well-known case that does not completely conform to either of these two processes. In this paper, we quantify the perceptual strength of White's illusion as a function of the width of the background square grating. Based on our results, which also corroborate some earlier studies, we propose a linear filtering model inspired by an earlier work dealing with varying Mach band widths. Our model assumes that for White's illusion, where the edges are strong and many in number, so that the spectrum is rich in high-frequency components, the inhibitory surround in the classical Difference-of-Gaussians (DoG) filter gets suppressed, and the filter essentially reduces to a multi-scale Gaussian one. The simulation results with this model support the present as well as earlier experimental results.
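The model's central claim, that suppressing the DoG surround leaves a plain Gaussian filter, can be demonstrated on a 1-D square-wave grating standing in for the background of White's stimulus. The sketch below is illustrative only: `surround_gain` is a hypothetical knob interpolating between the classical DoG and its pure-Gaussian limit, and the sigmas and grating period are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def dog_response(signal, sigma_c=1.0, sigma_s=4.0, surround_gain=1.0):
    """Classical DoG: excitatory center minus inhibitory surround.
    As surround_gain -> 0 the filter reduces to the multi-scale
    Gaussian limit the model proposes for White's-type stimuli."""
    c = gaussian_filter1d(signal, sigma_c)
    s = gaussian_filter1d(signal, sigma_s)
    return c - surround_gain * s

# Square-wave grating with a half-period of 16 samples.
x = np.arange(256)
grating = ((x // 16) % 2).astype(float)

full = dog_response(grating, surround_gain=1.0)        # classical DoG
suppressed = dog_response(grating, surround_gain=0.0)  # pure Gaussian blur
```

With the surround active, the zero-sum filter removes the mean luminance and responds only at edges; with it suppressed, the output is simply a blurred copy of the grating that keeps the mean, which is the behavior the model attributes to edge-rich, high-frequency stimuli.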


2020 ◽  
Author(s):  
Fan Liu ◽  
Xue-Feng Liu ◽  
Ruo-Ming Lan ◽  
Xu-Ri Yao ◽  
Shen-Cheng Dou ◽  
...  
