Bottom-up saliency detection with sparse representation of learnt texture atoms

2016 ◽  
Vol 60 ◽  
pp. 348-360 ◽  
Author(s):  
Mai Xu ◽  
Lai Jiang ◽  
Zhaoting Ye ◽  
Zulin Wang

Author(s):  
Gaoxiang Zhang ◽  
Feng Jiang ◽  
Debin Zhao ◽  
Xiaoshuai Sun ◽  
Shaohui Liu

Author(s):  
Jila Hosseinkhani ◽  
Chris Joslin

A key factor in designing saliency detection algorithms for videos is understanding how different visual cues affect the human perceptual and visual system. To this end, this article investigates the bottom-up features of color, texture, and motion in video sequences, one feature at a time, to produce a ranking of the most dominant conditions for each feature. The work considers the individual features and various visual saliency attributes under conditions free of cognitive bias; human cognition here refers to systematic patterns of perceptual and rational judgment and decision-making. First, the test data are modeled as 2D videos in a virtual environment to avoid any cognitive bias. Then an experiment with human subjects determines which colors, textures, motion directions, and motion speeds attract human attention most. The proposed benchmark ranking of salient visual attention stimuli is obtained through an eye-tracking procedure.
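One simple way to turn eye-tracking records into the kind of benchmark ranking the abstract describes is to aggregate fixation duration per stimulus attribute and sort. The sketch below is illustrative only: the attribute names and durations are invented, and the actual study's ranking methodology is not specified here.

```python
from collections import defaultdict

# Hypothetical fixation records: (stimulus attribute, fixation duration in ms).
# These values are invented for illustration, not taken from the study.
fixations = [
    ("red", 320), ("blue", 180), ("red", 250),
    ("fast_motion", 400), ("coarse_texture", 150), ("blue", 90),
]

# Total dwell time per attribute.
totals = defaultdict(int)
for attribute, duration_ms in fixations:
    totals[attribute] += duration_ms

# Rank attributes from most to least attended.
ranking = sorted(totals, key=totals.get, reverse=True)
print(ranking)  # most dominant attribute first
```

With real eye-tracking data, one would typically also normalize by exposure time per stimulus before ranking.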


Author(s):  
Xiaoshuai Sun ◽  
Hongxun Yao ◽  
Rongrong Ji ◽  
Pengfei Xu ◽  
Xianming Liu ◽  
...  

Author(s):  
Yuming Fang ◽  
Weisi Lin ◽  
Bu-Sung Lee ◽  
Chiew Tong Lau ◽  
Chia-Wen Lin

2014 ◽  
Vol 568-570 ◽  
pp. 659-662
Author(s):  
Xue Jun Zhang ◽  
Bing Liang Hu

The paper proposes a new approach to single-image super resolution (SR) based on sparse representation. Previous work focuses on globally sampled patches and ignores locally salient patches, yet a dictionary trained on locally salient patches performs significantly better. Motivated by this, we incorporate saliency detection to locate the salient regions of the image. We compute a sparse representation for each salient patch of the low-resolution input and use the coefficients of this representation to generate the high-resolution output. Compared with previous approaches, which simply sample a large number of image patch pairs, the saliency dictionary pair is a more compact representation of the patch pairs, substantially reducing the computational cost. Experiments demonstrate that our algorithm generates high-resolution images that are competitive with, or even superior in quality to, images produced by similar SR methods.
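The core mechanism the abstract describes, solving for a sparse code of a low-resolution patch over an LR dictionary and reusing those coefficients with a coupled HR dictionary, can be sketched as follows. This is a toy illustration under stated assumptions: the dictionaries are random (not learned from salient patches as in the paper), the patch is synthetic, and the greedy pursuit is a minimal orthogonal matching pursuit, not the authors' solver.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Minimal orthogonal matching pursuit: find sparse x with y ~ D @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit coefficients on the selected support by least squares.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x

rng = np.random.default_rng(0)
n_atoms = 64
D_lr = rng.standard_normal((25, n_atoms))    # toy 5x5 LR patch dictionary
D_lr /= np.linalg.norm(D_lr, axis=0)
D_hr = rng.standard_normal((100, n_atoms))   # coupled toy 10x10 HR dictionary

# Synthesize an LR patch that is genuinely sparse over D_lr.
true_x = np.zeros(n_atoms)
true_x[[3, 17, 42]] = [1.0, -0.5, 0.8]
y_lr = D_lr @ true_x

x = omp(D_lr, y_lr, n_nonzero=3)
hr_patch = D_hr @ x   # HR estimate reuses the LR sparse code
```

In a real coupled-dictionary SR pipeline, `D_lr` and `D_hr` would be trained jointly so that corresponding LR/HR patch pairs share the same sparse code; here random matrices merely stand in to show the coefficient-transfer step.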

