Gradual Saliency Detection in Video Sequences Using Bottom-up Attributes

2020
Author(s): Jila Hosseinkhani

2016, Vol. 60, pp. 348-360
Author(s): Mai Xu, Lai Jiang, Zhaoting Ye, Zulin Wang

Author(s): Jila Hosseinkhani, Chris Joslin

A key factor in designing saliency detection algorithms for videos is understanding how different visual cues affect the human perceptual and visual system. To this end, this article investigated bottom-up features, including color, texture, and motion, in video sequences one at a time, in order to provide a ranking system identifying the most dominant conditions for each feature. The individual features and the various visual saliency attributes were examined under conditions in which the subjects had no cognitive bias; human cognition here refers to a systematic pattern of perceptual and rational judgments and decision-making actions. First, the test data were modeled as 2D videos in a virtual environment to avoid any cognitive bias. Then, an experiment was performed with human subjects to determine which colors, textures, motion directions, and motion speeds attract human attention most. The proposed benchmark ranking of salient visual attention stimuli was obtained using an eye tracking procedure.
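As an illustration of how such a ranking could be derived from eye-tracking output, the minimal sketch below aggregates fixation durations per stimulus attribute and sorts them. The data layout and the function name are hypothetical, since the paper does not describe its export format.

```python
from collections import defaultdict

def rank_stimuli(fixations):
    """Rank stimulus attributes by total fixation duration.

    `fixations` is assumed to be a list of (attribute, duration_ms)
    pairs, e.g. ("red", 240.0), exported from the eye tracker; this
    layout is an assumption, not the paper's format.
    """
    totals = defaultdict(float)
    for attribute, duration in fixations:
        totals[attribute] += duration
    # Most-attended attributes first.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Toy log for a color condition: red draws the longest total gaze.
log = [("red", 240.0), ("blue", 120.0), ("red", 310.0), ("green", 95.0)]
print(rank_stimuli(log))  # [('red', 550.0), ('blue', 120.0), ('green', 95.0)]
```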


2008, Vol. 20 (6), pp. 1452-1472
Author(s): Xiangyu Tang, Christoph von der Malsburg

This letter presents an improved cue integration approach to reliably separate coherent moving objects from their background scene in video sequences. The proposed method uses a probabilistic framework to unify bottom-up and top-down cues in a parallel, “democratic” fashion. The algorithm makes use of a modified Bayes rule where each pixel's posterior probabilities of figure or ground layer assignment are derived from likelihood models of three bottom-up cues and a prior model provided by a top-down cue. Each cue is treated as independent evidence for figure-ground separation. They compete with and complement each other dynamically by adjusting relative weights from frame to frame according to cue quality measured against the overall integration. At the same time, the likelihood or prior models of individual cues adapt toward the integrated result. These mechanisms enable the system to self-organize under the influence of visual scene structure without manual intervention. A novel contribution here is the incorporation of a top-down cue. It improves the system's robustness and accuracy and helps handle difficult and ambiguous situations, such as abrupt lighting changes or occlusion among multiple objects. Results on various video sequences are demonstrated and discussed. (Video demos are available at http://organic.usc.edu:8376/∼tangx/neco/index.html.)
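A minimal sketch of this kind of per-pixel probabilistic cue fusion is given below, assuming per-cue likelihood maps and a top-down prior map as inputs. The exponent-style weighting and the function name are illustrative choices, not the letter's exact formulation.

```python
import numpy as np

def figure_posterior(cue_lik_fig, cue_lik_gnd, prior_fig, weights):
    """Per-pixel posterior of the figure layer from weighted cues.

    cue_lik_fig, cue_lik_gnd : lists of HxW arrays, one per bottom-up
        cue, giving the likelihood of each pixel's observation under
        the figure and ground hypotheses respectively.
    prior_fig : HxW array of P(figure) supplied by the top-down cue.
    weights : one reliability weight per cue; in the letter these are
        re-estimated each frame from cue quality (taken as given here).
    """
    log_fig = np.log(prior_fig + 1e-9)
    log_gnd = np.log(1.0 - prior_fig + 1e-9)
    for lf, lg, w in zip(cue_lik_fig, cue_lik_gnd, weights):
        # Weighted log-likelihood: unreliable cues contribute less.
        log_fig += w * np.log(lf + 1e-9)
        log_gnd += w * np.log(lg + 1e-9)
    # Normalize over the two layer hypotheses at every pixel.
    return 1.0 / (1.0 + np.exp(np.clip(log_gnd - log_fig, -50, 50)))

# Toy usage with three synthetic cues on a 4x4 frame.
H, W = 4, 4
rng = np.random.default_rng(0)
fig = [rng.uniform(0.1, 0.9, (H, W)) for _ in range(3)]
gnd = [1.0 - f for f in fig]
prior = np.full((H, W), 0.5)          # uninformative top-down prior
mask = figure_posterior(fig, gnd, prior, weights=[1.0, 0.7, 0.4]) > 0.5
```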


Author(s): Yuming Fang, Weisi Lin, Bu-Sung Lee, Chiew Tong Lau, Chia-Wen Lin

2014, Vols. 644-650, pp. 4603-4606
Author(s): Quan Quan Wan

In order to detect visually salient regions in video sequences, a motion saliency detection method is proposed. The motion vectors of each video frame are used to derive two motion saliency features: one represents the uniqueness of the motion, and the other represents the distribution of the motion in the video scene. Gaussian filtering is then applied to combine the two features into a motion saliency map, in which the salient regions or objects of the video sequences can be detected. The experimental results show that the proposed method achieves excellent saliency detection performance.
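The sketch below illustrates the two features the abstract names, computed from a coarse block-level motion-vector field. The exact definitions of uniqueness and distribution, the fusion rule, and the parameter values are assumptions rather than the paper's formulas.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def motion_saliency(mv, sigma=1.5):
    """Motion saliency map from one frame's block motion vectors.

    mv : HxWx2 array of (dx, dy) motion vectors on a coarse block
        grid; the O(n^2) pairwise distances below assume a small grid.
    """
    H, W, _ = mv.shape
    n = H * W
    vecs = mv.reshape(n, 2).astype(float)
    ys, xs = np.mgrid[0:H, 0:W]
    pos = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)

    # Pairwise motion-vector distances between all blocks (n x n).
    d_motion = np.linalg.norm(vecs[:, None, :] - vecs[None, :, :], axis=2)

    # Feature 1 -- uniqueness: average motion contrast of each block;
    # motion that differs from the rest of the scene stands out.
    uniqueness = d_motion.mean(axis=1)

    # Feature 2 -- distribution: similarity-weighted spatial variance
    # of each block's motion; compactly located motion is salient.
    sim = np.exp(-d_motion / (d_motion.mean() + 1e-9))
    sim /= sim.sum(axis=1, keepdims=True)
    mean_pos = sim @ pos                            # (n, 2)
    diff = pos[None, :, :] - mean_pos[:, None, :]   # (n, n, 2)
    spread = (sim * (diff ** 2).sum(axis=2)).sum(axis=1)

    def norm(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-9)

    # Combine: rare, compact motion scores high; smooth with a Gaussian.
    sal = norm(uniqueness) * np.exp(-3.0 * norm(spread))
    return gaussian_filter(sal.reshape(H, W), sigma=sigma)
```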

