Influence of Viewpoint on Visual Saliency Models for Volumetric Content

Author(s):  
Mona Abid ◽  
Matthieu Perreira Da Silva ◽  
Patrick Le Callet

2019 ◽
Vol 9 (24) ◽  
pp. 5378 ◽  
Author(s):  
Maria Wahid ◽  
Asim Waris ◽  
Syed Omer Gilani ◽  
Ramanathan Subramanian

Saliency is the quality of an object that makes it stand out from neighbouring items and grabs viewer attention. In image processing, it refers to the pixel or group of pixels that stand out in an image or a video clip and capture the viewer's attention. Our eye movements are usually guided by saliency while inspecting a scene. Rapid detection of emotive stimuli is an ability possessed by humans, and visual objects in a scene are also emotionally salient. As different images and clips can elicit different emotional responses in a viewer, such as happiness or sadness, there is a need to measure these emotions along with visual saliency. This study was conducted to determine whether existing visual saliency models can also measure emotional saliency. A classical Graph-Based Visual Saliency (GBVS) model is used in the study. Results show lower saliency, i.e. fewer salient features, in sad videos, with a statistically significant difference (at the 0.05 level) between happy and sad videos and a large gap in mean saliency (76.57 versus 57.0), making these videos less emotionally salient. However, overall visual content does not capture emotional salience. The applied Graph-Based Visual Saliency model notably identified happy emotions but could not analyze sad emotions.
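
The abstract describes a group comparison of mean saliency between happy and sad clips. The GBVS reference implementation is a MATLAB toolbox (Harel et al.), so the sketch below uses OpenCV's spectral-residual static saliency as a stand-in model, not the authors' method; the directory layout, frame-sampling step, and 0-255 scaling are illustrative assumptions.

# Sketch: compare mean saliency between "happy" and "sad" clips, mirroring
# the comparison in the abstract. OpenCV's spectral-residual saliency is a
# stand-in for GBVS, which ships as a MATLAB toolbox. Paths are assumptions.
import glob

import cv2
import numpy as np
from scipy import stats


def mean_video_saliency(path, frame_step=10):
    """Average saliency over sampled frames of one video, scaled to 0-255."""
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    cap = cv2.VideoCapture(path)
    means, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % frame_step == 0:
            ok, sal_map = detector.computeSaliency(frame)  # map in [0, 1]
            if ok:
                means.append(255.0 * float(sal_map.mean()))
        idx += 1
    cap.release()
    return float(np.mean(means))


happy = [mean_video_saliency(p) for p in glob.glob("videos/happy/*.mp4")]
sad = [mean_video_saliency(p) for p in glob.glob("videos/sad/*.mp4")]

# Welch's t-test at the 0.05 level, as in the happy-vs-sad comparison above.
t_stat, p_value = stats.ttest_ind(happy, sad, equal_var=False)
print(f"happy mean={np.mean(happy):.2f}, sad mean={np.mean(sad):.2f}, p={p_value:.4f}")

Per-frame maps are averaged within each clip, so the test compares one mean saliency value per video, which matches the happy-versus-sad group means quoted in the abstract.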


2018 ◽  
Vol 2018 ◽  
pp. 1-16
Author(s):  
Ye Liang ◽  
Congyan Lang ◽  
Jian Yu ◽  
Hongzhe Liu ◽  
Nan Ma

The popularity of social networks has brought rapid growth in social images, which have become an increasingly important image type. One of the most distinctive attributes of social images is the tag. However, state-of-the-art methods fail to fully exploit tag information for saliency detection. This paper therefore focuses on salient region detection in social images using both image appearance features and image tag cues. First, a deep convolutional neural network is built that considers both appearance features and tag features. Second, tag-neighbor and appearance-neighbor saliency aggregation terms are added to the saliency model to enhance salient regions. The aggregation method depends on individual images and accounts for performance gaps appropriately. Finally, we have also constructed a new large dataset of challenging social images with pixel-wise saliency annotations to promote further research on, and evaluation of, visual saliency models. Extensive experiments show that the proposed method performs well not only on the new dataset but also on several state-of-the-art saliency datasets.
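
The abstract names tag-neighbor and appearance-neighbor aggregation terms but not their exact formulation. The following is a minimal sketch under stated assumptions: cosine similarity over tag vectors and CNN appearance features, top-k neighbor retrieval, and a convex combination of the query's own saliency map with similarity-weighted neighbor maps. All function names and the alpha/k parameters are hypothetical, not the authors' definitions.

# Minimal sketch of neighbor-based saliency aggregation. The exact terms in
# the paper are not given in the abstract; this assumes cosine similarity
# over tag vectors and CNN features, and a convex combination of the query's
# own map with similarity-weighted neighbor maps.
import numpy as np


def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


def top_k_neighbors(query_vec, vecs, k):
    """Indices and similarities of the k most similar items."""
    sims = np.array([cosine_sim(query_vec, v) for v in vecs])
    idx = np.argsort(sims)[::-1][:k]
    return idx, sims[idx]


def aggregate_saliency(query_map, query_tags, query_feat,
                       tag_vecs, feat_vecs, sal_maps,
                       k=5, alpha=0.6):
    """Refine a CNN saliency map with tag- and appearance-neighbor terms.

    alpha weighs the query's own map against the neighbor terms; the even
    split between tag and appearance neighbors is an assumption here.
    """
    def neighbor_term(query_vec, vecs):
        idx, sims = top_k_neighbors(query_vec, vecs, k)
        weights = sims / (sims.sum() + 1e-8)
        return sum(w * sal_maps[i] for w, i in zip(weights, idx))

    tag_term = neighbor_term(query_tags, tag_vecs)
    app_term = neighbor_term(query_feat, feat_vecs)
    fused = alpha * query_map + (1 - alpha) * 0.5 * (tag_term + app_term)
    return fused / (fused.max() + 1e-8)  # renormalize to [0, 1]

Weighting neighbors by similarity makes the aggregation adapt to each query image, in the spirit of the image-dependent aggregation the abstract describes.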


PLoS ONE ◽  
2014 ◽  
Vol 9 (12) ◽  
pp. e114539 ◽  
Author(s):  
Renwu Gao ◽  
Seiichi Uchida ◽  
Asif Shahab ◽  
Faisal Shafait ◽  
Volkmar Frinken
