image saliency
Recently Published Documents


TOTAL DOCUMENTS

147
(FIVE YEARS 55)

H-INDEX

13
(FIVE YEARS 4)

2022 ◽  
Vol 2022 ◽  
pp. 1-9
Author(s):  
Songshang Zou ◽  
Wenshu Chen ◽  
Hao Chen

Image saliency object detection can rapidly extract useful information from image scenes for further analysis. However, traditional saliency detection techniques still fail to preserve the edges of salient targets well. Convolutional neural networks (CNNs) can extract highly general deep features from images and effectively express their essential feature information. This paper designs a model that applies a CNN to deep saliency object detection. Through multilayer continuous feature extraction, refinement of layered boundaries, and fusion of initial saliency features, it efficiently optimizes the edges of foreground objects and realizes highly efficient image saliency detection. Experimental results show that the proposed method achieves more robust saliency detection and adapts well to complex backgrounds.
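The multilayer fusion step described above can be sketched in simplified form. The sketch below is illustrative NumPy only, not the paper's CNN: toy arrays stand in for per-layer saliency maps (deeper layers coarser), which are upsampled to a common resolution and fused by a weighted sum. All function names, sizes, and weights are assumptions.

```python
import numpy as np

def normalize(m):
    # Scale a map to [0, 1]; flat maps become all zeros.
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def upsample(m, size):
    # Nearest-neighbour upsampling of a square map to size x size.
    factor = size // m.shape[0]
    return np.kron(m, np.ones((factor, factor)))

def fuse_saliency(layer_maps, size=32, weights=None):
    # Resize per-layer saliency maps to a common resolution and
    # combine them with a weighted sum (multi-level fusion).
    weights = weights or [1.0 / len(layer_maps)] * len(layer_maps)
    fused = np.zeros((size, size))
    for w, m in zip(weights, layer_maps):
        fused += w * normalize(upsample(m, size))
    return normalize(fused)

# Toy per-layer maps at decreasing resolution (deeper = coarser).
rng = np.random.default_rng(0)
maps = [rng.random((32, 32)), rng.random((16, 16)), rng.random((8, 8))]
sal = fuse_saliency(maps)  # fused map, values in [0, 1]
```

A real model would learn the fusion weights and refine boundaries with further convolutional layers; the fixed weighted sum here only illustrates the data flow.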


2021 ◽  
Vol 33 (11) ◽  
pp. 1688-1697
Author(s):  
Zheng Chen ◽  
Xiaoli Zhao ◽  
Jiaying Zhang ◽  
Mingchen Yin ◽  
Hanchen Ye ◽  
...  

2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Andrea Zangrossi ◽  
Giorgia Cona ◽  
Miriam Celli ◽  
Marco Zorzi ◽  
Maurizio Corbetta

Abstract When looking at visual images, the eyes move to the most salient and behaviourally relevant objects. Saliency and semantic information significantly explain where people look. Less is known about the spatiotemporal properties of eye movements (i.e., how people look). We show that three latent variables explain 60% of the eye movement dynamics of more than a hundred observers looking at hundreds of different natural images. The first component, explaining 30% of variability, loads on fixation duration and does not relate to image saliency or semantics; it approximates a power-law distribution of gaze steps, an intrinsic dynamic measure, and identifies observers with two viewing styles: static and dynamic. Notably, these viewing styles were also identified when observers looked at a blank screen. These results support the importance of endogenous processes such as intrinsic dynamics in explaining the spatiotemporal properties of eye movements.
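The power-law distribution of gaze steps mentioned above can be quantified with the standard continuous maximum-likelihood estimator of the exponent, alpha_hat = 1 + n / Σ ln(x / xmin). The sketch below applies it to synthetic step amplitudes, not the study's data; names and parameters are assumptions.

```python
import numpy as np

def power_law_alpha(steps, xmin=1.0):
    # Continuous MLE of the power-law exponent alpha for step
    # amplitudes >= xmin: alpha_hat = 1 + n / sum(ln(x / xmin)).
    x = np.asarray([s for s in steps if s >= xmin], dtype=float)
    return 1.0 + len(x) / np.log(x / xmin).sum()

# Synthetic gaze steps drawn from a power law with alpha = 2.0 by
# inverse-transform sampling: x = xmin * (1 - u) ** (-1 / (alpha - 1)).
rng = np.random.default_rng(0)
u = rng.random(100_000)
steps = (1.0 - u) ** (-1.0 / (2.0 - 1.0))
alpha_hat = power_law_alpha(steps)  # should recover roughly 2.0
```

Comparing such fits across observers (or between image viewing and a blank screen) is one way to operationalize "static" versus "dynamic" viewing styles.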


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Taylor R. Hayes ◽  
John M. Henderson

Abstract Deep saliency models represent the current state-of-the-art for predicting where humans look in real-world scenes. However, for deep saliency models to inform cognitive theories of attention, we need to know how deep saliency models prioritize different scene features to predict where people look. Here we open the black box of three prominent deep saliency models (MSI-Net, DeepGaze II, and SAM-ResNet) using an approach that models the association between attention, deep saliency model output, and low-, mid-, and high-level scene features. Specifically, we measured the association between each deep saliency model and low-level image saliency, mid-level contour symmetry and junctions, and high-level meaning by applying a mixed effects modeling approach to a large eye movement dataset. We found that all three deep saliency models were most strongly associated with high-level and low-level features, but exhibited qualitatively different feature weightings and interaction patterns. These findings suggest that prominent deep saliency models are primarily learning image features associated with high-level scene meaning and low-level image saliency and highlight the importance of moving beyond simply benchmarking performance.


Author(s):  
Biao Lu ◽  
Nannan Liang ◽  
Chengfang Tan ◽  
Zhenggao Pan

Traditional salient object detection algorithms rely on low-level image features and prior knowledge. Building on absorbing Markov chains, AdaBoost, and conditional random fields, a saliency detection method is proposed that effectively reduces the error caused by salient targets approaching the image edge. An absorbing Markov chain model first generates the initial saliency map: the transition probability between nodes is defined by the colour and texture differences between superpixels, and the absorption time of each transient node is taken as the saliency value of its superpixel. A strong-classifier optimization model based on the AdaBoost iterative algorithm is then designed; the initial saliency map is processed by this classifier to obtain an optimized saliency map that highlights global contrast. Finally, a conditional random field is used to segment and extract the salient region from the final saliency map. The results show that the salient regions detected by this method are prominent, with clear overall contours and high resolution, and that the method also performs better on the precision-recall curve and histogram.
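The absorption-time computation at the heart of the absorbing Markov chain step can be sketched directly: with Q the transient-to-transient block of the row-stochastic transition matrix, the expected absorption times are t = (I − Q)⁻¹ · 1, and longer times indicate higher saliency. The toy affinity matrix and boundary choice below are assumptions for illustration, not the paper's construction.

```python
import numpy as np

def absorption_times(affinity, absorbing):
    # Build a row-stochastic transition matrix from superpixel
    # affinities, treat boundary superpixels as absorbing states, and
    # solve t = (I - Q)^{-1} * 1 for the transient nodes, where Q is
    # the transient-to-transient block of the transition matrix.
    P = affinity / affinity.sum(axis=1, keepdims=True)
    transient = [i for i in range(len(P)) if i not in absorbing]
    Q = P[np.ix_(transient, transient)]
    t = np.linalg.solve(np.eye(len(Q)) - Q, np.ones(len(Q)))
    return transient, t  # longer absorption time => more salient

# Toy 5-superpixel affinity (stand-in for colour/texture similarity);
# nodes 0 and 4 lie on the image boundary and absorb the random walk.
A = np.array([[1.0, 0.8, 0.1, 0.1, 0.1],
              [0.8, 1.0, 0.6, 0.1, 0.1],
              [0.1, 0.6, 1.0, 0.6, 0.1],
              [0.1, 0.1, 0.6, 1.0, 0.8],
              [0.1, 0.1, 0.1, 0.8, 1.0]])
nodes, t = absorption_times(A, absorbing={0, 4})
```

In this toy chain the central superpixel (node 2), farthest from the absorbing boundary, takes the longest to be absorbed and so receives the highest saliency value, matching the intuition that boundary-adjacent regions are background.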


2021 ◽  
Author(s):  
Taylor R. Hayes ◽  
John M. Henderson

Abstract Deep saliency models represent the current state-of-the-art for predicting where humans look in real-world scenes. However, for deep saliency models to inform cognitive theories of attention, we need to know how deep saliency models predict where people look. Here we open the black box of deep saliency models using an approach that models the association between the output of three prominent deep saliency models (MSI-Net, DeepGaze II, and SAM-ResNet) and low-, mid-, and high-level scene features. Specifically, we measured the association between each deep saliency model and low-level image saliency, mid-level contour symmetry and junctions, and high-level meaning by applying a mixed effects modeling approach to a large eye movement dataset. We found that despite different architectures, training regimens, and loss functions, all three deep saliency models were most strongly associated with high-level meaning. These findings suggest that deep saliency models are primarily learning image features associated with scene meaning.
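The association analysis described in these abstracts can be illustrated with a deliberately simplified, fixed-effects-only stand-in: plain least squares on standardized variables rather than the mixed-effects models the authors actually fit (which add per-scene random effects). All data below are synthetic, and the coefficients are chosen so that the high-level feature dominates by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000  # grid cells pooled across toy scenes

# Synthetic low-, mid-, and high-level feature maps, flattened.
low, mid, high = rng.random((3, n))
# Toy attention signal built to depend mostly on the high-level feature.
attention = 0.2 * low + 0.1 * mid + 0.7 * high \
    + 0.05 * rng.standard_normal(n)

def zscore(v):
    # Standardize so the fitted coefficients are comparable in scale.
    return (v - v.mean()) / v.std()

X = np.column_stack([zscore(low), zscore(mid), zscore(high)])
beta, *_ = np.linalg.lstsq(X, zscore(attention), rcond=None)
# beta[2] (high-level) comes out largest, by construction.
```

A mixed-effects version would additionally let the intercept and slopes vary by scene (e.g. with statsmodels' MixedLM), which is what lets such analyses separate feature effects from scene-level idiosyncrasies.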

