Sparse Representation of the Human Vision Information and the Saliency Detection Algorithm

2014 ◽  
Vol 513-517 ◽  
pp. 3349-3353
Author(s):  
Ju Bo Jin ◽  
Yu Xi Liu

Representation and measurement are two important issues for saliency models. Unlike previous works that learn sparse features from large-scale natural image statistics, we propose to learn features from the short-term statistics of single images. For saliency measurement, we define a basic firing rate (BFR) for each sparse feature and then propose a feature activity rate (FAR) to measure bottom-up visual saliency. The proposed FAR measure is biologically plausible, easy to compute, and achieves satisfactory performance. Experiments on human eye-movement trajectories and psychological patterns demonstrate the effectiveness and robustness of the proposed method.
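As a rough illustration of the idea, the following sketch (assumed details, not the authors' code) learns a sparse dictionary from patches of a single image and scores each patch by how far its feature activations exceed their image-wide baseline firing rate.

```python
# Hypothetical sketch: sparse features from a single image's short-term
# statistics, with a BFR/FAR-style saliency score per patch.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def far_saliency(gray, patch=8, n_atoms=64):
    # 1. Short-term statistics: patches come from this single image only.
    patches = extract_patches_2d(gray, (patch, patch))
    X = patches.reshape(len(patches), -1).astype(np.float64)
    X -= X.mean(axis=1, keepdims=True)                    # remove DC component

    # 2. Learn sparse features on a subsample, then code every patch.
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       transform_algorithm="lasso_lars")
    rng = np.random.default_rng(0)
    sample = X[rng.choice(len(X), size=min(5000, len(X)), replace=False)]
    codes = np.abs(dico.fit(sample).transform(X))         # |activation| per feature

    # 3. "Basic firing rate": mean activation of each feature over the image.
    bfr = codes.mean(axis=0) + 1e-8

    # 4. "Feature activity rate": activation relative to its baseline,
    #    pooled over features to give one saliency value per patch.
    far = (codes / bfr).mean(axis=1)

    # 5. Reassemble per-patch scores into a (coarse) saliency map.
    h, w = gray.shape[0] - patch + 1, gray.shape[1] - patch + 1
    return far.reshape(h, w)
```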

2012 ◽  
Vol 239-240 ◽  
pp. 811-815
Author(s):  
Zhi Hai Sun ◽  
Teng Song ◽  
Wen Hui Zhou ◽  
Hua Zhang

Visual saliency detection has become an important bridge between computer vision and digital image processing. Most recent methods build computational models based on color alone, and therefore struggle with cluttered and textured backgrounds. This paper proposes a novel salient object detection algorithm that integrates region color contrast with histograms of oriented gradients (HOG). Extensive experiments show that our algorithm outperforms other state-of-the-art saliency methods, yielding higher precision, better recall, and lower mean absolute error.
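A minimal sketch of the general idea follows, combining per-region color contrast with a HOG-like gradient-orientation contrast; the region model, features, and weights here are assumptions, not the paper's implementation.

```python
# Hypothetical sketch: score each superpixel by its color contrast and its
# gradient-orientation-histogram contrast against all other regions.
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2lab, rgb2gray

def region_saliency(img_rgb, n_segments=200, n_bins=9, w_color=0.7):
    labels = slic(img_rgb, n_segments=n_segments, start_label=0)
    n = labels.max() + 1
    lab = rgb2lab(img_rgb)

    # Gradient magnitude and unsigned orientation for a HOG-like descriptor.
    gray = rgb2gray(img_rgb)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    ang = (np.arctan2(gy, gx) % np.pi) / np.pi * n_bins

    color_feat, hog_feat = np.zeros((n, 3)), np.zeros((n, n_bins))
    for i in range(n):
        m = labels == i
        color_feat[i] = lab[m].mean(axis=0)                 # mean Lab color
        hist, _ = np.histogram(ang[m], bins=n_bins, range=(0, n_bins),
                               weights=mag[m])
        hog_feat[i] = hist / (hist.sum() + 1e-8)            # normalized histogram

    # Contrast of each region against all others in both feature spaces.
    dc = np.linalg.norm(color_feat[:, None] - color_feat[None], axis=2).mean(1)
    dh = np.linalg.norm(hog_feat[:, None] - hog_feat[None], axis=2).mean(1)
    score = w_color * dc / (dc.max() + 1e-8) + (1 - w_color) * dh / (dh.max() + 1e-8)
    return score[labels]                                    # per-pixel saliency map
```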


2021 ◽  
Author(s):  
Aisha Ajmal

The human vision system (HVS) collects a huge amount of information and applies a variety of biological mechanisms to select the relevant parts. Computational models based on these mechanisms are used in machine vision to select interesting or salient regions in images for applications in scene analysis, object detection, and object tracking.

Various object tracking techniques have been proposed, often relying on complex processing methods. Attention-based computational models, on the other hand, have shown significant performance advantages in various applications. We hypothesise that integrating a visual attention model with object tracking can improve performance by reducing detection complexity in challenging conditions such as illumination change, occlusion, and camera motion.

The overall objective of this thesis is to develop a visual-saliency-based object tracker that alternates between targets using a measure of current uncertainty derived from a Kalman filter. The thesis demonstrates the effectiveness of this tracker in terms of mean square error, compared against a tracker without the uncertainty mechanism.

Specific colour spaces can contribute to the identification of salient regions. We compare opponencies derived from the non-uniform red, green and blue (RGB) space with the hue, saturation and value (HSV) colour space on video data. The main motivation for this comparison is to improve the quality of saliency detection in challenging situations such as lighting changes. Precision-recall curves are used to compare the colour spaces with pyramidal and non-pyramidal saliency models.
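A minimal sketch of the uncertainty-driven target selection described above, assuming a constant-velocity Kalman filter per target and a caller-supplied measurement function (e.g. a saliency peak near the prediction); these details are assumptions, not the thesis implementation.

```python
# Hypothetical sketch: attend to the target whose Kalman filter is currently
# most uncertain (largest positional covariance), then update only that filter
# with a saliency-derived measurement.
import numpy as np

class KalmanTrack:
    def __init__(self, xy, q=1e-2, r=1.0):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])            # [x, y, vx, vy]
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0  # constant velocity
        self.H = np.eye(2, 4)                                  # position is observed
        self.Q, self.R = q * np.eye(4), r * np.eye(2)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x += K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

    def uncertainty(self):
        return np.trace(self.P[:2, :2])                        # positional uncertainty

def step(tracks, measure_fn):
    """Predict all tracks, but spend the (expensive) saliency-based detection
    only on the most uncertain one."""
    for t in tracks:
        t.predict()
    target = max(tracks, key=lambda t: t.uncertainty())
    target.update(np.asarray(measure_fn(target.x[:2])))        # hypothetical callback
    return target
```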


Author(s):  
Xiaoshuai Sun ◽  
Hongxun Yao ◽  
Rongrong Ji ◽  
Pengfei Xu ◽  
Xianming Liu ◽  
...  

Symmetry ◽  
2019 ◽  
Vol 11 (3) ◽  
pp. 296 ◽  
Author(s):  
Md. Layek ◽  
A. Uddin ◽  
Tuyen Le ◽  
TaeChoong Chung ◽  
Eui-Nam Huh

Objective image quality assessment (IQA) is imperative in the current multimedia-intensive world, in order to assess the visual quality of an image at close to a human level of ability. Many factors, such as color intensity, structure, sharpness, contrast, and the presence of objects, draw human attention to an image. Psychological vision research suggests that human vision is biased towards the center of an image and of the display screen. As a result, if the center part contains visually salient information, it draws even more attention, and any distortion there is perceived more strongly than in other parts. To the best of our knowledge, previous IQA methods have not considered this fact. In this paper, we propose a full-reference image quality assessment (FR-IQA) approach using visual saliency and contrast, giving extra weight to the center by increasing the sensitivity of the similarity maps in that region. We evaluated our method on three large-scale benchmark databases widely used by IQA researchers (TID2008, CSIQ and LIVE), comprising a total of 3345 distorted images with 28 different kinds of distortion. Our method is compared with 13 state-of-the-art approaches; the comparison reveals a stronger correlation of our method with human-evaluated scores. The predicted quality score is consistent for distortion-specific as well as distortion-independent cases. Moreover, its fast processing makes it applicable to real-time applications. The MATLAB code is publicly available online for testing the algorithm.
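The center-bias pooling can be illustrated with the following sketch, assuming an SSIM-style local similarity map and a Gaussian center weight; the exact maps and weighting in the paper may differ.

```python
# Hypothetical sketch: pool a local similarity map between reference and
# distorted images with a Gaussian center-bias weight, so that distortions
# near the image center contribute more to the final quality score.
import numpy as np

def local_similarity(ref, dst, c=1e-3):
    # SSIM-style pointwise similarity of two feature maps (e.g. saliency or
    # gradient-magnitude maps of the reference and distorted images).
    return (2 * ref * dst + c) / (ref ** 2 + dst ** 2 + c)

def center_weighted_score(sim_map, sigma_ratio=0.3):
    h, w = sim_map.shape
    yy, xx = np.mgrid[0:h, 0:w]
    sigma = sigma_ratio * min(h, w)
    weight = np.exp(-(((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (2 * sigma ** 2)))
    return float((sim_map * weight).sum() / weight.sum())
```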


2013 ◽  
Vol 2013 ◽  
pp. 1-9
Author(s):  
Yuantao Chen ◽  
Weihong Xu ◽  
Fangjun Kuang ◽  
Shangbing Gao

Image segmentation aimed at producing a high-quality visual saliency map depends heavily on the available visual saliency metrics, which mostly yield only coarse saliency maps; a coarse saliency map in turn degrades the segmentation results. This paper presents a randomized visual saliency detection algorithm that quickly generates a detailed saliency map at the same size as the original input image, and the resulting maps meet the real-time requirements of content-based image scaling. For fast randomized saliency region detection in video, the algorithm requires only a small amount of memory to produce a detailed visual saliency map. The presented results show that using this saliency map in the subsequent segmentation process yields near-ideal segmentation results.
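As a hedged illustration of the intended use case only (not the paper's randomized algorithm), a pixel-accurate saliency map can seed segmentation with a simple threshold-and-clean-up step.

```python
# Hypothetical illustration: turn a dense saliency map into a segmentation mask.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects
from scipy.ndimage import binary_fill_holes

def saliency_to_mask(saliency, min_size=64):
    mask = saliency > threshold_otsu(saliency)       # global adaptive threshold
    mask = remove_small_objects(mask, min_size)      # drop speckle responses
    return binary_fill_holes(mask)                   # solid object regions
```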

