Visual Security Assessment via Saliency-Weighted Structure and Orientation Similarity for Selective Encrypted Images

2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Zhengguo Wu ◽  
Kai Zhang ◽  
Yannan Ren ◽  
Jing Li ◽  
Jiande Sun ◽  
...  

Selective encryption has been widely used in image privacy protection. Visual security assessment is necessary for judging the effectiveness and practicability of image encryption methods, and a series of studies have addressed this problem. However, these methods do not take perceptual factors into account. In this paper, we propose a new visual security assessment (VSA) method based on saliency-weighted structure and orientation similarity. Considering that human visual perception is sensitive to the characteristics of selectively encrypted images, we extract structure and orientation feature maps and then perform similarity measurements on them to generate structure and orientation similarity maps. Next, we compute the saliency map of the original image, and a simple saliency-based pooling strategy combines these measurements into the final visual security score. Extensive experiments on two public encryption databases demonstrate the superiority and robustness of the proposed VSA compared with existing state-of-the-art methods.
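The pooling step described above can be sketched in a few lines of NumPy. This is a minimal illustration under assumptions of my own (per-pixel similarity maps already computed, saliency used directly as the pooling weight), not the paper's exact formulation:

```python
import numpy as np

def saliency_weighted_pool(similarity_maps, saliency):
    """Pool per-pixel similarity maps into a single score, weighting
    each pixel by the saliency of the original image."""
    w = saliency / (saliency.sum() + 1e-12)   # normalize weights to sum to 1
    sim = np.mean(similarity_maps, axis=0)    # average structure/orientation maps
    return float((w * sim).sum())

# toy 4x4 example: perfect similarity everywhere yields a score of ~1.0
sim_struct = np.ones((4, 4))
sim_orient = np.ones((4, 4))
sal = np.random.rand(4, 4)
score = saliency_weighted_pool([sim_struct, sim_orient], sal)
```

Because the weights sum to one, the score stays on the same scale as the similarity maps, so a fully undistorted (perfectly similar) image scores 1.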

2014 ◽  
Vol 1044-1045 ◽  
pp. 1049-1052 ◽  
Author(s):  
Chin Chen Chang ◽  
I Ta Lee ◽  
Tsung Ta Ke ◽  
Wen Kai Tai

Common methods for reducing image size include scaling and cropping. However, both approaches introduce quality problems in the reduced images. In this paper, we propose an image-reduction algorithm that separates the main objects from the background. First, we extract two feature maps from an input image: an enhanced visual saliency map and an improved gradient map. We then integrate these two feature maps into an importance map. Finally, we generate the target image using the importance map. The proposed approach obtains the desired results for a wide range of images.
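The map-integration step might look like the following sketch, where an equal-weight sum of the normalized saliency and gradient maps is an assumption on my part (the paper's "enhanced" and "improved" variants are not specified here):

```python
import numpy as np

def normalize(m):
    """Rescale a map to [0, 1]; a constant map becomes all zeros."""
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def importance_map(saliency, image):
    """Combine a visual saliency map with a gradient-magnitude map
    into a single importance map (equal weighting assumed)."""
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)                   # gradient magnitude
    return 0.5 * normalize(saliency) + 0.5 * normalize(grad)

img = np.outer(np.arange(8.0), np.ones(8))   # simple vertical ramp image
sal = np.zeros((8, 8))
sal[4, 4] = 1.0                              # one salient pixel
imp = importance_map(sal, img)
```

Pixels that are both salient and edge-rich receive the highest importance, which is what drives the object/background separation.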


2016 ◽  
Vol 2016 ◽  
pp. 1-18 ◽  
Author(s):  
Qiangqiang Zhou ◽  
Weidong Zhao ◽  
Lin Zhang ◽  
Zhicheng Wang

Saliency detection is an important preprocessing step in many application fields, such as computer vision, robotics, and graphics, that reduces computational cost by focusing on significant positions and neglecting nonsignificant ones in the scene. Unlike most previous methods, which mainly utilize the contrast of low-level features and fuse the various feature maps in a simple linear weighting form, we propose a novel salient object detection algorithm that takes both background and foreground cues into consideration. It integrates bottom-up coarse salient-region extraction and a top-down background measure via boundary-label propagation into a unified optimization framework to produce a refined saliency detection result. The coarse saliency map fuses three components: a local contrast map, which accords well with psychological laws; a global frequency prior map; and a global color distribution map. To form the background map, we first construct an affinity matrix and select nodes lying on the image border as labels representing the background, and then carry out a propagation to generate the regional background map. The proposed model is evaluated on four datasets. As the experiments demonstrate, our method outperforms most existing saliency detection models with robust performance.
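The boundary-label propagation can be sketched with a manifold-ranking-style diffusion over the affinity matrix. The closed-form solve and the toy chain graph below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def propagate_background(W, boundary_idx, alpha=0.5):
    """Diffuse background labels from boundary nodes through an
    affinity matrix W; returns a per-node backgroundness score."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))          # symmetrically normalized affinity
    y = np.zeros(len(W))
    y[boundary_idx] = 1.0                    # border nodes act as background labels
    f = np.linalg.solve(np.eye(len(W)) - alpha * S, y)
    return f / f.max()

# toy 4-node chain graph; node 0 lies on the image border
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
bg = propagate_background(W, [0])
```

Scores decay with graph distance from the labeled border nodes, so regions weakly connected to the boundary (likely foreground) receive low backgroundness.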


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Dario Zanca ◽  
Marco Gori ◽  
Stefano Melacci ◽  
Alessandra Rufa

Visual attention refers to the human brain’s ability to select relevant sensory information for preferential processing, improving performance in visual and cognitive tasks. It proceeds in two phases: one in which visual feature maps are acquired and processed in parallel, and another in which the information from these maps is merged in order to select a single location to be attended for further, more complex computations and reasoning. Its computational description is challenging, especially when the temporal dynamics of the process are taken into account. Numerous methods to estimate saliency have been proposed over the last three decades. They achieve almost perfect performance in estimating saliency at the pixel level, but the way they generate shifts in visual attention depends entirely on winner-take-all (WTA) circuitry, which the biological hardware implements in order to select the location of maximum saliency toward which to direct overt attention. In this paper, we propose a gravitational model of attentional shifts, in which every single feature acts as an attractor and the shifts result from the joint effects of the attractors. In this framework, the assumption of a single, centralized saliency map is no longer necessary, though still plausible. Quantitative results on two large image datasets show that this model predicts shifts more accurately than winner-take-all.
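A toy reading of the gravitational idea: treat every active feature pixel as a point mass pulling the attention point, and integrate the resulting motion. All constants (softening term, time step, step count) are illustrative assumptions, not the paper's model:

```python
import numpy as np

def gravitational_shift(start, feature_map, steps=50, dt=0.1, eps=1.0):
    """Move an attention point under the joint inverse-square pull of
    all nonzero feature-map locations."""
    ys, xs = np.nonzero(feature_map)
    attractors = np.stack([ys, xs], axis=1).astype(float)
    masses = feature_map[ys, xs]
    pos = np.array(start, float)
    vel = np.zeros(2)
    for _ in range(steps):
        d = attractors - pos                          # vectors toward attractors
        r2 = (d ** 2).sum(axis=1) + eps               # softened squared distance
        accel = (masses[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
        vel += dt * accel
        pos += dt * vel
    return pos

fmap = np.zeros((20, 20))
fmap[15, 15] = 5.0                                    # a single strong attractor
end = gravitational_shift((2.0, 2.0), fmap)
```

With several attractors, the trajectory bends toward their joint pull rather than jumping to a single argmax, which is the key contrast with WTA selection.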


2010 ◽  
Vol 53 (1) ◽  
pp. 75-95 ◽  
Author(s):  
Jing Sun ◽  
Zhengquan Xu ◽  
Jin Liu ◽  
Ye Yao

2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Reza Eghdam ◽  
Reza Ebrahimpour ◽  
Iman Zabbah ◽  
Sajjad Zabbah

Local contrasts attract human attention to different areas of an image. Studies have shown that orientation, color, and intensity are basic visual features whose contrasts attract our attention. Since these features lie in different modalities, their contributions to the attraction of human attention are not easily comparable. In this study, we investigated the importance of these three features in attracting human attention in synthetic and natural images. Choosing 100% detectable contrast in each modality, we studied the competition between the different features. Psychophysical results showed that, although single features can be detected easily in all trials, when features are presented simultaneously in a stimulus, orientation always attracts the subjects’ attention. In addition, computational results showed that the orientation feature map is more informative about the pattern of human saccades in natural images. Finally, using optimization algorithms, we quantified the impact of each feature map on the construction of the final saliency map.
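The final quantification step amounts to finding the combination weights that best explain the observed fixations. A least-squares fit is one simple stand-in for the optimization algorithms mentioned; the synthetic maps and weights below are invented for illustration:

```python
import numpy as np

np.random.seed(0)
# three synthetic feature maps, one per modality
orient, color, intensity = (np.random.rand(8, 8) for _ in range(3))

# a synthetic fixation map driven mostly by orientation
fix = 0.7 * orient + 0.2 * color + 0.1 * intensity

# least-squares estimate of each feature map's contribution
A = np.stack([orient.ravel(), color.ravel(), intensity.ravel()], axis=1)
w, *_ = np.linalg.lstsq(A, fix.ravel(), rcond=None)
```

When the fixation map really is a linear mixture of the feature maps, the fit recovers the mixing weights, so the largest weight identifies the dominant modality.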


Author(s):  
Mara Stadler ◽  
Philipp Doebler ◽  
Barbara Mertins ◽  
Renate Delucchi Danhier

This paper presents a model that allows group comparisons of gaze behavior while watching dynamic video stimuli. The model is based on the approach of Coutrot and Guyader (2017) and forms a master saliency map from linear combinations of feature maps. The feature maps in the model are, for example, the dynamically salient contents of a video stimulus or predetermined areas of interest. The model takes into account temporal aspects of the stimuli, a crucial difference from other common models. The multi-group extension of the model introduced here allows one to obtain relative importance plots, which visualize the effect of a specific stimulus feature on the attention and visual behavior of two or more experimental groups. These plots are interpretable summaries of data with high spatial and temporal resolution. This approach differs from many common methods for comparing gaze behavior between natural groups, which usually only include single-dimensional features such as the duration of fixation on a particular part of the stimulus. The method is illustrated by contrasting a sample of persons with particularly high cognitive abilities (high achievement on IQ tests) with a control group on a psycholinguistic task on the conceptualization of motion events. In this example, we find no substantive differences in relative importance, but more exploratory gaze behavior in the highly gifted group. The code, videos, and eye-tracking data used for this study are available online.
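The multi-group comparison can be sketched as fitting, per group, nonnegative weights that combine the stimulus feature maps into that group's gaze map, then normalizing the weights into relative importances. The feature names, group gaze maps, and fitting procedure are all illustrative assumptions:

```python
import numpy as np

np.random.seed(1)

def relative_importance(feature_maps, gaze_map):
    """Nonnegative weights (summing to 1) that best reconstruct a
    group's gaze map from the stimulus feature maps."""
    A = np.stack([m.ravel() for m in feature_maps], axis=1)
    w, *_ = np.linalg.lstsq(A, gaze_map.ravel(), rcond=None)
    w = np.clip(w, 0.0, None)                 # keep weights nonnegative
    return w / w.sum()

motion = np.random.rand(16, 16)               # dynamically salient content
aoi = np.random.rand(16, 16)                  # predetermined area of interest

# synthetic group gaze maps: group A attends motion more than group B
group_a_gaze = 0.8 * motion + 0.2 * aoi
group_b_gaze = 0.4 * motion + 0.6 * aoi
ri_a = relative_importance([motion, aoi], group_a_gaze)
ri_b = relative_importance([motion, aoi], group_b_gaze)
```

Plotting such per-group weight vectors over time is essentially what a relative importance plot summarizes.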


Author(s):  
Ping Jiang ◽  
Tao Gao

In this paper, an improved paper-defect detection method based on a computational model of the visual attention mechanism is presented. First, multi-scale feature maps are extracted by linear filtering. Second, comparative maps are obtained by applying a center-surround difference operator. Third, the saliency map is obtained by combining conspicuity maps, which are in turn obtained by combining the multi-scale comparative maps. Finally, the seed point for watershed segmentation is determined by competition among salient points in the saliency map, and the defect regions are segmented from the background. Experimental results demonstrate the effectiveness of the approach for paper-defect detection.
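The center-surround step can be sketched as the absolute difference between a fine-scale and a coarse-scale smoothing of the image. A box filter stands in here for the Gaussian pyramid typical of Itti-style models, and the scale radii are assumptions:

```python
import numpy as np

def box_blur(img, k):
    """Box filter of width 2k+1 with edge padding (a stand-in for the
    Gaussian pyramid levels of Itti-style attention models)."""
    pad = np.pad(img, k, mode='edge')
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += pad[k + dy:k + dy + h, k + dx:k + dx + w]
    return out / (2 * k + 1) ** 2

def center_surround(img, c=1, s=4):
    """Comparative map: |center (fine) response - surround (coarse) response|."""
    return np.abs(box_blur(img, c) - box_blur(img, s))

img = np.zeros((32, 32))
img[16, 16] = 1.0                 # a point 'defect' on otherwise uniform paper
cs = center_surround(img)
```

A small bright defect survives the fine-scale blur but is washed out at the coarse scale, so the comparative map peaks at the defect, which is exactly what makes it a usable watershed seed.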


