RETRIEVAL OF COMPLEX IMAGES USING VISUAL SALIENCY GUIDED COGNITIVE CLASSIFICATION

2020 ◽  
Vol 2 (2) ◽  
pp. 102-109
Author(s):  
Dr. Vijayakumar T. ◽  
Vinothkanna R.

Multimedia data storage is preferred over traditional textual storage because multimedia carries rich meaning in a concise form. However, efficient information retrieval is a crucial factor in such storage. This paper presents a cognitive-classification-based, visual-saliency-guided model for the efficient retrieval of information from multimedia data storage. The Itti visual saliency model is used to generate an overall saliency map by integrating color, intensity, and direction maps. Multi-feature fusion paradigms provide a clear description of the image pattern. The description proceeds in two stages: complexity estimation based on cognitive load, and classification of complexity at a cognitive level. The image retrieval system is completed by integrating a group sparse logistic regression model. When tested on multiple databases, the proposed system outperforms state-of-the-art baselines in complex scenarios.
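The Itti-style fusion step described above can be sketched roughly as follows: each conspicuity map is normalized and the results are averaged into one overall saliency map. This is a minimal illustration; the function names, equal weighting, and min–max normalization are assumptions, not details taken from the paper.

```python
import numpy as np

def normalize_map(m):
    # Rescale a feature map to [0, 1]; a flat map becomes all zeros.
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def fuse_saliency(color_map, intensity_map, direction_map):
    # Itti-style fusion: normalize each conspicuity map, then
    # average them into a single overall saliency map.
    maps = [normalize_map(np.asarray(m, dtype=float))
            for m in (color_map, intensity_map, direction_map)]
    return sum(maps) / len(maps)
```

In the full Itti model each map is itself built from multi-scale center–surround differences; the averaging shown here is only the final integration stage.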

2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Xiaochun Zou ◽  
Xinbo Zhao ◽  
Yongjia Yang ◽  
Na Li

This paper presents a learning-based visual saliency model for detecting diagnostic diabetic macular edema (DME) regions of interest (RoIs) in retinal images. The method models the cognitive process of visual selection of relevant regions that arises during an ophthalmologist's examination of an image. To record this process, we collected eye-tracking data from 10 ophthalmologists on 100 images and used this database for training and testing. From the analysis, two properties (Feature Property and Position Property) are derived and combined by a simple intersection operation to obtain a saliency map. The Feature Property is implemented with a support vector machine (SVM) using the diagnosis as supervision; the Position Property is implemented by statistical analysis of the training samples. The technique learns the ophthalmologists' visual-behavior preferences while simultaneously accounting for feature uniqueness. The method was evaluated using three popular saliency evaluation scores (AUC, EMD, and SS) and three quality measurements (classical sensitivity, specificity, and Youden's J statistic). It outperforms 8 state-of-the-art saliency models and 3 salient-region detection approaches devised for natural images. Furthermore, the model detects DME RoIs in retinal images without sophisticated image processing such as region segmentation.
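The "simple intersection operation" combining the two property maps could plausibly be realized as an elementwise minimum of the per-pixel scores; the abstract does not specify the operator, so the reading below is an assumption, as are the function and argument names.

```python
import numpy as np

def combine_properties(feature_map, position_map):
    # Intersection of the Feature Property map (e.g. per-pixel SVM
    # scores) and the Position Property map (fixation statistics):
    # a pixel is salient only if BOTH maps rate it highly, which an
    # elementwise minimum captures.
    f = np.asarray(feature_map, dtype=float)
    p = np.asarray(position_map, dtype=float)
    return np.minimum(f, p)
```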


Author(s):  
Ke Zhang ◽  
Xinbo Zhao ◽  
Rong Mo

This paper presents a bio-inspired visual saliency model. The end-stopping mechanism of the primary visual cortex is introduced to extract features that represent contour information of latent salient objects, such as corners, line intersections, and line endpoints; these are combined with brightness, color, and orientation features to form the final saliency map. The model is an analog of the processing of visual signals along the pathway from the retina, through the lateral geniculate nucleus (LGN), to the primary visual cortex (V1). First, following the characteristics of the retina and LGN, an input image is decomposed into brightness and opponent-color channels. Then, simple cells are simulated with 2D Gabor filters, and the amplitude of the Gabor response represents the response of a complex cell. Finally, the response of an end-stopped cell is obtained by multiplying the responses of two complex cells with different orientations, and the outputs of V1 and LGN constitute a bottom-up saliency map. Experimental results on public datasets show that the model accurately predicts human fixations and achieves state-of-the-art performance among bottom-up saliency models.
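The simple-cell / complex-cell / end-stopped-cell chain described above can be sketched with numpy alone: a complex Gabor kernel models the simple-cell quadrature pair, its response amplitude models the complex cell, and the product of two differently oriented complex-cell responses models the end-stopped cell. The filter size, wavelength, and bandwidth below are illustrative defaults, not the paper's parameters.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    # Complex 2D Gabor filter (quadrature pair), modelling a V1 simple cell.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * xr / wavelength)

def convolve_same(image, kernel):
    # FFT-based 'same'-size convolution.
    shape = (image.shape[0] + kernel.shape[0] - 1,
             image.shape[1] + kernel.shape[1] - 1)
    full = np.fft.ifft2(np.fft.fft2(image, shape) * np.fft.fft2(kernel, shape))
    r0, c0 = kernel.shape[0] // 2, kernel.shape[1] // 2
    return full[r0:r0 + image.shape[0], c0:c0 + image.shape[1]]

def end_stopped_response(gray, theta1=0.0, theta2=np.pi / 2):
    # Complex-cell response = amplitude of the Gabor response;
    # end-stopped response = product of two complex cells with
    # different preferred orientations (peaks at corners/endpoints).
    c1 = np.abs(convolve_same(gray, gabor_kernel(9, 4.0, theta1, 2.0)))
    c2 = np.abs(convolve_same(gray, gabor_kernel(9, 4.0, theta2, 2.0)))
    return c1 * c2
```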


2021 ◽  
Vol 8 (1) ◽  
pp. 110-116
Author(s):  
Dannina Kishore ◽  
Chanamallu Srinivasa Rao

In the last few years, Content-Based Image Retrieval (CBIR) has received wide attention. Compared to text-based retrieval, the contents of an image carry more information, enabling more effective retrieval. A single feature cannot serve all images and yields lower performance. In this paper, we propose an image retrieval method based on multi-feature fusion. The concept of multi-resolution is exploited through the wavelet transform. The method combines Local Binary Patterns (LBP) with Fast and Accurate Exponent Fourier Moments (FAEFMs) over a multi-resolution wavelet decomposition of the image. To extract texture features, LBP codes are computed on the Discrete Wavelet Transform (DWT) coefficients of the image; Fast and Accurate Exponent Fourier Moments are then computed on these LBP codes to extract shape features, together forming the required feature vector. These vectors enable the accurate retrieval of visually similar images from existing databases. The benchmark databases Corel-1k and Olivia-2688 are used to test the proposed method. It achieves 99.99% precision and 93.15% recall on the Corel-1k database and 99.99% precision and 93.63% recall on the Olivia-2688 database, both higher than existing methods.
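The texture branch of the pipeline above (LBP codes over a wavelet subband) can be sketched as follows, using a one-level Haar approximation subband as a stand-in for the paper's DWT stage; the moment computation on the codes is omitted, and all parameters are illustrative.

```python
import numpy as np

def haar_approximation(img):
    # One level of Haar DWT: the low-low (approximation) subband,
    # i.e. the average of each 2x2 block.
    img = np.asarray(img, dtype=float)
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def lbp_codes(gray):
    # Classic 8-neighbour Local Binary Pattern for interior pixels:
    # each neighbour >= centre contributes one bit to an 8-bit code.
    gray = np.asarray(gray, dtype=float)
    center = gray[1:-1, 1:-1]
    codes = np.zeros(center.shape, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:gray.shape[0] - 1 + dy,
                         1 + dx:gray.shape[1] - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint8) << np.uint8(bit)
    return codes
```

In the paper's full method, shape descriptors (the FAEFMs) are then computed over these code maps to build the final feature vector.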


2014 ◽  
Vol 1044-1045 ◽  
pp. 1049-1052 ◽  
Author(s):  
Chin Chen Chang ◽  
I Ta Lee ◽  
Tsung Ta Ke ◽  
Wen Kai Tai

Common methods for reducing image size include scaling and cropping; however, both degrade the quality of the reduced image. In this paper, we propose an image-reduction algorithm that separates the main objects from the background. First, we extract two feature maps from the input image: an enhanced visual saliency map and an improved gradient map. We then integrate these two feature maps into an importance map. Finally, we generate the target image using the importance map. The proposed approach obtains the desired results for a wide range of images.
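A rough sketch of the two stages described above: fuse the saliency and gradient maps into an importance map, then shrink the image guided by that map. The equal weighting and the greedy column-removal step are stand-ins chosen for brevity, not the paper's actual reduction algorithm.

```python
import numpy as np

def importance_map(saliency, gradient, w=0.5):
    # Fuse the visual saliency map and the gradient map into a
    # single importance map (equal weighting assumed here).
    def norm(m):
        m = np.asarray(m, dtype=float)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return w * norm(saliency) + (1 - w) * norm(gradient)

def reduce_width(image, importance, new_width):
    # Greedy stand-in for content-aware reduction: keep the
    # new_width columns with the highest summed importance,
    # preserving their original left-to-right order.
    scores = importance.sum(axis=0)
    keep = np.sort(np.argsort(scores)[-new_width:])
    return image[:, keep]
```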


2013 ◽  
Vol 411-414 ◽  
pp. 1362-1367 ◽  
Author(s):  
Qing Lan Wei ◽  
Yuan Zhang

This paper explores applying saliency maps to objective video quality assessment. SMSE and SPSNR values are computed as objective assessment scores according to the saliency map and compared with conventional objective evaluation methods such as PSNR and MSE. Experimental results demonstrate that the method correlates well with subjective assessment results.
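One natural reading of SMSE/SPSNR is MSE and PSNR with the per-pixel error weighted by the saliency map, so distortions in salient regions count more. The abstract does not give the formulas, so the weighting below is an assumption.

```python
import numpy as np

def smse(ref, dist, saliency):
    # Saliency-weighted mean squared error: squared errors are
    # averaged with weights proportional to the saliency map.
    ref = np.asarray(ref, dtype=float)
    dist = np.asarray(dist, dtype=float)
    w = np.asarray(saliency, dtype=float)
    w = w / w.sum()
    return float(np.sum(w * (ref - dist) ** 2))

def spsnr(ref, dist, saliency, peak=255.0):
    # Saliency-weighted PSNR in dB, derived from SMSE exactly as
    # PSNR is derived from MSE.
    e = smse(ref, dist, saliency)
    return float('inf') if e == 0 else 10.0 * np.log10(peak ** 2 / e)
```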


2020 ◽  
Vol 12 (1) ◽  
pp. 152 ◽  
Author(s):  
Ting Nie ◽  
Xiyu Han ◽  
Bin He ◽  
Xiansheng Li ◽  
Hongxing Liu ◽  
...  

Ship detection in panchromatic optical remote sensing images faces two major challenges: quickly locating candidate regions against complex backgrounds, and describing ships effectively to reduce false alarms. Here, a practical method is proposed to address these issues. First, we constructed a novel visual saliency detection method based on a hyper-complex Fourier transform of a quaternion to locate regions of interest (ROIs), which improves the accuracy of the subsequent discrimination stage for panchromatic images compared with the phase spectrum of quaternion Fourier transform (PQFT) method. Gaussian filtering at different scales was then applied to the transformed result to synthesize the best saliency map, and an adaptive method based on GrabCut performed binary segmentation to extract candidate positions. In the discrimination stage, a rotation-invariant modified local binary pattern (LBP) descriptor combining shape, texture, and moment-invariant features describes the ship targets more powerfully. Finally, false alarms were eliminated through SVM training. Experiments on panchromatic optical remote sensing images, with detailed comparisons to existing efforts, demonstrated that the presented saliency model is superior under various indicators and that the proposed ship detection method is accurate, fast, and highly robust.
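The saliency stage belongs to the family of spectral methods: transform the image, keep only the phase, and invert. A single-channel analogue is easy to sketch with a complex FFT; the paper's actual method builds a quaternion from several channels and applies a hyper-complex Fourier transform, which this simplification does not reproduce.

```python
import numpy as np

def phase_saliency(gray):
    # Single-channel phase-spectrum saliency: discard the FFT
    # magnitude, invert the phase-only spectrum, square, and
    # normalize to [0, 1]. Residual energy concentrates on
    # "unexpected" structure such as small salient objects.
    gray = np.asarray(gray, dtype=float)
    spectrum = np.fft.fft2(gray)
    phase_only = np.fft.ifft2(np.exp(1j * np.angle(spectrum)))
    sal = np.abs(phase_only) ** 2
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else np.zeros_like(sal)
```

In the full pipeline the resulting map would be smoothed with Gaussian filters at several scales before the GrabCut-based segmentation step.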

