An Object-Based Image Reducing Approach

2014 ◽  
Vol 1044-1045 ◽  
pp. 1049-1052 ◽  
Author(s):  
Chin Chen Chang ◽  
I Ta Lee ◽  
Tsung Ta Ke ◽  
Wen Kai Tai

Common methods for reducing image size include scaling and cropping. However, both approaches can degrade the quality of the reduced image. In this paper, we propose an image-reduction algorithm that separates the main objects from the background. First, we extract two feature maps from the input image: an enhanced visual saliency map and an improved gradient map. We then integrate these two feature maps into an importance map. Finally, we generate the target image using the importance map. The proposed approach obtains the desired results for a wide range of images.
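A minimal Python sketch of the importance-map idea described above, assuming OpenCV and NumPy; the saliency and gradient estimators here (spectral residual and Sobel magnitude) are generic stand-ins for the paper's enhanced saliency map and improved gradient map, not the authors' implementation.

```python
import cv2
import numpy as np

def importance_map(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0

    # Gradient map: Sobel magnitude (stand-in for the improved gradient map).
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    grad = np.sqrt(gx ** 2 + gy ** 2)
    grad /= grad.max() + 1e-8

    # Saliency map: spectral residual (stand-in for the enhanced visual saliency map).
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)
    residual = log_amp - cv2.blur(log_amp.astype(np.float32), (3, 3))
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(f)))) ** 2
    sal = cv2.GaussianBlur(sal.astype(np.float32), (11, 11), 2.5)
    sal /= sal.max() + 1e-8

    # Integrate the two feature maps into one importance map (equal weights assumed);
    # the target image is then generated by shrinking or removing low-importance regions.
    return 0.5 * sal + 0.5 * grad
```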

2013 ◽  
Vol 2013 ◽  
pp. 1-9
Author(s):  
Yuantao Chen ◽  
Weihong Xu ◽  
Fangjun Kuang ◽  
Shangbing Gao

Image segmentation driven by a visual saliency map depends strongly on the quality of the underlying saliency metric. Most existing metrics produce only a coarse saliency map, and segmentation based on such a rough map suffers accordingly. This paper presents a randomized visual saliency detection algorithm that quickly generates a detailed saliency map at the same resolution as the original input image. The method meets the real-time requirements of content-based image scaling, and it can also be used for fast randomized detection of salient regions in video, requiring only a small amount of memory to produce a detailed saliency map. The presented results show that using this saliency map in the subsequent image segmentation process yields good segmentation results.
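The abstract does not spell out the sampling scheme, so the following is only a generic illustration of randomized saliency estimation with a small memory footprint, not this paper's algorithm: random pixel pairs are sampled and their color distances accumulated into a full-resolution map.

```python
import numpy as np

def randomized_saliency(img_lab, num_samples=200_000, seed=0):
    # img_lab: H x W x 3 color image (e.g., CIELab); only the image and two
    # accumulation buffers are kept in memory.
    rng = np.random.default_rng(seed)
    h, w = img_lab.shape[:2]
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    ys = rng.integers(0, h, size=(num_samples, 2))
    xs = rng.integers(0, w, size=(num_samples, 2))
    # Each sample adds the color distance between two random pixels to the first pixel.
    d = np.linalg.norm(img_lab[ys[:, 0], xs[:, 0]].astype(np.float64)
                       - img_lab[ys[:, 1], xs[:, 1]].astype(np.float64), axis=-1)
    np.add.at(acc, (ys[:, 0], xs[:, 0]), d)
    np.add.at(cnt, (ys[:, 0], xs[:, 0]), 1)
    sal = acc / np.maximum(cnt, 1)
    return sal / (sal.max() + 1e-8)
```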


Author(s):  
Ke Zhang ◽  
Xinbo Zhao ◽  
Rong Mo

This paper presents a bio-inspired visual saliency model. The end-stopping mechanism in the primary visual cortex is introduced to extract features that represent the contour information of latent salient objects, such as corners, line intersections, and line endpoints; these are combined with brightness, color, and orientation features to form the final saliency map. The model is an analog of the processing of visual signals from the retina through the lateral geniculate nucleus (LGN) to the primary visual cortex (V1). First, following the characteristics of the retina and LGN, an input image is decomposed into brightness and opponent-color channels. Then, simple cells are simulated with 2D Gabor filters, and the amplitude of the Gabor response represents the response of complex cells. Finally, the response of an end-stopped cell is obtained by multiplying the responses of two complex cells with different orientations, and the outputs of V1 and the LGN constitute a bottom-up saliency map. Experimental results on public datasets show that the model accurately predicts human fixations and achieves state-of-the-art performance among bottom-up saliency models.
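A condensed Python sketch of the V1-style feature chain described above, with assumed filter parameters (a simplified reading, not the authors' implementation): 2D Gabor filters model simple cells, the quadrature amplitude models complex cells, and the product of two complex-cell responses at different orientations models an end-stopped cell.

```python
import cv2
import numpy as np

def complex_cell(gray, theta, ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
    # Quadrature pair of Gabor filters (phases 0 and pi/2) -> amplitude of response.
    even = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi=0)
    odd = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi=np.pi / 2)
    re = cv2.filter2D(gray, cv2.CV_32F, even)
    im = cv2.filter2D(gray, cv2.CV_32F, odd)
    return np.sqrt(re ** 2 + im ** 2)

def end_stopped_map(gray):
    # End-stopped response: product of two complex cells with orthogonal orientations,
    # which is large at corners, line intersections, and line endpoints.
    c0 = complex_cell(gray, theta=0.0)
    c90 = complex_cell(gray, theta=np.pi / 2)
    es = c0 * c90
    return es / (es.max() + 1e-8)

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
saliency = end_stopped_map(img)  # combined with brightness/color/orientation maps in the full model
```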


Electronics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 348
Author(s):  
Choongsang Cho ◽  
Young Han Lee ◽  
Jongyoul Park ◽  
Sangkeun Lee

Semantic image segmentation has a wide range of applications. In medical image segmentation, accuracy is even more important than in other areas, because the results directly support disease diagnosis, surgical planning, and history monitoring. The state-of-the-art models in medical image segmentation are variants of the encoder-decoder architecture known as U-Net. To effectively reflect the spatial features in the feature maps of an encoder-decoder architecture, we propose a spatially adaptive weighting scheme for medical image segmentation. Specifically, a spatial feature is estimated from the feature maps, and learned weighting parameters are obtained from the computed map, since segmentation results are predicted from the feature map through a convolutional layer. In the proposed networks, the convolutional block for extracting the feature map is instantiated with widely used convolutional frameworks: VGG, ResNet, and bottleneck ResNet structures. In addition, a bilinear up-sampling method replaces the up-convolutional layer to increase the resolution of the feature map. For the performance evaluation of the proposed architecture, we used three data sets covering different medical imaging modalities. Experimental results show that the network with the proposed self-spatially adaptive weighting block based on the ResNet framework gave the highest IoU and DICE scores in the three tasks compared to other methods. In particular, the segmentation network combining the proposed self-spatially adaptive block and the ResNet framework recorded the largest improvements, 3.01% in IoU and 2.89% in DICE score, on the Nerve data set. Therefore, we believe that the proposed scheme can be a useful tool for image segmentation tasks based on the encoder-decoder architecture.
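A rough PyTorch sketch of one way to realize the self-spatially adaptive weighting block described above (our reading, with assumed layer sizes, not the published implementation): a spatial map is estimated from the incoming feature maps, transformed by small learned convolutions, and used to re-weight the features; bilinear up-sampling replaces up-convolution in the decoder.

```python
import torch
import torch.nn as nn

class SpatialAdaptiveWeight(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Learned 1x1 convolutions that turn the pooled spatial map into per-pixel weights.
        self.weighting = nn.Sequential(
            nn.Conv2d(1, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Spatial feature estimated from the feature maps (here: channel-wise mean).
        spatial = x.mean(dim=1, keepdim=True)
        w = self.weighting(spatial)   # spatially adaptive weights in [0, 1]
        return x * w                  # re-weighted feature map

# Decoder stages would use bilinear up-sampling instead of an up-convolutional layer:
upsample = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
```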


Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 421
Author(s):  
Dariusz Puchala ◽  
Kamil Stokfiszewski ◽  
Mykhaylo Yatsymirskyy

In this paper, the authors analyze in more detail an image encryption scheme, proposed in their earlier work, which preserves input image statistics and can be used in connection with the JPEG compression standard. The image encryption process takes advantage of fast linear transforms parametrized with private keys and is carried out prior to the compression stage in a way that does not alter those statistical characteristics of the input image that are crucial from the point of view of the subsequent compression. This feature makes the encryption process transparent to the compression stage and enables the JPEG algorithm to maintain its full compression capabilities even though it operates on encrypted image data. The main advantage of the considered approach is that the JPEG algorithm can be used without any modification as part of an encrypt-then-compress image processing framework. The paper includes a detailed mathematical model of the examined scheme, allowing for theoretical analysis of the impact of the image encryption step on the effectiveness of the compression process. A combinatorial and statistical analysis of the encryption process is also included, which makes it possible to evaluate its cryptographic strength. In addition, the paper considers several practical use-case scenarios with different characteristics of the compression and encryption stages. The final part of the paper contains additional experimental results on the general effectiveness of the presented scheme. The results show that, for a wide range of compression ratios, the considered scheme performs comparably to the JPEG algorithm alone (that is, without the encryption stage) in terms of the quality measures of reconstructed images. Moreover, the results of the statistical analysis, as well as those obtained with generally approved quality measures for image cryptographic systems, prove the high strength and efficiency of the scheme's encryption stage.
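The toy sketch below only illustrates the encrypt-then-compress ordering discussed above; it is not the authors' key-parametrized linear transform. A key-driven permutation of 8x8 blocks leaves per-block statistics untouched, and an unmodified JPEG encoder then compresses the encrypted image.

```python
import numpy as np
from PIL import Image

BLOCK = 8  # matches the JPEG block size, so per-block statistics are untouched

def encrypt_blocks(img, key):
    h, w = img.shape[:2]
    h8, w8 = h // BLOCK * BLOCK, w // BLOCK * BLOCK
    blocks = (img[:h8, :w8]
              .reshape(h8 // BLOCK, BLOCK, w8 // BLOCK, BLOCK)
              .swapaxes(1, 2)
              .reshape(-1, BLOCK, BLOCK))
    perm = np.random.default_rng(key).permutation(len(blocks))   # key-derived permutation
    shuffled = blocks[perm]
    out = (shuffled.reshape(h8 // BLOCK, w8 // BLOCK, BLOCK, BLOCK)
                   .swapaxes(1, 2)
                   .reshape(h8, w8))
    return out, perm

gray = np.array(Image.open("input.png").convert("L"))
encrypted, perm = encrypt_blocks(gray, key=20210421)
Image.fromarray(encrypted).save("encrypted.jpg", quality=75)     # plain JPEG, no modification
```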


Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 991
Author(s):  
Yuta Nakahara ◽  
Toshiyasu Matsushima

In information theory, lossless compression of general data is based on an explicit assumption of a stochastic generative model of the target data. In lossless image compression, however, researchers have mainly focused on the coding procedure that outputs the coded sequence from the input image, and the assumption of a stochastic generative model is implicit. In these studies, it is difficult to discuss the difference between the expected code length and the entropy of the stochastic generative model. We resolve this difficulty for a class of images that exhibit non-stationarity among segments. In this paper, we propose a novel stochastic generative model of images by redefining the implicit stochastic generative model of a previous coding procedure. Our model is based on the quadtree, so it effectively represents variable-block-size segmentation of images. We then construct the Bayes code that is optimal for the proposed stochastic generative model. It requires summation over all possible quadtrees weighted by their posterior probabilities; in general, the computational cost of this summation grows exponentially with the image size. However, we introduce an efficient algorithm that computes it in time polynomial in the image size without loss of optimality. As a result, the derived algorithm achieves a better average coding rate than JBIG.
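The tractability claim above rests on a recursive weighting over quadtrees; the Python schematic below sketches that general trick (a split-or-stop mixture applied recursively), with a placeholder leaf model and an assumed prior weight rather than the paper's exact Bayes-optimal weights.

```python
import numpy as np

G = 0.5  # assumed prior probability that a block is not split further

def leaf_log_prob(block):
    # Placeholder leaf model for a binary image block: Krichevsky-Trofimov style estimate.
    n = block.size
    k = int(block.sum())
    theta = (k + 0.5) / (n + 1.0)
    return k * np.log(theta) + (n - k) * np.log(1.0 - theta)

def weighted_log_prob(block):
    # Mixture of "stop here" and "split into four quadrants", applied recursively, so
    # every variable-block-size quadtree contributes with its prior weight.  The cost is
    # polynomial in the number of pixels rather than exponential in the number of quadtrees.
    h, w = block.shape
    if h < 2 or w < 2:
        return leaf_log_prob(block)
    stop = np.log(G) + leaf_log_prob(block)
    quads = [block[:h // 2, :w // 2], block[:h // 2, w // 2:],
             block[h // 2:, :w // 2], block[h // 2:, w // 2:]]
    split = np.log(1.0 - G) + sum(weighted_log_prob(q) for q in quads)
    return np.logaddexp(stop, split)
```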


2021 ◽  
Vol 13 (11) ◽  
pp. 2171
Author(s):  
Yuhao Qing ◽  
Wenyi Liu ◽  
Liuyan Feng ◽  
Wanjia Gao

Despite significant progress on object detection tasks, target detection in remote sensing images remains challenging owing to complex backgrounds, large differences in target sizes, and unevenly distributed rotated objects. In this study, we jointly consider model accuracy, inference speed, and detection of objects at arbitrary angles. We propose a RepVGG-YOLO network that uses an improved RepVGG model as the backbone feature extraction network, performing the initial feature extraction from the input image while balancing training accuracy and inference speed. An improved feature pyramid network (FPN) and path aggregation network (PANet) reprocess the features output by the backbone; this FPN/PANet module integrates feature maps from different layers, combines context information at multiple scales, accumulates multiple features, and strengthens feature extraction. To maximize the detection accuracy for objects of all sizes, we use four detection scales at the network output, which improves feature extraction for small remote sensing targets. To handle arbitrary object angles, we improve the classification loss using circular smooth labels, turning angle regression into a classification problem and increasing the detection accuracy for rotated objects. Experiments on two public datasets, DOTA and HRSC2016, show that the proposed method outperforms previous methods.
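A brief sketch of circular smooth label (CSL) encoding, the technique mentioned above for turning angle regression into classification: the ground-truth angle becomes a soft label over angle bins, smoothed with a circular (wrap-around) window so predictions near the true angle are penalized less. The bin count and window width are assumed values.

```python
import numpy as np

def circular_smooth_label(angle_deg, num_bins=180, sigma=4.0):
    bins = np.arange(num_bins)
    center = int(round(angle_deg)) % num_bins
    # Circular distance between each bin and the ground-truth bin.
    d = np.abs(bins - center)
    d = np.minimum(d, num_bins - d)
    # Gaussian window centred on the true angle, wrapping around the 0/180-degree boundary.
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

label = circular_smooth_label(177.0)  # bins near 177 deg AND near 0 deg receive high values
```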


2020 ◽  
Vol 12 (6) ◽  
pp. 961 ◽  
Author(s):  
Marinalva Dias Soares ◽  
Luciano Vieira Dutra ◽  
Gilson Alexandre Ostwald Pedro da Costa ◽  
Raul Queiroz Feitosa ◽  
Rogério Galante Negri ◽  
...  

Per-point classification is a traditional method for remote sensing data classification, and for radar data in particular. Compared with optical data, however, the discriminative power of radar data is quite limited for most applications. One way to overcome this difficulty is Region-Based Classification (RBC), also referred to as Geographical Object-Based Image Analysis (GEOBIA). RBC methods first aggregate pixels into homogeneous objects, or regions, using a segmentation procedure. Segmentation, however, is known to be an ill-conditioned problem: it admits multiple solutions, and a small change in the input image or in the segmentation parameters may lead to significant changes in the image partitioning. In this context, this paper proposes and evaluates novel approaches for SAR data classification that rely on specialized segmentations and on the combination of partial maps produced by classification ensembles. These approaches constitute a meta-methodology, in the sense that they are independent of the particular segmentation and classification algorithms and of the optimization procedures. The results show an improvement in classification accuracy from Kappa = 0.40 with the baseline method to Kappa = 0.77 with the proposed method; on another test site, also with radar data, accuracy improved from Kappa = 0.36 to a maximum of 0.66.
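The paper's meta-methodology is independent of the particular combination rule; as a purely illustrative choice, the sketch below combines the partial classification maps of an ensemble by per-pixel plurality voting.

```python
import numpy as np

def combine_maps(class_maps, num_classes):
    # class_maps: list of H x W integer label maps, one per classifier/segmentation.
    stack = np.stack(class_maps, axis=0)                      # (n_maps, H, W)
    votes = np.stack([(stack == c).sum(axis=0)                # vote count per class
                      for c in range(num_classes)], axis=0)   # (num_classes, H, W)
    return votes.argmax(axis=0)                               # per-pixel plurality label
```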


2020 ◽  
Vol 12 (11) ◽  
pp. 1772
Author(s):  
Brian Alan Johnson ◽  
Lei Ma

Image segmentation and geographic object-based image analysis (GEOBIA) were proposed around the turn of the century as a means to analyze high-spatial-resolution remote sensing images. Since then, object-based approaches have been used to analyze a wide range of images for numerous applications. In this Editorial, we present some highlights of image segmentation and GEOBIA research from the last two years (2018–2019), including a Special Issue published in the journal Remote Sensing. As a final contribution of this special issue, we have shared the views of 45 other researchers (corresponding authors of published papers on GEOBIA in 2018–2019) on the current state and future priorities of this field, gathered through an online survey. Most researchers surveyed acknowledged that image segmentation/GEOBIA approaches have achieved a high level of maturity, although the need for more free user-friendly software and tools, further automation, better integration with new machine-learning approaches (including deep learning), and more suitable accuracy assessment methods was frequently pointed out.


2013 ◽  
Vol 765-767 ◽  
pp. 1401-1405
Author(s):  
Chi Zhang ◽  
Wei Qiang Wang

Object-level saliency detection is an important branch of visual saliency. In this paper, we propose a novel method that performs object-level saliency detection in both images and videos in a unified way. Instead of the popular contrast assumption, we employ a more effective spatial-compactness assumption to measure saliency. In addition, we present a combination framework that integrates multiple saliency maps generated from different feature maps; the proposed algorithm automatically selects saliency maps of high quality according to a quality evaluation score we define. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods on both still-image and video-sequence datasets.
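A compact Python sketch of the spatial-compactness assumption (a simplified illustration, not the authors' full framework): features whose pixels cluster tightly in space are scored as salient, while features spread across the whole frame are treated as background. The fixed color/feature quantization is an assumed simplification.

```python
import numpy as np

def compactness_saliency(features, num_clusters=16):
    # features: H x W integer map assigning each pixel to a quantized feature/color cluster.
    h, w = features.shape
    ys, xs = np.mgrid[0:h, 0:w]
    saliency = np.zeros(num_clusters)
    for c in range(num_clusters):
        mask = features == c
        if not mask.any():
            continue
        # Spatial variance of the cluster's pixel positions; low variance = compact = salient.
        var = ys[mask].var() + xs[mask].var()
        saliency[c] = 1.0 / (1.0 + var)
    saliency /= saliency.max() + 1e-8
    return saliency[features]  # map per-cluster saliency back onto pixels
```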


2013 ◽  
Vol 411-414 ◽  
pp. 1362-1367 ◽  
Author(s):  
Qing Lan Wei ◽  
Yuan Zhang

This paper discusses the application of saliency maps to objective video quality evaluation. SMSE and SPSNR values are computed as objective assessment scores according to the saliency map and are compared with conventional objective evaluation measures such as PSNR and MSE. Experimental results demonstrate that the method fits the subjective assessment results well.
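A short sketch of saliency-weighted scores of the kind described above, under the assumption that SMSE weights each pixel's squared error by the saliency map and SPSNR is the corresponding PSNR; the paper's exact definitions may differ.

```python
import numpy as np

def smse(ref, dist, saliency):
    # ref, dist: reference and distorted frames; saliency: same-size non-negative map.
    w = saliency / (saliency.sum() + 1e-12)
    return float((w * (ref.astype(np.float64) - dist.astype(np.float64)) ** 2).sum())

def spsnr(ref, dist, saliency, peak=255.0):
    return 10.0 * np.log10(peak ** 2 / (smse(ref, dist, saliency) + 1e-12))
```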

