SALIENCY-GUIDED CHANGE DETECTION OF REMOTELY SENSED IMAGES USING RANDOM FOREST

Author(s):  
W. Feng ◽  
H. Sui ◽  
X. Chen

Studies based on object-based image analysis (OBIA), which represents a paradigm shift in change detection (CD), have achieved remarkable progress in the last decade; their aim is to develop more intelligent interpretation and analysis methods. Random forest (RF), a relatively new machine learning algorithm, offers better predictive performance and stability than many single predictors and ensemble forecasting methods. In this paper, we present a novel CD approach for high-resolution remote sensing images that incorporates visual saliency and RF. First, highly homogeneous and compact image superpixels are generated using superpixel segmentation, and the optimal segmentation result is obtained through image superimposition and principal component analysis (PCA). Second, saliency detection is used to guide the search for interest regions in the initial difference image obtained via the improved robust change vector analysis (RCVA) algorithm. The salient regions within the difference image that correspond to the binarized saliency map are extracted, and these regions are subjected to fuzzy c-means (FCM) clustering to obtain a pixel-level pre-classification result, which serves as a prerequisite for superpixel-based analysis. Third, on the basis of the optimal segmentation and the pixel-level pre-classification results, the change possibility of each superpixel is calculated, and changed and unchanged superpixels are automatically selected as training samples; the spectral and Gabor features of each superpixel are then extracted. Finally, superpixel-based CD is implemented by applying RF to these samples. Experimental results on Ziyuan 3 (ZY3) multi-spectral images show that the proposed method outperforms the compared methods in CD accuracy and confirm the feasibility and effectiveness of the approach.
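The pre-classification step above starts from a spectral difference image. As a rough illustration of the idea (the paper uses an improved RCVA; this sketch shows only plain change vector analysis, and all array and function names are hypothetical):

```python
import numpy as np

def cva_difference(img_t1, img_t2):
    """Change vector analysis: per-pixel magnitude of the spectral
    change vector between two co-registered (H, W, bands) images.
    Larger magnitudes indicate more likely change."""
    diff = img_t2.astype(np.float64) - img_t1.astype(np.float64)
    return np.sqrt((diff ** 2).sum(axis=-1))

# Toy 4x4 two-band image pair: only one pixel changes between dates.
t1 = np.zeros((4, 4, 2))
t2 = np.zeros((4, 4, 2))
t2[0, 0] = [3.0, 4.0]            # spectral change vector of length 5
di = cva_difference(t1, t2)
print(di[0, 0], di[3, 3])        # 5.0 0.0
```

The resulting difference image is what the saliency map then restricts attention to before FCM clustering.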

2021 ◽  
Vol 13 (18) ◽  
pp. 3697
Author(s):  
Liangliang Li ◽  
Hongbing Ma ◽  
Zhenhong Jia

Change detection is an important task for identifying land cover change across different periods. In synthetic aperture radar (SAR) images, the inherent speckle noise leads to falsely detected change points, which degrades change detection performance. To improve accuracy, a novel automatic SAR image change detection algorithm based on saliency detection and convolutional-wavelet neural networks is proposed. The log-ratio operator is adopted to generate the difference image, and speckle reducing anisotropic diffusion is used to enhance the original multitemporal SAR images and the difference image. To reduce the influence of speckle noise, the salient area that probably belongs to the changed object is extracted from the difference image; the saliency analysis step removes small noise regions by thresholding the saliency map while preserving the regions of interest. An enhanced difference image is then generated by combining the binarized saliency map and the two input images. A hierarchical fuzzy c-means model is applied to the enhanced difference image to classify pixels into changed, unchanged, and intermediate regions, and the convolutional-wavelet neural networks generate the final change map. Experimental results on five SAR data sets indicate that the proposed approach performs well compared to state-of-the-art techniques, with significant improvements in the computed metric values.
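The log-ratio operator mentioned above turns SAR's multiplicative speckle into an additive term, which is why it is preferred over plain subtraction. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def log_ratio(img_t1, img_t2, eps=1.0):
    """Log-ratio difference image for SAR change detection.
    The log transform compresses multiplicative speckle noise;
    eps guards against division by zero in dark regions."""
    a = img_t1.astype(np.float64) + eps
    b = img_t2.astype(np.float64) + eps
    return np.abs(np.log(b / a))

# Toy 2x2 intensity pair: one pixel brightens tenfold.
t1 = np.full((2, 2), 10.0)
t2 = np.array([[10.0, 10.0], [10.0, 100.0]])
di = log_ratio(t1, t2, eps=0.0)
print(di)   # zero everywhere except the changed pixel (log 10 ≈ 2.303)
```

A tenfold and a hundredfold intensity jump differ by the same log-ratio step, which keeps bright speckle outliers from dominating the difference image.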


2020 ◽  
Vol 12 (1) ◽  
pp. 152 ◽  
Author(s):  
Ting Nie ◽  
Xiyu Han ◽  
Bin He ◽  
Xiansheng Li ◽  
Hongxing Liu ◽  
...  

Ship detection in panchromatic optical remote sensing images faces two major challenges: locating candidate regions in complex backgrounds quickly, and describing ships effectively to reduce false alarms. Here, a practical method is proposed to solve these issues. First, we constructed a novel visual saliency detection method based on a hyper-complex Fourier transform of a quaternion to locate regions of interest (ROIs), which improves the accuracy of the subsequent discrimination process for panchromatic images compared with the phase spectrum of quaternion Fourier transform (PQFT) method. In addition, Gaussian filtering at different scales was performed on the transformed result to synthesize the best saliency map, and an adaptive method based on GrabCut was then used for binary segmentation to extract candidate positions. In the discrimination stage, a rotation-invariant modified local binary pattern (LBP) descriptor was constructed by combining shape, texture, and moment invariant features to describe the ship targets more discriminatively. Finally, false alarms were eliminated through SVM training. Experimental results on panchromatic optical remote sensing images demonstrate that the presented saliency model is superior under various indicators, and that the proposed ship detection method is accurate, fast, and highly robust, based on detailed comparisons with existing methods.
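The saliency stage builds on phase-spectrum Fourier saliency. As a hedged, single-channel simplification of the quaternion transform the paper actually uses (a grayscale phase-only transform rather than a hypercomplex one; all names are illustrative):

```python
import numpy as np

def phase_saliency(gray):
    """Phase-only Fourier saliency: keep the phase spectrum, discard
    the magnitude, and invert; energy concentrates on compact,
    'unexpected' structure rather than smooth background."""
    f = np.fft.fft2(gray.astype(np.float64))
    recon = np.fft.ifft2(np.exp(1j * np.angle(f)))
    sal = np.abs(recon) ** 2
    return sal / sal.max()

rng = np.random.default_rng(1)
img = rng.normal(0.0, 0.01, (32, 32))   # faintly textured sea background
img[14:18, 14:18] += 1.0                # one bright ship-like blob
sal = phase_saliency(img)
```

In practice the phase-only map is smoothed (here, the Gaussian filtering at multiple scales mentioned above) before thresholding or GrabCut segmentation.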


2020 ◽  
Vol 2020 ◽  
pp. 1-12 ◽  
Author(s):  
Yuantao Chen ◽  
Jiajun Tao ◽  
Qian Zhang ◽  
Kai Yang ◽  
Xi Chen ◽  
...  

Aiming at the problems of intensive background noise, low accuracy, and high computational complexity in current salient object detection methods, a visual saliency detection algorithm based on Hierarchical Principal Component Analysis (HPCA) is proposed in this paper. First, the original RGB image is converted to grayscale, and the grayscale image is divided into eight layers by bit-plane stratification; each layer contains salient object information matching that layer's image features. Second, taking the color structure of the original image as a reference, the grayscale image is reassigned by a grayscale-to-color conversion method, so that each layered image not only reflects the original structural features but also effectively preserves the original color features. Third, Principal Component Analysis (PCA) is performed on the layered images to obtain the structural difference and color difference characteristics of each layer along the principal component direction. Fourth, the two features are integrated to obtain a highly robust saliency map; to further refine the results, known priors on image organization are incorporated, which place the subject of the photograph near the center of the image. Finally, an entropy calculation is used to select the optimal map from the layered saliency maps; the optimal map contains the least background information and the most prominent salient objects. The detection results of the proposed model are closer to the ground truth and perform well in terms of precision rate (PRE), recall rate (REC), and F-measure (FME). The HPCA model markedly reduces the interference of redundant information, effectively separates the salient object from the background, and achieves higher detection accuracy than competing methods.
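The first step, bit-plane stratification, is straightforward to sketch. A minimal illustration (function and variable names are hypothetical):

```python
import numpy as np

def bit_planes(gray_u8):
    """Decompose an 8-bit grayscale image into its eight bit planes
    (the bit-plane stratification step): plane k holds bit k of every
    pixel, so plane 7 (the MSB) carries the coarsest structure."""
    return [(gray_u8 >> k) & 1 for k in range(8)]

img = np.array([[200, 15], [128, 0]], dtype=np.uint8)
planes = bit_planes(img)
print(planes[7])   # MSB plane: [[1 0], [1 0]]
```

The planes are lossless: weighting plane k by 2^k and summing reconstructs the original image exactly, which is what lets each layer be analyzed independently.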


Author(s):  
Jing Tian ◽  
Weiyu Yu

Visual saliency detection aims to produce a saliency map of an image by simulating the behavior of the human visual system (HVS). An ant-inspired approach is proposed in this chapter. The approach mimics ants' foraging behavior to find the most salient regions of an image: ants deposit pheromone on the image as they move, and the accumulated pheromone measures saliency. Furthermore, the ants' movements are steered by the local phase coherence of the image. Experimental results are presented to demonstrate the superior performance of the proposed approach.
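The ant metaphor can be made concrete with a toy simulation: ants prefer to step onto neighbouring pixels with stronger feature responses and deposit pheromone where they land. This sketch substitutes a generic feature map for the chapter's local phase coherence, so it is only a rough illustration (all names hypothetical):

```python
import numpy as np

def ant_saliency(feature_map, n_ants=200, n_steps=50, seed=0):
    """Toy ant-colony saliency: ants wander over the image, biased
    toward neighbouring pixels with a stronger feature response, and
    deposit pheromone where they land. The accumulated pheromone,
    normalized, serves as the saliency map."""
    rng = np.random.default_rng(seed)
    h, w = feature_map.shape
    pher = np.zeros((h, w))
    for _ in range(n_ants):
        y, x = rng.integers(h), rng.integers(w)
        for _ in range(n_steps):
            # candidate 4-neighbourhood moves, clipped at image borders
            cands = [(min(y + 1, h - 1), x), (max(y - 1, 0), x),
                     (y, min(x + 1, w - 1)), (y, max(x - 1, 0))]
            weights = np.array([feature_map[cy, cx] + 1e-6
                                for cy, cx in cands])
            y, x = cands[rng.choice(4, p=weights / weights.sum())]
            pher[y, x] += 1.0
    return pher / pher.max()

img = np.zeros((16, 16))
img[6:10, 6:10] = 1.0        # strong feature response in the centre
sal = ant_saliency(img)
```

Because steps onto high-response pixels are strongly preferred, ants that reach the central region tend to stay there, so pheromone accumulates where the feature response is high.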


Author(s):  
Dongjing Shan ◽  
Chao Zhang

In this paper, we propose a prior fusion and feature transformation-based principal component analysis (PCA) method for saliency detection. It relies on the inner statistics of the patches in the image to identify unique patterns, and all processing is done in a single pass. First, three low-level priors are incorporated as guidance cues in the model; second, to ensure the validity of the PCA distinctness model, a linear transform of the feature space is designed and trained; furthermore, an extended optimization framework is utilized to generate a smoothed saliency map based on the consistency of adjacent patches. We compare three versions of our model with seven previous methods on several benchmark datasets. Different evaluation strategies are adopted, and the results demonstrate that our model achieves state-of-the-art performance.
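The heart of a PCA distinctness model is measuring how far each patch lies from the rest of the image along the principal axes. A minimal sketch of that idea, without the priors, learned transform, or optimization framework described above (all names illustrative):

```python
import numpy as np

def pca_distinctness(patches):
    """Patch distinctness via PCA: project mean-centred patches onto
    the principal components of the whole collection and score each
    patch by its L1 length in PCA coordinates, so patches unlike the
    bulk of the image score high."""
    X = patches - patches.mean(axis=0)
    # principal axes from the patch covariance (rows of vt)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    proj = X @ vt.T
    return np.abs(proj).sum(axis=1)   # L1 distance in PCA coordinates

rng = np.random.default_rng(0)
patches = rng.normal(0.0, 0.1, size=(50, 9))   # 50 similar 3x3 patches
patches[7] += 5.0                               # one outlier patch
d = pca_distinctness(patches)
print(d.argmax())   # 7: the outlier patch is the most distinct
```

Using the L1 norm in the rotated coordinates, rather than plain Euclidean distance to the mean, weights deviation along each statistical axis of the image's own patch distribution.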


Author(s):  
Ning-Min Shen ◽  
Jing Li ◽  
Pei-Yun Zhou ◽  
Ying Huo ◽  
Yi Zhuang

Co-saliency detection, an emerging research area within saliency detection, aims to extract the common saliency from multiple images. The extracted co-saliency map has been utilized in various applications, such as co-segmentation and co-recognition. With the rapid development of image acquisition technology, digital images are becoming increasingly high-resolution, and existing co-saliency detection methods require enormous memory and high computational cost to process them, which makes it hard to satisfy the demands of real-time user interaction. This paper proposes a fast co-saliency detection method based on image block partition and sparse feature extraction (BSFCoS). First, the images are divided into several uniform blocks, and low-level features are extracted from the Lab and RGB color spaces. To maintain the characteristics of the original images while reducing the number of feature points as much as possible, the Truncated Power method for sparse principal component analysis is employed to extract sparse features. Furthermore, K-Means clustering is applied to the extracted sparse features, and three salient feature weights are calculated. Finally, the co-saliency map is acquired by fusing the single-image and multi-image saliency maps. The proposed method has been tested on two benchmark datasets: the Co-saliency Pairs and CMU Cornell iCoseg datasets. Compared with existing co-saliency methods, BSFCoS achieves a significant running-time improvement in multi-image processing while maintaining detection quality. Lastly, a co-segmentation method based on BSFCoS is also given and shows better co-segmentation performance.
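The block-partition step that makes BSFCoS fast can be sketched as follows, using only the mean colour per block as the low-level feature (a simplified stand-in for the Lab/RGB features; names are hypothetical):

```python
import numpy as np

def block_features(img, block):
    """Partition an (H, W, C) image into uniform block-by-block tiles
    and return the mean colour of each tile as its feature vector,
    shrinking the data before any clustering step."""
    h, w, c = img.shape
    bh, bw = h // block, w // block
    tiles = img[:bh * block, :bw * block].reshape(bh, block, bw, block, c)
    return tiles.mean(axis=(1, 3)).reshape(-1, c)   # (bh*bw, C)

img = np.zeros((8, 8, 3))
img[:, 4:] = 1.0            # right half white, left half black
feats = block_features(img, 4)
print(feats.shape)          # (4, 3): four 4x4 blocks
print(feats[:, 0])          # [0. 1. 0. 1.]
```

Working on block features instead of pixels reduces an 8x8 image to 4 feature points here; on real images the reduction is what brings the running time down to interactive rates.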


2021 ◽  
Vol 13 (4) ◽  
pp. 630
Author(s):  
Pengfei He ◽  
Xiangwei Zhao ◽  
Yuli Shi ◽  
Liping Cai

Unsupervised change detection (CD) from remotely sensed images is a fundamental challenge when ground truth for supervised learning is not readily available. Inspired by the visual attention mechanism and the multi-level sensing capacity of human vision, we propose a novel multi-scale analysis framework based on multi-scale visual saliency coarse-to-fine fusion (MVSF) for unsupervised CD. As a preliminary, we generalize the notion of scale into four classes covering the remote sensing (RS) process from imaging to image processing: intrinsic scale, observation scale, analysis scale, and modeling scale. In MVSF, superpixels are the primitives for analyzing the difference image (DI) obtained by the change vector analysis method. Multi-scale saliency maps at the superpixel level are then generated according to the global contrast of each superpixel. Finally, a weighted fusion strategy incorporates the multi-scale saliency at the pixel level; the fusion weight for a pixel at each scale is adaptively obtained from the heterogeneity of the superpixel it belongs to and the spectral distance between the pixel and that superpixel. The experimental study was conducted on three bi-temporal remotely sensed image pairs, and the effectiveness of MVSF was verified qualitatively and quantitatively. The results suggest that a finer scale does not always yield a better CD result, and that fusing multi-scale superpixel-based saliency at the pixel level achieved a higher F1 score in all three experiments. MVSF maintains detailed changed areas while resisting image noise in the final change map, and an analysis of the scale factors implies that its performance is not sensitive to the manually selected scales.
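The per-superpixel saliency in MVSF is based on global contrast. A minimal sketch of that idea with a one-dimensional feature per superpixel (real superpixel features would be multi-band spectral means; all names hypothetical):

```python
import numpy as np

def global_contrast_saliency(sp_means, sp_sizes):
    """Global-contrast saliency per superpixel: each superpixel's
    saliency is its size-weighted feature distance to all other
    superpixels, so regions unlike the rest of the image score high."""
    n = len(sp_means)
    sal = np.zeros(n)
    for i in range(n):
        dist = np.abs(sp_means - sp_means[i])
        sal[i] = (sp_sizes * dist).sum() / sp_sizes.sum()
    return sal / sal.max()

# Three superpixels of a difference image: two similar, one outlier.
means = np.array([0.1, 0.12, 0.9])
sizes = np.array([100, 120, 30])
sal = global_contrast_saliency(means, sizes)
print(sal.argmax())   # 2: the outlier superpixel is most salient
```

Running this at several segmentation scales and fusing the maps with the adaptive pixel-level weights described above yields the coarse-to-fine saliency used for the final change map.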


2013 ◽  
Vol 2013 ◽  
pp. 1-9
Author(s):  
Yuantao Chen ◽  
Weihong Xu ◽  
Fangjun Kuang ◽  
Shangbing Gao

Image segmentation guided by a visual saliency map depends strongly on the quality of the underlying saliency metric; most existing metrics produce only a sketchy saliency map, and such a rough map degrades the segmentation results. This paper presents a randomized visual saliency detection algorithm. The method quickly generates a detailed saliency map of the same size as the original input image, and the resulting map can serve real-time applications such as content-based image scaling. The randomization also extends to fast video saliency region detection: the algorithm requires only a small amount of memory to produce a detailed visual saliency map, and the presented results show that using this map in the subsequent segmentation process yields ideal segmentation results.


DYNA ◽  
2019 ◽  
Vol 86 (209) ◽  
pp. 238-247 ◽  
Author(s):  
Esmeide Alberto Leal Narvaez ◽  
German Sanchez Torres ◽  
John William Branch Bedoya

The human visual system (HVS) can process large quantities of visual information instantly. Visual saliency perception is the process of locating and identifying regions with a high degree of saliency from a visual standpoint. Mesh saliency detection has been studied extensively in recent years, but few studies have focused on saliency detection for 3D point clouds. Estimating visual saliency is important for computer graphics tasks such as simplification, segmentation, shape matching, and resizing. In this paper, we present a method for detecting saliency directly on unorganized point clouds. First, our method computes a set of overlapping neighborhoods and estimates a descriptor vector for each point inside them. The descriptor vectors are then used as a natural dictionary in a sparse coding process. Finally, we estimate a saliency map of the point neighborhoods based on the Minimum Description Length (MDL) principle. Experimental results show that the proposed method achieves results similar to those in the literature and in some cases improves on them. It captures the geometry of the point clouds without using any topological information and achieves acceptable performance. The effectiveness and robustness of our approach are shown by comparison with previous studies.
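A rough sketch of the per-point descriptor idea, with the sparse-coding/MDL scoring replaced by a simple deviation-from-average measure, so this is only a loose illustration of the pipeline's first stage (all names hypothetical):

```python
import numpy as np

def point_saliency(points, k=8):
    """Simplified point-cloud saliency: describe each point by the
    sorted eigenvalues of its k-nearest-neighbour covariance (a stand-in
    for the paper's descriptor + sparse-coding/MDL scoring) and score
    points whose local geometry deviates from the cloud's average."""
    n = len(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    desc = np.empty((n, 3))
    for i in range(n):
        nbrs = points[np.argsort(d2[i])[:k]]       # k nearest, incl. self
        cov = np.cov(nbrs.T)
        desc[i] = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return np.linalg.norm(desc - desc.mean(axis=0), axis=1)

# Flat 6x6 grid with one point lifted out of the plane: its local
# covariance has an out-of-plane component the others lack.
g = np.stack(np.meshgrid(np.arange(6.0), np.arange(6.0)), -1).reshape(-1, 2)
pts = np.c_[g, np.zeros(len(g))]
pts[14, 2] = 3.0
sal = point_saliency(pts)
print(sal.argmax())
```

The covariance eigenvalues capture whether a neighbourhood is flat, linear, or volumetric, which is the kind of purely geometric, topology-free signal the paper's descriptors exploit.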

