FrHPI: A Discriminative Patch-Image Model for Hyperspectral Anomaly Detection

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Hao Li ◽  
Ganghui Fan ◽  
Shan Zeng ◽  
Zhen Kang

Anomaly detection has become an important part of hyperspectral image analysis, aiming to detect targets in an unsupervised manner. Traditional hyperspectral anomaly detectors fail to consider spatial information, which is vital in hyperspectral anomaly detection. Moreover, they usually take raw data as input without feature extraction, which limits detection performance. To tackle these two problems, we propose a new anomaly detector based on the fractional Fourier transform (FrFT) and a modified patch-image model called the hyperspectral patch-image (HPI) model. Combining the two, the proposed anomaly detector is named the fractional hyperspectral patch-image (FrHPI) detector. Under the assumption that the target patch-image is a sparse matrix while the background patch-image is a low-rank matrix, we first formulate a matrix by sliding a rectangular window over the first three principal components (PCs) of the HSI. This matrix can be decomposed into three parts representing the background, targets, and noise using the well-known low-rank and sparse matrix decomposition (LRaSMD). Then, distinctive features are extracted via FrFT, a transformation well suited to noise removal. Background atoms are selected to construct the covariance matrix. Finally, anomalies are identified using the Mahalanobis distance. Extensive experiments verify the superiority of the proposed FrHPI detector over other state-of-the-art hyperspectral anomaly detectors.
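As a rough illustration of the final scoring step described above (not the authors' implementation), the sketch below computes Mahalanobis distances of per-pixel feature vectors against a covariance matrix estimated from selected background atoms; the arrays pixels and background are hypothetical placeholders standing in for the FrFT features and the selected background atoms.

import numpy as np

def mahalanobis_scores(pixels, background, eps=1e-6):
    # pixels: (N, B) feature vectors; background: (N_b, B) selected background atoms
    mu = background.mean(axis=0)                                 # background mean
    cov = np.cov(background, rowvar=False)                       # background covariance
    cov_inv = np.linalg.inv(cov + eps * np.eye(cov.shape[0]))    # regularized inverse
    diff = pixels - mu
    # Quadratic form d(x) = (x - mu)^T C^{-1} (x - mu) for every pixel
    return np.einsum('ij,jk,ik->i', diff, cov_inv, diff)

rng = np.random.default_rng(0)
bg = rng.normal(size=(500, 30))          # placeholder background atoms
px = rng.normal(size=(1000, 30))         # placeholder pixel features
scores = mahalanobis_scores(px, bg)      # larger score => more anomalous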

2021 ◽  
Vol 13 (19) ◽  
pp. 3954
Author(s):  
Senhao Liu ◽  
Lifu Zhang ◽  
Yi Cen ◽  
Likun Chen ◽  
Yibo Wang

To address the difficulty of separating the background from similar materials when only "single-spectral information" is used for hyperspectral anomaly detection, a fast hyperspectral anomaly detection algorithm based on what we term the "greedy bilateral smoothing and extended multi-attribute profile" (GBSAED) method is proposed to improve detection precision and operational efficiency. The method uses greedy bilateral smoothing to decompose the low-rank part of a hyperspectral image (HSI) dataset and compute spectral anomalies, which improves operational efficiency. The extended multi-attribute profile is then used to extract spatial anomalies and constrain the shape of anomalies. Finally, the two components are combined to limit false alarms and obtain the final detection results. The new method considers both spectral and spatial information within an efficient structure. Experiments on five real HSI datasets demonstrate that the GBSAED method is more robust than eight representative algorithms under diverse application scenarios and greatly improves detection precision and operational efficiency.
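As a loose sketch of the two-branch idea (not the GBSAED code itself), the snippet below uses a truncated SVD as a stand-in for greedy bilateral smoothing to obtain a spectral anomaly score from the low-rank residual and combines it with a separately computed spatial score; the multiplicative fusion rule and all variable names are assumptions.

import numpy as np

def spectral_anomaly(hsi_2d, rank=3):
    # hsi_2d: (num_pixels, num_bands); the low-rank part approximates the background
    u, s, vt = np.linalg.svd(hsi_2d, full_matrices=False)
    low_rank = u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank]
    return np.linalg.norm(hsi_2d - low_rank, axis=1)       # residual energy per pixel

def fuse(spectral_score, spatial_score):
    # Normalize each branch to [0, 1]; multiplying suppresses false alarms that
    # appear in only one branch (the exact combination rule is an assumption)
    norm = lambda x: (x - x.min()) / (x.max() - x.min() + 1e-12)
    return norm(spectral_score) * norm(spatial_score)

rng = np.random.default_rng(0)
hsi_2d = rng.normal(size=(4096, 50))        # placeholder HSI: 64x64 pixels, 50 bands
spatial_score = rng.random(4096)            # placeholder spatial (profile-based) score
result = fuse(spectral_anomaly(hsi_2d), spatial_score)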


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3627 ◽  
Author(s):  
Yi Zhang ◽  
Zebin Wu ◽  
Jin Sun ◽  
Yan Zhang ◽  
Yaoqin Zhu ◽  
...  

Anomaly detection aims to separate anomalous pixels from the background, and has become an important application of remotely sensed hyperspectral image processing. Anomaly detection methods based on low-rank and sparse representation (LRASR) can accurately detect anomalous pixels. However, with the significant volume increase of hyperspectral image repositories, such techniques consume a significant amount of time (mainly due to the massive amount of matrix computations involved). In this paper, we propose a novel distributed parallel algorithm (DPA) by redesigning key operators of LRASR in terms of the MapReduce model to accelerate LRASR on cloud computing architectures. Independent computation operators are explored and executed in parallel on Spark. Specifically, we reconstitute the hyperspectral images in an appropriate format for efficient DPA processing, design an optimized storage strategy, and develop a pre-merge mechanism to reduce data transmission. In addition, a repartitioning policy is proposed to further improve DPA’s efficiency. Our experimental results demonstrate that the newly developed DPA achieves very high speedups when accelerating LRASR while maintaining similar accuracy. Moreover, our proposed DPA is shown to be scalable with the number of computing nodes and capable of processing big hyperspectral images involving massive amounts of data.
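A minimal PySpark sketch of the partition-and-score pattern described above (it requires a local Spark installation); the per-partition scoring function is a simple placeholder rather than the redesigned LRASR operators, and all names are illustrative.

import numpy as np
from pyspark import SparkContext

def score_partition(rows):
    # Each partition receives a block of pixel spectra and scores them independently
    block = np.array(list(rows))
    if block.size == 0:
        return iter([])
    mu = block.mean(axis=0)
    return iter(np.linalg.norm(block - mu, axis=1).tolist())

sc = SparkContext(appName="dpa-sketch")
pixels = np.random.default_rng(0).normal(size=(10000, 50)).tolist()   # placeholder HSI pixels
scores = (sc.parallelize(pixels, numSlices=8)    # repartition the pixels into 8 blocks
            .mapPartitions(score_partition)      # score each block in parallel
            .collect())
sc.stop()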


IEEE Access ◽  
2018 ◽  
Vol 6 ◽  
pp. 62120-62127 ◽  
Author(s):  
Lizhen Deng ◽  
Hu Zhu ◽  
Yujie Li ◽  
Zhen Yang

2021 ◽  
Author(s):  
Xiangyu Song ◽  
Sunil Aryal ◽  
Kai Ming Ting ◽  
Zhen Liu ◽  
Bin He

Anomaly detection in hyperspectral images is hindered by redundant bands and the limited exploitation of spectral-spatial information. In this article, we propose a novel Improved Isolation Forest (IIF) algorithm based on the assumption that anomalous pixels are more easily isolated than background pixels. The proposed IIF is a modified version of the Isolation Forest (iForest) algorithm that addresses iForest's poor performance in detecting local anomalies and in handling high-dimensional data. Further, we propose a spectral-spatial anomaly detector based on IIF (SSIIFD) to make full use of global and local information, as well as spectral and spatial information. Specifically, we first apply the Gabor filter to extract spatial features, which are then fed to the Relative Mass Isolation Forest (ReMass-iForest) detector to obtain the spatial anomaly score. Next, the original images are divided into several homogeneous regions via the Entropy Rate Segmentation (ERS) algorithm, and the preprocessed images are fed to the proposed IIF detector to obtain the spectral anomaly score. Finally, we fuse the spatial and spectral anomaly scores through a linear combination to predict anomalous pixels. Experimental results on four real hyperspectral datasets demonstrate that the proposed detector outperforms other state-of-the-art methods.
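A hedged sketch of the linear score fusion follows; scikit-learn's standard IsolationForest stands in for both the proposed IIF and ReMass-iForest, the Gabor/ERS preprocessing is omitted, and the equal fusion weights are an assumption.

import numpy as np
from sklearn.ensemble import IsolationForest

def iforest_scores(features, seed=0):
    model = IsolationForest(n_estimators=100, random_state=seed).fit(features)
    # score_samples is higher for inliers, so negate it to get an anomaly score
    return -model.score_samples(features)

rng = np.random.default_rng(0)
spectral_feats = rng.normal(size=(2000, 100))   # per-pixel spectra (placeholder)
spatial_feats = rng.normal(size=(2000, 16))     # per-pixel Gabor responses (placeholder)

spectral_score = iforest_scores(spectral_feats)
spatial_score = iforest_scores(spatial_feats, seed=1)
fused = 0.5 * spectral_score + 0.5 * spatial_score   # linear combination, equal weights assumed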


2018 ◽  
Vol 7 (4) ◽  
pp. 2309
Author(s):  
Baby Victoria.L ◽  
Sathappan S

Noise removal from color images is one of the most significant and challenging tasks in image processing. Among conventional filtering methods, the robust Annihilating filter-based Low-rank Hankel matrix (r-ALOHA) approach was proposed as an impulse noise removal algorithm that uses the sparse and low-rank decomposition of a Hankel-structured matrix to separate the sparse impulse-noise components from the original image. However, this algorithm considers the patch image to be sparse only in the Fourier domain, which motivates an analysis of noise-removal performance in other transform domains. Hence, in this article, r-ALOHA is extended to other transform domains, such as the log and exponential domains. In these domains, logarithmic and exponential functions are used to model multiplicative noise; however, this model applies only to positive intensities. Therefore, the wavelet transform domain is also applied to the noise model, as it localizes image information in both the frequency and spatial domains simultaneously. Moreover, it separates the most important information in an image, making it feasible to obtain a good approximation of the underlying function with only a few coefficients. Finally, the experimental results show the effectiveness of the proposed algorithm.
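As a small illustration of the log-domain idea (with a median filter standing in for the r-ALOHA decomposition itself), the sketch below maps multiplicative noise to an additive model via the logarithm, denoises in that domain, and maps back with the exponential; note the positivity requirement mentioned in the abstract.

import numpy as np
from scipy.ndimage import median_filter

def log_domain_denoise(image, size=3, eps=1e-6):
    # The log transform requires strictly positive intensities, which is why this
    # model applies only to positive outcomes.
    log_img = np.log(np.clip(image, eps, None))
    denoised_log = median_filter(log_img, size=size)   # placeholder denoiser
    return np.exp(denoised_log)

noisy = np.clip(np.random.default_rng(0).lognormal(size=(64, 64)), 1e-6, None)
clean = log_domain_denoise(noisy)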


2019 ◽  
Vol 11 (24) ◽  
pp. 3028 ◽  
Author(s):  
Pei Xiang ◽  
Jiangluqi Song ◽  
Huan Li ◽  
Lin Gu ◽  
Huixin Zhou

Hyperspectral anomaly detection methods are often limited by the effects of redundant information and isolated noise. Here, a novel hyperspectral anomaly detection method based on harmonic analysis (HA) and low-rank decomposition is proposed. This paper introduces three main innovations. First, in order to extract low-order harmonic images, a single-pixel-related HA was introduced to reduce dimensionality and remove redundant information from the original hyperspectral image (HSI). Second, a novel background dictionary construction method based on guided filtering (GF) and a differential operation was proposed, which produces initial smoothed images that suppress isolated noise while simultaneously constructing a discriminative background dictionary. Third, the original HSI was replaced by the initial smoothed images for a low-rank decomposition via the background dictionary. This operation takes advantage of the low-rank attribute of the background and the sparse attribute of anomalies, and the anomaly targets are finally obtained from the sparse matrix produced by the low-rank decomposition. The experiments compared the detection performance of the proposed method and seven state-of-the-art methods on a synthetic HSI and two real-world HSIs. Besides qualitative assessment, we also plotted the receiver operating characteristic (ROC) curve of each method and report the respective area under the curve (AUC) for quantitative comparison. Compared with the alternative methods, the experimental results illustrate the superior performance of the proposed method in terms of visual characteristics, ROC curves, and AUC values.
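A brief sketch of how a sparse component can be turned into a per-pixel anomaly map follows; the decomposition itself (HA features, guided filtering, dictionary construction) is not shown, and S is a random placeholder rather than the matrix produced by the authors' method.

import numpy as np

def anomaly_map_from_sparse(S, height, width):
    # S: (num_pixels, num_features) sparse component, one row per pixel
    scores = np.linalg.norm(S, axis=1)        # l2 norm of each pixel's sparse row
    return scores.reshape(height, width)      # reshape back to the image grid

S = np.random.default_rng(0).laplace(scale=0.1, size=(64 * 64, 10))   # placeholder sparse matrix
amap = anomaly_map_from_sparse(S, 64, 64)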

