Spectral and Spatial Cloud Detection Onboard for Hyperspectral Remote Sensing Image

Author(s):  
Haoyang Li ◽  
Hong Zheng ◽  
Chuanzhao Han ◽  
Haibo Wang ◽  
Min Miao

It is strongly desirable to accurately detect clouds in hyperspectral images onboard, before compression. However, conventional onboard cloud detection methods are not appropriate for all situations; for example, shadowed clouds or dark snow-covered surfaces are not identified properly by the NDSI test. In this paper, we propose a new spectral–spatial classification strategy to enhance onboard cloud screening performance on hyperspectral images by integrating a threshold exponential spectral angle map (TESAM), an adaptive Markov random field (aMRF), and dynamic stochastic resonance (DSR). First, TESAM coarsely classifies cloud pixels based on spectral information. Then aMRF optimizes the labeling using spatial information, which improves the classification performance significantly. Some misclassification points still remain after aMRF processing because of the noisy data in the onboard environment, so DSR is used to eliminate them from the binary label image. Taking Level 0.5 data from Hyperion as the dataset, the average overall accuracy of the proposed algorithm is 96.28%. The method can provide a cloud mask for ongoing EO-1 images and for related satellites with the same spectral settings, without manual intervention. The experiments indicate that the proposed method performs better than classical onboard cloud detection methods and current state-of-the-art hyperspectral classification methods.
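As a rough illustration of the first stage of this pipeline, the sketch below computes a coarse cloud mask from a thresholded exponential spectral angle map. The exponential mapping exp(-alpha * angle), the reference cloud spectrum, and the threshold value are assumptions made for illustration; the paper's exact TESAM formulation, and the subsequent aMRF and DSR stages, are not reproduced here.

```python
import numpy as np

def spectral_angle(pixels, ref):
    """Spectral angle (radians) between each pixel spectrum and a reference spectrum.

    pixels: (N, B) array of N pixel spectra with B bands.
    ref:    (B,) reference cloud spectrum.
    """
    num = pixels @ ref
    den = np.linalg.norm(pixels, axis=1) * np.linalg.norm(ref) + 1e-12
    return np.arccos(np.clip(num / den, -1.0, 1.0))

def tesam_coarse_mask(cube, ref_cloud, alpha=1.0, threshold=0.8):
    """Coarse cloud mask from a thresholded exponential spectral angle map.

    cube: (H, W, B) radiance/reflectance cube.
    The exponential mapping exp(-alpha * angle) and the threshold are
    placeholders; the paper's exact TESAM formulation may differ.
    """
    h, w, b = cube.shape
    angles = spectral_angle(cube.reshape(-1, b), ref_cloud)
    similarity = np.exp(-alpha * angles)           # high similarity -> likely cloud
    return (similarity > threshold).reshape(h, w)  # binary coarse cloud mask
```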



2009 ◽  
Vol 48 (2) ◽  
pp. 301-316 ◽  
Author(s):  
M. Reuter ◽  
W. Thomas ◽  
P. Albert ◽  
M. Lockhoff ◽  
R. Weber ◽  
...  

Abstract The Satellite Application Facility on Climate Monitoring (CM-SAF) aims to retrieve satellite-derived geophysical parameters suitable for climate monitoring. CM-SAF started routine operations in early 2007 and provides a climatology of parameters describing the global energy and water cycle on a regional scale and partially on a global scale. Here, the authors focus on the performance of cloud detection methods applied to measurements of the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on the first Meteosat Second Generation geostationary spacecraft. The retrieved cloud mask is the basis for calculating the cloud fractional coverage (CFC) but is also mandatory for retrieving other geophysical parameters. Therefore, the quality of the cloud detection directly influences climate monitoring of many other parameters derived from spaceborne sensors. CM-SAF products and the results of an alternative cloud coverage retrieval provided by the Institut für Weltraumwissenschaften of the Freie Universität in Berlin, Germany (FUB), were validated against synoptic measurements. Furthermore, on the basis of case studies, an initial comparison was performed of CM-SAF results with results derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) and from the Cloud–Aerosol Lidar with Orthogonal Polarization (CALIOP). Results show that the CFC from CM-SAF and FUB agrees well with synoptic data and MODIS data over midlatitudes but is underestimated over the tropics and overestimated toward the edges of the visible Earth disk.
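For reference, cloud fractional coverage is simply the cloudy fraction of the usable pixels in a scene. A minimal sketch is given below; the valid_mask argument (for excluding space and bad data near the edge of the disk) is a hypothetical convenience, not part of the CM-SAF processing chain.

```python
import numpy as np

def cloud_fractional_coverage(cloud_mask, valid_mask=None):
    """Cloud fractional coverage (CFC) from a binary cloud mask.

    cloud_mask: boolean array, True where a pixel is flagged cloudy.
    valid_mask: optional boolean array marking usable pixels
                (e.g. excluding space and bad data).
    Returns the fraction of valid pixels that are cloudy.
    """
    cloud_mask = np.asarray(cloud_mask, dtype=bool)
    valid = np.ones_like(cloud_mask, bool) if valid_mask is None else np.asarray(valid_mask, bool)
    n_valid = valid.sum()
    return float(cloud_mask[valid].sum()) / n_valid if n_valid else float("nan")
```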


2021 ◽  
Vol 13 (12) ◽  
pp. 2268
Author(s):  
Hang Gong ◽  
Qiuxia Li ◽  
Chunlai Li ◽  
Haishan Dai ◽  
Zhiping He ◽  
...  

Hyperspectral images are widely used for classification because of their rich spectral information combined with spatial information. To handle the high dimensionality and high nonlinearity of hyperspectral images, deep learning methods based on convolutional neural networks (CNNs) are widely used in hyperspectral classification applications. However, most CNN structures are stacked vertically and use only a single size of convolutional kernel or pooling layer, so they cannot fully mine the multiscale information in hyperspectral images. When such networks meet the practical challenge of a limited labeled hyperspectral image dataset, i.e., the "small sample problem", classification accuracy and generalization ability are limited. In this paper, to tackle the small sample problem, we apply semantic segmentation to pixel-level hyperspectral classification, given the comparability of the two tasks. A lightweight, multiscale squeeze-and-excitation pyramid pooling network (MSPN) is proposed. It consists of a multiscale 3D CNN module, a squeeze-and-excitation module, and a pyramid pooling module with 2D CNN. Such a hybrid 2D-3D-CNN MSPN framework can learn and fuse deeper hierarchical spatial–spectral features with fewer training samples. The proposed MSPN was tested on three publicly available hyperspectral classification datasets: Indian Pines, Salinas, and Pavia University. Using 5%, 0.5%, and 0.5% of the training samples of the three datasets, the classification accuracies of the MSPN were 96.09%, 97%, and 96.56%, respectively. In addition, we also selected the latest dataset with higher spatial resolution, WHU-Hi-LongKou, as a more challenging case. Using only 0.1% of the training samples, we achieved a 97.31% classification accuracy, which is far superior to state-of-the-art hyperspectral classification methods.
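A minimal sketch of such a hybrid 2D-3D architecture is given below, assuming PyTorch: three parallel 3D convolutions with different spectral kernel sizes stand in for the multiscale 3D CNN module, a standard squeeze-and-excitation block reweights the fused channels, and adaptive average pooling at several bin sizes stands in for the pyramid pooling module. All layer widths, kernel sizes, and bin sizes are placeholders, not the published MSPN configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel reweighting for 3D feature maps."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):                       # x: (N, C, D, H, W)
        s = x.mean(dim=(2, 3, 4))               # squeeze over spectral and spatial dims
        s = torch.sigmoid(self.fc2(F.relu(self.fc1(s))))
        return x * s.view(x.size(0), -1, 1, 1, 1)

class MSPNSketch(nn.Module):
    """Illustrative hybrid 2D-3D multiscale + SE + pyramid-pooling classifier."""
    def __init__(self, bands, n_classes):
        super().__init__()
        # multiscale 3D branches with different spectral kernel sizes
        self.b1 = nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1))
        self.b2 = nn.Conv3d(1, 8, kernel_size=(5, 3, 3), padding=(2, 1, 1))
        self.b3 = nn.Conv3d(1, 8, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.se = SEBlock(24)
        # fold the spectral dimension into channels for the 2D stage
        self.conv2d = nn.Conv2d(24 * bands, 64, kernel_size=3, padding=1)
        self.pool_sizes = (1, 2, 4)             # spatial pyramid pooling bins
        self.fc = nn.Linear(64 * sum(s * s for s in self.pool_sizes), n_classes)

    def forward(self, x):                       # x: (N, 1, bands, H, W) patch
        x = torch.cat([F.relu(self.b1(x)), F.relu(self.b2(x)), F.relu(self.b3(x))], dim=1)
        x = self.se(x)
        n, c, d, h, w = x.shape
        x = F.relu(self.conv2d(x.reshape(n, c * d, h, w)))
        pooled = [F.adaptive_avg_pool2d(x, s).flatten(1) for s in self.pool_sizes]
        return self.fc(torch.cat(pooled, dim=1))

# usage sketch: MSPNSketch(bands=200, n_classes=16)(torch.randn(2, 1, 200, 9, 9))
```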


2021 ◽  
Vol 13 (22) ◽  
pp. 4533
Author(s):  
Kai Hu ◽  
Dongsheng Zhang ◽  
Min Xia

Cloud detection is a key step in the preprocessing of optical satellite remote sensing images. In the existing literature, cloud detection methods are roughly divided into threshold methods and deep-learning methods. Most traditional threshold methods are based on the spectral characteristics of clouds, so they easily lose spatial location information in high-reflectance areas, resulting in misclassification. Besides, due to a lack of generalization, a conventional deep-learning network also easily loses detail and spatial information when applied directly to cloud detection. To solve these problems, we propose a deep-learning model, Cloud Detection UNet (CDUNet), for cloud detection. The network is designed to refine the division boundary of the cloud layer and to capture its spatial position information. In the proposed model, we introduce a High-frequency Feature Extractor (HFE) and a Multiscale Convolution (MSC) to refine the cloud boundary and predict fragmented clouds. Moreover, to improve the accuracy of thin-cloud detection, a Spatial Prior Self-Attention (SPSA) mechanism is introduced to establish the cloud spatial position information. Additionally, a dual-attention mechanism is proposed to reduce the proportion of redundant information in the model and improve its overall performance. The experimental results show that the model can cope with complex cloud cover scenes and performs well on cloud datasets and the SPARCS dataset; its segmentation accuracy is better than that of existing methods, which is of great significance for cloud-detection-related work.
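The sketch below illustrates, under stated assumptions, two of the building blocks named above: a generic multiscale convolution block (parallel 3x3/5x5/7x7 convolutions fused by a 1x1 convolution) and a CBAM-style channel-then-spatial dual attention. Both are illustrative stand-ins, not the paper's exact MSC, HFE, SPSA, or dual-attention modules.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleConv(nn.Module):
    """Parallel convolutions with different receptive fields, fused by a 1x1 conv.
    A generic stand-in for a multiscale convolution (MSC) block."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.c3 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.c5 = nn.Conv2d(in_ch, out_ch, 5, padding=2)
        self.c7 = nn.Conv2d(in_ch, out_ch, 7, padding=3)
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, 1)

    def forward(self, x):
        y = torch.cat([F.relu(self.c3(x)), F.relu(self.c5(x)), F.relu(self.c7(x))], dim=1)
        return F.relu(self.fuse(y))

class DualAttention(nn.Module):
    """Channel attention followed by spatial attention (CBAM-style stand-in)."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(ch, ch // reduction), nn.ReLU(),
                                 nn.Linear(ch // reduction, ch))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                       # x: (N, C, H, W)
        n, c, h, w = x.shape
        ca = torch.sigmoid(self.mlp(x.mean(dim=(2, 3)))).view(n, c, 1, 1)
        x = x * ca                              # reweight channels
        sa = torch.sigmoid(self.spatial(torch.cat([x.mean(1, keepdim=True),
                                                   x.amax(1, keepdim=True)], dim=1)))
        return x * sa                           # reweight spatial positions
```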


Author(s):  
L. L. Jia ◽  
X. Q. Wang

Identification of clouds in optical images is often a necessary step toward their use. However, relatively few cloud detection methods are aimed at GF-1 imagery. To meet the requirement of accurate cloud detection in GF-1 WFV imagery, a new method based on the combination of a band operation and a spatial texture feature (BOTF) is proposed in this paper. First, the BOTF algorithm minimizes the interference between bright surfaces and cloud regions through the band operation, and then distinguishes cloud areas from non-cloud areas with the spatial texture feature. Finally, the cloud mask is acquired by a threshold segmentation method. The method was validated on GF-1 WFV scenes. The results indicate that BOTF performs well under normal conditions, and the average overall accuracy of BOTF cloud detection is better than 90%. The proposed method can meet the needs of routine work.
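Since the abstract does not spell out the band operation or the texture feature, the sketch below is only a plausible reading: a brightness/whiteness test on the four GF-1 WFV bands, followed by a local-variance texture test and a final threshold. All band combinations and thresholds are placeholders rather than the published BOTF formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def botf_cloud_mask(img, bright_thresh=0.3, white_thresh=0.1, texture_thresh=0.02, win=7):
    """Illustrative band-operation + texture cloud screening for a 4-band image.

    img: (H, W, 4) reflectance array (GF-1 WFV order assumed: blue, green, red, NIR).
    The brightness/whiteness band operation, the local-variance texture measure,
    and all thresholds are placeholders, not the paper's exact BOTF formulation.
    """
    brightness = img.mean(axis=2)                                  # clouds: bright in all bands
    whiteness = np.abs(img - brightness[..., None]).mean(axis=2)   # clouds: spectrally flat
    candidate = (brightness > bright_thresh) & (whiteness < white_thresh)

    # local variance as a simple spatial texture feature: cloud interiors tend to
    # be smoother than bright man-made surfaces at this window scale
    mean = uniform_filter(brightness, win)
    var = uniform_filter(brightness ** 2, win) - mean ** 2
    smooth = var < texture_thresh

    return candidate & smooth                                      # final binary cloud mask
```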


2021 ◽  
Vol 13 (2) ◽  
pp. 268
Author(s):  
Xiaochen Lv ◽  
Wenhong Wang ◽  
Hongfu Liu

Hyperspectral unmixing is an important technique for analyzing remote sensing images; it aims to obtain a collection of endmembers and their corresponding abundances. In recent years, non-negative matrix factorization (NMF) has received extensive attention due to its good adaptability to data with different degrees of mixing. The majority of existing NMF-based unmixing methods are developed by incorporating additional constraints into standard NMF based on the spectral and spatial information of hyperspectral images. However, they neglect to exploit the imbalanced nature of the pixels in the data, which may cause pixels mixed with imbalanced endmembers to be ignored; thus the imbalanced endmembers generally cannot be accurately estimated, owing to the statistical properties of NMF. To exploit the information of imbalanced samples in hyperspectral data during unmixing, this paper proposes a cluster-wise weighted NMF (CW-NMF) method for unmixing hyperspectral images with imbalanced data. Specifically, based on the result of clustering the hyperspectral image, we construct a weight matrix and introduce it into the standard NMF model. The weight matrix assigns an appropriate weight to the reconstruction error between each original pixel and its reconstruction during unmixing. In this way, the adverse effect of imbalanced samples on the statistical accuracy of NMF is expected to be reduced by assigning larger weights to pixels concerning imbalanced endmembers and smaller weights to pixels mixed by majority endmembers. Besides, we extend the proposed CW-NMF by introducing abundance sparsity constraints and graph-based regularization, respectively. Experimental results on both synthetic and real hyperspectral data are reported, and the effectiveness of the proposed methods is demonstrated by comparison with several state-of-the-art methods.
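A minimal sketch of this idea is given below, assuming NumPy and scikit-learn: pixels are clustered with k-means, each pixel receives a weight inversely proportional to its cluster size (an assumption; the paper's weighting rule may differ), and the weights enter multiplicative NMF updates for the weighted objective ||W * (X - AS)||_F^2. The sum-to-one abundance projection is a common heuristic, not necessarily part of CW-NMF.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_weights(X, n_clusters):
    """Per-pixel weights from k-means: pixels in small clusters get larger weight.
    X: (bands, pixels). The inverse-cluster-size weighting is an assumption."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X.T)
    counts = np.bincount(labels, minlength=n_clusters).astype(float)
    return X.shape[1] / (n_clusters * counts[labels])   # shape (pixels,)

def weighted_nmf(X, p, w, n_iter=300, eps=1e-9):
    """Weighted NMF  min ||W * (X - A S)||_F^2  via multiplicative updates.
    X: (bands, pixels), p: number of endmembers, w: per-pixel weights."""
    bands, n = X.shape
    rng = np.random.default_rng(0)
    A = rng.random((bands, p))        # endmember signatures
    S = rng.random((p, n))            # abundances
    W = np.tile(w, (bands, 1))        # broadcast column weights to a full matrix
    for _ in range(n_iter):
        WX, WAS = W * X, W * (A @ S)
        A *= (WX @ S.T) / (WAS @ S.T + eps)
        S *= (A.T @ WX) / (A.T @ WAS + eps)
        S /= S.sum(axis=0, keepdims=True) + eps   # sum-to-one heuristic for abundances
    return A, S

# usage sketch: A, S = weighted_nmf(X, p=4, w=cluster_weights(X, n_clusters=8))
```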


2021 ◽  
Vol 13 (8) ◽  
pp. 1602
Author(s):  
Qiaoqiao Sun ◽  
Xuefeng Liu ◽  
Salah Bourennane

Deep learning models have strong feature-learning abilities, and they have been successfully applied to hyperspectral images (HSIs). However, training most deep learning models requires labeled samples, and collecting labeled samples is labor-intensive for HSIs. In addition, usually only single-level features from a single layer are considered, which may result in the loss of some important information. Using multiple networks to obtain multi-level features is a solution, but at the cost of longer training time and higher computational complexity. To solve these problems, this paper proposes a novel unsupervised multi-level feature extraction framework based on a three-dimensional convolutional autoencoder (3D-CAE). The designed 3D-CAE is built entirely from 3D convolutional and 3D deconvolutional layers, which allows the spectral-spatial information of targets to be mined simultaneously. Besides, the 3D-CAE can be trained in an unsupervised way without labeled samples. Moreover, the multi-level features are obtained directly from the encoder layers at different scales and resolutions, which is more efficient than using multiple networks to obtain them. The effectiveness of the proposed multi-level features is verified on two hyperspectral data sets. The results demonstrate that the proposed method has great promise for unsupervised feature learning and can further improve hyperspectral classification compared with single-level features.
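A minimal PyTorch sketch of the idea is shown below: a fully convolutional 3D autoencoder trained on reconstruction alone, with multi-level features read out from the intermediate encoder layers. The two-level depth, channel widths, and kernel sizes are illustrative choices, not the paper's 3D-CAE configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CAE3D(nn.Module):
    """Minimal fully convolutional 3D autoencoder for spectral-spatial patches.

    Sketch of the 3D-CAE idea: unsupervised reconstruction training, with
    multi-level features taken from the intermediate encoder layers.
    Input spectral/spatial sizes are assumed divisible by 4.
    """
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Conv3d(1, 16, 3, stride=2, padding=1)    # level-1 features
        self.enc2 = nn.Conv3d(16, 32, 3, stride=2, padding=1)   # level-2 features
        self.dec2 = nn.ConvTranspose3d(32, 16, 3, stride=2, padding=1, output_padding=1)
        self.dec1 = nn.ConvTranspose3d(16, 1, 3, stride=2, padding=1, output_padding=1)

    def forward(self, x):                        # x: (N, 1, bands, H, W)
        f1 = F.relu(self.enc1(x))
        f2 = F.relu(self.enc2(f1))
        recon = self.dec1(F.relu(self.dec2(f2)))
        return recon, (f1, f2)                   # reconstruction + multi-level features

# unsupervised training sketch: minimize reconstruction error only
# model = CAE3D(); loss = F.mse_loss(model(batch)[0], batch)
```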


2021 ◽  
Vol 13 (4) ◽  
pp. 547
Author(s):  
Wenning Wang ◽  
Xuebin Liu ◽  
Xuanqin Mou

For both traditional classification methods and currently popular deep learning methods, classification with limited samples is very challenging, and the lack of samples is an important factor limiting classification performance. Our work includes two aspects. First, unsupervised data augmentation for all hyperspectral samples not only greatly improves classification accuracy through the newly added training samples, but also further improves the accuracy of the classifier by optimizing the augmented test samples. Second, an effective spectral structure extraction method is designed, and the extracted spectral structure features achieve better classification accuracy than the original spectral features.
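The abstract does not describe its augmentation scheme, so the sketch below is purely generic: each spectrum is jittered with a random gain and Gaussian noise to create extra unlabeled variants. It illustrates unsupervised spectral augmentation in general, not the authors' method.

```python
import numpy as np

def augment_spectra(X, n_aug=2, noise_std=0.01, scale_range=(0.95, 1.05), seed=0):
    """Generic unsupervised spectral augmentation (illustrative only).

    X: (N, B) array of N spectra with B bands.
    Returns an ((n_aug + 1) * N, B) array containing the originals plus
    n_aug randomly scaled and noise-perturbed variants of each spectrum.
    """
    rng = np.random.default_rng(seed)
    out = [X]
    for _ in range(n_aug):
        gain = rng.uniform(*scale_range, size=(X.shape[0], 1))   # per-spectrum gain jitter
        noise = rng.normal(0.0, noise_std, size=X.shape)         # additive band noise
        out.append(X * gain + noise)
    return np.concatenate(out, axis=0)
```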


2019 ◽  
Vol 1077 ◽  
pp. 116-128 ◽  
Author(s):  
Ana Herrero-Langreo ◽  
Nathalie Gorretta ◽  
Bruno Tisseyre ◽  
Aoife Gowen ◽  
Jun-Li Xu ◽  
...  
