EBARec-BS: Effective Band Attention Reconstruction Network for Hyperspectral Imagery Band Selection

2021 ◽  
Vol 13 (18) ◽  
pp. 3602
Author(s):  
Yufei Liu ◽  
Xiaorun Li ◽  
Ziqiang Hua ◽  
Liaoying Zhao

Hyperspectral band selection (BS) is an effective means to avoid the Hughes phenomenon and the heavy computational burden in hyperspectral image processing. However, most existing BS methods fail to fully consider the interaction between spectral bands and do not comprehensively account for both the representativeness and the redundancy of the selected band subset. To solve these problems, we propose an unsupervised effective band attention reconstruction framework for band selection (EBARec-BS) in this article. The framework utilizes the EBARec network to learn the representativeness of each band with respect to the original band set and measures the redundancy between bands by calculating the distance of each unselected band to the selected band subset. Subsequently, by designing an adaptive weight to balance the influence of the representativeness metric and the redundancy metric on the band evaluation, a final band scoring function is obtained to select a band subset that represents the original hyperspectral image well and has low redundancy. Experiments on three well-known hyperspectral data sets indicate that, compared with existing BS methods, the proposed EBARec-BS is robust to noisy bands and can effectively select band subsets that yield higher classification accuracy and contain less redundant information.
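
To make the band-scoring idea concrete, the sketch below greedily combines a representativeness term with a minimum-distance-to-selected redundancy term under a single balancing weight. It is a minimal NumPy illustration: the correlation-based representativeness proxy, the distance normalization, and the fixed alpha are placeholders, not the EBARec-BS formulation, which learns representativeness with an attention reconstruction network and adapts the weight automatically.

```python
import numpy as np

def select_bands(X, k, alpha=0.5):
    """Greedy band selection combining representativeness and redundancy.

    X     : (n_pixels, n_bands) hyperspectral matrix.
    k     : number of bands to select.
    alpha : weight balancing the two terms (placeholder; EBARec-BS derives an
            adaptive weight rather than using a fixed constant).
    """
    n_bands = X.shape[1]
    # Representativeness proxy: how strongly each band correlates with all others
    # (EBARec-BS instead learns this with an attention reconstruction network).
    corr = np.abs(np.corrcoef(X.T))
    representativeness = corr.mean(axis=1)

    selected = [int(np.argmax(representativeness))]
    while len(selected) < k:
        remaining = [b for b in range(n_bands) if b not in selected]
        # Redundancy: distance of each candidate band to the closest selected band.
        dists = np.array([
            min(np.linalg.norm(X[:, b] - X[:, s]) for s in selected)
            for b in remaining
        ])
        dists = dists / (dists.max() + 1e-12)   # put both terms on a comparable scale
        scores = alpha * representativeness[remaining] + (1 - alpha) * dists
        selected.append(remaining[int(np.argmax(scores))])
    return selected
```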

Symmetry ◽  
2020 ◽  
Vol 12 (2) ◽  
pp. 277 ◽  
Author(s):  
Laura Bilius ◽  
Stefan Pentiuc

Hyperspectral images are becoming a valuable tool widely used in agriculture, mineralogy, and other fields. The challenge is to successfully classify the materials found in the field that are relevant for different applications. Because of the large amount of data arising from the large number of spectral bands, classification programs require a long time to analyze and classify the data. The purpose of this work is to find a better method for reducing the classification time. We apply various algorithms to real hyperspectral data sets to find out which is more effective. This paper presents a comparison of unsupervised hyperspectral image classification methods, namely K-means, hierarchical clustering, and PARAFAC decomposition, the last of which enables model reduction and feature extraction. The results showed that the method best suited to big data is classification of the data after PARAFAC decomposition.
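
A minimal sketch of the PARAFAC-then-cluster pipeline is shown below, assuming the tensorly and scikit-learn APIs; the rank, the number of clusters, and the way per-pixel features are assembled from the spatial factors are illustrative choices, not the configuration used in the paper.

```python
import numpy as np
from tensorly.decomposition import parafac
from sklearn.cluster import KMeans

def parafac_then_kmeans(cube, rank=10, n_clusters=8):
    """Reduce a hyperspectral cube with PARAFAC, then cluster pixels with K-means.

    cube : (rows, cols, bands) hyperspectral data array.
    rank : number of PARAFAC components (illustrative choice).
    """
    cube = np.asarray(cube, dtype=float)
    # CP/PARAFAC decomposition of the 3-way cube.
    weights, factors = parafac(cube, rank=rank)
    row_f, col_f, band_f = factors          # shapes: (rows, r), (cols, r), (bands, r)

    # One plausible per-pixel feature: elementwise combination of the two spatial factors.
    rows, cols, _ = cube.shape
    pixel_features = np.einsum('ir,jr->ijr', row_f, col_f).reshape(rows * cols, rank)

    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pixel_features)
    return labels.reshape(rows, cols)
```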


2019 ◽  
Vol 11 (7) ◽  
pp. 884 ◽  
Author(s):  
Li Wang ◽  
Jiangtao Peng ◽  
Weiwei Sun

Jointly using spectral and spatial information has become a mainstream strategy in the field of hyperspectral image (HSI) processing, especially for classification. However, due to the existence of noisy or correlated spectral bands in the spectral domain and inhomogeneous pixels in the spatial neighborhood, HSI classification results are often degraded and unsatisfactory. Motivated by the attention mechanism, this paper proposes a spatial–spectral squeeze-and-excitation (SSSE) module to adaptively learn the weights for different spectral bands and for different neighboring pixels. The SSSE structure can suppress or emphasize features at particular positions, which effectively resists noise interference and improves the classification results. Furthermore, we embed several SSSE modules into a residual network architecture and generate an SSSE-based residual network (SSSERN) model for HSI classification. The proposed SSSERN method is compared with several existing deep learning networks on two benchmark hyperspectral data sets. Experimental results demonstrate the effectiveness of our proposed network.
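
As a point of reference, the following PyTorch sketch implements the standard channel (spectral) squeeze-and-excitation block that this kind of module builds on; the paper's SSSE module additionally contains a spatial branch that re-weights neighboring pixels, which is omitted here.

```python
import torch
import torch.nn as nn

class SpectralSE(nn.Module):
    """Channel (spectral) squeeze-and-excitation: re-weights each band/feature map.

    A minimal sketch of the standard SE design; the SSSE module described in the
    paper adds a spatial branch that re-weights neighboring pixels as well.
    """
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, channels, height, width)
        squeezed = x.mean(dim=(2, 3))          # global average pool -> (batch, channels)
        weights = self.fc(squeezed)            # per-channel weights in (0, 1)
        return x * weights.unsqueeze(-1).unsqueeze(-1)
```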


Author(s):  
Shrutika Sawant ◽  
Manoharan Prabukumar

Hyperspectral images usually contain hundreds of contiguous spectral bands, which can precisely discriminate spectrally similar classes. However, such high-dimensional data also contain highly correlated and irrelevant information, leading to the curse of dimensionality (also called the Hughes phenomenon). It is necessary to reduce the number of bands before further analysis, such as land cover classification and target detection. Band selection is an effective way to reduce the size of hyperspectral data and to overcome the curse-of-dimensionality problem in ground object classification. Focusing on the classification task, this article provides an extensive and comprehensive survey of band selection techniques, describing the categorisation of methods, the methodologies used, the different searching approaches, and various technical difficulties, as well as their performance. Our purpose is to highlight the progress attained in band selection techniques for hyperspectral image classification and to identify possible avenues for future work, in order to achieve better performance in real-time operation.


2021 ◽  
Vol 87 (6) ◽  
pp. 445-455
Author(s):  
Yi Ma ◽  
Zezhong Zheng ◽  
Yutang Ma ◽  
Mingcang Zhu ◽  
Ran Huang ◽  
...  

Many manifold learning algorithms conduct an eigenvector analysis on a data-similarity matrix of size N × N, where N is the number of data points, so the memory complexity of the analysis is no less than O(N²). We present in this article an incremental manifold learning approach to handle large hyperspectral data sets for land use identification. In our method, the number of dimensions for the high-dimensional hyperspectral-image data set is obtained with the training data set. A local curvature variation algorithm is utilized to sample a subset of data points as landmarks. Then a manifold skeleton is identified based on the landmarks. Our method is validated on three AVIRIS hyperspectral data sets, outperforming the comparison algorithms with a k-nearest-neighbor classifier and achieving the second-best performance with a support vector machine.
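
The memory argument can be illustrated with a landmark-based sketch: the manifold is fitted on a landmark subset only, so the similarity matrix is of size n_landmarks × n_landmarks rather than N × N, and the remaining pixels are projected onto the skeleton afterwards. The sketch below uses scikit-learn's Isomap as a stand-in manifold learner and random sampling as a stand-in for the paper's local curvature-variation landmark selection.

```python
import numpy as np
from sklearn.manifold import Isomap

def landmark_embedding(X, n_landmarks=1000, n_components=10, n_neighbors=12):
    """Embed a large pixel set by fitting a manifold on a landmark subset only.

    X : (n_pixels, n_bands) spectra. The paper samples landmarks with a local
    curvature-variation criterion; random sampling stands in for it here.
    """
    rng = np.random.default_rng(0)
    idx = rng.choice(len(X), size=min(n_landmarks, len(X)), replace=False)

    # Fit the manifold "skeleton" on landmarks only: memory stays on the order of
    # n_landmarks^2 instead of N^2 for the full similarity matrix.
    iso = Isomap(n_neighbors=n_neighbors, n_components=n_components)
    iso.fit(X[idx])

    # Out-of-sample extension: project all pixels onto the learned skeleton.
    return iso.transform(X)
```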


2019 ◽  
Vol 11 (9) ◽  
pp. 1114
Author(s):  
Sixiu Hu ◽  
Jiangtao Peng ◽  
Yingxiong Fu ◽  
Luoqing Li

By means of joint sparse representation (JSR) and kernel representation, kernel joint sparse representation (KJSR) models can effectively model the intrinsic nonlinear relations of hyperspectral data and better exploit the spatial neighborhood structure to improve the classification performance of hyperspectral images. However, due to the presence of noisy or inhomogeneous pixels around the central testing pixel in the spatial domain, the performance of KJSR is greatly affected. Motivated by the idea of self-paced learning (SPL), this paper proposes a self-paced KJSR (SPKJSR) model to adaptively learn weights and sparse coefficient vectors for different neighboring pixels in the kernel-based feature space. The SPL strategy learns a weight for each pixel that indicates its difficulty within the spatial neighborhood. By assigning small weights to unimportant or complex pixels, the negative effect of inhomogeneous or noisy neighboring pixels can be suppressed, making SPKJSR much more robust. Experimental results on the Indian Pines and Salinas hyperspectral data sets demonstrate that SPKJSR is much more effective than traditional JSR and KJSR models.
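
The core self-paced rule can be stated in a few lines: a neighboring pixel receives full weight only if its representation error falls below an "age" threshold that grows over iterations, so hard (noisy or inhomogeneous) pixels are admitted late or not at all. The NumPy sketch below shows the classical hard and soft SPL weighting rules; the paper's kernel-space formulation of the residuals is not reproduced here.

```python
import numpy as np

def self_paced_weights(residuals, age):
    """Hard self-paced weights for neighboring pixels.

    residuals : per-pixel representation errors within the spatial neighborhood.
    age       : SPL "age" threshold, gradually increased so that harder
                (noisier, inhomogeneous) pixels are only admitted later.
    Returns 1.0 for "easy" pixels and 0.0 for hard ones, suppressing their
    influence on the joint sparse representation.
    """
    residuals = np.asarray(residuals, dtype=float)
    return (residuals < age).astype(float)

def soft_self_paced_weights(residuals, age):
    """Soft variant: down-weights rather than discards hard pixels."""
    residuals = np.asarray(residuals, dtype=float)
    return np.clip(1.0 - residuals / age, 0.0, 1.0)
```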


2020 ◽  
Vol 12 (2) ◽  
pp. 297 ◽  
Author(s):  
Nasehe Jamshidpour ◽  
Abdolreza Safari ◽  
Saeid Homayouni

This paper introduces a novel multi-view multi-learner (MVML) active learning (AL) method, in which the different views are generated by a genetic algorithm (GA). The GA-based view generation method attempts to construct diverse, sufficient, and independent views by considering both inter- and intra-view confidences. Hyperspectral data are inherently high dimensional, which makes them suitable for multi-view learning algorithms. Furthermore, by employing multiple learners at each view, a more accurate estimation of the underlying data distribution can be obtained. We also implemented a spectral-spatial graph-based semi-supervised learning (SSL) method as the classifier, which improved the performance of the classification task in comparison with supervised learning. The evaluation of the proposed method was based on three different benchmark hyperspectral data sets, and the results were compared with other state-of-the-art AL-SSL methods. The experimental results demonstrated the efficiency and statistically significant superiority of the proposed method: the GA-MVML AL method improved the classification performance by 16.68%, 18.37%, and 15.1% on the different data sets after 40 iterations.


Author(s):  
R. Kiran Kumar ◽  
B. Saichandana ◽  
K. Srinivas

This paper presents genetic-algorithm-based band selection and classification on a hyperspectral image data set. Hyperspectral remote sensors collect image data for a large number of narrow, adjacent spectral bands. Every pixel in a hyperspectral image carries a continuous spectrum that is used to classify objects with great detail and precision. In this paper, filtering based on 2-D empirical mode decomposition is first used to remove noisy components from each band of the hyperspectral data. After filtering, band selection is performed using a genetic algorithm in order to remove bands that convey little information. This dimensionality reduction lowers the storage space, computational load, communication bandwidth, and other requirements imposed on the unsupervised classification algorithms. Next, image fusion is performed on the selected hyperspectral bands to merge the maximum possible features from the selected images into a single image. This fused image is classified using a genetic algorithm. Three different indices, including the K-means Index (KMI) and the Jm measure, are used as objective functions. This method increases the classification accuracy and performance on hyperspectral images compared with classification without dimensionality reduction.
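
A bare-bones version of the genetic-algorithm band-selection step is sketched below in NumPy. The fitness used here (mean band variance minus mean inter-band correlation, assuming the data are scaled to a comparable range) is only a stand-in for the KMI and Jm objectives of the paper, and the crossover and mutation settings are illustrative.

```python
import numpy as np

def ga_band_selection(X, n_select=30, pop_size=40, generations=50, rng=None):
    """Evolve a binary mask over bands; X is (n_pixels, n_bands)."""
    rng = rng or np.random.default_rng(0)
    n_bands = X.shape[1]
    corr = np.abs(np.corrcoef(X.T))

    def fitness(mask):
        idx = np.flatnonzero(mask)
        if len(idx) == 0:
            return -np.inf
        # Stand-in objective: prefer informative (high-variance) and mutually
        # decorrelated bands; the paper uses KMI / Jm measures instead.
        info = X[:, idx].var(axis=0).mean()
        redundancy = corr[np.ix_(idx, idx)].mean()
        return info - redundancy

    # Initial population of random masks with roughly n_select bands each.
    pop = rng.random((pop_size, n_bands)) < n_select / n_bands
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]        # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_bands)                        # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_bands) < 0.01                     # mutation
            children.append(np.where(flip, ~child, child))
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(ind) for ind in pop])]
    return np.flatnonzero(best)
```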


2019 ◽  
Vol 11 (3) ◽  
pp. 350 ◽  
Author(s):  
Qiang Li ◽  
Qi Wang ◽  
Xuelong Li

A hyperspectral image (HSI) has many bands, which leads to high correlation between adjacent bands, so it is necessary to find representative subsets before further analysis. To address this issue, band selection is considered an effective approach that removes redundant bands from the HSI. Recently, many band selection methods have been proposed, but most of them achieve very poor accuracy when only a small number of bands is selected and require multiple iterations, which runs counter to the purpose of band selection. Therefore, we propose an efficient clustering method based on shared nearest neighbors (SNNC) for hyperspectral optimal band selection, with the following contributions: (1) the local density of each band is obtained from shared nearest neighbors, which more accurately reflects the local distribution characteristics; (2) in order to acquire a band subset containing a large amount of information, the information entropy is taken as one of the weight factors; (3) a method for automatically selecting the optimal band subset is designed based on the slope change. The experimental results reveal that, compared with other methods, the proposed method has competitive computational time, and the selected bands achieve higher overall classification accuracy on different data sets, especially when the number of bands is small.
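
Contribution (1) can be illustrated directly: treat each band as a point, find its k nearest bands, and define its local density through the neighbors it shares with them. The sketch below, using scikit-learn's pairwise_distances, shows one plausible shared-nearest-neighbor density; the exact definition used in SNNC may differ.

```python
import numpy as np
from sklearn.metrics import pairwise_distances

def snn_local_density(X, k=10):
    """Local density of each band from shared nearest neighbors.

    X : (n_pixels, n_bands). Each band is treated as a point; its density is
    the summed number of neighbors it shares with each of its own k nearest
    bands. This density definition is a sketch, not necessarily the SNNC one.
    """
    B = X.T                                          # (n_bands, n_pixels)
    dists = pairwise_distances(B)                    # band-to-band distances
    np.fill_diagonal(dists, np.inf)
    knn = np.argsort(dists, axis=1)[:, :k]           # k nearest bands of each band

    neighbor_sets = [set(row) for row in knn]
    density = np.zeros(len(B))
    for i in range(len(B)):
        # Shared-nearest-neighbor similarity with each of band i's neighbors.
        density[i] = sum(len(neighbor_sets[i] & neighbor_sets[j]) for j in knn[i])
    return density
```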


2021 ◽  
Vol 13 (17) ◽  
pp. 3411
Author(s):  
Lanxue Dang ◽  
Peidong Pang ◽  
Xianyu Zuo ◽  
Yang Liu ◽  
Jay Lee

Convolutional neural networks (CNNs) have shown excellent performance in hyperspectral image (HSI) classification. However, the structure of CNN models is complex, requiring many training parameters and floating-point operations (FLOPs), which is often inefficient and results in longer training and testing time. In addition, the labeled samples of hyperspectral data are limited, and a deep network often causes over-fitting. Hence, a dual-path small convolution (DPSC) module is proposed. It is composed of two 1 × 1 small convolutions with a residual path and a density path, and it can effectively extract abstract features from HSI. A dual-path small convolution network (DPSCN) is constructed by stacking DPSC modules. Specifically, the proposed model uses DPSC modules to complete the extraction of spectral and spectral–spatial features successively, and then uses a global average pooling layer at the end of the model, in place of the conventional fully connected layer, to complete the final classification. In the implemented study, all convolutional layers of the proposed network, except the middle layer, use 1 × 1 small convolutions, which effectively reduces the number of model parameters and speeds up feature extraction. DPSCN was compared with several current state-of-the-art models. The results on three benchmark HSI data sets demonstrate that the proposed model has lower complexity, stronger generalization ability, and higher classification efficiency.
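
A rough reading of the DPSC description is sketched below in PyTorch: one 1 × 1 convolution feeds a residual (additive) path and another feeds a density (DenseNet-style, concatenating) path. The channel sizes and the way the two paths are fused are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class DPSC(nn.Module):
    """Dual-path small-convolution block: a sketch based on the abstract only.

    One 1x1 convolution feeds a residual (additive) path, another feeds a
    density (concatenating) path; how the two paths are fused in the actual
    DPSCN is an assumption here.
    """
    def __init__(self, in_channels, growth=32):
        super().__init__()
        self.res_conv = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, kernel_size=1),
            nn.BatchNorm2d(in_channels), nn.ReLU(inplace=True))
        self.dense_conv = nn.Sequential(
            nn.Conv2d(in_channels, growth, kernel_size=1),
            nn.BatchNorm2d(growth), nn.ReLU(inplace=True))

    def forward(self, x):
        res = x + self.res_conv(x)                   # residual path
        dense = self.dense_conv(x)                   # density (dense) path
        return torch.cat([res, dense], dim=1)        # output: in_channels + growth
```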


2019 ◽  
Vol 11 (24) ◽  
pp. 2897 ◽  
Author(s):  
Yuhui Zheng ◽  
Feiyang Wu ◽  
Hiuk Jae Shim ◽  
Le Sun

Hyperspectral unmixing is a key preprocessing technique for hyperspectral image analysis. To further improve the unmixing performance, in this paper, a nonlocal low-rank prior is integrated with spatial smoothness and spectral collaborative sparsity for unmixing the hyperspectral data. The proposed method is based on the fact that hyperspectral images exhibit self-similarity in the nonlocal sense and smoothness in the local sense. To exploit the spatial self-similarity, nonlocal cubic patches are grouped together to compose a low-rank matrix. Then, within the linear mixing model framework, a nuclear-norm constraint is imposed on the abundance matrix of these similar patches to enforce the low-rank property. In addition, the local spatial information and spectral characteristics are also taken into account by introducing total variation (TV) regularization and collaborative sparsity terms, respectively. Finally, the results of experiments on two simulated data sets and two real data sets show that the proposed algorithm produces better performance than other state-of-the-art algorithms.
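
The low-rank step typically comes down to singular value thresholding, the proximal operator of the nuclear norm, applied to the abundance matrix of each group of similar patches. The NumPy sketch below shows that building block only; it is not the full unmixing algorithm with the TV and collaborative sparsity terms.

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm.

    Applied to the abundance matrix of a group of similar nonlocal patches,
    it shrinks small singular values to zero and thereby enforces the
    low-rank prior used in the unmixing model.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_thresh = np.maximum(s - tau, 0.0)
    return (U * s_thresh) @ Vt
```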

