Hyperspectral Image Classification Using Spatial and Edge Features Based on Deep Learning

Author(s):  
Dexiang Zhang ◽  
Jingzhong Kang ◽  
Lina Xun ◽  
Yu Huang

In recent years, deep learning has been widely used for the classification of hyperspectral images, and good results have been achieved. However, classification experiments that rely only on the spatial features of hyperspectral images can easily overlook the edge information in the image. In order to make full use of the advantages of convolutional neural networks (CNNs), we extract spatial information with the minimum noise fraction (MNF) method and edge information with a bilateral filter. Combining the two kinds of information not only increases the useful information but also effectively removes part of the noise. A convolutional neural network is then used to extract features from this fused information and classify the hyperspectral image. In addition, the paper uses a second edge-filtering method to refine the final classification results for better accuracy. The proposed method was tested on three publicly available data sets: the University of Pavia, the Salinas, and the Indian Pines. The competitive results indicate that our approach can classify different ground targets with very high accuracy.
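As a rough illustration of the fusion step described above (not the authors' exact pipeline), the sketch below uses PCA as a stand-in for MNF and OpenCV's bilateral filter for edge-preserving smoothing; the function name and parameter values are illustrative assumptions.

```python
import numpy as np
import cv2
from sklearn.decomposition import PCA

def fuse_spatial_and_edge(cube, n_components=10):
    """cube: (H, W, B) hyperspectral array; returns an (H, W, 2*n_components) fused cube."""
    H, W, B = cube.shape
    # PCA stand-in for the MNF transform: keep the leading spatial components.
    comps = PCA(n_components=n_components).fit_transform(
        cube.reshape(-1, B).astype(np.float32)).reshape(H, W, n_components)
    filtered = np.empty_like(comps)
    for i in range(n_components):
        band = comps[..., i]
        band = (band - band.min()) / (band.max() - band.min() + 1e-8)  # scale to [0, 1]
        # Bilateral filter: smooths noise while preserving edges (d, sigmaColor, sigmaSpace).
        filtered[..., i] = cv2.bilateralFilter(band.astype(np.float32), 5, 0.1, 5)
    # Concatenate the spatial components with their edge-preserving filtered versions.
    return np.concatenate([comps, filtered], axis=-1)
```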

TecnoLógicas ◽  
2019 ◽  
Vol 22 (46) ◽  
pp. 1-14 ◽  
Author(s):  
Jorge Luis Bacca ◽  
Henry Arguello

Spectral image clustering is an unsupervised classification method that identifies distributions of pixels using spectral information, without requiring a previous training stage. Sparse subspace clustering (SSC) methods assume that hyperspectral images lie in the union of multiple low-dimensional subspaces. Using this assumption, SSC groups spectral signatures in different subspaces by expressing each spectral signature as a sparse linear combination of all pixels, ensuring that the non-zero elements belong to the same class. Although these methods have shown good accuracy for unsupervised classification of hyperspectral images, their computational complexity becomes intractable as the number of pixels increases, i.e., when the spatial dimension of the image is large. For this reason, this paper proposes to reduce the number of pixels to be classified in the hyperspectral image and then to obtain the clustering results for the removed pixels by exploiting the spatial information. Specifically, this work proposes two methodologies for removing the pixels: the first is based on a spatial blue-noise distribution, which reduces the probability of removing clusters of neighboring pixels, and the second is a sub-sampling procedure that eliminates one of every two contiguous pixels, preserving the spatial structure of the scene. The performance of the proposed spectral image clustering framework is evaluated on three datasets, showing that a similar accuracy is obtained when up to 50% of the pixels are removed; in addition, it is up to 7.9 times faster than clustering the complete data sets.
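The sketch below illustrates the sub-sampling idea under stated assumptions: a checkerboard mask keeps roughly half of the pixels, a generic clustering routine stands in for the SSC solver, and removed pixels inherit the majority label of their spatial neighbors. All names and the use of sklearn's SpectralClustering are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.cluster import SpectralClustering  # stand-in for an SSC solver

def checkerboard_mask(H, W):
    yy, xx = np.mgrid[0:H, 0:W]
    return (yy + xx) % 2 == 0  # keeps roughly 50% of the pixels

def cluster_with_subsampling(cube, n_clusters):
    H, W, B = cube.shape
    keep = checkerboard_mask(H, W)
    # Cluster only the kept pixels (the expensive step).
    labels_kept = SpectralClustering(
        n_clusters=n_clusters, affinity="nearest_neighbors").fit_predict(cube[keep])
    labels = np.full((H, W), -1, dtype=int)
    labels[keep] = labels_kept
    # Removed pixels inherit the majority label of their 4-connected neighbors.
    for y, x in np.argwhere(~keep):
        neigh = [labels[ny, nx] for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                 if 0 <= ny < H and 0 <= nx < W and labels[ny, nx] >= 0]
        labels[y, x] = np.bincount(neigh).argmax() if neigh else 0
    return labels
```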


The rapid dissemination of the Android operating system in the smartphone market has resulted in an exponential growth of threats to mobile applications. Various studies have been carried out in academia and industry on the identification and classification of malicious applications using machine learning and deep learning algorithms. The convolutional neural network is a deep learning technique that has gained popularity in speech and image recognition. Conventional solutions for identifying Android malware rely on learning from pre-extracted features to maintain high detection performance. In order to reduce the effort and domain expertise involved in hand-crafted feature engineering, we generate grayscale images from the AndroidManifest.xml and classes.dex files extracted from the Android package and apply a convolutional neural network to classify the images. The experiments are conducted on a recent dataset of 1747 malicious Android applications. The results indicate that the classes.dex file gives better results than AndroidManifest.xml and also demonstrate that the model performs better as the images become larger.
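A hedged sketch of the byte-to-image conversion described above: read classes.dex (or AndroidManifest.xml) from the APK archive, reshape its raw bytes into a square grayscale image, and resize it to a fixed size for the CNN. The helper name and target size are illustrative.

```python
import math
import zipfile
import numpy as np
from PIL import Image

def apk_member_to_image(apk_path, member="classes.dex", size=(256, 256)):
    """Read one file from an APK and render its raw bytes as a grayscale image."""
    with zipfile.ZipFile(apk_path) as apk:
        data = np.frombuffer(apk.read(member), dtype=np.uint8)
    side = math.ceil(math.sqrt(len(data)))          # smallest square that fits the bytes
    padded = np.zeros(side * side, dtype=np.uint8)  # zero-pad the tail
    padded[:len(data)] = data
    return Image.fromarray(padded.reshape(side, side), mode="L").resize(size)
```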


Author(s):  
D.A Janeera ◽  
P. Amudhavalli ◽  
P Sherubha ◽  
S.P Sasirekha ◽  
P. Anantha Christu Raj ◽  
...  

2020 ◽  
Vol 12 (6) ◽  
pp. 923 ◽  
Author(s):  
Kuiliang Gao ◽  
Bing Liu ◽  
Xuchu Yu ◽  
Jinchun Qin ◽  
Pengqiang Zhang ◽  
...  

Deep learning has achieved great success in hyperspectral image classification. However, when processing new hyperspectral images, existing deep learning models must be retrained from scratch with sufficient samples, which is inefficient and undesirable in practical tasks. This paper explores how to accurately classify new hyperspectral images with only a few labeled samples, i.e., few-shot classification of hyperspectral images. Specifically, we design a new deep classification model based on a relation network and train it with the idea of meta-learning. Firstly, the feature learning module and the relation learning module of the model make full use of the spatial–spectral information in hyperspectral images and carry out relation learning by comparing the similarity between samples. Secondly, the task-based learning strategy enables the model to continuously enhance its ability to learn how to learn from a large number of tasks randomly generated from different data sets. Benefitting from these two points, the proposed method has excellent generalization ability and can obtain satisfactory classification results with only a few labeled samples. In order to verify the performance of the proposed method, experiments were carried out on three public data sets. The results indicate that the proposed method achieves better classification results than the traditional semisupervised support vector machine and semisupervised deep learning models.
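A minimal sketch in the spirit of the feature learning and relation learning modules, under assumed layer sizes (not the authors' architecture): an embedding network maps spatial–spectral patches to features, class prototypes are averaged from the few labeled support samples, and a relation module scores query–prototype pairs.

```python
import torch
import torch.nn as nn

class Embedding(nn.Module):
    """Maps (N, bands, h, w) spatial-spectral patches to (N, dim) features."""
    def __init__(self, in_ch, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
    def forward(self, x):
        return self.net(x)

class RelationModule(nn.Module):
    """Scores the similarity of a concatenated (query, prototype) feature pair."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(),
            nn.Linear(dim, 1), nn.Sigmoid())
    def forward(self, pairs):
        return self.net(pairs)

def relation_scores(embed, relate, query, support, support_labels, n_classes):
    q, s = embed(query), embed(support)
    # Class prototypes: mean embedding of the support samples of each class.
    protos = torch.stack([s[support_labels == c].mean(0) for c in range(n_classes)])
    # Pair every query with every prototype and let the relation module score them.
    pairs = torch.cat([q.unsqueeze(1).expand(-1, n_classes, -1),
                       protos.unsqueeze(0).expand(q.size(0), -1, -1)], dim=-1)
    return relate(pairs).squeeze(-1)  # (n_queries, n_classes) relation scores
```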


2020 ◽  
Vol 38 (15_suppl) ◽  
pp. e16097-e16097
Author(s):  
Andrew J. Kruger ◽  
Lingdao Sha ◽  
Madhavi Kannan ◽  
Rohan P. Joshi ◽  
Benjamin D. Leibowitz ◽  
...  

e16097 Background: Using gene expression, consensus molecular subtypes (CMS) divide colorectal cancers (CRC) into four categories with prognostic and therapy-predictive clinical utility. These subtypes also manifest as different morphological phenotypes in whole-slide images (WSIs). Here, we implemented and trained a novel deep multiple instance learning (MIL) framework that requires only a single label per WSI to identify morphological biomarkers and accelerate CMS classification. Methods: Deep learning models can be trained by MIL frameworks to classify tissue in localized tiles from large (>1 GB) WSIs using only weakly supervised, slide-level classification labels. Here we demonstrate a novel framework that advances on instance-based MIL by using a multi-phase approach to training deep learning models. The framework allows us to train on WSIs that contain multiple CMS classes while also identifying previously undiscovered tissue features that have low or no correlation with any subtype. Identifying these uncorrelated features yields improved insight into the specific tissue features most associated with the four CMS classes and a more accurate classification of CMS status. Results: We trained and validated (n = 735 WSIs and 184 withheld WSIs, respectively) a ResNet34 convolutional neural network to classify 224x224-pixel tiles distributed across tumor, lymphocyte, and stroma tissue regions. The slide-level CMS classification probability was calculated by aggregating the tiles correlated with each of the four subtypes. The receiver operating characteristic curves had the following one-vs-all AUCs: CMS1 = 0.854, CMS2 = 0.921, CMS3 = 0.850, and CMS4 = 0.866, for an average AUC of 0.873. Initial tests of generalization to other data sets, such as TCGA, are promising and constitute one of the future directions of this work. Conclusions: The MIL framework robustly identified tissue features correlated with CMS groups, allowing for a more efficient classification of CRC samples. We also demonstrated that the morphological features indicative of different molecular subtypes can be identified from the deep neural network.
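As a simplified illustration of the tile-to-slide aggregation reported in the results (the averaging rule here is an assumption, not the authors' exact aggregation): per-tile subtype probabilities are pooled into a slide-level CMS probability.

```python
import numpy as np

def slide_level_cms(tile_probs):
    """tile_probs: (n_tiles, 4) per-tile softmax outputs over CMS1..CMS4."""
    slide = tile_probs.mean(axis=0)       # pool evidence across tiles
    return slide / slide.sum()            # slide-level CMS probabilities

# Three tiles dominated by CMS2 yield a CMS2 slide-level call.
probs = np.array([[0.10, 0.70, 0.10, 0.10],
                  [0.20, 0.60, 0.10, 0.10],
                  [0.10, 0.80, 0.05, 0.05]])
print(slide_level_cms(probs).argmax() + 1)  # prints 2
```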


2021 ◽  
Vol 13 (9) ◽  
pp. 1732
Author(s):  
Hadis Madani ◽  
Kenneth McIsaac

Pixel-wise classification of hyperspectral images (HSIs) from remote sensing data is a common approach for extracting information about scenes. In recent years, approaches based on deep learning techniques have gained wide applicability. An HSI dataset can be viewed either as a collection of images, each captured at a different wavelength, or as a collection of spectra, each associated with a specific point (pixel). Classification accuracy is enhanced if the spectral and spatial information are combined in the input vector, allowing simultaneous classification according to spectral type as well as geometric relationships. In this study, we propose a novel spatial feature vector which improves accuracy in pixel-wise classification. Our proposed feature vector is based on the distance transform of the pixels with respect to the dominant edges in the input HSI. In other words, we allow the location of pixels within geometric subdivisions of the dataset to modify the contribution of each pixel to the spatial feature vector. Moreover, we use extended multi-attribute profile (EMAP) features to add more geometric information to the proposed spatial feature vector. We performed experiments with three hyperspectral datasets: in addition to the Salinas and University of Pavia datasets, which are commonly used in HSI research, we include samples from our Surrey BC dataset. The proposed method compares favorably to traditional algorithms as well as to some recently published deep learning-based algorithms.
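The sketch below illustrates a distance-transform feature of the kind described above, under stated assumptions (the edge detector, thresholds, and band choice are illustrative, not the authors' exact procedure): dominant edges are detected on the first principal component and each pixel's distance to the nearest edge is appended as an extra band.

```python
import numpy as np
import cv2
from scipy.ndimage import distance_transform_edt
from sklearn.decomposition import PCA

def edge_distance_feature(cube):
    """Append each pixel's distance to the nearest dominant edge as a feature band."""
    H, W, B = cube.shape
    pc1 = PCA(n_components=1).fit_transform(cube.reshape(-1, B)).reshape(H, W)
    pc1 = cv2.normalize(pc1, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(pc1, 50, 150)               # dominant edges of the scene
    dist = distance_transform_edt(edges == 0)     # distance to the nearest edge pixel
    return np.dstack([cube, dist])                # (H, W, B + 1)
```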


2019 ◽  
Vol 11 (15) ◽  
pp. 1794 ◽  
Author(s):  
Wenju Wang ◽  
Shuguang Dou ◽  
Sen Wang

The connection structure in the convolutional layers of most deep learning-based algorithms used for the classification of hyperspectral images (HSIs) has typically been in the forward direction. In this study, an end-to-end alternately updated spectral–spatial convolutional network (AUSSC) with a recurrent feedback structure is used to learn refined spectral and spatial features for HSI classification. The proposed AUSSC includes alternately updated blocks in which each layer serves as both an input and an output for the other layers. The AUSSC can refine spectral and spatial features many times under fixed parameters. A center loss function is introduced as an auxiliary objective function to improve the discrimination of features acquired by the model. Additionally, the AUSSC utilizes smaller convolutional kernels than other convolutional neural network (CNN)-based methods to reduce the number of parameters and alleviate overfitting. The proposed method was evaluated on four HSI data sets: Indian Pines, Kennedy Space Center, Salinas Scene, and Houston. Experimental results demonstrated that the proposed AUSSC achieved higher HSI classification accuracy than state-of-the-art deep learning-based methods with a small number of training samples.
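A minimal sketch of a center loss of the kind mentioned above as an auxiliary objective (the feature size and weighting are assumptions): each class keeps a learnable center, and features are pulled toward the center of their class.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Pulls each feature vector toward a learnable center for its class."""
    def __init__(self, n_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_classes, feat_dim))
    def forward(self, features, labels):
        # Mean squared distance between features and their class centers.
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

# Typical use as an auxiliary term: loss = cross_entropy + lam * center_loss(feat, y)
```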


Author(s):  
A. Kianisarkaleh ◽  
H. Ghassemian ◽  
F. Razzazi

Feature extraction plays a key role in hyperspectral image classification. By using unlabeled samples, which are often available in practically unlimited quantities, unsupervised and semisupervised feature extraction methods show better performance when only a limited number of training samples exists. This paper illustrates the importance of selecting appropriate unlabeled samples for use in feature extraction methods, and it also proposes a new method for unlabeled sample selection using spectral and spatial information. The proposed method has four parts: PCA, prior classification, posterior classification, and sample selection. Once a hyperspectral image has passed through these parts, the selected unlabeled samples can be used in arbitrary feature extraction methods. The effectiveness of the proposed unlabeled sample selection in unsupervised and semisupervised feature extraction is demonstrated using two real hyperspectral datasets. Results show that, by selecting appropriate unlabeled samples, the proposed method can improve the performance of feature extraction methods and increase classification accuracy.
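An illustrative sketch of the four-part idea, with assumed classifiers and thresholds rather than the authors' choices: PCA reduction, a prior pixel-wise classification, a spatially smoothed posterior classification, and selection of the pixels on which the two agree. Class labels are assumed to be nonnegative integers.

```python
import numpy as np
from scipy.ndimage import generic_filter
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def select_unlabeled(cube, train_idx, train_labels, n_components=10):
    """Return indices of unlabeled pixels whose prior and posterior labels agree."""
    H, W, B = cube.shape
    X = PCA(n_components=n_components).fit_transform(cube.reshape(-1, B))
    clf = KNeighborsClassifier(n_neighbors=5).fit(X[train_idx], train_labels)
    prior = clf.predict(X).reshape(H, W)                        # spectral (prior) labels
    posterior = generic_filter(                                 # spatial majority vote
        prior, lambda w: np.bincount(w.astype(int)).argmax(), size=3)
    agree = (prior == posterior).reshape(-1)
    return np.flatnonzero(agree)                                # selected unlabeled samples
```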

