Deep Belief Network for Spectral–Spatial Classification of Hyperspectral Remote Sensor Data

Sensors, 2019, Vol 19 (1), pp. 204
Author(s): Chenming Li, Yongchang Wang, Xiaoke Zhang, Hongmin Gao, Yao Yang, et al.

With the development of high-resolution optical sensors, the classification of ground objects combined with multivariate optical sensors is a topic of current interest. Deep learning methods, such as convolutional neural networks, are applied to feature extraction and classification. In this work, a novel deep belief network (DBN) method for hyperspectral image classification, based on multivariate optical sensors and stacked restricted Boltzmann machines, is proposed. We introduce the DBN framework to classify spatial hyperspectral sensor data, and then verify the improved method, which combines spectral and spatial information. After unsupervised pretraining and supervised fine-tuning, the DBN model successfully learns features, and an added logistic regression layer classifies the hyperspectral images. The proposed training method, which fuses spectral and spatial information, was tested on the Indian Pines and Pavia University datasets. The advantages of this method over traditional approaches are as follows: (1) the network has a deep structure whose feature extraction ability is stronger than that of traditional classifiers; (2) experimental results indicate that our method outperforms traditional classification methods and other deep learning approaches.
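The general pattern described here, greedy unsupervised pretraining of stacked restricted Boltzmann machines followed by a logistic regression output layer, can be sketched with scikit-learn. This is a minimal illustration, not the authors' implementation; the synthetic data, layer sizes, and hyperparameters are assumptions, and unlike a full DBN only the logistic regression layer is trained supervised here (no backpropagation through the RBM stack).

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.random((200, 64))    # stand-in for 200 pixel spectra, 64 bands scaled to [0, 1]
y = rng.integers(0, 3, 200)  # stand-in ground-truth labels (3 classes)

# Greedy layer-wise unsupervised pretraining (RBM stack), then a logistic
# regression layer acting as the supervised classifier on the learned features.
dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, n_iter=5, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=16, n_iter=5, random_state=0)),
    ("logreg", LogisticRegression(max_iter=500)),
])
dbn.fit(X, y)
pred = dbn.predict(X)
```

A framework with backpropagation would additionally fine-tune the RBM weights in the supervised phase, as the abstract describes.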

2021, Vol 13 (9), pp. 1732
Author(s): Hadis Madani, Kenneth McIsaac

Pixel-wise classification of hyperspectral images (HSIs) from remote sensing data is a common approach for extracting information about scenes. In recent years, approaches based on deep learning techniques have gained wide applicability. An HSI dataset can be viewed either as a collection of images, each captured at a different wavelength, or as a collection of spectra, each associated with a specific point (pixel). Enhanced classification accuracy is enabled if the spectral and spatial information are combined in the input vector, allowing simultaneous classification according to spectral type as well as geometric relationships. In this study, we propose a novel spatial feature vector which improves accuracy in pixel-wise classification. Our proposed feature vector is based on the distance transform of the pixels with respect to the dominant edges in the input HSI. In other words, we allow the location of pixels within geometric subdivisions of the dataset to modify the contribution of each pixel to the spatial feature vector. Moreover, we use extended multi-attribute profile (EMAP) features to add more geometric features to the proposed spatial feature vector. We performed experiments with three hyperspectral datasets. In addition to the Salinas and University of Pavia datasets, which are commonly used in HSI research, we include samples from our Surrey BC dataset. The results of our proposed method compare favorably to traditional algorithms as well as to some recently published deep learning-based algorithms.
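The core of the distance-transform idea can be illustrated with SciPy. The edge map below is a toy stand-in for the dominant edges the authors extract from the HSI; the actual edge detection step is not reproduced here.

```python
import numpy as np
from scipy import ndimage

# Toy "dominant edge" map: a single vertical edge in an 8x8 scene (hypothetical).
edges = np.zeros((8, 8), dtype=bool)
edges[:, 4] = True

# Euclidean distance of every pixel to its nearest edge pixel. Pixels deep
# inside a geometric subdivision get large values and pixels near edges get
# small ones, which can modulate each pixel's contribution to the spatial
# feature vector.
dist = ndimage.distance_transform_edt(~edges)
```

Here `dist` is zero on the edge itself and grows with distance into each region, giving every pixel a position-aware spatial descriptor.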


Author(s):  
A. Kianisarkaleh ◽  
H. Ghassemian ◽  
F. Razzazi

Feature extraction plays a key role in hyperspectral image classification. Using unlabeled samples, which are often available in practically unlimited numbers, unsupervised and semisupervised feature extraction methods show better performance when only a limited number of training samples exists. This paper illustrates the importance of selecting appropriate unlabeled samples for use in feature extraction methods, and proposes a new method for unlabeled sample selection using spectral and spatial information. The proposed method has four parts: PCA, prior classification, posterior classification, and sample selection. As the hyperspectral image passes through these parts, the selected unlabeled samples can be used in arbitrary feature extraction methods. The effectiveness of the selected unlabeled samples in unsupervised and semisupervised feature extraction is demonstrated using two real hyperspectral datasets. Results show that by selecting appropriate unlabeled samples, the proposed method can improve the performance of feature extraction methods and increase classification accuracy.
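One plausible reading of this pipeline (PCA, then a prior classification whose confidence drives sample selection) can be sketched as follows. Everything here is an illustrative assumption: the synthetic data, the logistic regression classifier, and the 0.5 confidence threshold are not from the paper, and the posterior-classification step is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_lab = rng.random((60, 30))     # small labeled set: 60 pixels, 30 bands
y_lab = np.arange(60) % 3        # 3 classes (placeholder labels)
X_unlab = rng.random((500, 30))  # plentiful unlabeled pixels

# Part 1: PCA fitted on all available pixels.
pca = PCA(n_components=10).fit(np.vstack([X_lab, X_unlab]))

# Part 2: prior classification trained on the few labeled pixels.
clf = LogisticRegression(max_iter=300).fit(pca.transform(X_lab), y_lab)

# Part 4: keep only unlabeled pixels the classifier is confident about,
# so that later feature extraction sees "appropriate" unlabeled samples.
proba = clf.predict_proba(pca.transform(X_unlab))
selected = X_unlab[proba.max(axis=1) > 0.5]  # illustrative threshold
```

The selected pixels would then feed an unsupervised or semisupervised feature extraction method, as the abstract describes.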


2019, Vol 9 (7), pp. 1379
Author(s): Ke Li, Mingju Wang, Yixin Liu, Nan Yu, Wei Lan

The classification of hyperspectral data using deep learning methods can obtain better results than previous shallow classifiers, but deep learning algorithms have some limitations. These algorithms require a large amount of data to train the network, while also needing a certain amount of labeled data to fine-tune it. In this paper, we propose a new hyperspectral data processing method based on transfer learning and deep learning. First, we use a hyperspectral dataset similar to the target dataset to pre-train the deep learning network, and then use transfer learning to find the common features of the source-domain and target-domain data. Second, we propose a model structure that combines the deep transfer learning model with joint spatial and spectral information. Using transfer learning, we obtain the spectral features. We then obtain several principal components of the target data, which are regarded as the spatial features of the target-domain data, and feed the joint features to the classifier. The data are obtained from a public hyperspectral database. Using the same amount of data, our method based on transfer learning and a deep belief network obtains better classification accuracy in a shorter amount of time.
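The pre-train-then-fine-tune scheme can be sketched with scikit-learn's MLPClassifier as a stand-in for the authors' deep belief network: with `warm_start=True`, the second `fit` continues from the weights learned on the source domain instead of reinitializing. All data below are synthetic placeholders, and the layer sizes are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
Xs = rng.random((400, 50)); ys = np.arange(400) % 4  # large source-domain set
Xt = rng.random((40, 50));  yt = np.arange(40) % 4   # small target-domain set

net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=50,
                    warm_start=True, random_state=0)
net.fit(Xs, ys)   # pre-train on the similar source dataset
net.fit(Xt, yt)   # fine-tune the same weights on the scarce target data
pred = net.predict(Xt)
```

Because the network starts from source-domain weights, the fine-tuning step needs far fewer labeled target samples, which is the motivation given in the abstract.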


2018, Vol 58 (5), pp. 297
Author(s): Benbakreti Samir, Aoued Boukelif

In this paper, we present a neural approach for unconstrained Arabic manuscript recognition using the online writing signal rather than images. First, we built a database containing 2800 characters and 4800 words collected from 20 different handwritings. We then performed the pretreatment, feature extraction, and classification phases, respectively. The use of classical neural network methods was beneficial for character recognition but revealed some limitations in the recognition rate for Arabic words. To remedy this, we used deep learning through a Deep Belief Network (DBN), which resulted in a 97.08% recognition success rate for Arabic words.


Author(s):  
H. Teffahi ◽  
N. Teffahi

Abstract. The classification of hyperspectral images (HSI) with high spectral and spatial resolution is an important and challenging task in image processing and remote sensing (RS) due to the computational complexity and high dimensionality of remote sensing images. The spatial and spectral characteristics of pixels are of crucial significance for hyperspectral image classification, and to take both types of characteristics into account, various classification and feature extraction methods have been developed to improve the spectral-spatial classification of remote sensing images for thematic mapping purposes such as agricultural mapping, urban mapping, and emergency mapping in case of natural disasters. In recent years, mathematical morphology and deep learning (DL) have been recognized as prominent feature extraction techniques that lead to remarkable spectral-spatial classification performance. Among them, Extended Multi-Attribute Profiles (EMAP) and the Dense Convolutional Neural Network (DCNN) are considered robust and powerful approaches; the work in this paper builds on these two techniques for the feature extraction stage, using them in two combined manners to construct the EMAP-DCNN framework. The experiments were conducted on two popular hyperspectral datasets: Indian Pines and Houston. Experimental results demonstrate that the two proposed approaches of the EMAP-DCNN framework, denoted EMAP-DCNN 1 and EMAP-DCNN 2, provide competitive performance compared with some state-of-the-art spectral-spatial classification methods based on deep learning.
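EMAP itself applies a tree of attribute filters to each component image; a simplified morphological profile built from openings and closings at increasing scales, a common stand-in for illustrating the idea, can be sketched with SciPy. The input band and structuring-element sizes are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
band = rng.random((16, 16))  # one principal component of the HSI (synthetic)

# Openings and closings at increasing structuring-element sizes; stacking them
# with the original band gives each pixel a multi-scale spatial feature vector,
# the role EMAP features play in the feature extraction stage.
sizes = (3, 5, 7)
profile = [ndimage.grey_opening(band, size=s) for s in sizes]
profile += [ndimage.grey_closing(band, size=s) for s in sizes]
features = np.stack([band] + profile, axis=-1)
```

These per-pixel morphological features would then be fed to the DCNN in either of the two combined manners the paper proposes.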


2020, Vol 2020, pp. 1-13
Author(s): Xiaoai Dai, Junying Cheng, Yu Gao, Shouheng Guo, Xingping Yang, et al.

Reducing the dimension of hyperspectral image data directly reduces the redundancy of the data, thus improving the accuracy of hyperspectral image classification. In this paper, the deep belief network algorithm from deep learning theory is introduced to extract the in-depth features of the imaging spectral data. First, the original data is mapped to feature space by unsupervised learning through a Restricted Boltzmann Machine (RBM). Then, a deep belief network is formed by stacking multiple Restricted Boltzmann Machines and training the model parameters with a greedy layer-by-layer algorithm. In the process, the objective of dimensionality reduction is achieved and a deep feature representation of the original data is formed. The final step connects the output deep features to a Softmax regression classifier to complete the fine-tuning (FT) of the model and the final classification. Experiments using imaging spectral data show that the in-depth features extracted by the deep belief network algorithm have better robustness and separability. The method can significantly improve classification accuracy and has good application prospects in hyperspectral image information extraction.
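The greedy layer-by-layer scheme can be sketched with scikit-learn's BernoulliRBM: each RBM is trained unsupervised on the hidden activations of the layer below, reducing the dimension step by step. The data, layer widths, and iteration counts are illustrative assumptions, and the Softmax fine-tuning stage is omitted.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
X = rng.random((300, 100))  # 300 pixels, 100 spectral bands scaled to [0, 1]

# Greedy layer-by-layer training: each RBM fits the previous layer's output,
# reducing 100 -> 50 -> 20 dimensions.
H = X
for n in (50, 20):
    rbm = BernoulliRBM(n_components=n, n_iter=5, random_state=0)
    H = rbm.fit_transform(H)

# H holds the 20-dimensional deep features that would feed the Softmax
# regression classifier for fine-tuning and final classification.
```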


Author(s):  
Dexiang Zhang ◽  
Jingzhong Kang ◽  
Lina Xun ◽  
Yu Huang

In recent years, deep learning has been widely used in the classification of hyperspectral images and good results have been achieved. However, it is easy to ignore the edge information of the image when using the spatial features of hyperspectral images in classification experiments. In order to make full use of the advantages of the convolutional neural network (CNN), we extract the spatial information with the minimum noise fraction (MNF) method and the edge information with a bilateral filter. The combination of the two kinds of information not only increases the useful information but also effectively removes part of the noise. The convolutional neural network is then used to extract features and classify the hyperspectral images on the basis of this fused information. In addition, this paper uses another edge-filtering method to refine the final classification results for better accuracy. The proposed method was tested on three publicly available datasets: the University of Pavia, the Salinas, and the Indian Pines. The competitive results indicate that our approach can classify different ground targets with very high accuracy.
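The edge-preserving behaviour of the bilateral filter, used here to extract edge information, can be sketched in plain NumPy. This is a minimal, unoptimized version with illustrative parameters, not the authors' implementation.

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Minimal bilateral filter: each output pixel is a weighted average of
    its neighborhood, weighted by spatial closeness (sigma_s) AND intensity
    similarity (sigma_r), so averaging does not cross strong edges."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # fixed spatial kernel
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rangew = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rangew
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

On a flat region the filter behaves like a blur, while across a sharp step the range weight suppresses the far side, which is exactly why it preserves the edge information the abstract wants to keep.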


2021, Vol 12 (1), pp. 47
Author(s): Dexin Gao, Xihao Lin

Given the complex fault mechanism of direct current (DC) charging points for electric vehicles (EVs) and the limited effectiveness of traditional fault diagnosis methods, a new fault diagnosis method for DC charging points for EVs based on a deep belief network (DBN) is proposed, which combines the advantages of the DBN in feature extraction and in processing nonlinear data. This method uses actual measurement data from the charging points to realize unsupervised feature extraction and parameter fine-tuning of the network, and builds a deep network model to complete accurate fault diagnosis of the charging points. The effectiveness of this method is examined by comparison with a backpropagation neural network, a radial basis function neural network, a support vector machine, and a convolutional neural network in terms of accuracy and model convergence time. The experimental results show that the proposed method has higher fault diagnosis accuracy than the above methods.


2022, Vol 14 (2), pp. 355
Author(s): Zhen Cheng, Guanying Huo, Haisen Li

Due to the strong speckle noise caused by seabed reverberation, which makes it difficult to extract discriminating and noise-free features of a target, the recognition and classification of underwater targets using side-scan sonar (SSS) images is a major challenge. Moreover, unlike the classification of optical images, which can use a large dataset to train the classifier, the classification of SSS images usually has to rely on a very small training dataset, which may cause classifier overfitting. Compared with traditional feature extraction methods using descriptors such as Haar, SIFT, and LBP, deep learning-based methods are more powerful at capturing discriminating features. After training on a large optical dataset, e.g., ImageNet, direct fine-tuning brings improvement to sonar image classification using a small SSS image dataset. However, due to the different statistical characteristics of optical images and sonar images, transfer learning methods such as fine-tuning lack cross-domain adaptability and therefore cannot achieve very satisfactory results. In this paper, a multi-domain collaborative transfer learning (MDCTL) method with a multi-scale repeated attention mechanism (MSRAM) is proposed to improve the accuracy of underwater sonar image classification. In the MDCTL method, the low-level characteristic similarity between SSS images and synthetic aperture radar (SAR) images and the high-level representation similarity between SSS images and optical images are used together to enhance the feature extraction ability of the deep learning model. By using the different characteristics of multi-domain data to efficiently capture useful features for sonar image classification, MDCTL offers a new way of applying transfer learning. MSRAM is used to effectively combine multi-scale features so that the proposed model pays more attention to the shape details of the target while excluding noise. Experimental classification results show that, using multi-domain datasets, the proposed method is more stable, with an overall accuracy of 99.21%, an improvement of 4.54% over the fine-tuned VGG19. Results from diverse visualization methods also demonstrate that the method is more powerful in feature representation thanks to the MDCTL and MSRAM.

