Hyperspectral Data Feature Extraction Using Deep Learning Hybrid Model

2018 ◽  
Vol 102 (4) ◽  
pp. 3529-3543
Author(s):  
Xinhua Jiang ◽  
Heru Xue ◽  
Lina Zhang ◽  
Xiaojing Gao ◽  
Yanqing Zhou ◽  
...  
2021 ◽  
Vol 13 (8) ◽  
pp. 1602
Author(s):  
Qiaoqiao Sun ◽  
Xuefeng Liu ◽  
Salah Bourennane

Deep learning models have strong feature-learning abilities and have been successfully applied to hyperspectral images (HSIs). However, training most deep learning models requires labeled samples, and collecting labeled samples for HSIs is labor-intensive. In addition, single-level features from a single layer are usually considered, which may result in the loss of important information. Using multiple networks to obtain multi-level features is a solution, but at the cost of longer training time and greater computational complexity. To solve these problems, a novel unsupervised multi-level feature extraction framework based on a three-dimensional convolutional autoencoder (3D-CAE) is proposed in this paper. The designed 3D-CAE is stacked from fully 3D convolutional layers and 3D deconvolutional layers, which allows the spectral-spatial information of targets to be mined simultaneously. In addition, the 3D-CAE can be trained in an unsupervised way without labeled samples. Moreover, the multi-level features are obtained directly from the encoded layers at different scales and resolutions, which is more efficient than using multiple networks to obtain them. The effectiveness of the proposed multi-level features is verified on two hyperspectral data sets. The results demonstrate that the proposed method holds great promise for unsupervised feature learning and can further improve hyperspectral classification compared with single-level features.
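A minimal PyTorch sketch of the 3D-CAE idea described above: 3D convolutions encode spectral-spatial patches, 3D deconvolutions reconstruct them, and the intermediate encoder activations serve as multi-level features. Layer counts, channel sizes, and the patch shape are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CAE3D(nn.Module):
    """3D convolutional autoencoder that exposes multi-level encoded features."""
    def __init__(self):
        super().__init__()
        # Encoder: stacked 3D convolutions mine spectral and spatial structure jointly.
        self.enc1 = nn.Sequential(nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU())
        # Decoder: 3D deconvolutions reconstruct the input cube.
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(16, 8, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose3d(8, 1, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        f1 = self.enc1(x)                    # shallow, higher-resolution features
        f2 = self.enc2(f1)                   # deeper, lower-resolution features
        return self.dec(f2), (f1, f2)        # reconstruction + multi-level features

# Unsupervised training step: reconstruct unlabeled spectral-spatial patches.
model = CAE3D()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
patches = torch.randn(4, 1, 32, 8, 8)        # dummy (batch, 1, bands, height, width)
recon, features = model(patches)
loss = nn.functional.mse_loss(recon, patches)
opt.zero_grad(); loss.backward(); opt.step()
```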


2020 ◽  
Vol 12 (2) ◽  
pp. 280 ◽  
Author(s):  
Liqin Liu ◽  
Zhenwei Shi ◽  
Bin Pan ◽  
Ning Zhang ◽  
Huanlin Luo ◽  
...  

In recent years, deep learning technology has been widely used in hyperspectral image classification and has achieved good performance. However, deep learning networks need a large number of training samples, which conflicts with the limited labeled samples available for hyperspectral images. Traditional deep networks usually treat each pixel as an independent subject, ignoring the integrity of the hyperspectral data, and methods based on feature extraction are likely to lose the edge information that plays a crucial role in pixel-level classification. To overcome the limited annotated samples, we propose a new three-channel image construction method (virtual RGB image) by which networks trained on natural images are used to extract spatial features. Through the trained network, the hyperspectral data are processed as a whole. Meanwhile, we propose a multiscale feature fusion method that combines both detailed and semantic characteristics, thereby improving classification accuracy. Experiments show that the proposed method achieves better results than state-of-the-art methods. In addition, the virtual RGB image can be extended to other hyperspectral processing methods that require three-channel images.
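A hypothetical sketch of the virtual RGB idea: compress the hyperspectral bands into three channels so a network pretrained on natural images can be reused, then fuse features taken from several depths. The band grouping, the ResNet-18 backbone, and the fusion by bilinear upsampling plus concatenation are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np
import torch
import torch.nn.functional as F
import torchvision.models as models

def virtual_rgb(cube):
    """cube: (H, W, B) hyperspectral array -> (H, W, 3) pseudo-RGB scaled to [0, 1]."""
    groups = np.array_split(np.arange(cube.shape[-1]), 3)   # split the spectrum into 3 groups
    rgb = np.stack([cube[..., idx].mean(-1) for idx in groups], axis=-1)
    rgb -= rgb.min()
    return rgb / (rgb.max() + 1e-8)

cube = np.random.rand(64, 64, 103).astype(np.float32)       # dummy HSI cube
rgb = torch.from_numpy(virtual_rgb(cube)).permute(2, 0, 1)[None]   # (1, 3, H, W)

# Backbone meant to carry natural-image knowledge; weights=None keeps the sketch
# offline, in practice ImageNet-pretrained weights would be loaded.
backbone = models.resnet18(weights=None).eval()

# Multiscale fusion: collect shallow (detail) and deep (semantic) activations,
# upsample them to the image grid, and concatenate into a per-pixel descriptor.
feats = []
with torch.no_grad():
    x = backbone.maxpool(backbone.relu(backbone.bn1(backbone.conv1(rgb))))
    for stage in (backbone.layer1, backbone.layer2, backbone.layer3):
        x = stage(x)
        feats.append(F.interpolate(x, size=rgb.shape[-2:], mode="bilinear",
                                   align_corners=False))
fused = torch.cat(feats, dim=1)    # (1, C, 64, 64) multiscale features for classification
print(fused.shape)
```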


2019 ◽  
Vol 8 (3) ◽  
pp. 1163-1166

The user quest for information has led to the development of Question Answering (QA) systems that provide relevant answers to user questions. QA tasks differ from typical NLP tasks because they depend heavily on the semantics and context of the given data. Retrieving and predicting answers to a variety of questions requires understanding the question, its relevance to the context, and identifying and retrieving suitable answers. Deep learning delivers impressive performance because it employs deep neural networks with automatic feature extraction. This paper proposes a hybrid model to identify a suitable answer for a posed question. The proposed model exploits the power of a CNN for extracting features and the ability of an LSTM to capture long-term dependencies and the semantics of the context and question. The paper also provides a comparative analysis of deep learning methods useful for answer prediction against the proposed method. The model is implemented on the twenty tasks of Facebook's bAbI dataset.
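An illustrative PyTorch sketch of a CNN + LSTM hybrid of the kind described above: a 1D CNN extracts local n-gram features from embedded tokens, an LSTM captures long-range dependencies, and the fused story and question representations score candidate answer words. The vocabulary size, dimensions, and additive fusion are assumptions for demonstration only, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CNNLSTMQA(nn.Module):
    def __init__(self, vocab=5000, emb=64, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, hidden, kernel_size=3, padding=1)   # local n-gram features
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)             # long-range context
        self.out = nn.Linear(2 * hidden, vocab)                        # score answer words

    def encode(self, tokens):
        e = self.embed(tokens)                                          # (B, T, emb)
        c = torch.relu(self.conv(e.transpose(1, 2))).max(-1).values     # CNN + max-pool
        _, (h, _) = self.lstm(e)                                        # final LSTM state
        return torch.cat([c, h[-1]], dim=-1)                            # fused representation

    def forward(self, story, question):
        joint = self.encode(story) + self.encode(question)              # simple additive fusion
        return self.out(joint)

model = CNNLSTMQA()
story = torch.randint(0, 5000, (2, 50))       # dummy bAbI-style story tokens
question = torch.randint(0, 5000, (2, 10))
logits = model(story, question)               # (2, vocab) answer-word scores
```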


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Chen Xing ◽  
Li Ma ◽  
Xiaoquan Yang

Deep learning methods have been successfully applied to learn feature representations for high-dimensional data, where the learned features reveal the nonlinear properties exhibited in the data. In this paper, a deep learning method is exploited for feature extraction from hyperspectral data, and the extracted features provide good discriminability for the classification task. Training a deep network for feature extraction and classification includes unsupervised pretraining and supervised fine-tuning. We utilized the stacked denoising autoencoder (SDAE) method to pretrain the network, which is robust to noise. In the top layer of the network, a logistic regression (LR) approach is utilized to perform supervised fine-tuning and classification. Since sparsity of features may improve separation capability, we utilized the rectified linear unit (ReLU) as the activation function in the SDAE to extract high-level, sparse features. Experimental results using Hyperion, AVIRIS, and ROSIS hyperspectral data demonstrated that SDAE pretraining in conjunction with LR fine-tuning and classification (SDAE_LR) achieves higher accuracies than the popular support vector machine (SVM) classifier.
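A rough PyTorch sketch of an SDAE_LR-style pipeline: each layer is pretrained greedily as a denoising autoencoder with ReLU units, then the stacked encoders plus a logistic-regression (softmax) layer are fine-tuned with labels. Layer sizes, noise level, and epoch counts are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

def pretrain_dae(data, in_dim, out_dim, noise=0.1, epochs=20):
    """Greedy layer-wise pretraining of a single denoising autoencoder layer."""
    enc, dec = nn.Linear(in_dim, out_dim), nn.Linear(out_dim, in_dim)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(epochs):
        corrupted = data + noise * torch.randn_like(data)   # corrupt the input
        recon = dec(torch.relu(enc(corrupted)))             # reconstruct the clean input
        loss = nn.functional.mse_loss(recon, data)
        opt.zero_grad(); loss.backward(); opt.step()
    return enc

# Dummy spectra: 200 samples x 100 bands, 5 classes.
x = torch.randn(200, 100)
y = torch.randint(0, 5, (200,))

# Stack two pretrained encoder layers (100 -> 60 -> 30).
enc1 = pretrain_dae(x, 100, 60)
enc2 = pretrain_dae(torch.relu(enc1(x)).detach(), 60, 30)

# Supervised fine-tuning: stacked encoders + logistic regression on top.
model = nn.Sequential(enc1, nn.ReLU(), enc2, nn.ReLU(), nn.Linear(30, 5))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```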


2020 ◽  
Vol 39 (4) ◽  
pp. 5699-5711
Author(s):  
Shirong Long ◽  
Xuekong Zhao

The smart teaching mode overcomes the shortcomings of traditional online and offline teaching, but it has certain deficiencies in the real-time extraction of teacher and student features. In view of this, this study uses particle swarm image recognition and deep learning technology to process intelligent classroom video teaching images, extract classroom task features in real time, and send them to the teacher. To overcome the premature convergence of the standard particle swarm optimization (PSO) algorithm, an improved multi-swarm strategy is proposed. To alleviate the premature-convergence problem in the search performance of PSO, this paper combines the algorithm with useful attributes of other algorithms to improve particle diversity, enhance the global search ability of the particles, and achieve effective feature extraction. The research indicates that the proposed method has practical effects and can provide a theoretical reference for subsequent related research.
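A generic sketch of the particle swarm update at the core of the method above. The multi-swarm and diversity-enhancing modifications the paper describes are not reproduced here; this only shows the canonical velocity/position rule those strategies build on, with a placeholder objective function.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Canonical update: inertia + cognitive pull (pbest) + social pull (gbest).
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Placeholder objective (e.g. an error measure of a candidate feature set).
best, best_val = pso(lambda p: np.sum(p ** 2), dim=10)
print(best_val)
```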


2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data and its approaches are generally helpful for the healthcare and biomedical sectors in predicting disease. For trivial symptoms, it is difficult to meet doctors at any time in the hospital; thus, big data provides essential information about diseases on the basis of a patient's symptoms. For many medical organizations, disease prediction is important for making the best feasible health care decisions. Conversely, the conventional medical care model offers structured input that requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improved deep learning approach. Here, different datasets pertaining to "Diabetes, Hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease" are gathered from the benchmark UCI repository for conducting the experiments. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, the dataset is normalized so that each attribute lies within a fixed range. Further, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to produce a larger-scale deviation. Here, the weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization algorithm (JA-MVO). The optimally extracted features are fed to hybrid deep learning algorithms, namely the "Deep Belief Network (DBN) and Recurrent Neural Network (RNN)". As a modification to the hybrid deep learning architecture, the weights of both the DBN and RNN are optimized using the same hybrid optimization algorithm. Further, comparative evaluation of the proposed prediction against existing models certifies its effectiveness through various performance measures.
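A minimal sketch of the first two stages described above: min-max normalization followed by weighted feature extraction, where each attribute is scaled by a weight vector. The weights here are random placeholders; in the paper they are tuned by the hybrid Jaya/Multi-Verse optimizer (JA-MVO), which is not reproduced in this sketch.

```python
import numpy as np

def normalize(data):
    """Min-max normalization so every attribute falls in [0, 1]."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    return (data - lo) / (hi - lo + 1e-12)

def weighted_features(norm_data, weights):
    """Multiply each attribute by its weight to widen the deviation between samples."""
    return norm_data * weights

rng = np.random.default_rng(42)
raw = rng.normal(size=(150, 13))           # dummy patient records: 150 samples, 13 attributes
weights = rng.uniform(0.5, 2.0, size=13)   # placeholder for JA-MVO-optimized weights
features = weighted_features(normalize(raw), weights)
print(features.shape)                      # these features would then feed the DBN + RNN
```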


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1288
Author(s):  
Cinmayii A. Garillos-Manliguez ◽  
John Y. Chiang

Fruit maturity is a critical factor in the supply chain, consumer preference, and the agriculture industry. Most fruit maturity classification methods identify only two classes, ripe and unripe, but this paper estimates six maturity stages of papaya fruit. Deep learning architectures have gained recognition and brought breakthroughs in unimodal processing. This paper proposes a novel, non-destructive, multimodal classification approach using deep convolutional neural networks that estimates fruit maturity by concatenating features from data acquired through two imaging modes: visible-light and hyperspectral imaging systems. Morphological changes in the sample fruits can be easily measured from RGB images, while spectral signatures that provide high sensitivity and high correlation with the internal properties of fruits can be extracted from hyperspectral images in the 400 nm to 900 nm wavelength range; both are factors that must be considered when building a model. This study further modified the architectures AlexNet, VGG16, VGG19, ResNet50, ResNeXt50, MobileNet, and MobileNetV2 to utilize multimodal data cubes composed of RGB and hyperspectral data for sensitivity analyses. These multimodal variants achieve F1 scores of up to 0.90 and a top-2 error rate of 1.45% for the classification of six stages. Overall, multimodal input coupled with powerful deep convolutional neural network models can classify fruit maturity even at the refined level of six stages. This indicates that multimodal deep learning architectures and multimodal imaging have great potential for real-time, in-field fruit maturity estimation, helping to determine optimal harvest time and supporting other in-field industrial applications.
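An illustrative two-stream sketch of the multimodal idea: one branch processes the RGB image, the other the hyperspectral cube, and their features are concatenated before a classifier predicting the six maturity stages. The tiny branch architectures and band count are assumptions; the paper adapts AlexNet/VGG/ResNet-style backbones.

```python
import torch
import torch.nn as nn

class MultimodalMaturityNet(nn.Module):
    def __init__(self, hsi_bands=50, n_classes=6):
        super().__init__()
        self.rgb_branch = nn.Sequential(                       # visible-light stream
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.hsi_branch = nn.Sequential(                       # hyperspectral stream
            nn.Conv2d(hsi_bands, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(16 + 32, n_classes)        # fused by concatenation

    def forward(self, rgb, hsi):
        f_rgb = self.rgb_branch(rgb).flatten(1)
        f_hsi = self.hsi_branch(hsi).flatten(1)
        return self.classifier(torch.cat([f_rgb, f_hsi], dim=1))

model = MultimodalMaturityNet()
rgb = torch.randn(2, 3, 64, 64)       # dummy visible-light images
hsi = torch.randn(2, 50, 64, 64)      # dummy hyperspectral cubes (50 bands, 400-900 nm)
logits = model(rgb, hsi)              # (2, 6) scores over the six maturity stages
```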


2021 ◽  
pp. 1063293X2198894
Author(s):  
Prabira Kumar Sethy ◽  
Santi Kumari Behera ◽  
Nithiyakanthan Kannan ◽  
Sridevi Narayanan ◽  
Chanki Pandey

Paddy rice is an essential food worldwide. Rice provides 21% of global human per capita energy and 15% of per capita protein. Asia represents 60% of the world's population, about 92% of the world's rice production, and 90% of global rice consumption. With the increase in population, the demand for rice has increased, so farm productivity needs to be enhanced by introducing new technology. Deep learning and IoT are hot research topics in various fields. This paper suggests a setup combining deep learning and IoT for remote monitoring of paddy fields. The pre-trained VGG16 network is considered for identifying paddy leaf diseases and estimating nitrogen status. Here, two strategies are used to identify images: transfer learning and deep feature extraction. The deep feature extraction approach is combined with a support vector machine (SVM) to classify images. The transfer learning approach with VGG16 achieves 79.86% and 84.88% accuracy for identifying four types of leaf diseases and predicting nitrogen status, respectively. In comparison, the deep features of VGG16 with an SVM achieve accuracies of 97.31% and 99.02% for identifying the four leaf diseases and predicting nitrogen status, respectively. In addition, a framework is suggested for remote monitoring of paddy fields based on IoT and deep learning. The suggested prototype's advantage is that it controls temperature and humidity like the state of the art while also monitoring two additional aspects: nitrogen status and disease detection.
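A sketch of the deep-feature + SVM strategy described above: VGG16 (pretrained on ImageNet in practice; weights=None here to keep the sketch offline) acts as a fixed feature extractor and a linear SVM classifies the images. The data is random and the four labels stand in for the four leaf-disease classes; the chosen feature layer is an assumption.

```python
import torch
import torchvision.models as models
from sklearn.svm import SVC

vgg = models.vgg16(weights=None).eval()          # load ImageNet weights in practice

def deep_features(images):
    """Extract the 4096-d activations of VGG16's first fully connected layer."""
    with torch.no_grad():
        x = vgg.features(images)
        x = vgg.avgpool(x).flatten(1)
        return vgg.classifier[:2](x)             # Linear(25088, 4096) + ReLU

images = torch.randn(20, 3, 224, 224)            # dummy paddy leaf images
labels = torch.randint(0, 4, (20,)).numpy()      # four leaf-disease classes

svm = SVC(kernel="linear")                       # deep features + linear SVM
svm.fit(deep_features(images).numpy(), labels)
pred = svm.predict(deep_features(images).numpy())
```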


2021 ◽  
Vol 7 (5) ◽  
pp. 89
Author(s):  
George K. Sidiropoulos ◽  
Polixeni Kiratsa ◽  
Petros Chatzipetrou ◽  
George A. Papakostas

This paper provides a brief review of the feature extraction methods applied to finger vein recognition. The study is designed in a systematic way to shed light on the scientific interest in biometric systems based on finger vein features. The analysis spans a period of 13 years (from 2008 to 2020). The examined feature extraction algorithms are clustered into five categories and presented in a qualitative manner, focusing mainly on the techniques used to represent the finger vein features that uniquely establish a person's identity. In addition, the case of non-handcrafted features learned in a deep learning framework is also examined. The literature analysis revealed increased interest in finger vein biometric systems as well as a high diversity of feature extraction methods proposed over the past several years. In the last year, however, this interest has shifted to the application of Convolutional Neural Networks, following the general trend of applying deep learning models across disciplines. Finally, and importantly, this work highlights the limitations of existing feature extraction methods and describes the research actions needed to address the identified challenges.

