Deep Ensembles for Hyperspectral Image Data Classification and Unmixing

2021 ◽  
Vol 13 (20) ◽  
pp. 4133
Author(s):  
Jakub Nalepa ◽  
Michal Myller ◽  
Lukasz Tulczyjew ◽  
Michal Kawulok

Hyperspectral images capture very detailed information about scanned objects and can therefore be used to uncover various characteristics of the materials present in the analyzed scene. However, such image data are difficult to transfer due to their large volume, and generating new ground-truth datasets that could be used to train supervised learners is costly, time-consuming, strongly user-dependent, and often infeasible in practice. Research efforts have therefore focused on algorithms for hyperspectral data classification and unmixing, the two main tasks in the analysis chain for such imagery. Although deep learning has proven an extremely effective tool for both tasks, designing deep models that generalize well to unseen data remains a serious practical challenge in emerging applications. In this paper, we introduce deep ensembles that benefit from different architectural advances of convolutional base models, and we suggest a new approach to aggregating the outputs of base learners using a supervised fuser. Furthermore, we propose a model-augmentation technique that synthesizes new deep networks from the original one by injecting Gaussian noise into the model's weights. Experiments on both hyperspectral data classification and unmixing show that our deep ensembles outperform base spectral and spectral-spatial deep models, as well as classical ensembles that fuse by voting or averaging, in both hyperspectral image analysis tasks.
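The model-augmentation idea described in this abstract can be sketched in a few lines: starting from one trained model's weight arrays, new ensemble members are synthesized by adding zero-mean Gaussian noise to every array. This is a minimal NumPy illustration, not the authors' implementation; the noise scale `sigma` and the list-of-arrays weight representation are assumptions.

```python
import numpy as np

def perturb_weights(weights, sigma=0.01, rng=None):
    """Synthesize one new weight set by injecting zero-mean Gaussian
    noise into every weight array of a trained base model."""
    rng = np.random.default_rng(rng)
    return [w + rng.normal(0.0, sigma, size=w.shape) for w in weights]

def make_ensemble(base_weights, n_models=5, sigma=0.01, seed=0):
    """Derive n_models perturbed copies of one trained model."""
    rng = np.random.default_rng(seed)
    return [perturb_weights(base_weights, sigma, rng) for _ in range(n_models)]

# toy stand-in for a trained network: two weight arrays
base = [np.ones((3, 3)), np.zeros(3)]
ensemble = make_ensemble(base, n_models=4, sigma=0.05)
```

Each member stays close to the trained base model (small `sigma`) while differing from its siblings, which is what gives the ensemble its diversity at essentially no extra training cost.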

Author(s):  
J. González Santiago ◽  
F. Schenkel ◽  
W. Gross ◽  
W. Middelmann

Abstract. The application of hyperspectral image analysis to land cover classification mostly relies on manually labeled data. The ground truth represents the distribution of the actual classes and is mostly derived from field-recorded information. Its manual generation is inefficient, tedious and very time-consuming. The continuously increasing amount of proprietary and publicly available datasets makes it imperative to reduce these related costs. In addition, adequately equipped computer systems are often better at identifying patterns and neighbourhood relationships than a human operator. Based on these facts, an unsupervised labeling approach is presented that automatically generates labeled images used to train a convolutional neural network (CNN) classifier. The proposed method begins with a segmentation stage in which an adapted version of the simple linear iterative clustering (SLIC) algorithm for hyperspectral data is used. Subsequently, the hierarchical agglomerative clustering (HAC) and fuzzy c-means (FCM) algorithms are employed to efficiently group similar superpixels based on their mutual distances. The distinct utilization of these two clustering techniques defines a complementary stage for overcoming class overlap during image generation. Finally, a CNN classifier is trained on the computed image to predict classes pixel-wise on unseen datasets. The labeling results, obtained on two hyperspectral benchmark datasets, indicate that the approach is able to detect object boundaries, automatically assign class labels to the entire dataset, and classify new data with a prediction certainty of 90%. Additionally, the method achieves better classification accuracy and visual correspondence with reality than the ground-truth images.
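The superpixel-grouping step can be illustrated with a minimal fuzzy c-means pass over superpixel mean spectra. This is a generic FCM sketch under assumed inputs (a matrix `X` of per-superpixel mean spectra), not the paper's exact pipeline, which also involves HAC and an adapted SLIC segmentation.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=50):
    """Minimal fuzzy c-means over superpixel mean spectra X (n x bands).
    Returns the membership matrix U (n x c) and cluster centers (c x bands)."""
    # simple spread-out init for the demo: evenly spaced rows of X
    centers = X[np.linspace(0, len(X) - 1, c).astype(int)].copy()
    for _ in range(iters):
        # distances of every spectrum to every center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # standard FCM membership update, then fuzzified center update
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return U, centers

# toy "superpixel mean spectra": two clearly separated spectral groups
rng = np.random.default_rng(1)
X = np.vstack([np.full((5, 4), 0.1), np.full((5, 4), 0.9)])
X += rng.normal(0.0, 0.01, X.shape)
U, centers = fuzzy_cmeans(X, c=2)
labels = U.argmax(axis=1)
```

The soft memberships in `U` are what make FCM useful for the overlap-handling stage: a superpixel with ambiguous membership is exactly the kind of class-overlap case the paper's complementary HAC/FCM design is meant to resolve.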


2021 ◽  
pp. 1-21
Author(s):  
Margarita Georgievna Kuzmina

A five-layer autoencoder model (stacked autoencoder, SAE) is proposed for extracting deep image features and deriving a compressed hyperspectral dataset that specifies the image. A spectral cost function, dependent on the spectral curve shapes of the hyperspectral image, is used to tune the autoencoder. As a first step, the autoencoder's capabilities will be tested using purely spectral information contained in the image data. Images from well-known and widely used hyperspectral databases (Indian Pines, Pavia University and KSC) are planned to be used for testing the model.
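One common choice for a reconstruction cost that depends on spectral curve shape is the spectral angle, which compares a reconstructed spectrum with its input up to an overall scale factor. The abstract does not specify the cost function, so the spectral-angle loss below is an illustrative assumption, not the author's definition.

```python
import numpy as np

def spectral_angle(a, b, eps=1e-12):
    """Angle (radians) between two spectra: sensitive to curve shape,
    invariant to overall brightness/scaling."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def spectral_cost(X, X_rec):
    """Mean spectral angle between input pixels X and reconstructions X_rec,
    a candidate 'spectral' loss for tuning an autoencoder."""
    return float(np.mean([spectral_angle(x, r) for x, r in zip(X, X_rec)]))

# two toy pixel spectra with three bands each
X = np.array([[1.0, 2.0, 3.0], [0.5, 0.4, 0.6]])
```

A scaled copy of a spectrum has the same curve shape and thus near-zero cost, whereas swapping two differently shaped spectra incurs a clear penalty; this shape sensitivity is what distinguishes such a cost from plain mean squared error.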


2018 ◽  
Vol 4 (12) ◽  
pp. 142 ◽  
Author(s):  
Hongda Shen ◽  
Zhuocheng Jiang ◽  
W. Pan

Hyperspectral imaging (HSI) technology has been used for various remote sensing applications due to its excellent capability of monitoring regions-of-interest over a period of time. However, the large data volume of four-dimensional multitemporal hyperspectral imagery demands massive data compression techniques. While conventional 3D hyperspectral data compression methods exploit only spatial and spectral correlations, we propose a simple yet effective predictive lossless compression algorithm that can achieve significant gains on compression efficiency, by also taking into account temporal correlations inherent in the multitemporal data. We present an information theoretic analysis to estimate potential compression performance gain with varying configurations of context vectors. Extensive simulation results demonstrate the effectiveness of the proposed algorithm. We also provide in-depth discussions on how to construct the context vectors in the prediction model for both multitemporal HSI and conventional 3D HSI data.
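The benefit of temporal context can be illustrated with a toy linear predictor: adding the same band from the previous acquisition to the context vector shrinks the residual that a lossless coder would have to encode. The least-squares predictor and the synthetic data below are illustrative assumptions, not the paper's prediction model or context-vector construction.

```python
import numpy as np

def lp_residual(target, context):
    """Least-squares affine predictor of `target` from the `context`
    columns; returns the prediction residual a lossless coder would store."""
    A = np.column_stack([context, np.ones(len(target))])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return target - A @ coef

rng = np.random.default_rng(0)
prev_time = rng.normal(size=1000)                       # band b at time t-1
prev_band = rng.normal(size=1000)                       # band b-1 at time t
target = prev_time + rng.normal(scale=0.05, size=1000)  # band b at time t

# context without vs. with the temporal sample
res_spatial = lp_residual(target, np.column_stack([prev_band]))
res_temporal = lp_residual(target, np.column_stack([prev_band, prev_time]))
```

Because the scene changes little between acquisitions, the temporally extended context leaves a far smaller residual, and smaller residuals have lower entropy, which is the source of the compression gain the abstract reports.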


2016 ◽  
Vol 2016 ◽  
pp. 1-17 ◽  
Author(s):  
Johannes Jordan ◽  
Elli Angelopoulou ◽  
Andreas Maier

Multispectral and hyperspectral images are well established in various fields of application like remote sensing, astronomy, and microscopic spectroscopy. In recent years, the availability of new sensor designs, more powerful processors, and high-capacity storage has further opened this imaging modality to a wider array of applications like medical diagnosis, agriculture, and cultural heritage. This necessitates new tools that allow general analysis of the image data and are intuitive to users who are new to hyperspectral imaging. We introduce a novel framework that bundles new interactive visualization techniques with powerful algorithms and is accessible through an efficient and intuitive graphical user interface. We visualize the spectral distribution of an image via parallel coordinates with a strong link to traditional visualization techniques, enabling new paradigms in hyperspectral image analysis that focus on interactive raw-data exploration. We combine novel methods for supervised segmentation, global clustering, and nonlinear false-color coding to assist in the visual inspection. Our framework, coined Gerbil, is open source and highly modular, building on established methods and being easily extensible for application-specific needs. It satisfies the need for a general, consistent software framework that tightly integrates analysis algorithms with an intuitive, modern interface to the raw image data and algorithmic results. Gerbil is used worldwide in academia and industry alike, with several thousand downloads originating from 45 countries.
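At its core, a parallel-coordinates view of spectra draws each pixel as a polyline over one vertical axis per band. A minimal sketch of that mapping follows; the per-band min-max normalization is an assumption for illustration, and Gerbil's actual rendering pipeline is considerably more involved.

```python
import numpy as np

def parallel_coords(spectra):
    """Map spectra (n_pixels x n_bands) to parallel-coordinate polylines:
    each band becomes a vertical axis, min-max normalized to [0, 1]."""
    lo = spectra.min(axis=0)
    span = np.ptp(spectra, axis=0)      # per-band value range
    span[span == 0] = 1.0               # guard against flat bands
    y = (spectra - lo) / span           # normalized height on each axis
    x = np.arange(spectra.shape[1], dtype=float)
    return [np.column_stack([x, row]) for row in y]

# two toy pixel spectra over three bands
spectra = np.array([[0.2, 0.8, 0.4], [0.6, 0.2, 0.9]])
lines = parallel_coords(spectra)
```

Each returned array is a ready-to-plot polyline; overplotting thousands of them is what produces the characteristic spectral-distribution view that the framework makes interactive.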


Author(s):  
Haoyi Zhou ◽  
Jun Zhou ◽  
Haichuan Yang ◽  
Cheng Yan ◽  
Xiao Bai ◽  
...  

Imaging devices are increasingly used in environmental research, creating an urgent need to deal with issues such as matching image features across different dimensions. Among these tasks, matching a hyperspectral image with other types of images is challenging due to the high-dimensional nature of hyperspectral data. This chapter addresses the problem by investigating structured support vector machines to construct and learn a graph-based model for each type of image. The graph model incorporates both low-level features and stable correspondences within images. The inherent characteristics are captured by running a graph matching algorithm on the extracted weighted graph models. The effectiveness of this method is demonstrated through experiments on matching hyperspectral images to RGB images, and on matching hyperspectral images of different dimensions depicting natural objects.
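The graph-based modeling step can be sketched as follows: image features become nodes, edge weights encode pairwise feature distances, and two images are then compared through the structure of their graphs rather than their raw (and dimensionally incompatible) pixel values. The Gaussian-kernel weights and the correlation-based score below are simplifications standing in for the chapter's learned structured-SVM matching objective.

```python
import numpy as np

def build_graph(features, sigma=1.0):
    """Weighted graph over image features: nodes are feature vectors,
    edge weights decay with pairwise distance (Gaussian kernel)."""
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    W = np.exp(-d**2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    return W

def match_score(Wa, Wb):
    """Naive structural similarity between two equal-size graphs:
    correlation of their edge-weight matrices."""
    return float(np.corrcoef(Wa.ravel(), Wb.ravel())[0, 1])

f_hsi = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # toy features, image A
f_rgb = f_hsi + 0.01                                    # near-identical layout
f_other = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 1.0]])

score_same = match_score(build_graph(f_hsi), build_graph(f_rgb))
score_diff = match_score(build_graph(f_hsi), build_graph(f_other))
```

Because only the relative geometry of the features enters the graph, images of different modalities and dimensionalities become comparable, which is the key idea the chapter builds on.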

