AN UNSUPERVISED LABELING APPROACH FOR HYPERSPECTRAL IMAGE CLASSIFICATION

Author(s): J. González Santiago, F. Schenkel, W. Gross, W. Middelmann

Abstract. The application of hyperspectral image analysis for land cover classification is mainly carried out in the presence of manually labeled data. The ground truth represents the distribution of the actual classes and is mostly derived from field-recorded information. Its manual generation is inefficient, tedious and very time-consuming. The continuously increasing amount of proprietary and publicly available datasets makes it imperative to reduce these related costs. In addition, adequately equipped computer systems are more capable of identifying patterns and neighbourhood relationships than a human operator. Based on these facts, an unsupervised labeling approach is presented to automatically generate labeled images used during the training of a convolutional neural network (CNN) classifier. The proposed method begins with a segmentation stage, where an adapted version of the simple linear iterative clustering (SLIC) algorithm for dealing with hyperspectral data is used. Subsequently, the Hierarchical Agglomerative Clustering (HAC) and Fuzzy C-Means (FCM) algorithms are employed to efficiently group similar superpixels according to their distances with respect to each other. The distinct utilization of these clustering techniques defines a complementary stage for overcoming class overlapping during image generation. Ultimately, a CNN classifier is trained on the computed image to predict classes pixel-wise on unseen datasets. The labeling results, obtained using two hyperspectral benchmark datasets, indicate that the current approach is able to detect object boundaries, automatically assign class labels to the entire dataset and classify new data with a prediction certainty of 90%. Additionally, the method achieves better classification accuracy and visual correspondence with reality than the ground truth images.
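The FCM step of the clustering stage described above can be sketched in a few lines of NumPy; the function below is a minimal, generic Fuzzy C-Means over superpixel mean spectra, with all names and parameter defaults being illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Fuzzy C-Means over superpixel mean spectra X of shape (n_samples, n_bands).
    Returns cluster centers and the soft membership matrix U (n_samples, n_clusters)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m                                # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                      # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```

Hard labels for image generation follow from `U.argmax(axis=1)`; the soft memberships are what makes FCM useful as the complementary stage against class overlap.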

Complexity, 2020, Vol 2020, pp. 1-17
Author(s): Cuijie Zhao, Hongdong Zhao, Guozhen Wang, Hong Chen

At present, classification of hyperspectral images (HSI) based on deep convolutional networks has made great progress. Due to the high dimensionality of spectral features, the limited samples of ground truth, and the high nonlinearity of hyperspectral data, effective HSI classification with deep convolutional neural networks is still difficult. This paper proposes a novel deep convolutional network structure for HSI classification, a hybrid depth-separable residual network called HDSRN. The HDSRN model organically combines a 3D CNN, a 2D CNN, the multiresidual network ROR, and depth-separable convolutions to extract deeper abstract features. On the one hand, due to the addition of multiresidual structures and skip connections, the model can alleviate overfitting, help the backpropagation of gradients, and extract features more fully. On the other hand, depth-separable convolutions are used to learn spatial features, which reduces the computational cost and alleviates the decline in accuracy. Extensive experiments on popular HSI benchmark datasets show that the proposed network outperforms existing prevalent methods.
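The computational saving from depth-separable convolutions can be made concrete with a parameter count. The comparison below is a generic illustration of the technique, not the exact HDSRN configuration:

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depth-separable variant: a depthwise k x k convolution (one k x k
    filter per input channel) followed by a 1 x 1 pointwise convolution."""
    return c_in * k * k + c_in * c_out
```

For a 3x3 layer mapping 32 to 64 channels, the standard convolution needs 18,432 parameters while the depth-separable one needs 2,336 — roughly an 8x reduction, which is why the HDSRN uses them to cut cost.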


2021, Vol 13 (20), pp. 4133
Author(s): Jakub Nalepa, Michal Myller, Lukasz Tulczyjew, Michal Kawulok

Hyperspectral images capture very detailed information about scanned objects and, hence, can be used to uncover various characteristics of the materials present in the analyzed scene. However, such image data are difficult to transfer due to their large volume, and generating new ground-truth datasets that could be utilized to train supervised learners is costly, time-consuming, very user-dependent, and often infeasible in practice. Research efforts have focused on developing algorithms for hyperspectral data classification and unmixing, the two main tasks in the analysis chain of such imagery. Although deep learning techniques have bloomed as an extremely effective tool in both of them, designing deep models that generalize well over unseen data is a serious practical challenge in emerging applications. In this paper, we introduce deep ensembles benefiting from different architectural advances of convolutional base models and suggest a new approach towards aggregating the outputs of base learners using a supervised fuser. Furthermore, we propose a model augmentation technique that allows us to synthesize new deep networks from the original one by injecting Gaussian noise into the model's weights. The experiments, performed for both hyperspectral data classification and unmixing, show that our deep ensembles outperform base spectral and spectral-spatial deep models, as well as classical ensembles employing voting and averaging as the fusing scheme, in both hyperspectral image analysis tasks.
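The model augmentation idea — synthesizing extra ensemble members by perturbing a trained model's weights with Gaussian noise — can be sketched generically. The toy linear-softmax predictor and all names below are assumptions standing in for the trained CNNs, and plain probability averaging stands in for the paper's supervised fuser:

```python
import numpy as np

def predict_softmax(model, X):
    """Toy linear-softmax base learner standing in for a trained CNN."""
    z = X @ model["W"] + model["b"]
    e = np.exp(z - z.max(axis=1, keepdims=True))   # numerically stable softmax
    return e / e.sum(axis=1, keepdims=True)

def augment_model(model, sigma=0.01, n_copies=4, seed=0):
    """Synthesize new models by injecting Gaussian noise into the weights."""
    rng = np.random.default_rng(seed)
    return [{k: w + rng.normal(0.0, sigma, w.shape) for k, w in model.items()}
            for _ in range(n_copies)]

def ensemble_average(models, X):
    """Classical fusing scheme: average class probabilities over all members."""
    return np.mean([predict_softmax(m, X) for m in models], axis=0)
```

The supervised fuser proposed in the paper would replace `ensemble_average` with a learner trained on the stacked member outputs.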


NIR news, 2014, Vol 25 (7), pp. 15-17
Author(s): Y. Dixit, R. Cama, C. Sullivan, L. Alvarez Jubete, A. Ktenioudaki

2019, Vol 11 (5), pp. 476
Author(s): Muhammad Aufaristama, Armann Hoskuldsson, Magnus Ulfarsson, Ingibjorg Jonsdottir, Thorvaldur Thordarson

The Holuhraun lava flow was the largest effusive eruption in Iceland in 230 years, with an estimated lava bulk volume of ~1.44 km3 covering an area of ~84 km2. The six-month-long eruption at Holuhraun in 2014–2015 generated a diverse surface environment; the abundance of airborne hyperspectral imagery over the lava field therefore calls for time-efficient and accurate methods to unravel it. The hyperspectral data were acquired five months after the eruption finished, using an airborne FENIX hyperspectral sensor operated by the Natural Environment Research Council Airborne Research Facility (NERC-ARF). The data were atmospherically corrected using the Quick Atmospheric Correction (QUAC) algorithm. Here we used the Sequential Maximum Angle Convex Cone (SMACC) method to find spectral endmembers and their abundances throughout the airborne hyperspectral image. In total we estimated 15 endmembers, which we grouped into six classes: (1) basalt; (2) hot material; (3) oxidized surface; (4) sulfate mineral; (5) water; and (6) noise. The grouping was based on the similar shape of the endmembers; the amplitude, however, varies due to illumination conditions, spectral variability, and topography. We then obtained the abundances of each endmember group using fully constrained linear spectral mixture analysis (LSMA). The methods offer an optimal and fast means of segregating volcanic products; however, ground truth spectra are needed for further analysis.
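Fully constrained LSMA requires nonnegative abundances that sum to one. A compact way to approximate both constraints is the common trick of appending a heavily weighted sum-to-one row before solving with nonnegative least squares — a sketch under that assumption, not the authors' exact implementation:

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(E, y, delta=1e3):
    """Fully constrained unmixing of pixel spectrum y (n_bands,) against
    endmember matrix E (n_bands, n_endmembers). Nonnegativity comes from
    NNLS; sum-to-one is enforced softly by a heavily weighted row of ones."""
    E_aug = np.vstack([E, delta * np.ones(E.shape[1])])
    y_aug = np.append(y, delta)
    abundances, _ = nnls(E_aug, y_aug)
    return abundances
```

Applied per pixel, this yields the abundance map of each endmember group; larger `delta` tightens the sum-to-one constraint.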


2018, Vol 58 (8), pp. 1488
Author(s): S. Rahman, P. Quin, T. Walsh, T. Vidal-Calleja, M. J. McPhee, ...

The objectives of the present study were to describe and validate an approach for classifying surface tissue and estimating fat depth in lamb short loins. Fat versus non-fat pixels were classified and then used to estimate the fat depth for each pixel in the hyperspectral image. Estimated reflectance, instead of image intensity or radiance, was used as the input feature for classification. The relationship between reflectance and the fat/non-fat classification label was learnt using support vector machines. Gaussian processes were used to learn a regression for fat depth as a function of reflectance. Data to train and test the machine learning algorithms were collected by scanning 16 short loins. The near-infrared hyperspectral camera captured lines of data from the side of the short loin (i.e. with the subcutaneous fat facing the camera). A single-lens reflex camera took photos of the same cuts from above, such that a ground truth of fat depth could be semi-automatically extracted and associated with the hyperspectral data. One subset of the data was used to train the machine learning models and another to test them. Classifying pixels as either fat or non-fat achieved 96% accuracy. Fat depths of up to 12 mm were estimated, with an R2 of 0.59, a mean absolute bias of 1.72 mm and a root mean square error of 2.34 mm. The techniques developed and validated in the present study will be used to estimate fat coverage to predict total fat and, subsequently, lean meat yield in the carcass.
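For a fixed kernel, the Gaussian-process regression step — fat depth as a function of reflectance — reduces to a closed-form posterior mean. The sketch below is generic; the RBF kernel choice and hyperparameters are assumptions, not those fitted in the study:

```python
import numpy as np

def rbf_kernel(A, B, length=1.0):
    """Squared-exponential kernel between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / length ** 2)

def gp_fit_predict(X_train, y_train, X_test, length=1.0, noise=1e-4):
    """GP regression posterior mean at X_test: depth as a function of reflectance."""
    K = rbf_kernel(X_train, X_train, length) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)            # K^{-1} y
    return rbf_kernel(X_test, X_train, length) @ alpha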


2016, Vol 2016, pp. 1-9
Author(s): Zebin Wu, Jinping Gu, Yonglong Li, Fu Xiao, Jin Sun, ...

Due to the increasing dimensionality and volume of remotely sensed hyperspectral data, developing acceleration techniques for massive hyperspectral image analysis is a very important challenge. Cloud computing offers many possibilities for distributed processing of hyperspectral datasets. This paper proposes a novel distributed parallel endmember extraction method based on iterative error analysis that utilizes cloud computing principles to efficiently process massive hyperspectral data. The proposed method takes advantage of technologies including the MapReduce programming model, the Hadoop Distributed File System (HDFS), and Apache Spark to realize a distributed parallel implementation of hyperspectral endmember extraction, which significantly accelerates hyperspectral processing and provides high-throughput access to large hyperspectral data. The experimental results, obtained by extracting endmembers of hyperspectral datasets on a cluster-based cloud computing platform, demonstrate the effectiveness and computational efficiency of the proposed method.
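The iterative error analysis (IEA) at the core of the method can be sketched serially in NumPy; the cloud version distributes the per-pixel least-squares fits and error scans via Spark. The initialization rule and details below are simplified assumptions:

```python
import numpy as np

def iea_endmembers(X, n_endmembers):
    """Iterative error analysis on pixel matrix X (n_pixels, n_bands):
    repeatedly pick the pixel worst explained by an (unconstrained)
    least-squares mixture of the endmembers found so far."""
    # Simplified start: the pixel farthest from the mean spectrum.
    idx = [int(np.argmax(np.linalg.norm(X - X.mean(axis=0), axis=1)))]
    for _ in range(n_endmembers - 1):
        E = X[idx].T                                  # (n_bands, k) endmembers
        A, *_ = np.linalg.lstsq(E, X.T, rcond=None)   # abundances per pixel
        err = np.linalg.norm(X.T - E @ A, axis=0)     # per-pixel reconstruction error
        idx.append(int(np.argmax(err)))               # worst-fit pixel is next endmember
    return idx
```

In the distributed setting, each Spark partition computes local errors over its pixel block and only the per-partition argmax candidates are shuffled, which is what makes the method scale.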


2020, Vol 12 (5), pp. 843
Author(s): Wenzhi Zhao, Xi Chen, Jiage Chen, Yang Qu

Hyperspectral image analysis plays an important role in agriculture, the mineral industry, and military applications. However, classifying high-dimensional hyperspectral data with few labeled samples is quite challenging. Generative adversarial networks (GANs) are widely used for sample generation, but the generated samples often suffer from unwanted noise and uncontrolled divergence, making high-quality samples difficult to acquire. To generate high-quality hyperspectral samples, a self-attention generative adversarial adaptation network (SaGAAN) is proposed in this work. It aims to increase the number and quality of training samples and so avoid overfitting. Compared to traditional GANs, the proposed method makes two contributions: (1) it includes a domain adaptation term that constrains generated samples to be more similar to the original ones; and (2) it uses the self-attention mechanism to capture long-range dependencies across the spectral bands, further improving the quality of the generated samples. To demonstrate the effectiveness of the proposed SaGAAN, we tested it on two well-known hyperspectral datasets: Pavia University and Indian Pines. The experimental results illustrate that the proposed method can greatly improve classification accuracy, even with a small number of initial labeled samples.
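The self-attention mechanism over spectral bands, in its generic single-head form, is shown below. Shapes and names are assumptions; in SaGAAN this layer sits inside the generator and discriminator networks:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention. X is a (n_bands, d) sequence of band
    features; every band attends to every other band, which is how
    long-range spectral dependencies are captured."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = (Q @ K.T) / np.sqrt(K.shape[1])       # scaled dot-product scores
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)              # each row: distribution over bands
    return A @ V, A
```

The attention matrix `A` is dense across all band pairs, unlike a convolution whose receptive field covers only neighbouring bands.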


2019, Vol 5 (5), pp. 52
Author(s): Alberto Signoroni, Mattia Savardi, Annalisa Baronio, Sergio Benini

Modern hyperspectral imaging systems produce huge datasets potentially conveying a great abundance of information; such a resource, however, poses many challenges in the analysis and interpretation of these data. Deep learning approaches certainly offer a great variety of opportunities for solving classical imaging tasks and also for approaching new stimulating problems in the spatial–spectral domain. This is fundamental in the driving sector of Remote Sensing where hyperspectral technology was born and has mostly developed, but it is perhaps even more true in the multitude of current and evolving application sectors that involve these imaging technologies. The present review develops on two fronts: on the one hand, it is aimed at domain professionals who want to have an updated overview on how hyperspectral acquisition techniques can combine with deep learning architectures to solve specific tasks in different application fields. On the other hand, we want to target the machine learning and computer vision experts by giving them a picture of how deep learning technologies are applied to hyperspectral data from a multidisciplinary perspective. The presence of these two viewpoints and the inclusion of application fields other than Remote Sensing are the original contributions of this review, which also highlights some potentialities and critical issues related to the observed development trends.


2021, pp. 1-21
Author(s): Margarita Georgievna Kuzmina

A five-layered autoencoder model (stacked autoencoder, SAE) is suggested for extracting deep image features and deriving a compressed hyperspectral dataset specifying the image. A spectral cost function, dependent on the spectral curve forms of the hyperspectral image, has been used for autoencoder tuning. As a first step, the autoencoder's capabilities will be tested using the pure spectral information contained in the image data. Images from well-known and widely used hyperspectral databases (Indian Pines, Pavia University and KSC) are planned to be used for model testing.
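A cost sensitive to spectral curve shape rather than magnitude could be the spectral angle between a spectrum and its reconstruction — one plausible reading of the proposed spectral cost, offered here as an assumption since the abstract does not give the exact functional form:

```python
import numpy as np

def spectral_angle(a, b, eps=1e-12):
    """Spectral angle (radians) between two spectra: zero for spectra of
    identical shape regardless of scale, pi/2 for orthogonal spectra."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return np.arccos(np.clip(cos, -1.0, 1.0))
```

Averaged over all pixels, such an angle term would penalize reconstructions that distort the spectral curve even when their overall intensity is correct.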


2019, Vol 2019 (1), pp. 295-299
Author(s): Oleksandr Boiko, Joni Hyttinen, Pauli Fält, Heli Jäsberg, Arash Mirhashemi, ...

The aim of this work is the automatic and efficient detection of medically relevant features in oral and dental hyperspectral images by applying up-to-date deep learning convolutional neural network techniques. This will help dentists to identify and classify unhealthy areas automatically and to prevent the progression of diseases. The hyperspectral imaging approach allows one to do so without exposing the patient to ionizing X-ray radiation. Spectral imaging provides information in the visible and near-infrared wavelength ranges. The dataset used in this paper contains 116 hyperspectral images from 18 patients taken from different viewing angles. The image annotation (ground truth) includes 38 classes in six sub-groups assessed by dental experts. A mask region-based convolutional neural network (Mask R-CNN) is used as the deep learning model for instance segmentation of the areas. Preliminary results show high potential and accuracy for classification and segmentation of the different classes.

