Deep Learning for Land Cover Classification Using Only a Few Bands

2020 ◽  
Vol 12 (12) ◽  
pp. 2000 ◽  
Author(s):  
Chiman Kwan ◽  
Bulent Ayhan ◽  
Bence Budavari ◽  
Yan Lu ◽  
Daniel Perez ◽  
...  

There is an emerging interest in using hyperspectral data for land cover classification. The motivation behind using hyperspectral data is the notion that increasing the number of narrowband spectral channels would provide richer spectral information and thus help improve the land cover classification performance. Although hyperspectral data with hundreds of channels provide detailed spectral signatures, the curse of dimensionality might lead to degradation in the land cover classification performance. Moreover, in some practical applications, hyperspectral data may not be available due to cost, data storage, or bandwidth issues, and RGB and near infrared (NIR) could be the only image bands available for land cover classification. Light detection and ranging (LiDAR) data can also assist land cover classification, especially when the land covers of interest have different heights. In this paper, we examined the performance of two Convolutional Neural Network (CNN)-based deep learning algorithms for land cover classification using only four bands (RGB+NIR) and five bands (RGB+NIR+LiDAR), where this limited set of image bands was augmented using Extended Multi-attribute Profiles (EMAP). The deep learning algorithms were applied to a well-known dataset used in the 2013 IEEE Geoscience and Remote Sensing Society (GRSS) Data Fusion Contest. With EMAP augmentation, the two deep learning algorithms were observed to achieve better land cover classification performance using only four bands than using all 144 hyperspectral bands.
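EMAP augmentation derives many synthetic bands from a few input bands via morphological filtering at multiple scales. The sketch below (NumPy/SciPy, illustrative only: real EMAP uses attribute filters such as area and standard deviation on tree representations, not plain openings and closings) shows how four bands can be expanded into a much richer feature stack:

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def morphological_profile(band, sizes=(3, 5, 7)):
    """Stack a band with its openings/closings at several scales.
    A simplified stand-in for EMAP attribute filtering (illustration only)."""
    feats = [band]
    for s in sizes:
        feats.append(grey_opening(band, size=s))
        feats.append(grey_closing(band, size=s))
    return np.stack(feats, axis=-1)

# hypothetical 4-band (RGB+NIR) tile
rng = np.random.default_rng(0)
tile = rng.random((64, 64, 4))
augmented = np.concatenate(
    [morphological_profile(tile[..., b]) for b in range(4)], axis=-1)
print(augmented.shape)  # 4 bands expand to 4 * (1 + 2*3) = 28 features
```

The stacked features would then be fed to the CNN classifier in place of the raw bands.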

2020 ◽  
Vol 12 (9) ◽  
pp. 1392 ◽  
Author(s):  
Chiman Kwan ◽  
David Gribben ◽  
Bulent Ayhan ◽  
Sergio Bernabe ◽  
Antonio Plaza ◽  
...  

Hyperspectral (HS) data have found a wide range of applications in recent years. Researchers have observed that more spectral information helps land cover classification performance in many cases. However, in some practical applications, HS data may not be available due to cost, data storage, or bandwidth issues. Instead, users may only have RGB and near infrared (NIR) bands available for land cover classification. Sometimes, light detection and ranging (LiDAR) data may also be available to assist land cover classification. A natural research problem is to investigate how well land cover classification can be achieved under the aforementioned data constraints. In this paper, we investigate the performance of land cover classification while using only four bands (RGB+NIR) or five bands (RGB+NIR+LiDAR). A number of algorithms were applied to a well-known dataset (2013 IEEE Geoscience and Remote Sensing Society Data Fusion Contest). One key observation is that, with the help of synthetic bands generated by Extended Multi-attribute Profiles (EMAP), some algorithms can achieve better land cover classification performance using only four bands than using all 144 bands in the original hyperspectral data. Moreover, LiDAR data improve the land cover classification performance even further.
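The observation that a few informative bands can match or beat all 144 reflects the curse of dimensionality. A toy experiment on synthetic data (all class structure and band counts below are fabricated for illustration, not taken from the paper) reproduces the flavor of the comparison:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
y = rng.integers(0, 5, n)                                      # 5 toy land cover classes
informative = y[:, None] + 0.3 * rng.standard_normal((n, 4))   # 4 useful "bands"
noise = rng.standard_normal((n, 140))                          # 140 uninformative "bands"

accs = {}
for name, X in [("4 bands", informative),
                ("144 bands", np.hstack([informative, noise]))]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
    accs[name] = accuracy_score(yte, clf.predict(Xte))
    print(name, round(accs[name], 3))
```

With the extra noise bands the classifier has to discover the same signal in a much larger feature space, which is the essence of the trade-off the abstract describes.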


2018 ◽  
Vol 10 (11) ◽  
pp. 1713 ◽  
Author(s):  
Wenzhi Zhao ◽  
William Emery ◽  
Yanchen Bo ◽  
Jiage Chen

Deep learning has become a standard processing procedure in land cover mapping for remote sensing images. Instead of relying on hand-crafted features, deep learning algorithms such as Convolutional Neural Networks (CNN) can automatically generate effective feature representations in order to recognize objects with complex image patterns. However, rich spatial information remains unexploited, since most deep learning algorithms focus only on small image patches and overlook contextual information at larger scales. To utilize this contextual information and improve classification performance for high-resolution imagery, we propose a graph-based model that captures contextual information over semantic segments of the image. First, we extract semantic segments built on top of deep features and obtain an initial classification result. Then, we further improve the initial classification result with a higher-order co-occurrence model that extends the existing conditional random field algorithm (HCO-CRF). Compared to pixel- and object-based CNN methods, the proposed model achieved better performance in terms of classification accuracy.
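The core idea of refining pixel-wise CNN predictions with segment-level context can be shown in its simplest form, majority voting within each segment (the paper's HCO-CRF is a much richer higher-order model; this sketch conveys only the intuition):

```python
import numpy as np

def segment_majority_vote(pixel_pred, segments):
    """Replace each pixel's class by the majority class of its segment.
    A minimal stand-in for segment-level contextual smoothing."""
    out = np.empty_like(pixel_pred)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        vals, counts = np.unique(pixel_pred[mask], return_counts=True)
        out[mask] = vals[np.argmax(counts)]
    return out

# toy 4x4 map: two segments, with a few noisy pixel predictions
segments = np.array([[0, 0, 1, 1]] * 4)
pred = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1],
                 [0, 0, 0, 1],
                 [0, 0, 1, 1]])
smoothed = segment_majority_vote(pred, segments)
print(smoothed)  # left half snaps to class 0, right half to class 1
```

In practice the segments would come from a segmentation of the deep feature map rather than being given, and the voting would be replaced by the higher-order CRF inference.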


Author(s):  
Yuejun Liu ◽  
Yifei Xu ◽  
Xiangzheng Meng ◽  
Xuguang Wang ◽  
Tianxu Bai

Background: Medical imaging plays an important role in the diagnosis of thyroid diseases. In the field of machine learning, multi-dimensional deep learning algorithms are widely used in image classification and recognition and have achieved great success. Objective: A method based on multi-dimensional deep learning is employed for the auxiliary diagnosis of thyroid diseases from SPECT images, and the performances of different deep learning models are evaluated and compared. Methods: Thyroid SPECT images of three types are collected: hyperthyroidism, normal, and hypothyroidism. In pre-processing, the thyroid region of interest is segmented and the number of data samples is expanded. Four deep learning models, a standard CNN, Inception, VGG16, and an RNN, are used to evaluate the deep learning methods. Results: The deep learning based methods achieve good classification performance, with accuracy of 92.9%-96.2% and AUC of 97.8%-99.6%. The VGG16 model performs best, with an accuracy of 96.2% and an AUC of 99.6%; in particular, the VGG16 model with a changing learning rate works best. Conclusion: The four deep learning models, standard CNN, Inception, VGG16, and RNN, are efficient for the classification of thyroid diseases from SPECT images. The accuracy of the deep learning based assisted diagnostic method is higher than that of other methods reported in the literature.
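The "changing learning rate" that helped VGG16 is commonly realized as a step-decay schedule; the exact rule below (halve the rate every 10 epochs, starting from 1e-3) is an assumption for illustration and is not stated in the abstract:

```python
def step_decay(epoch, lr0=1e-3, drop=0.5, every=10):
    """Step-decay schedule: multiply the rate by `drop` every `every` epochs.
    (Illustrative rule only; the paper does not specify its schedule.)"""
    return lr0 * drop ** (epoch // every)

for epoch in (0, 10, 25):
    print(epoch, step_decay(epoch))  # 0.001, then 0.0005, then 0.00025
```

Such a function would typically be plugged into the training loop via a scheduler callback so that later epochs fine-tune with smaller steps.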


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 742
Author(s):  
Canh Nguyen ◽  
Vasit Sagan ◽  
Matthew Maimaitiyiming ◽  
Maitiniyazi Maimaitijiang ◽  
Sourav Bhadra ◽  
...  

Early detection of grapevine viral diseases is critical for early interventions in order to prevent the disease from spreading to the entire vineyard. Hyperspectral remote sensing can potentially detect and quantify viral diseases in a nondestructive manner. This study utilized hyperspectral imagery at the plant level to identify and classify grapevines inoculated with the newly discovered DNA virus grapevine vein-clearing virus (GVCV) at the early asymptomatic stages. An experiment was set up at a test site at South Farm Research Center, Columbia, MO, USA (38.92° N, 92.28° W), with two grapevine groups, namely healthy and GVCV-infected, while other conditions were controlled. Images of each vine were captured by a SPECIM IQ 400–1000 nm hyperspectral sensor (Oulu, Finland). Hyperspectral images were calibrated and preprocessed to retain only grapevine pixels. A statistical approach was employed to discriminate two reflectance spectra patterns between healthy and GVCV vines. Disease-centric vegetation indices (VIs) were established and explored in terms of their importance to the classification power. Pixel-wise (spectral features) classification was performed in parallel with image-wise (joint spatial–spectral features) classification within a framework involving deep learning architectures and traditional machine learning.
The results showed that: (1) the discriminative wavelength regions included the 900–940 nm range in the near-infrared (NIR) region in vines 30 days after sowing (DAS) and the entire visual (VIS) region of 400–700 nm in vines 90 DAS; (2) the normalized pheophytization index (NPQI), fluorescence ratio index 1 (FRI1), plant senescence reflectance index (PSRI), anthocyanin index (AntGitelson), and water stress and canopy temperature (WSCT) measures were the most discriminative indices; (3) the support vector machine (SVM) was effective in VI-wise classification with smaller feature spaces, while the RF classifier performed better in pixel-wise and image-wise classification with larger feature spaces; and (4) the automated 3D convolutional neural network (3D-CNN) feature extractor provided promising results over the 2D convolutional neural network (2D-CNN) in learning features from hyperspectral data cubes with a limited number of samples.
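Among the discriminative indices listed, PSRI is representative of how such VIs are computed from reflectance bands. A minimal sketch follows (the band positions follow Merzlyak et al.'s formulation of PSRI; the nearest-band lookup and the toy cube are assumptions for illustration):

```python
import numpy as np

def psri(cube, wavelengths):
    """Plant senescence reflectance index, PSRI = (R678 - R500) / R750
    (band positions per Merzlyak et al.; nearest-band lookup is assumed)."""
    band = lambda wl: cube[..., np.argmin(np.abs(wavelengths - wl))]
    return (band(678) - band(500)) / band(750)

# toy 2x2 "image" with three spectral bands at 500, 678, and 750 nm
wavelengths = np.array([500.0, 678.0, 750.0])
cube = np.empty((2, 2, 3))
cube[..., 0], cube[..., 1], cube[..., 2] = 0.1, 0.3, 0.5
print(psri(cube, wavelengths))  # (0.3 - 0.1) / 0.5 = 0.4 everywhere
```

A full hyperspectral cube would simply have many more bands, with the same nearest-band lookup selecting the wavelengths the index needs.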


Author(s):  
Bambang Trisakti ◽  
Dini Oktaviana Ambarwati

Abstract. The Advanced Land Observation Satellite (ALOS) is a Japanese satellite equipped with three sensors: PRISM, AVNIR, and PALSAR. The Advanced Visible and Near Infrared Radiometer (AVNIR) provides multispectral sensing from visible to near-infrared wavelengths to observe land and coastal zones. It has 10 m spatial resolution, which can be used to map land cover at a scale of 1:25,000. The purpose of this research was to determine a classification for land cover mapping using ALOS AVNIR data. Training samples were collected for 11 land cover classes around the Bromo volcano by visual reference to very high resolution IKONOS panchromatic data. The training samples were divided into samples for classification input and samples for accuracy evaluation. Principal component analysis (PCA) was conducted on the AVNIR data, and the generated PCA bands were classified using the Maximum Likelihood Enhanced Neighbor method. The classification result was filtered and re-classed into 8 classes. Misclassifications were evaluated and corrected in the post-processing stage. The accuracies of the classification results, before and after post-processing, were evaluated using the confusion matrix method. The results showed that the Maximum Likelihood Enhanced Neighbor classifier with post-processing can produce a land cover classification of AVNIR data with good accuracy (total accuracy 94% and kappa statistic 0.92). ALOS AVNIR has thus been proven to be a potential satellite data source for mapping land cover in the study area with good accuracy.
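The PCA-plus-maximum-likelihood core of this pipeline (without the Enhanced Neighbor and post-processing steps) can be sketched with scikit-learn, where quadratic discriminant analysis is Gaussian maximum likelihood classification. The 4-band pixels below are synthetic stand-ins for AVNIR data, and three classes are used instead of eleven:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# synthetic stand-in for AVNIR pixels: 4 spectral bands, 3 land cover classes
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(c, 0.5, size=(200, 4)) for c in range(3)])
y = np.repeat([0, 1, 2], 200)

Xp = PCA(n_components=3).fit_transform(X)         # PCA of the spectral bands
clf = QuadraticDiscriminantAnalysis().fit(Xp, y)  # Gaussian maximum likelihood
acc = clf.score(Xp, y)
print(round(acc, 2))
```

In the study, the equivalent of `acc` would be the confusion-matrix accuracy computed on held-out training samples, followed by the neighborhood-based post-processing.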


2020 ◽  
pp. 107754632094971 ◽  
Author(s):  
Shoucong Xiong ◽  
Shuai He ◽  
Jianping Xuan ◽  
Qi Xia ◽  
Tielin Shi

Modern machinery grows ever more valuable with the advance of science, and fault diagnosis is vital for avoiding economic losses or casualties. Among the many diagnosis methods, deep learning algorithms stand out, opening an era of intelligent fault diagnosis. Deep residual networks are state-of-the-art deep learning models that can continuously improve performance by deepening the network structure. However, in vibration-based fault diagnosis, the transient instability of vibration signals usually calls for time-frequency analysis methods, and the characteristics of time-frequency matrices are distinct from those of standard images, which imposes natural limitations on the diagnosis performance of deep learning algorithms. To handle this issue, an enhanced deep residual network named the multilevel correlation stack-deep residual network is proposed in this article. Wavelet packet transform is used to preprocess the sensor signal, and the proposed multilevel correlation stack-deep residual network then uses kernels with different shapes to fully extract various kinds of useful information from any local region of the processed input. Experiments on two rolling bearing datasets were carried out. Test results show that the multilevel correlation stack-deep residual network exhibits more satisfactory classification performance than the original deep residual networks and other similar methods, revealing significant potential for realistic fault diagnosis applications.
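The role of "kernels with different shapes" on a time-frequency matrix can be illustrated with a naive 2D correlation: a wide kernel aggregates along the time axis while a tall kernel aggregates across frequency bands (a pure-NumPy sketch of the idea, not the paper's network):

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive valid-mode 2D cross-correlation."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

# toy time-frequency matrix (rows: frequency bands, cols: time steps)
tf = np.random.default_rng(3).random((16, 32))
# differently shaped kernels probe time-local vs frequency-local structure
wide = conv2d_valid(tf, np.ones((1, 5)) / 5)   # averages along time
tall = conv2d_valid(tf, np.ones((5, 1)) / 5)   # averages across frequency
print(wide.shape, tall.shape)  # (16, 28) (12, 32)
```

Stacking responses from several such kernel shapes is what gives the proposed network access to both time-local and frequency-local patterns in one layer.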

