Hyperspectral Image Classification Based on a Shuffled Group Convolutional Neural Network with Transfer Learning

2020 · Vol 12 (11) · pp. 1780
Author(s): Yao Liu, Lianru Gao, Chenchao Xiao, Ying Qu, Ke Zheng, ...

Convolutional neural networks (CNNs) have been widely applied to hyperspectral imagery (HSI) classification. However, their classification performance can be limited by the scarcity of labeled data available for training and validation. In this paper, we propose a novel lightweight shuffled group convolutional neural network (abbreviated as SG-CNN) to achieve efficient training with a limited training dataset in HSI classification. SG-CNN consists of SG conv units that employ conventional and atrous convolution in different groups, followed by a channel shuffle operation and a shortcut connection. In this way, SG-CNNs have fewer trainable parameters, while they can still be accurately and efficiently trained with fewer labeled samples. Transfer learning between different HSI datasets is also applied to the SG-CNN to further improve the classification accuracy. To evaluate the effectiveness of SG-CNNs for HSI classification, experiments were conducted on three public HSI datasets using models pretrained on HSIs from different sensors. SG-CNNs with different levels of complexity were tested, and their classification results were compared with those of fine-tuned ShuffleNet2, ResNeXt, and their original counterparts. The experimental results demonstrate that SG-CNNs achieve competitive classification performance when labeled training data are scarce, while providing satisfactory classification results efficiently.
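To make the described SG conv unit concrete, the following PyTorch sketch splits the input channels into two groups, applies a conventional 3×3 convolution to one group and an atrous (dilated) 3×3 convolution to the other, then performs a channel shuffle and adds a shortcut connection. The channel counts and dilation rate are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    # Reorder channels so information mixes across the convolution groups.
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w).transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

class SGConvUnit(nn.Module):
    """One shuffled-group unit: conventional + atrous branches, shuffle, shortcut."""
    def __init__(self, channels, dilation=2):
        super().__init__()
        half = channels // 2
        # Conventional 3x3 convolution branch.
        self.conv = nn.Sequential(
            nn.Conv2d(half, half, 3, padding=1, bias=False),
            nn.BatchNorm2d(half), nn.ReLU(inplace=True))
        # Atrous (dilated) 3x3 convolution branch with a wider receptive field.
        self.atrous = nn.Sequential(
            nn.Conv2d(half, half, 3, padding=dilation, dilation=dilation, bias=False),
            nn.BatchNorm2d(half), nn.ReLU(inplace=True))

    def forward(self, x):
        a, b = x.chunk(2, dim=1)                    # split into two groups
        out = torch.cat([self.conv(a), self.atrous(b)], dim=1)
        out = channel_shuffle(out, groups=2)        # channel shuffle operation
        return out + x                              # shortcut connection

x = torch.randn(4, 64, 11, 11)     # e.g. 11x11 spatial patches with 64 feature maps
print(SGConvUnit(64)(x).shape)     # torch.Size([4, 64, 11, 11])
```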

2021 · Vol 13 (3) · pp. 335
Author(s): Yuhao Qing, Wenyi Liu

In recent years, image classification on hyperspectral imagery using deep learning algorithms has attained good results. Spurred by this finding, and to further improve deep learning classification accuracy, we propose a multi-scale residual convolutional neural network model fused with an efficient channel attention network (MRA-NET) that is suitable for hyperspectral image classification. The suggested technique comprises a multi-stage architecture: first, the spectral information of the hyperspectral image is reduced to a two-dimensional tensor using principal component analysis (PCA). Then, the resulting low-dimensional image is input to the proposed MRA-NET deep network, which exploits the advantages of its core components, i.e., the multi-scale residual structure and attention mechanisms. We evaluate the performance of the proposed MRA-NET on three publicly available hyperspectral datasets and demonstrate that, overall, the classification accuracy of our method is 99.82%, 99.81%, and 99.37%, respectively, which is higher than the corresponding accuracy of current networks such as the 3D convolutional neural network (CNN), the three-dimensional residual convolution structure (RES-3D-CNN), and the space–spectrum joint deep network (SSRN).
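The pipeline described above has two readily sketchable stages: PCA-based spectral reduction of the HSI cube, and an efficient channel attention (ECA-style) block of the kind fused into the residual branches. The sketch below illustrates both under assumed parameters (30 retained components, a 1D kernel of size 3); it is not the authors' implementation.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

def reduce_spectral(cube, n_components=30):
    # cube: (H, W, B) hyperspectral image; PCA is applied along the band axis B.
    h, w, b = cube.shape
    reduced = PCA(n_components=n_components).fit_transform(cube.reshape(-1, b))
    return reduced.reshape(h, w, n_components)

class ECABlock(nn.Module):
    """Efficient channel attention: a 1D conv over globally pooled channel descriptors."""
    def __init__(self, k=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):
        n, c, _, _ = x.size()
        y = self.pool(x).view(n, 1, c)                   # (N, 1, C) channel descriptors
        y = torch.sigmoid(self.conv(y)).view(n, c, 1, 1)
        return x * y                                     # reweight feature channels

cube = np.random.rand(145, 145, 200).astype(np.float32)   # placeholder HSI cube
pcs = reduce_spectral(cube)                                # (145, 145, 30)
feats = torch.randn(2, 64, 11, 11)
print(ECABlock()(feats).shape)                             # torch.Size([2, 64, 11, 11])
```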


2021 · Vol 17 (12) · pp. e1009706
Author(s): Ralph Simon, Karol Bakunowski, Angel Eduardo Reyes-Vasques, Marco Tschapka, Mirjam Knörnschild, ...

Bat-pollinated flowers have to attract their pollinators in the absence of light, and some species have therefore developed specialized echoic floral parts. These parts are usually concave and act like acoustic retroreflectors, making the flowers acoustically conspicuous to bats. Acoustic plant specializations have so far been described only for two bat-pollinated species in the Neotropics and one other bat-dependent plant in South East Asia. However, it remains unclear whether other bat-pollinated plant species also show acoustic adaptations. Moreover, acoustic traits have never been compared between bat-pollinated flowers and flowers belonging to other pollination syndromes. To investigate the acoustic traits of bat-pollinated flowers, we recorded a dataset of 32,320 flower echoes collected from 168 individual flowers belonging to 12 different species. Six of these species were pollinated by bats and six by insects or hummingbirds. We analyzed the spectral target strength of the flowers and trained a convolutional neural network (CNN) on the spectrograms of the flower echoes. We found that bat-pollinated flowers have a significantly higher echo target strength, independent of their size, and differ in their morphology, specifically in the lower variance of their morphological features. We also found that good classification accuracy (up to 84%) can be achieved by our CNN with only one echo/spectrogram to classify the 12 plant species, both bat-pollinated and otherwise, with bat-pollinated flowers being easier to classify. The higher classification performance of bat-pollinated flowers can be explained by the lower variance of their morphology.
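As a rough illustration of the single-echo classification setup, the sketch below converts one echo into a spectrogram and passes it through a small CNN with a 12-way output, one class per plant species. The sampling rate, spectrogram parameters, and network depth are assumptions for demonstration only, not the recording or training configuration used in the study.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

fs = 500_000                                  # assumed ultrasonic sampling rate (Hz)
echo = np.random.randn(2048)                  # placeholder for one recorded flower echo
f, t, sxx = spectrogram(echo, fs=fs, nperseg=128, noverlap=96)
x = torch.log1p(torch.tensor(sxx, dtype=torch.float32))[None, None]   # (1, 1, F, T)

cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 12),                        # one output per plant species
)
print(cnn(x).shape)                           # torch.Size([1, 12])
```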


2021 · Vol 2021 · pp. 1-19
Author(s): Douglas Omwenga Nyabuga, Jinling Song, Guohua Liu, Michael Adjeisah

With the fast evolution of remote sensing and spectral imaging techniques, hyperspectral image (HSI) classification has attracted considerable attention in various fields, including land survey and resource monitoring, among others. Nonetheless, due to a lack of distinctiveness among the hyperspectral pixels of separate classes, there is a recurrent inseparability obstacle in the primary space. Additionally, an open challenge lies in finding efficient techniques that can quickly classify and interpret the spectral-spatial data bands within a reasonable computational time. Hence, in this work, we propose a 3D-2D convolutional neural network and transfer learning model in which the early layers exploit 3D convolutions to model spectral-spatial information. On top of them are 2D convolutional layers that mainly handle semantic abstraction. Toward a simple and highly modularized network for image classification, we leverage the ResNeXt-50 block for our model. Furthermore, to improve the separability among classes and balance the interclass and intraclass criteria, we employed principal component analysis (PCA) to obtain the best orthogonal vectors for representing information from the HSIs before feeding them to the network. The experimental results show that our model can efficiently improve hyperspectral imagery classification, including an instantaneous representation of the spectral-spatial information. Our model was evaluated on five publicly available hyperspectral datasets, Indian Pines (IP), Pavia University Scene (PU), Salinas Scene (SA), Botswana (BS), and Kennedy Space Center (KSC), achieving high classification accuracies of 99.85%, 99.98%, 100%, 99.82%, and 99.71%, respectively. Quantitative results demonstrated that it outperforms several state-of-the-art (SOTA) deep neural network-based approaches and standard classifiers. Thus, it provides more insight into hyperspectral image classification.
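The 3D-then-2D design can be sketched as follows: early 3D convolutions model the joint spectral-spatial structure of a PCA-reduced patch, after which the spectral dimension is folded into the channel axis so that 2D convolutions (a plain stack here, standing in for the ResNeXt-50 stages the authors use) handle the more abstract features. Layer sizes, kernel shapes, and the 16-class output are assumptions for illustration.

```python
import torch
import torch.nn as nn

class Hybrid3D2D(nn.Module):
    def __init__(self, bands=30, n_classes=16):
        super().__init__()
        # 3D convolutions over (spectral, height, width).
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(0, 1, 1)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(0, 1, 1)), nn.ReLU())
        depth = bands - 6 - 4                 # spectral size left after the two 3D convs
        # 2D convolutions on the folded spectral-feature channels.
        self.conv2d = nn.Sequential(
            nn.Conv2d(16 * depth, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_classes))

    def forward(self, x):                     # x: (N, 1, bands, H, W)
        y = self.conv3d(x)
        n, c, d, h, w = y.shape
        y = y.view(n, c * d, h, w)            # fold spectral dim into channels
        return self.conv2d(y)

patch = torch.randn(4, 1, 30, 11, 11)         # PCA-reduced patch of 30 components
print(Hybrid3D2D()(patch).shape)              # torch.Size([4, 16])
```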


2020 · Vol 10 (1)
Author(s): Young-Gon Kim, Sungchul Kim, Cristina Eunbee Cho, In Hye Song, Hee Jin Lee, ...

Fast and accurate confirmation of metastasis on frozen tissue sections from intraoperative sentinel lymph node biopsy is an essential tool for critical surgical decisions. However, accurate diagnosis by pathologists is difficult within the time limitations. Training a robust and accurate deep learning model is also difficult owing to the limited number of frozen datasets with high-quality labels. To overcome these issues, we validated the effectiveness of transfer learning from CAMELYON16 in improving the performance of a convolutional neural network (CNN)-based classification model on our frozen dataset (N = 297) from Asan Medical Center (AMC). Among the 297 whole slide images (WSIs), 157 and 40 WSIs were used to train deep learning models with different dataset ratios of 2, 4, 8, 20, 40, and 100%. The remaining 100 WSIs were used to validate model performance in terms of patch- and slide-level classification. An additional 228 WSIs from Seoul National University Bundang Hospital (SNUBH) were used for external validation. Three initial weights, i.e., scratch-based (random initialization), ImageNet-based, and CAMELYON16-based models, were used to validate their effectiveness in external validation. In the patch-level classification results on the AMC dataset, CAMELYON16-based models trained with a small dataset (up to 40%, i.e., 62 WSIs) showed a significantly higher area under the curve (AUC) of 0.929 than the scratch- and ImageNet-based models at 0.897 and 0.919, respectively, while CAMELYON16-based and ImageNet-based models trained with 100% of the training dataset showed comparable AUCs of 0.944 and 0.943, respectively. For the external validation, CAMELYON16-based models showed higher AUCs than the scratch- and ImageNet-based models. These results validate the feasibility of transfer learning to enhance model performance for frozen section datasets with a limited number of samples.
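A minimal sketch of the three initialization strategies being compared, assuming a ResNet-50 backbone and a hypothetical CAMELYON16 checkpoint path (the abstract specifies neither): each variant is then fine-tuned for binary patch-level metastasis classification.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_patch_classifier(init="camelyon16"):
    if init == "imagenet":
        net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    else:
        net = models.resnet50(weights=None)          # scratch: random initialization
    net.fc = nn.Linear(net.fc.in_features, 2)        # tumor vs. normal patch
    if init == "camelyon16":
        # Hypothetical checkpoint from a model previously trained on CAMELYON16 patches.
        state = torch.load("camelyon16_resnet50.pth", map_location="cpu")
        net.load_state_dict(state, strict=False)
    return net

model = build_patch_classifier("imagenet")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # fine-tune all layers
```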

