Wide Sliding Window and Subsampling Network for Hyperspectral Image Classification

2021 ◽  
Vol 13 (7) ◽  
pp. 1290
Author(s):  
Jiangbo Xi ◽  
Okan K. Ersoy ◽  
Jianwu Fang ◽  
Ming Cong ◽  
Tianjun Wu ◽  
...  

Recently, deep learning methods, for example, convolutional neural networks (CNNs), have achieved high performance in hyperspectral image (HSI) classification. The limited number of training samples in HSIs makes it hard to use deep learning methods with many layers and a large number of convolutional kernels, as in large-scale imagery tasks, and CNN-based methods usually need a long training time. In this paper, we present a wide sliding window and subsampling network (WSWS Net) for HSI classification. It is based on layers of transform kernels with sliding windows and subsampling (WSWS). It can be extended in the wide direction to learn both spatial and spectral features more efficiently. The learned features are subsampled to reduce the computational load and to reduce memorization. Thus, layers of WSWS can learn higher-level spatial and spectral features efficiently, and the proposed network can be trained easily by computing only linear weights with least squares. The experimental results show that the WSWS Net achieves excellent performance on different hyperspectral remote sensing datasets compared with other shallow and deep learning methods. The effects of the ratio of training samples and the sizes of image patches, as well as visualizations of the features in the WSWS layers, are also presented.
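
A minimal NumPy sketch of the core WSWS idea, assuming random transform kernels, tanh activations, and stride-2 subsampling; the window size, kernel count, and toy data are illustrative placeholders rather than the authors' configuration. Only the final linear readout is fitted, here with ordinary least squares.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def wsws_features(patches, kernels, stride=2):
    """Apply fixed transform kernels over sliding windows, then subsample.

    patches: (n_samples, n_features) flattened spectral-spatial patches.
    kernels: (n_kernels, window) transform kernels (assumed fixed/random here).
    """
    window = kernels.shape[1]
    windows = sliding_window_view(patches, window, axis=1)   # (n, n_windows, window)
    feats = np.tanh(windows @ kernels.T)                     # wide feature bank
    feats = feats[:, ::stride, :]                            # subsample to cut computation
    return feats.reshape(patches.shape[0], -1)

# Toy data: 200 samples, 120-dimensional patches, 10 classes.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 120)), rng.integers(0, 10, 200)
kernels = rng.normal(size=(16, 9))

H = wsws_features(X, kernels)
Y = np.eye(10)[y]                                  # one-hot targets
W, *_ = np.linalg.lstsq(H, Y, rcond=None)          # only the linear readout is "trained"
pred = np.argmax(H @ W, axis=1)
```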

2021 ◽  
Vol 13 (13) ◽  
pp. 2575
Author(s):  
Jiangbo Xi ◽  
Ming Cong ◽  
Okan K. Ersoy ◽  
Weibao Zou ◽  
Chaoying Zhao ◽  
...  

Recently, deep learning has been successfully and widely used in hyperspectral image (HSI) classification. Considering the difficulty of acquiring HSIs, there are usually only a small number of pixels available as training instances. It is therefore hard to fully exploit the advantages of deep learning networks; for example, very deep layers with a large number of parameters lead to overfitting. This paper proposes a dynamic wide and deep neural network (DWDNN) for HSI classification, which consists of multiple efficient wide sliding window and subsampling (EWSWS) networks and can grow dynamically according to the complexity of the problem. The EWSWS networks in the DWDNN are designed in both the wide and deep directions, with transform kernels as hidden units. These multiple layers of kernels can extract features from low to high levels, and because they are extended in the wide direction, they can learn features more steadily and smoothly. Sliding windows with a stride, together with subsampling, reduce the dimension of the features at each layer and therefore the computational load. Finally, the only weights are those of the fully connected layer, and they are computed easily with the iterative least squares method. The proposed DWDNN was tested on several HSI datasets, including the Botswana, Pavia University, and Salinas remote sensing datasets, with different numbers of instances (from small to large). The experimental results show that the proposed method achieved the highest test accuracies compared with both typical machine learning methods, such as the support vector machine (SVM), multilayer perceptron (MLP), and radial basis function (RBF) network, and recently proposed deep learning methods, including the 2D convolutional neural network (CNN) and the 3D CNN designed for HSI classification.
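
A rough sketch of the dynamic-growth behaviour, assuming each EWSWS network can be abstracted as one wide block of random nonlinear features; the block width, growth limit, and plain (non-iterative) least-squares readout are simplifications for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_classes, width = 9, 64

def fit_readout(H, y):
    """Closed-form least-squares fit of the fully connected output weights."""
    Y = np.eye(n_classes)[y]
    W, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return W

def accuracy(H, W, y):
    return float(np.mean(np.argmax(H @ W, axis=1) == y))

# Toy spectral vectors standing in for HSI pixels.
X_tr, y_tr = rng.normal(size=(300, 100)), rng.integers(0, n_classes, 300)
X_va, y_va = rng.normal(size=(100, 100)), rng.integers(0, n_classes, 100)

H_tr, H_va, best = np.empty((300, 0)), np.empty((100, 0)), 0.0
for _ in range(10):                                   # grow one wide block per iteration
    P = rng.normal(size=(X_tr.shape[1], width))
    H_tr = np.hstack([H_tr, np.tanh(X_tr @ P)])       # stand-in for one EWSWS block
    H_va = np.hstack([H_va, np.tanh(X_va @ P)])
    W = fit_readout(H_tr, y_tr)
    acc = accuracy(H_va, W, y_va)
    if acc <= best:                                   # stop growing when validation plateaus
        break
    best = acc
```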


2021 ◽  
Author(s):  
Gargi Mishra ◽  
Supriya Bajpai

It is highly challenging to obtain high performance with limited and unconstrained data in real-time face recognition applications. Sparse Approximation is a fast and computationally efficient method for such applications, as it requires no training time compared with deep learning methods. It eliminates the training time by assuming that the test image can be approximated by a sum of individual contributions from the training images of different classes, and the class with the maximum contribution is closest to the test image. The efficiency of the Sparse Approximation method can be further increased by providing high-quality features as input for classification. Hence, we propose to integrate a pre-trained CNN architecture to extract highly discriminative features from the image dataset for Sparse classification. The proposed approach provides better performance than existing methods even for one training image per class in complex environments. A highlight of the present approach is that the accuracies obtained on the LFW dataset with one and thirteen training images per class are 84.86% and 96.14%, respectively, whereas existing deep learning methods require a large amount of training data to achieve comparable results.
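
A hedged sketch of the described pipeline: deep features from a pretrained CNN feed an SRC-style sparse-approximation classifier that assigns the class with the smallest reconstruction residual. The ResNet-18 backbone and the orthogonal matching pursuit solver are illustrative substitutes, not necessarily the authors' choices.

```python
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.linear_model import OrthogonalMatchingPursuit

# Pretrained CNN as a feature extractor (final FC layer removed).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def deep_feature(pil_image):
    x = preprocess(pil_image).unsqueeze(0)
    f = backbone(x).squeeze(0).numpy()
    return f / (np.linalg.norm(f) + 1e-8)

def src_classify(test_feat, train_feats, train_labels, n_nonzero=20):
    """Sparse approximation: reconstruct the test feature from the training
    features and pick the class with the smallest reconstruction residual."""
    D = train_feats.T                                   # dictionary: one column per image
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero).fit(D, test_feat)
    coef = omp.coef_
    residuals = {}
    for c in np.unique(train_labels):
        mask = (train_labels == c)
        residuals[c] = np.linalg.norm(test_feat - D[:, mask] @ coef[mask])
    return min(residuals, key=residuals.get)

# Hypothetical usage, with train_feats stacked from deep_feature() calls:
# label = src_classify(deep_feature(test_img), train_feats, train_labels)
```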


Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 223
Author(s):  
Yen-Ling Tai ◽  
Shin-Jhe Huang ◽  
Chien-Chang Chen ◽  
Henry Horng-Shing Lu

Nowadays, deep learning methods with high structural complexity and flexibility inevitably lean on the computational capability of the hardware. A platform with high-performance GPUs and large amounts of memory can support neural networks with large numbers of layers and kernels. However, naively pursuing high-cost hardware would probably hinder the technical development of deep learning methods. In this article, we therefore establish a new preprocessing method to reduce the computational complexity of neural networks. Inspired by the band theory of solids in physics, we map the image space isomorphically onto a noninteracting physical system and treat image voxels as particle-like clusters. We then reconstruct the Fermi–Dirac distribution as a correction function for the normalization of the voxel intensity and as a filter of insignificant cluster components. The filtered clusters can then delineate the morphological heterogeneity of the image voxels. We used the BraTS 2019 datasets and the dimensional fusion U-net for algorithmic validation, and the proposed Fermi–Dirac correction function exhibited performance comparable to the other preprocessing methods employed. Compared with the conventional z-score normalization function and the Gamma correction function, the proposed algorithm saves at least 38% of the computational time on a low-cost hardware architecture. Even though global histogram equalization has the lowest computational time among the employed correction functions, the proposed Fermi–Dirac correction function exhibits better image augmentation and segmentation capabilities.
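
A minimal NumPy sketch of a Fermi–Dirac-shaped correction applied to voxel intensities, with filtering of insignificant components; the chemical-potential default, temperature, and cutoff below are illustrative assumptions, not the article's exact formulation.

```python
import numpy as np

def fermi_dirac_correction(volume, mu=None, kT=0.1, cutoff=0.05):
    """Map voxel intensities through a Fermi-Dirac-shaped function.

    volume: 3D array of voxel intensities.
    mu:     'chemical potential' separating significant from insignificant voxels
            (defaults here to the median nonzero intensity; assumption).
    kT:     softness of the transition; small values approach a hard threshold.
    cutoff: corrected intensities below this value are zeroed out (filtering).
    """
    v = volume.astype(np.float64)
    v = (v - v.min()) / (v.max() - v.min() + 1e-12)      # scale to [0, 1]
    if mu is None:
        mu = np.median(v[v > 0])
    corrected = 1.0 / (np.exp((mu - v) / kT) + 1.0)      # high intensity -> close to 1
    corrected[corrected < cutoff] = 0.0                  # drop insignificant clusters
    return corrected

# Example on a random toy volume standing in for an MRI scan.
vol = np.random.rand(8, 64, 64)
out = fermi_dirac_correction(vol)
```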


2021 ◽  
Vol 61 (2) ◽  
pp. 653-663
Author(s):  
Sankalp Jain ◽  
Vishal B. Siramshetty ◽  
Vinicius M. Alves ◽  
Eugene N. Muratov ◽  
Nicole Kleinstreuer ◽  
...  

2020 ◽  
Author(s):  
Tuan Pham

Chest X-rays have been found to be very promising for assessing COVID-19 patients, especially for resolving emergency-department and urgent-care-center overcapacity. Deep-learning (DL) methods in artificial intelligence (AI) play a dominant role as high-performance classifiers in the detection of the disease using chest X-rays. While many new DL models have been developed for this purpose, this study aimed to investigate the fine-tuning of pretrained convolutional neural networks (CNNs) for the classification of COVID-19 using chest X-rays. Three pretrained CNNs, namely AlexNet, GoogleNet, and SqueezeNet, were selected and fine-tuned without data augmentation to carry out 2-class and 3-class classification tasks using three public chest X-ray databases. In comparison with other recently developed DL models, the three pretrained CNNs achieved very high classification results in terms of accuracy, sensitivity, specificity, precision, F1 score, and area under the receiver-operating-characteristic curve. AlexNet, GoogleNet, and SqueezeNet require the least training time among pretrained DL models, yet with a suitable selection of training parameters, these networks can achieve excellent classification results without data augmentation. The findings contribute to the urgent effort to contain the pandemic by facilitating the deployment of AI tools that are fully automated, readily available in the public domain, and suitable for rapid implementation.
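
A hedged PyTorch sketch of the fine-tuning setup described: a pretrained network has its classifier head replaced for the 2- or 3-class task and is trained without data augmentation. The SqueezeNet variant, optimizer, learning rate, epoch count, and dataset path are assumptions for illustration.

```python
import torch
from torch import nn, optim
from torchvision import datasets, models, transforms

n_classes = 3                                    # e.g. COVID-19 / pneumonia / normal

# Pretrained SqueezeNet with its classifier replaced for the new task.
model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
model.classifier[1] = nn.Conv2d(512, n_classes, kernel_size=1)
model.num_classes = n_classes

# No augmentation: only resizing and normalization.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("chest_xrays/train", transform=preprocess)  # hypothetical path
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for epoch in range(5):                           # illustrative number of epochs
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```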


2020 ◽  
Vol 12 (5) ◽  
pp. 779 ◽  
Author(s):  
Bei Fang ◽  
Yunpeng Bai ◽  
Ying Li

Recently, hyperspectral image (HSI) classification methods based on deep learning models have shown encouraging performance. However, the limited number of training samples, as well as the mixed pixels due to low spatial resolution, have become major obstacles for HSI classification. To tackle these problems, we propose a resource-efficient HSI classification framework that introduces adaptive spectral unmixing into a 3D/2D dense network with an early-exiting strategy. More specifically, on the one hand, our framework uses a cascade of intermediate classifiers throughout the 3D/2D dense network, which is trained end-to-end. The proposed 3D/2D dense network, which integrates 3D convolutions with 2D convolutions, is more capable of handling spectral-spatial features while containing fewer parameters than conventional 3D convolutions, and it further boosts network performance with limited training samples. On the other hand, considering the existence of mixed pixels in HSI data, the pixels in HSI classification are divided into hard samples and easy samples. With the early-exiting strategy in these intermediate classifiers, the average accuracy can be improved by reducing the computational cost spent on easy samples, so that effort is focused on classifying hard samples. Furthermore, for hard samples, an adaptive spectral unmixing method is proposed as a complementary source of information for classification, which brings considerable benefits to the final performance. Experimental results on four HSI benchmark datasets demonstrate that the proposed method achieves better performance than state-of-the-art deep learning-based methods and other traditional HSI classification methods.
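
A simplified sketch of the early-exiting idea alone: intermediate classifiers are evaluated in order, and a pixel is returned early once the prediction confidence clears a threshold, so easy samples skip the deeper, more expensive stages. The linear stages, threshold, and dimensions are placeholders; the 3D/2D dense blocks and adaptive spectral unmixing are not shown.

```python
import torch
from torch import nn

class EarlyExitNet(nn.Module):
    """Cascade of feature stages, each followed by an intermediate classifier."""

    def __init__(self, in_dim=200, n_classes=16, threshold=0.9):
        super().__init__()
        self.stages = nn.ModuleList([nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU()),
                                     nn.Sequential(nn.Linear(128, 128), nn.ReLU()),
                                     nn.Sequential(nn.Linear(128, 128), nn.ReLU())])
        self.exits = nn.ModuleList([nn.Linear(128, n_classes) for _ in self.stages])
        self.threshold = threshold

    def forward(self, x):
        """Inference for a single pixel vector: exit as soon as we are confident."""
        for stage, exit_head in zip(self.stages, self.exits):
            x = stage(x)
            probs = torch.softmax(exit_head(x), dim=-1)
            confidence, pred = probs.max(dim=-1)
            if confidence.item() >= self.threshold:    # easy sample: stop here
                return pred
        return pred                                    # hard sample: use the last exit

net = EarlyExitNet()
pixel = torch.randn(1, 200)                            # toy spectral vector
label = net(pixel)
```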


2020 ◽  
Vol 12 (3) ◽  
pp. 582 ◽  
Author(s):  
Rui Li ◽  
Shunyi Zheng ◽  
Chenxi Duan ◽  
Yang Yang ◽  
Xiqi Wang

In recent years, researchers have paid increasing attention to hyperspectral image (HSI) classification using deep learning methods. To improve accuracy and reduce the number of training samples required, we propose a double-branch dual-attention mechanism network (DBDA) for HSI classification in this paper. Two branches are designed in DBDA to capture the abundant spectral and spatial features contained in HSIs. Furthermore, a channel attention block and a spatial attention block are applied to these two branches, respectively, which enables DBDA to refine and optimize the extracted feature maps. A series of experiments on four hyperspectral datasets shows that the proposed framework has superior performance to state-of-the-art algorithms, especially when training samples are severely limited.
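
A hedged sketch of generic channel and spatial attention blocks of the kind named in the abstract, written in PyTorch; the exact DBDA branch structure, 3D convolutions, and attention formulations are not reproduced here.

```python
import torch
from torch import nn

class ChannelAttention(nn.Module):
    """Reweight feature maps per channel using globally pooled statistics."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                       # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))         # (B, C) channel weights
        return x * w[:, :, None, None]

class SpatialAttention(nn.Module):
    """Reweight feature maps per spatial location using pooled channel statistics."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                       # x: (B, C, H, W)
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

feat = torch.randn(2, 64, 9, 9)                 # toy spectral-spatial feature maps
refined = SpatialAttention()(ChannelAttention(64)(feat))
```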


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4975
Author(s):  
Fangyu Shi ◽  
Zhaodi Wang ◽  
Menghan Hu ◽  
Guangtao Zhai

Relying on large-scale labeled datasets, deep learning has achieved good performance in image classification tasks. In agricultural and biological engineering, image annotation is time-consuming and expensive, and it requires annotators to have technical skills in specific areas. Obtaining the ground truth is therefore more difficult and costly than for natural images. In addition, images in these areas are usually stored as multichannel images, such as computed tomography (CT) images, magnetic resonance images (MRI), and hyperspectral images (HSI). In this paper, we present a framework using active learning and deep learning for multichannel image classification. We use three active learning algorithms, namely least confidence, margin sampling, and entropy, as the selection criteria. Based on this framework, we further introduce an "image pool" to take full advantage of images generated by data augmentation. To demonstrate the applicability of the proposed framework, we present a case study on agricultural hyperspectral image classification. The results show that the proposed framework achieves better performance than the plain deep learning model. Manually annotating the entire training set achieves an encouraging accuracy. In comparison, the entropy-based active learning algorithm with the image pool achieves similar accuracy with only part of the training set manually annotated. In practical applications, the proposed framework can substantially reduce labeling effort during model development and updating, and it can be applied to multichannel image classification in agricultural and biological engineering.
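
A small sketch of the entropy-based selection step over an unlabeled image pool, assuming a model that outputs class probabilities; the pool construction via augmentation, the labeling budget, and all names are illustrative.

```python
import numpy as np

def entropy_scores(probs):
    """Predictive entropy per sample; higher means the model is less certain."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def select_for_annotation(model_probs, pool_indices, budget=50):
    """Pick the most uncertain samples from the unlabeled pool for manual labeling."""
    scores = entropy_scores(model_probs)
    ranked = np.argsort(scores)[::-1]                  # most uncertain first
    return [pool_indices[i] for i in ranked[:budget]]

# Toy example: 1000 unlabeled (possibly augmented) images, 5 classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(5), size=1000)           # stand-in for model predictions
pool = list(range(1000))
to_label = select_for_annotation(probs, pool, budget=50)
```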


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Shan Pang ◽  
Xinyi Yang

In recent years, some deep learning methods have been developed and applied to image classification applications, such as the convolutional neural network (CNN) and the deep belief network (DBN). However, they suffer from problems such as local minima, slow convergence rates, and the need for intensive human intervention. In this paper, we propose a rapid learning method, namely the deep convolutional extreme learning machine (DC-ELM), which combines the representational power of CNNs with the fast training of the ELM. It uses multiple alternating convolution layers and pooling layers to effectively abstract high-level features from input images. The abstracted features are then fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to greatly reduce the dimensionality of the features, saving considerable training time and computational resources. We systematically evaluated the performance of DC-ELM on two handwritten digit datasets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time than deep learning methods and other ELM methods.
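
A minimal sketch of the ELM-style classifier that follows the convolutional feature extraction: random hidden weights stay fixed and only the output weights are solved in closed form. The convolution, pooling, and stochastic pooling stages are omitted here, and pooled feature vectors are assumed to be given.

```python
import numpy as np

def train_elm(features, labels, n_hidden=512, n_classes=10, seed=0):
    """Extreme learning machine: random hidden layer + least-squares output weights."""
    rng = np.random.default_rng(seed)
    W_in = rng.normal(size=(features.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(features @ W_in + b)                    # hidden activations (never trained)
    Y = np.eye(n_classes)[labels]
    W_out, *_ = np.linalg.lstsq(H, Y, rcond=None)       # closed-form solution, no backprop
    return W_in, b, W_out

def predict_elm(features, W_in, b, W_out):
    return np.argmax(np.tanh(features @ W_in + b) @ W_out, axis=1)

# Toy features standing in for pooled convolutional features of digit images.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(500, 196)), rng.integers(0, 10, 500)
W_in, b, W_out = train_elm(X, y)
pred = predict_elm(X, W_in, b, W_out)
```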


2021 ◽  
Vol 2070 (1) ◽  
pp. 012141
Author(s):  
Pavan Sharma ◽  
Hemant Amhia ◽  
Sunil Datt Sharma

Nowadays, artificial intelligence techniques are becoming popular in modern industry for diagnosing rolling bearing faults (RBFs). RBFs occur in rotating machinery and are common in every manufacturing industry. Diagnosing RBFs is essential to reduce financial and production losses. Therefore, various artificial intelligence techniques, such as machine learning and deep learning, have been developed to diagnose RBFs in rotating machines. However, the performance of these techniques depends on the size of the dataset, because machine learning and deep learning methods are suited to small and large datasets, respectively. Deep learning methods are also limited by long training times. In this paper, the performance of different pre-trained models for RBF classification is analysed. The CWRU dataset is used for the performance comparison.
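
One common way to apply pretrained image models to CWRU vibration data is to convert each signal segment into a time-frequency image and then fine-tune the network or use it as a fixed feature extractor; the sketch below assumes that spectrogram-based setup with a ResNet-18 backbone, which is not necessarily the paper's configuration.

```python
import numpy as np
import torch
from scipy import signal
from torchvision import models

# Pretrained backbone used as a fixed feature extractor for fault classification.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

def vibration_to_image(segment, fs=12_000):
    """Turn a 1D vibration segment into a 3-channel spectrogram 'image'."""
    _, _, Sxx = signal.spectrogram(segment, fs=fs, nperseg=256)
    Sxx = np.log1p(Sxx)
    Sxx = (Sxx - Sxx.min()) / (Sxx.max() - Sxx.min() + 1e-12)
    img = torch.tensor(Sxx, dtype=torch.float32)
    img = torch.nn.functional.interpolate(img[None, None], size=(224, 224))[0]
    return img.repeat(3, 1, 1)                          # replicate to RGB channels

@torch.no_grad()
def bearing_feature(segment):
    return backbone(vibration_to_image(segment)[None]).squeeze(0)

# Toy segment standing in for a CWRU drive-end accelerometer recording.
feat = bearing_feature(np.random.randn(4096))
```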

