Multi-scale 3D-convolutional neural network for hyperspectral image classification

Author(s):  
Murali Kanthi ◽  
Thogarcheti Hitendra Sarma ◽  
Chigarapalle Shoba Bindu

Deep learning methods are state-of-the-art approaches for pixel-based hyperspectral image (HSI) classification. High classification accuracy has been achieved by extracting deep features from both spatial and spectral channels. However, the efficiency of such spatial-spectral approaches depends on the spatial dimension of each patch, and there is no theoretically grounded way to find the optimum spatial dimension. It is therefore preferable to extract spatial features at varying neighborhood scales. In this regard, this article proposes a deep convolutional neural network (CNN) model in which three spatial-spectral patches of different scales are used to extract features in both the spatial and spectral channels. To extract these features, the proposed architecture takes three patches of different spatial sizes, applies 3D convolution to each selected patch, and repeats the process over the entire image. The proposed model is named the multi-scale three-dimensional convolutional neural network (MS-3DCNN). Its efficiency is verified through experimental studies on three publicly available benchmark datasets: Pavia University, Indian Pines, and Salinas. It is empirically shown that the classification accuracy of the proposed model improves upon the other state-of-the-art methods.
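Below is a minimal sketch of the multi-scale idea described in this abstract: three spatial patch sizes, one 3D-convolutional branch per scale, and concatenated features for pixel classification. The patch sizes, kernel shapes, and layer widths are illustrative assumptions written in Keras, not the authors' exact MS-3DCNN configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def branch(patch_size, bands, filters=16):
    # One 3D-convolutional branch for a single spatial scale.
    inp = layers.Input(shape=(patch_size, patch_size, bands, 1))
    x = layers.Conv3D(filters, (3, 3, 7), padding="same", activation="relu")(inp)
    x = layers.Conv3D(filters, (3, 3, 5), padding="same", activation="relu")(x)
    x = layers.GlobalAveragePooling3D()(x)
    return inp, x

def build_ms3dcnn(bands=103, n_classes=9, patch_sizes=(5, 7, 9)):
    inputs, feats = [], []
    for p in patch_sizes:                      # one branch per spatial scale
        inp, f = branch(p, bands)
        inputs.append(inp)
        feats.append(f)
    x = layers.Concatenate()(feats)            # fuse multi-scale features
    x = layers.Dense(128, activation="relu")(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return Model(inputs, out)

model = build_ms3dcnn()
model.summary()
```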

2021 ◽  
Vol 13 (3) ◽  
pp. 335
Author(s):  
Yuhao Qing ◽  
Wenyi Liu

In recent years, image classification of hyperspectral imagery using deep learning algorithms has attained good results. Spurred by that finding, and to further improve deep learning classification accuracy, we propose a multi-scale residual convolutional neural network fused with an efficient channel attention network (MRA-NET) that is appropriate for hyperspectral image classification. The suggested technique comprises a multi-stage architecture: initially, the spectral information of the hyperspectral image is reduced to a low-dimensional tensor using principal component analysis (PCA). The constructed low-dimensional image is then input to the proposed deep network, which exploits the advantages of its core components, i.e., the multi-scale residual structure and the efficient channel attention mechanism. We evaluate the performance of the proposed MRA-NET on three publicly available hyperspectral datasets and demonstrate that the overall classification accuracy of our method is 99.82%, 99.81%, and 99.37%, respectively, which is higher than the corresponding accuracy of existing networks such as the 3D convolutional neural network (CNN), the three-dimensional residual convolution structure (RES-3D-CNN), and the space-spectrum joint deep network (SSRN).
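The abstract names two concrete ingredients, PCA band reduction and efficient channel attention; the sketch below illustrates both under assumed parameters (30 principal components, a 1D kernel of size 3) and is not the published MRA-NET code.

```python
from tensorflow.keras import layers
from sklearn.decomposition import PCA

def pca_reduce(cube, n_components=30):
    """Reduce an (H, W, B) hyperspectral cube to n_components bands."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)
    reduced = PCA(n_components=n_components).fit_transform(flat)
    return reduced.reshape(h, w, n_components)

def eca_block(x, k_size=3):
    """Efficient channel attention: GAP -> 1D conv across channels -> sigmoid gate."""
    c = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)            # (batch, C)
    s = layers.Reshape((c, 1))(s)                     # treat channels as a sequence
    s = layers.Conv1D(1, k_size, padding="same", activation="sigmoid")(s)
    s = layers.Reshape((1, 1, c))(s)
    return layers.Multiply()([x, s])                  # re-weight channels

# usage on a (batch, H, W, C) feature map
feature_map = layers.Input((32, 32, 64))
attended = eca_block(feature_map)
```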


Author(s):  
T. Jiang ◽  
X. J. Wang

Abstract. In recent years, deep learning technology has developed continuously and gradually spread to various fields. Among these methods, the Convolutional Neural Network (CNN), which can extract deep image features thanks to its unique network structure, plays an increasingly important role in hyperspectral image classification. This paper constructs a feature fusion model that combines the deep features derived from a 1D-CNN and a 2D-CNN, and explores the potential of feature fusion for hyperspectral image classification. The experiments are based on the open-source deep learning framework TensorFlow with Python 3 as the programming environment. First, multi-layer perceptron (MLP), 1D-CNN, and 2D-CNN models are constructed; then the pre-trained 1D-CNN and 2D-CNN models are used as feature extractors; finally, features are extracted through the constructed fusion model. The widely used open hyperspectral dataset Pavia University was selected to compare classification accuracy and classification confidence across models. The experimental results show that the feature fusion model obtains higher overall accuracy (99.65%), a higher Kappa coefficient (0.9953), and lower uncertainty (3.43%) for the boundary and unknown regions of the dataset. Since the feature fusion model inherits the structural characteristics of the 1D-CNN and 2D-CNN, the complementary advantages of the two models are realized. The spectral and spatial features of hyperspectral images are fully exploited, yielding state-of-the-art classification accuracy and generalization performance.
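Since the paper states that TensorFlow and Python 3 were used, a hedged sketch of the fusion step is shown below: the pre-trained 1D-CNN and 2D-CNN act as frozen feature extractors whose outputs are concatenated before a small classifier. The layer sizes are placeholders, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_fusion_model(cnn_1d: Model, cnn_2d: Model, n_classes=9):
    cnn_1d.trainable = False                   # pre-trained models act as feature extractors
    cnn_2d.trainable = False
    feat_1d = layers.Flatten()(cnn_1d.output)  # spectral features from the 1D-CNN
    feat_2d = layers.Flatten()(cnn_2d.output)  # spatial features from the 2D-CNN
    x = layers.Concatenate()([feat_1d, feat_2d])
    x = layers.Dense(256, activation="relu")(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return Model([cnn_1d.input, cnn_2d.input], out)
```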


2021 ◽  
Vol 11 (10) ◽  
pp. 2618-2625
Author(s):  
R. T. Subhalakshmi ◽  
S. Appavu Alias Balamurugan ◽  
S. Sasikala

In recent times, the COVID-19 epidemic has spread in an extreme manner while only an inadequate number of rapid testing kits are available. Consequently, it is essential to develop automated techniques for COVID-19 detection that recognize the presence of the disease from radiological images. The most common symptoms of COVID-19 are sore throat, fever, and dry cough, and symptoms can progress to a severe form of pneumonia with serious complications. As medical imaging is not currently recommended in Canada for primary COVID-19 diagnosis, computer-aided diagnosis systems might aid in the early detection of COVID-19 abnormalities, help monitor disease progression, and potentially reduce mortality rates. In this approach, a deep learning based design for feature extraction and classification is employed for automatic COVID-19 diagnosis from computed tomography (CT) images. The proposed model operates in three main stages: pre-processing, feature extraction, and classification. The design incorporates the fusion of deep features using GoogLeNet models. Finally, a multi-scale recurrent neural network (RNN) based classifier is applied to identify and classify the test CT images into distinct class labels. The experimental validation of the proposed model uses the open-source COVID-CT dataset, which comprises a total of 760 CT images. The experimental outcome shows superior performance with maximum sensitivity, specificity, and accuracy.
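A rough sketch of this kind of pipeline is given below, with an ImageNet-pretrained Inception network standing in for GoogLeNet as the feature extractor and a GRU as the recurrent classifier; the slice-sequence input, shapes, and layer sizes are assumptions rather than the authors' design.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3

# Frozen Inception backbone used as a per-slice deep feature extractor.
backbone = InceptionV3(weights="imagenet", include_top=False, pooling="avg")
backbone.trainable = False

# Variable-length sequence of CT slices -> per-slice features -> GRU -> class label
inp = layers.Input(shape=(None, 299, 299, 3))
feats = layers.TimeDistributed(backbone)(inp)      # (batch, slices, 2048)
x = layers.GRU(128)(feats)
out = layers.Dense(2, activation="softmax")(x)     # COVID-19 vs. non-COVID
model = Model(inp, out)
```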


2020 ◽  
Vol 2020 ◽  
pp. 1-7
Author(s):  
Ahmed Jawad A. AlBdairi ◽  
Zhu Xiao ◽  
Mohammed Alghaili

Interest in face recognition studies has grown rapidly in the last decade. One of the most important problems in face recognition is identifying a person's ethnicity. In this study, a new deep convolutional neural network is designed to create a model that can recognize people's ethnicity from their facial features. The new ethnicity dataset consists of 3141 images collected from three different nationalities. To the best of our knowledge, this is the first image dataset collected for people's ethnicity, and the dataset will be made available to the research community. The new model was compared with two state-of-the-art models, VGG and Inception V3, and the validation accuracy was calculated for each convolutional neural network. The generated models were tested on several images of people, and the results show that the best performance was achieved by our model, with a verification accuracy of 96.9%.
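A sketch of the comparison protocol is shown below: a small custom CNN and two standard backbones (VGG16 and Inception V3) would be trained on the same ethnicity dataset and compared by validation accuracy. The architecture, image size, and dataset loading are hypothetical, not the authors' model.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16, InceptionV3

def custom_cnn(n_classes=3, size=160):
    # Hypothetical small CNN; the authors' architecture is not specified here.
    return models.Sequential([
        layers.Input((size, size, 3)),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

def pretrained(base, n_classes=3, size=160):
    backbone = base(weights="imagenet", include_top=False,
                    pooling="avg", input_shape=(size, size, 3))
    backbone.trainable = False
    return models.Sequential([backbone, layers.Dense(n_classes, activation="softmax")])

candidates = {"ours": custom_cnn(), "vgg16": pretrained(VGG16),
              "inception_v3": pretrained(InceptionV3)}
# Each candidate is compiled and fit on the ethnicity dataset (loading omitted),
# then compared by validation accuracy.
```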


2019 ◽  
Vol 11 (9) ◽  
pp. 1006 ◽  
Author(s):  
Quanlong Feng ◽  
Jianyu Yang ◽  
Dehai Zhu ◽  
Jiantao Liu ◽  
Hao Guo ◽  
...  

Coastal land cover classification is a significant yet challenging task in remote sensing because of the complex and fragmented nature of coastal landscapes. However, the availability of multitemporal and multisensor remote sensing data provides opportunities to improve classification accuracy. Meanwhile, the rapid development of deep learning has achieved astonishing results in computer vision tasks and has also become a popular topic in remote sensing. Nevertheless, designing an effective and concise deep learning model for coastal land cover classification remains problematic. To tackle this issue, we propose a multibranch convolutional neural network (MBCNN) for the fusion of multitemporal and multisensor Sentinel data to improve coastal land cover classification accuracy. The proposed model leverages a series of deformable convolutional neural networks to extract representative features from each single-source dataset. The extracted features are aggregated through an adaptive feature fusion module to predict the final land cover categories. Experimental results indicate that the proposed MBCNN performs well, with an overall accuracy of 93.78% and a Kappa coefficient of 0.9297. Including multitemporal data improves accuracy by an average of 6.85%, while multisensor data contributes a further 3.24% increase. Additionally, the feature fusion module increases accuracy by about 2% compared with the feature-stacking method. The results demonstrate that the proposed method can effectively mine and fuse multitemporal and multisource Sentinel data, improving coastal land cover classification accuracy.
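The sketch below illustrates the adaptive feature-fusion idea with one convolutional branch per Sentinel data source and a learned softmax weight per branch instead of plain feature stacking; ordinary convolutions replace the deformable convolutions for brevity, and the branch count and sizes are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def source_branch(inp, filters=64):
    # Plain Conv2D stands in for the deformable convolutions of the paper.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.GlobalAveragePooling2D()(x)

def build_mbcnn(shapes, n_classes=10):
    inputs = [layers.Input(s) for s in shapes]
    feats = [source_branch(i) for i in inputs]                       # one branch per source
    stacked = layers.Lambda(lambda f: tf.stack(f, axis=1))(feats)    # (batch, S, C)
    w = layers.Dense(1)(stacked)                                     # score each source
    w = layers.Softmax(axis=1)(w)                                    # adaptive fusion weights
    fused = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([stacked, w])
    out = layers.Dense(n_classes, activation="softmax")(fused)
    return Model(inputs, out)

# e.g. two Sentinel-2 dates (10 bands each) and one Sentinel-1 scene (2 bands)
model = build_mbcnn([(64, 64, 10), (64, 64, 10), (64, 64, 2)])
```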


2020 ◽  
Vol 10 (11) ◽  
pp. 2733-2738
Author(s):  
Yanxia Sun ◽  
Peiqing ◽  
Xiaoxu Geng ◽  
Haiying Wang ◽  
Jinke Wang ◽  
...  

Accurate optic cup and optic disc (OC, OD) segmentation is a prerequisite for cup-to-disc ratio (CDR) calculation. In this paper, a new fully convolutional neural network (FCN) with a multi-scale residual module is proposed. Firstly, a polar coordinate transformation was introduced to balance the cup and disc proportions under spatial constraints, and CLAHE was applied to the fundus images for contrast enhancement. Secondly, the W-Net-R model was proposed as the main framework, with the standard convolution unit replaced by the multi-scale residual module. Finally, a multi-label cost function was used to guide training. In the experiments, the REFUGE dataset was used for training, validation, and testing. We obtained MIoU scores of 0.979 and 0.904 for OD and OC segmentation, relative improvements of 4.04% and 3.55% over U-Net, respectively. The experimental results show that the proposed method is superior to other state-of-the-art schemes for OC and OD segmentation and could be a prospective tool for early glaucoma screening.
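A minimal preprocessing sketch for the two steps named above (CLAHE contrast enhancement followed by a polar transform around the optic disc centre) is given below using OpenCV; the CLAHE grid, clip limit, and centre/radius handling are illustrative assumptions.

```python
import cv2
import numpy as np

def preprocess(fundus_bgr, center, radius):
    """center: (cx, cy) of the optic disc; radius: polar transform radius in pixels."""
    # CLAHE on the luminance channel for contrast enhancement
    lab = cv2.cvtColor(fundus_bgr, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[:, :, 0] = clahe.apply(lab[:, :, 0])
    enhanced = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
    # Polar transform centred on the optic disc to balance cup/disc proportions
    polar = cv2.warpPolar(enhanced, (enhanced.shape[1], enhanced.shape[0]),
                          center, radius, cv2.WARP_POLAR_LINEAR)
    return polar
```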


Author(s):  
M A Isayev ◽  
D A Savelyev

This paper considers a comparison of different convolutional neural networks that form the core of the most relevant current solutions in computer vision. The study benchmarks these state-of-the-art solutions against criteria such as mAP (mean average precision) and FPS (frames per second) to assess real-time usability. Conclusions are drawn about the best convolutional neural network model and the deep learning methods used in each solution.
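A tiny sketch of the FPS criterion mentioned above: time repeated forward passes of a candidate model and report frames per second. The model and input size are placeholders.

```python
import time
import tensorflow as tf

def measure_fps(model, input_shape=(1, 416, 416, 3), runs=100):
    """Report frames per second for repeated forward passes of `model`."""
    dummy = tf.random.uniform(input_shape)
    model(dummy)                              # warm-up pass
    start = time.perf_counter()
    for _ in range(runs):
        model(dummy)
    return runs / (time.perf_counter() - start)
```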


2021 ◽  
Vol 11 (21) ◽  
pp. 10301
Author(s):  
Muhammad Shoaib Farooq ◽  
Attique Ur Rehman ◽  
Muhammad Idrees ◽  
Muhammad Ahsan Raza ◽  
Jehad Ali ◽  
...  

COVID-19 has been difficult to diagnose and treat at an early stage all over the world. The number of patients showing COVID-19 symptoms has caused medical facilities at hospitals to become overcrowded or unavailable, which is a major challenge. Recent studies have shown that COVID-19 can be diagnosed with the aid of chest X-ray images. To combat the COVID-19 outbreak, developing a deep learning (DL) based model for automated COVID-19 diagnosis on chest X-rays is beneficial. In this research, we propose a customized convolutional neural network (CNN) model to detect COVID-19 from chest X-ray images. The model consists of nine layers and uses binary classification to differentiate between COVID-19 and normal chest X-rays. It detects COVID-19 early so that patients can be admitted in a timely fashion. The proposed model was trained and tested on two publicly available datasets, and cross-dataset studies were used to assess its robustness in a real-world context. Six hundred X-ray images were used for training and two hundred X-rays for validation. The X-ray images were preprocessed to improve the results and visualized for better analysis. The developed algorithm reached 98% precision, recall, and F1-score. The cross-dataset studies also demonstrate the resilience of deep learning algorithms in a real-world context, with 98.5% accuracy. Furthermore, a comparison table shows that the proposed model outperforms related models in terms of accuracy. The speed and performance of the proposed customized DL model identify COVID-19 patients quickly, which is helpful in controlling the COVID-19 outbreak.
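For illustration only, a small binary-classification CNN for chest X-rays in the spirit of the nine-layer model described above; the exact layer widths and ordering shown here are assumptions, not the authors' architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input((224, 224, 1)),                       # grayscale chest X-ray
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),             # COVID-19 vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
```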


2020 ◽  
Author(s):  
Pushkar Khairnar ◽  
Ponkrshnan Thiagarajan ◽  
Susanta Ghosh

Convolutional neural network (CNN) based classification models have been used successfully on histopathological images for disease detection. Despite this success, a CNN may yield erroneous or overfitted results when the data are not sufficiently large or are biased. To overcome these limitations and to provide uncertainty quantification, the Bayesian CNN has recently been proposed. However, we show that the Bayesian-CNN still suffers from inaccuracies, especially in negative predictions. In the present work, we extend the Bayesian-CNN to improve accuracy and the rate of convergence. The proposed model is called the modified Bayesian-CNN. Its novelty lies in an adaptive activation function that contains a learnable parameter for each neuron. This adaptive activation function dynamically changes the loss function, thereby providing faster convergence and better accuracy. The uncertainties associated with the predictions are obtained because the model learns a probability distribution over the network parameters. Ensemble averaging over networks reduces overfitting, which in turn improves accuracy on unseen data. The proposed model demonstrates significant improvement by nearly eliminating overfitting and reducing the number of false-negative predictions by about 38%. We found that the proposed model predicts higher uncertainty for images having features of both classes. The uncertainty in the predictions of individual images can be used to decide when further human-expert intervention is needed. These findings have the potential to advance state-of-the-art machine learning based automatic classification for histopathological images.
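The sketch below shows only the adaptive-activation idea (one learnable parameter per neuron scaling the pre-activation); the Bayesian treatment of the weights is not reproduced, and the initialisation and base activation are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

class AdaptiveReLU(layers.Layer):
    """ReLU(a * x) with a trainable slope `a` per neuron (last axis)."""
    def build(self, input_shape):
        self.a = self.add_weight(name="a", shape=(input_shape[-1],),
                                 initializer="ones", trainable=True)

    def call(self, x):
        return tf.nn.relu(self.a * x)

# Usage: drop it in after a Dense or Conv layer in place of a fixed activation.
inp = tf.keras.Input((32,))
x = AdaptiveReLU()(layers.Dense(64)(inp))
out = layers.Dense(2, activation="softmax")(x)
model = tf.keras.Model(inp, out)
```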

