Performance comparison of deep learning frameworks in image classification problems using convolutional and recurrent networks

Author(s):  
Ruben D. Fonnegra ◽  
Bryan Blair ◽  
Gloria M. Diaz
2021 ◽  
Vol 11 (9) ◽  
pp. 3863
Author(s):  
Ali Emre Öztürk ◽  
Ergun Erçelebi

A large amount of training image data is required to solve image classification problems with deep learning (DL) networks. In this study, we aimed to train DL networks with synthetic images generated using a game engine and to determine the effect on performance when the networks solve real-image classification problems. The study presents the results of using corner detection and nearest three-point selection (CDNTS) layers to classify bird and rotary-wing unmanned aerial vehicle (RW-UAV) images, provides a comprehensive comparison of two experimental setups, and emphasizes the significant performance improvements in deep learning-based networks due to the inclusion of a CDNTS layer. Experiment 1 corresponds to training commonly used deep learning-based networks with synthetic data and testing image classification on real data. Experiment 2 corresponds to training the CDNTS layer together with the commonly used deep learning-based networks on synthetic data and testing image classification on real data. In experiment 1, the best area under the curve (AUC) value for image classification test accuracy was measured as 72%. In experiment 2, using the CDNTS layer, the AUC value for image classification test accuracy was measured as 88.9%. A total of 432 training combinations were investigated across the experimental setups: various DL networks were trained with four different optimizers, considering all combinations of the batch size, learning rate, and dropout hyperparameters. The test accuracy AUC values for the networks in experiment 1 ranged from 55% to 74%, whereas those for the experiment 2 networks with a CDNTS layer ranged from 76% to 89.9%. The CDNTS layer was observed to have a considerable effect on the image classification accuracy of deep learning-based networks. AUC, F-score, and test accuracy measures were used to validate the success of the networks.
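To make the scale of the sweep concrete, the sketch below enumerates a hyperparameter grid of the same size. The networks, optimizers, and value lists are assumptions for illustration (the abstract only states that four optimizers and all combinations of batch size, learning rate, and dropout were used); only the total of 432 runs is taken from the text.

```python
import itertools

# Hypothetical sweep of the same size as the paper's 432 runs.  The concrete
# networks and hyperparameter values below are assumptions, not the paper's.
networks       = ["vgg16", "resnet50", "inception_v3"]   # 3
optimizers     = ["adam", "sgd", "rmsprop", "adagrad"]   # 4
batch_sizes    = [16, 32, 64]                            # 3
learning_rates = [1e-2, 1e-3, 1e-4]                      # 3
dropouts       = [0.0, 0.2, 0.4, 0.5]                    # 4

grid = list(itertools.product(networks, optimizers, batch_sizes,
                              learning_rates, dropouts))
print(len(grid))  # 3 * 4 * 3 * 3 * 4 = 432 runs

for net, opt, batch, lr, drop in grid:
    # Each run would train `net` on the synthetic images and report the AUC
    # of an image classification test on the real images.
    pass
```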


Entropy ◽  
2019 ◽  
Vol 21 (5) ◽  
pp. 456 ◽  
Author(s):  
Hao Cheng ◽  
Dongze Lian ◽  
Shenghua Gao ◽  
Yanlin Geng

Inspired by the pioneering work on the information bottleneck (IB) principle for analyzing Deep Neural Networks (DNNs), we thoroughly study the relationship among the model accuracy, I(X;T), and I(T;Y), where I(X;T) and I(T;Y) are the mutual information of the DNN's output T with the input X and the label Y, respectively. We then design an information plane-based framework to evaluate the capability of DNNs (including CNNs) for image classification. Instead of each hidden layer's output, our framework focuses on the model output T. We successfully apply our framework to many application scenarios arising in deep learning and image classification problems, such as image classification with unbalanced data distributions, model selection, and transfer learning. The experimental results verify the effectiveness of the information plane-based framework: it may facilitate quick model selection and determine the number of samples needed for each class in the unbalanced classification problem. Furthermore, the framework explains the efficiency of transfer learning in the deep learning area.
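As a minimal sketch of the quantities involved, the snippet below computes a plug-in estimate of I(T;Y) from paired discrete samples, here using predicted class indices as a coarse discretization of the model output T. The estimator, variable names, and this discretization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def discrete_mutual_information(t, y):
    """Plug-in estimate (in nats) of the mutual information between two
    discrete variables given paired samples t and y (non-negative ints)."""
    joint = np.zeros((t.max() + 1, y.max() + 1))
    for ti, yi in zip(t, y):
        joint[ti, yi] += 1
    joint /= joint.sum()
    pt = joint.sum(axis=1, keepdims=True)   # marginal of T
    py = joint.sum(axis=0, keepdims=True)   # marginal of Y
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (pt @ py)[nz])))

# Toy example: an 80%-accurate classifier over 10 classes.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=5000)
preds = np.where(rng.random(5000) < 0.8, labels, rng.integers(0, 10, size=5000))
print(discrete_mutual_information(preds, labels))  # approaches H(Y) as accuracy -> 1
```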


2019 ◽  
Vol 108 ◽  
pp. 49-56 ◽  
Author(s):  
A. Inés ◽  
C. Domínguez ◽  
J. Heras ◽  
E. Mata ◽  
V. Pascual

2020 ◽  
Vol 10 (9) ◽  
pp. 2027-2031
Author(s):  
Xu Yifang

Hyperspectral image classification is a key challenge in the domain of remote sensing image processing, and feature learning is the basis of hyperspectral image classification problems. How to jointly exploit spatial and spectral information is also an important issue in hyperspectral image classification. In recent years, as exploration has continued, hyperspectral image classification methods based on deep learning have developed rapidly. However, existing deep networks often consider only reconstruction performance while ignoring the task itself. In addition, to improve classification precision, most classification methods use a fixed-size neighborhood of each hyperspectral pixel as the object of feature extraction, ignoring the distinction between the neighborhood pixels and the current pixel. On the basis of the observations above, our research group proposes an image classification algorithm based on principal component texture feature deep learning, which achieves good results.
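A minimal sketch of the spectral-spatial preprocessing such pipelines typically rely on is given below: PCA reduces the spectral bands, and a fixed-size spatial neighborhood is cut around every pixel for a downstream classifier. The function name, patch size, and component count are assumptions for illustration; the texture-feature stage described in the abstract is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_patches(cube, n_components=3, patch=5):
    """Reduce the spectral bands of an (H, W, B) hyperspectral cube with PCA and
    cut a fixed-size spatial neighbourhood around every pixel for a classifier."""
    h, w, b = cube.shape
    reduced = PCA(n_components=n_components).fit_transform(cube.reshape(-1, b))
    reduced = reduced.reshape(h, w, n_components)
    r = patch // 2
    padded = np.pad(reduced, ((r, r), (r, r), (0, 0)), mode="reflect")
    patches = np.stack([padded[i:i + patch, j:j + patch]
                        for i in range(h) for j in range(w)])
    return patches  # shape (H * W, patch, patch, n_components)

# Toy cube: 50 x 50 pixels with 100 spectral bands.
print(pca_patches(np.random.rand(50, 50, 100)).shape)  # (2500, 5, 5, 3)
```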


Diatoms, a type of algal microorganism with numerous species, have recently proven useful for water quality determination and are treated as an important topic in applied biology. At the same time, deep learning (DL) has become an important model for various image classification problems. This study introduces a new Inception model for diatom image classification. The presented model involves two main stages, namely segmentation and classification. A deep learning-based Inception model is employed for classification. To further improve classifier efficiency, an edge detection-based segmentation model is also applied, and the segmented output is provided as input to the classifier stage. Experimental validation takes place on a diverse diatom dataset with various preprocessing models. The results point out that the presented DL model shows extraordinary classification performance, with a classification accuracy of 99%.
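The sketch below illustrates one plausible form of the edge-detection-based segmentation step: the largest edge-bounded region is cropped and resized so it can be fed to an Inception-style classifier. The thresholds, dilation, and fallback behavior are hypothetical choices, not the paper's exact procedure.

```python
import cv2
import numpy as np

def segment_diatom(gray, target_size=(299, 299)):
    """Hypothetical edge-based segmentation: keep the largest edge-bounded region
    and resize the crop so it can be fed to an Inception-style classifier."""
    edges = cv2.Canny(gray, 50, 150)
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8), iterations=2)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:                      # fall back to the full image
        return cv2.resize(gray, target_size)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return cv2.resize(gray[y:y + h, x:x + w], target_size)  # 299x299 suits InceptionV3

# Toy usage with a synthetic grayscale image.
print(segment_diatom(np.random.randint(0, 256, (480, 640), np.uint8)).shape)
```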


2019 ◽  
Vol 2 ◽  
pp. 1-8
Author(s):  
Shu Su ◽  
Takahiko Nawata

Abstract. In this paper, we present a novel approach for demolished building detection using bi-temporal aerial images and building boundary polygon data. The building boundary polygon data enable the proposed method to distinguish buildings from non-buildings and to exclude non-building changes such as those caused by changes in tree cover, roads, and vegetation. Detection is performed on a per-building basis: the proposed method classifies each building as demolished or undemolished. Architectures based on U-Net and VGG19 are implemented to realize automatic demolished building detection. The results suggest that U-Net is a useful architecture for image classification problems as well as for semantic segmentation tasks. To verify the effectiveness of the proposed method, the detection performance is evaluated using images of an entire city. The results suggest that the proposed method can accurately detect demolished buildings with low mis-detection and over-detection rates.
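As a minimal sketch of building-level bi-temporal classification (not the paper's exact U-Net/VGG19 setup), the snippet below stacks the pre- and post-date crops of one building polygon into a 6-channel input and trains a small CNN to predict demolished vs. undemolished. Patch size and layer choices are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

def building_change_classifier(patch_size=64):
    """Small CNN over a pre/post aerial patch pair stacked into 6 channels,
    predicting demolished (1) vs. undemolished (0) for one building."""
    inputs = tf.keras.Input(shape=(patch_size, patch_size, 6))
    x = inputs
    for filters in (32, 64, 128):
        x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Each sample stacks the pre- and post-date crops of one building polygon.
pre = np.random.rand(8, 64, 64, 3).astype("float32")
post = np.random.rand(8, 64, 64, 3).astype("float32")
pairs = np.concatenate([pre, post], axis=-1)      # shape (8, 64, 64, 6)
print(building_change_classifier()(pairs).shape)  # (8, 1)
```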


Author(s):  
Sumit Kaur

Abstract- Deep learning is an emerging research area in the machine learning and pattern recognition fields, presented with the goal of drawing Machine Learning nearer to one of its original objectives, Artificial Intelligence. It tries to mimic the human brain, which is capable of processing and learning from complex input data and of solving different kinds of complicated tasks well. Deep learning (DL) is basically based on a set of supervised and unsupervised algorithms that attempt to model higher-level abstractions in data and to learn hierarchical representations for classification. In recent years, it has attracted much attention due to its state-of-the-art performance in diverse areas such as object perception, speech recognition, computer vision, collaborative filtering, and natural language processing. This paper presents a survey of different deep learning techniques for remote sensing image classification.


2020 ◽  
Vol 26 ◽  
Author(s):  
Xiaoping Min ◽  
Fengqing Lu ◽  
Chunyan Li

Enhancer-promoter interactions (EPIs) in the human genome are of great significance to transcriptional regulation, which tightly controls gene expression. Identification of EPIs can help us better decipher gene regulation and understand disease mechanisms. However, experimental methods to identify EPIs are constrained by funding, time, and manpower, whereas computational methods using DNA sequences and genomic features are viable alternatives. Deep learning methods have shown promising prospects in classification and have been utilized to identify EPIs. In this survey, we focus specifically on sequence-based deep learning methods and conduct a comprehensive review of their literature. We first briefly introduce existing sequence-based frameworks for EPI prediction and their technical details. After that, we elaborate on the datasets, pre-processing methods, and evaluation strategies. Finally, we discuss the challenges these methods are confronted with and suggest several future opportunities.
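The sketch below shows a typical shape of the sequence-based EPI models this survey covers: one-hot-encoded enhancer and promoter sequences feed two convolutional branches whose features are merged for a binary interacting/non-interacting prediction. The sequence lengths, filter sizes, and layer choices are assumptions for illustration and do not correspond to any specific method reviewed.

```python
import numpy as np
import tensorflow as tf

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq, length):
    """One-hot encode a DNA string into a (length, 4) matrix, padding or truncating."""
    mat = np.zeros((length, 4), dtype=np.float32)
    for i, base in enumerate(seq[:length]):
        if base in BASES:
            mat[i, BASES[base]] = 1.0
    return mat

def epi_model(enh_len=3000, pro_len=2000):
    """Hypothetical two-branch CNN: one branch per regulatory element,
    merged for a binary interaction prediction."""
    def branch(length):
        inp = tf.keras.Input(shape=(length, 4))
        x = tf.keras.layers.Conv1D(64, 19, activation="relu")(inp)
        x = tf.keras.layers.GlobalMaxPooling1D()(x)
        return inp, x
    enh_in, enh_feat = branch(enh_len)
    pro_in, pro_feat = branch(pro_len)
    merged = tf.keras.layers.Concatenate()([enh_feat, pro_feat])
    hidden = tf.keras.layers.Dense(64, activation="relu")(merged)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(hidden)
    model = tf.keras.Model([enh_in, pro_in], out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    return model

# Toy usage: one enhancer/promoter pair.
enh = one_hot("ACGT" * 750, 3000)[None]
pro = one_hot("TTGCA" * 400, 2000)[None]
print(epi_model()([enh, pro]).shape)  # (1, 1)
```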

