Deep Convolutional Neural Network for Object Classification

Author(s):  
Amira Ahmad Al-Sharkawy ◽  
Gehan A. Bahgat ◽  
Elsayed E. Hemayed ◽  
Samia Abdel-Razik Mashali

The object classification problem is essential in many applications today. Humans can easily classify objects in unconstrained environments, whereas classical classification techniques fell far short of human performance. Researchers therefore tried to mimic the human visual system, eventually arriving at deep neural networks. This chapter reviews and analyzes the use of deep convolutional neural networks for object classification in constrained and unconstrained environments. It briefly surveys classical object classification techniques and the development of bio-inspired computational models, from neuroscience to the creation of deep neural networks. It then reviews the issues of constrained environments: hardware computing resources and memory, object appearance and background, and training and processing time. The datasets used to test performance are analyzed according to the environmental conditions of their images, and dataset bias is also discussed.

Author(s):  
E. Yu. Shchetinin

The recognition of human emotions is one of the most relevant and dynamically developing areas of modern speech technologies, and recognition of emotions in speech (RER) is its most demanded part. In this paper, we propose a computer model of emotion recognition based on an ensemble of a bidirectional recurrent neural network with LSTM memory cells and the deep convolutional neural network ResNet18. We carry out computational studies on the RAVDESS database of emotional human speech. RAVDESS is a dataset containing 7356 files; the recordings cover the following emotions: 0 – neutral, 1 – calm, 2 – happiness, 3 – sadness, 4 – anger, 5 – fear, 6 – disgust, 7 – surprise. In total, the database contains 16 classes (8 emotions divided into male and female) for a total of 1440 samples (speech only). To train machine learning algorithms and deep neural networks to recognize emotions, the audio recordings must be pre-processed to extract the main features characteristic of particular emotions. This was done using Mel-frequency cepstral coefficients, chroma coefficients, and characteristics of the frequency spectrum of the recordings. We evaluate various neural network models for emotion recognition on these data and, for comparison, several classical machine learning algorithms. The following models were trained during the experiments: logistic regression (LR), a support vector machine classifier (SVM), a decision tree (DT), a random forest (RF), gradient boosting over trees (XGBoost), a convolutional neural network (CNN, here ResNet18), a recurrent neural network (RNN), and an ensemble of convolutional and recurrent networks (stacked CNN-RNN). The results show that the neural networks achieved much higher accuracy in recognizing and classifying emotions than the machine learning algorithms. Of the three neural network models presented, the CNN + BLSTM ensemble showed the highest accuracy.
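
As an illustration of the feature-extraction step described above, the following minimal Python sketch computes MFCC, chroma, and mel-spectrum features for one audio file with librosa; the function name and parameter values are our assumptions, not the paper's exact configuration.

```python
import numpy as np
import librosa

def extract_features(path, n_mfcc=40):
    """Illustrative per-clip feature vector: MFCC + chroma + mel spectrum."""
    y, sr = librosa.load(path)                     # decode audio at librosa's default rate
    mfcc = np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc), axis=1)
    stft = np.abs(librosa.stft(y))                 # magnitude spectrogram for chroma
    chroma = np.mean(librosa.feature.chroma_stft(S=stft, sr=sr), axis=1)
    mel = np.mean(librosa.feature.melspectrogram(y=y, sr=sr), axis=1)
    return np.concatenate([mfcc, chroma, mel])     # fixed-length vector per recording
```

Averaging each feature over time yields one fixed-length vector per recording, which suits both the classical classifiers and the neural models compared in the paper.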


2020 ◽  
Vol 10 (6) ◽  
pp. 2104
Author(s):  
Michał Tomaszewski ◽  
Paweł Michalski ◽  
Jakub Osuchowski

This article presents an analysis of the effectiveness of object detection in digital images with a limited quantity of input data. Using a limited set of learning data was made possible by developing a detailed scenario of the task, which strictly defined the operating conditions of the detector in the considered case of a convolutional neural network. The described solution utilizes known deep neural network architectures for learning and object detection. The article compares detection results from the most popular deep neural networks while maintaining a limited training set composed of a specific number of images selected from diagnostic video. The analyzed input material was recorded during an inspection flight conducted along high-voltage lines, and the object detector was built for a power insulator. The main contribution of the presented paper is the evidence that a limited training set (in our case, just 60 training frames) can be used for object detection, assuming an outdoor scenario with low variability of environmental conditions. Deciding which network will generate the best result for such a limited training set is not a trivial task. The conducted research suggests that deep neural networks achieve different levels of effectiveness depending on the amount of training data. The best results were obtained for two convolutional neural networks: the faster region-based convolutional neural network (Faster R-CNN) and the region-based fully convolutional network (R-FCN). Faster R-CNN reached the highest AP (average precision), at a level of 0.8 for 60 frames. The R-FCN model achieved a lower AP; however, the number of input samples had a significantly weaker influence on its results than on those of the other CNN models, which, in the authors' assessment, is a desirable feature when the training set is limited.
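
Fine-tuning a pre-trained detector on such a small set can be sketched as follows with torchvision; this is a generic Faster R-CNN setup under our own assumptions (a single "insulator" class plus background, and a hypothetical `small_loader` over the frames), not the authors' exact training configuration.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Two classes: background + insulator. A pre-trained backbone offsets the
# tiny training set (~60 frames in the paper's scenario).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()
# for images, targets in small_loader:          # hypothetical DataLoader over the frames
#     loss_dict = model(images, targets)        # torchvision returns per-part losses
#     loss = sum(loss_dict.values())
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```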


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Pei Yang ◽  
Yong Pi ◽  
Tao He ◽  
Jiangming Sun ◽  
Jianan Wei ◽  
...  

Abstract Background 99mTc-pertechnetate thyroid scintigraphy is a valid complementary avenue for evaluating thyroid disease in the clinic; the image features of thyroid scintigrams are relatively simple, but their interpretation still shows only moderate consistency among physicians. Thus, we aimed to develop an artificial intelligence (AI) system to automatically classify the four patterns of thyroid scintigrams. Methods We collected 3087 thyroid scintigrams from center 1 to construct the training dataset (n = 2468) and internal validating dataset (n = 619), and another 302 cases from center 2 as the external validating dataset. Four pre-trained neural networks, ResNet50, DenseNet169, InceptionV3, and InceptionResNetV2, were implemented to construct the AI models. The models were trained separately with transfer learning. We evaluated each model's performance with the following metrics: accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), recall, precision, and F1-score. Results The overall accuracy of all four pre-trained neural networks in classifying the four common uptake patterns of thyroid scintigrams exceeded 90%, and InceptionV3 stood out from the others, reaching the highest performance with an overall accuracy of 92.73% for internal validation and 87.75% for external validation. For the individual categories of thyroid scintigrams, the area under the receiver operating characteristic curve (AUC) in internal validation was 0.986 for 'diffusely increased,' 0.997 for 'diffusely decreased,' 0.998 for 'focal increased,' and 0.945 for 'heterogeneous uptake.' The corresponding AUCs in external validation were 0.939, 1.000, 0.974, and 0.915, respectively. Conclusions The deep convolutional neural network-based AI model demonstrated considerable performance in the classification of thyroid scintigrams and may help physicians interpret thyroid scintigrams more consistently and efficiently.
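
A transfer-learning setup of the kind described, here for InceptionV3 and the four uptake patterns, might look like the following Keras sketch; the classification head and the frozen-backbone choice are our assumptions, since the paper only states that pre-trained networks were fine-tuned.

```python
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

# Four uptake patterns: diffusely increased, diffusely decreased,
# focal increased, heterogeneous. Head design is an assumption.
base = InceptionV3(weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False                              # freeze ImageNet features first
x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dropout(0.5)(x)
out = layers.Dense(4, activation="softmax")(x)      # one probability per pattern
model = models.Model(base.input, out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```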


Author(s):  
Shweta Dabetwar ◽  
Stephen Ekwaro-Osire ◽  
João Paulo Dias

Abstract Composite materials have enormous applications in various fields, so it is important to have an efficient damage detection method to avoid catastrophic failures. Because multiple damage modes exist and data are available in different formats, it is important to employ efficient techniques that consider all types of damage. Deep neural networks have shown the ability to address similarly complex problems. The research question in this work is: 'Can data fusion improve damage classification using a convolutional neural network?' The specific aims were to (1) assess the performance of image encoding algorithms, (2) classify the damage using data from separate experimental coupons, and (3) classify the damage using mixed data from multiple experimental coupons. Two different experimental measurements were taken from the NASA Ames Prognostics Repository for carbon fiber reinforced polymer (CFRP). To enable data fusion, the piezoelectric signals were converted into images using the Gramian Angular Field (GAF) and the Markov Transition Field (MTF). Using data fusion techniques, the input dataset was created for a convolutional neural network with three hidden layers to determine the damage states. The accuracies of all the image encoding algorithms were compared. The analysis showed that data fusion provided better results, as the fused inputs contained more information on the damage modes that occur in composite materials; additionally, GAF was shown to perform best. Thus, the combination of data fusion and deep neural network techniques provides an efficient method for damage detection in composite materials.
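
The Gramian Angular Field encoding named above can be computed directly; the sketch below implements the summation variant (GASF) in NumPy, while libraries such as pyts provide equivalent transforms. The function name is ours.

```python
import numpy as np

def gramian_angular_field(signal):
    """Encode a 1-D piezoelectric signal as a GASF image."""
    x = np.asarray(signal, dtype=float)
    # Rescale to [-1, 1] so the arccos below is defined.
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(x)                           # polar-coordinate angle per sample
    return np.cos(phi[:, None] + phi[None, :])   # GASF: cos(phi_i + phi_j)
```

Each signal of length N becomes an N x N image whose (i, j) entry encodes the temporal correlation between samples i and j, which is what makes the result suitable input for a 2D convolutional network.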


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Xuhui Fu

In recent years, deep learning has become a very popular artificial intelligence method and a prominent approach in the field of image recognition. It is a type of machine learning, derived from artificial neural networks, used to learn the characteristics of sample data. As a multilayer network, it learns information from the bottom of an image to the top, extracting the characteristics of the sample and then performing identification and classification. The purpose of deep learning is to give machines analytical and learning capabilities comparable to those of the human brain. The ability of deep learning in data processing (including images) is unmatched by other methods, and its achievements in recent years have left other methods behind. This article comprehensively reviews the progress of applied research on deep convolutional neural networks in ancient Chinese pattern restoration, focusing on work based on deep convolutional neural networks. The main tasks are as follows: (1) a detailed and comprehensive introduction to the basics of deep convolutional neural networks is given, together with a summary of related algorithms along the three directions of text preprocessing, learning, and neural networks; the article examines the mechanisms of traditional pattern repair based on deep convolutional neural networks and analyzes the key structures and principles. (2) Image restoration models based on deep convolutional networks and adversarial neural networks are studied. The model is composed of four parts: information masking, feature extraction, a generating network, and a discriminant network; each part has its own function, yet the parts are interdependent (a minimal sketch of the generator/discriminator pairing is given below). (3) The method based on the deep convolutional neural network and two other methods are tested on the same portion of the Qinghai traditional embroidery image dataset. By the final evaluation indices of the experiment, the method in this paper outperforms both the traditional sample-based image restoration method and the deep-learning-based image restoration method. Moreover, in terms of actual restoration effect, the method in this paper restores images better than the other two methods, and its results are more consistent with human visual perception.
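
As a rough illustration of the generating/discriminant pairing in part (2), the following PyTorch sketch shows a minimal inpainting GAN; the layer sizes and the mask-as-extra-channel design are our assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encoder-decoder that fills in masked regions of an RGB image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 4, stride=2, padding=1), nn.ReLU(),   # RGB + mask channel
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, masked_img, mask):
        return self.net(torch.cat([masked_img, mask], dim=1))

class Discriminator(nn.Module):
    """Scores patches of a restored image as real or fake."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # patch-level real/fake scores
        )
    def forward(self, img):
        return self.net(img)
```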


Author(s):  
Tryambak Gangopadhyay ◽  
Anthony Locurto ◽  
Paige Boor ◽  
James B. Michael ◽  
Soumik Sarkar

Detecting the transition to an impending instability is important for initiating effective control in a combustion system. In one of the early applications of deep neural networks to characterizing thermoacoustic instability, we train our proposed deep convolutional neural network (CNN) model on sequential image frames extracted from high-speed flame videos, with instability induced in the system by a particular protocol: varying the acoustic length. We leverage the sound pressure data to define a non-dimensional instability measure used for an inexpensive but noisy labeling technique to train our supervised 2D CNN model. We then attempt to detect the onset of instability in a transient dataset where instability is induced by a different protocol. With continuous variation of the control parameter, we can successfully detect the critical transition to a state of high combustion instability, demonstrating the robustness of our proposed detection framework, which is independent of the instability-inducing protocol.
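
The inexpensive-but-noisy labeling idea can be illustrated as follows: each video frame receives a label from the RMS sound pressure in its time window. The specific measure and threshold below are stand-ins for the paper's non-dimensional instability measure.

```python
import numpy as np

def label_frames(pressure, n_frames, threshold=0.5):
    """Noisy frame labels from sound pressure: 1 = unstable, 0 = stable."""
    p = np.asarray(pressure, dtype=float)
    windows = np.array_split(p, n_frames)        # one pressure window per video frame
    rms = np.array([np.sqrt(np.mean(w ** 2)) for w in windows])
    measure = rms / rms.max()                    # simple non-dimensionalization
    return (measure > threshold).astype(int)
```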


2020 ◽  
Author(s):  
Albahli Saleh ◽  
Ali Alkhalifah

BACKGROUND To diagnose cardiothoracic diseases, a chest x-ray (CXR) is examined by a radiologist, but as more people are affected, doctors are becoming scarce, especially in developing countries. With the advent of image processing tools, however, the task of diagnosing these cardiothoracic diseases has seen great progress, and many researchers have investigated how the problems associated with medical images can be mitigated using neural networks. OBJECTIVE Previous works used state-of-the-art techniques and obtained effective results for one or two cardiothoracic diseases but could lead to misclassification. In our work, we adopted generative adversarial networks (GANs) to synthesize chest radiographs, augmenting the training set across multiple cardiothoracic diseases so that chest diseases in different classes can be diagnosed efficiently. Our major contributions are: classifying various cardiothoracic diseases to detect a specific chest disease based on CXR; using GANs to overcome the shortage of small training datasets; addressing the problem of imbalanced data; and implementing an optimal deep neural network architecture with different hyper-parameters to achieve the best accuracy. METHODS For this research, we did not build a model from scratch, owing to computational constraints, as that requires very high-end computers. Rather, we used a convolutional neural network (CNN), a class of deep neural networks, within a GAN-based model that generates synthetic data for training, since the amount of available data is limited. We used pre-trained models, i.e., models trained on a large benchmark dataset to solve a problem similar to the one at hand; for example, the ResNet-152 model we used was initially trained on the ImageNet dataset. RESULTS After successful training and validation of the models we developed, ResNet-152 with image augmentation proved to be the best model for the automatic detection of cardiothoracic disease. However, one of the main problems in radiographic deep learning projects is the scarcity of sufficiently large datasets, a key requirement since deep learning models need a lot of data for training; this is why some of our models used image augmentation to increase the number of images without duplication. As more data are collected in the field of chest radiology, the models could be retrained to improve their accuracy, as deep learning models improve with more data. CONCLUSIONS This research employs the advantages of computer vision and medical image analysis to develop an automated model with clinical potential for early detection of disease. Using deep learning models, the research aims to evaluate the effectiveness and accuracy of different convolutional neural network models in the automatic diagnosis of cardiothoracic diseases from x-ray images, compared with diagnosis by experts in the medical community.
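
A minimal PyTorch sketch of the ResNet-152 + augmentation setup described in METHODS is given below; the number of disease classes and the particular transforms are assumptions for illustration, not the authors' configuration.

```python
import torch.nn as nn
from torchvision import models, transforms

# Hypothetical augmentation pipeline for CXR images.
augment = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomRotation(10),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# ImageNet-pre-trained backbone with a new classification head; the
# number of cardiothoracic classes (here 14) is an assumption.
model = models.resnet152(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 14)
```

GAN-synthesized radiographs would then simply be mixed into the training set alongside the augmented real images before fine-tuning.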


2016 ◽  
Vol 2016 ◽  
pp. 1-15 ◽  
Author(s):  
Benjamin Chandler ◽  
Ennio Mingolla

Heavily occluded objects are more difficult for classification algorithms to identify correctly than unoccluded objects. Owing to biases in human-generated image pose selection, however, this effect is rare in datasets like ImageNet and PASCAL VOC and thus hard to measure. We introduce a dataset that emphasizes occlusion, together with additions to a standard convolutional neural network aimed at increasing invariance to occlusion. An unmodified convolutional neural network trained and tested on the new dataset rapidly degrades to chance-level accuracy as occlusion increases. Training with occluded data slows this decline but still yields poor performance at high occlusion. Integrating novel preprocessing stages to segment the input and inpaint occlusions is an effective mitigation. A convolutional network so modified is nearly as effective with more than 81% of pixels occluded as it is with no occlusion. Such a network is also more accurate on unoccluded images than an otherwise identical network trained only on unoccluded images. These results depend on successful segmentation: the occlusions in our dataset are deliberately easy to segment from the figure and background, and achieving similar results on a more challenging dataset would require a method to split figure, background, and occluding pixels in the input.
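
The segment-then-inpaint preprocessing can be sketched with OpenCV; this assumes the occluding pixels are already segmented into a binary mask, which the deliberately easy-to-segment occlusions in the dataset allow. The Telea inpainting algorithm and the function name here are our stand-ins, not necessarily the authors' method.

```python
import cv2
import numpy as np

def inpaint_occlusion(image, occluder_mask):
    """Fill in segmented occluding pixels before classification."""
    mask = (occluder_mask > 0).astype(np.uint8) * 255   # 8-bit binary mask
    return cv2.inpaint(image, mask, 3, cv2.INPAINT_TELEA)
```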


IoT ◽  
2021 ◽  
Vol 2 (2) ◽  
pp. 222-235
Author(s):  
Guillaume Coiffier ◽  
Ghouthi Boukli Hacene ◽  
Vincent Gripon

Deep neural networks are state-of-the-art on a large number of challenges in machine learning. However, to reach the best performance they require a huge pool of parameters. Indeed, typical deep convolutional architectures present an increasing number of feature maps as we go deeper in the network, while the spatial resolution of inputs is decreased through downsampling operations. This means that most of the parameters lie in the final layers, while a large portion of the computations is performed by a small fraction of the total parameters in the first layers. In an effort to use every parameter of a network to its maximum, we propose a new convolutional neural network architecture, called ThriftyNet. In ThriftyNet, only one convolutional layer is defined and used recursively, leading to maximal parameter factorization. In complement, normalization, non-linearities, downsampling operations, and shortcut connections ensure sufficient expressivity of the model. ThriftyNet achieves competitive performance on a tiny parameter budget, exceeding 91% accuracy on CIFAR-10 with less than 40 k parameters in total, 74.3% on CIFAR-100 with less than 600 k parameters, and 67.1% on ImageNet ILSVRC 2012 with no more than 4.15 M parameters. However, the proposed method typically requires more computations than existing counterparts.
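
A toy PyTorch sketch of the recursive-reuse idea follows: one shared convolution applied at every step, with per-step normalization, a residual shortcut, and periodic downsampling. Sizes, iteration counts, and the classification head are illustrative assumptions, and the input is assumed to already have the working number of channels (e.g., via an initial embedding layer, as in the paper).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ThriftyBlockSketch(nn.Module):
    def __init__(self, channels=64, n_iter=12, downsample_every=4, n_classes=10):
        super().__init__()
        # The single convolution that is reused at every iteration.
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        # Per-iteration normalization keeps activations well scaled.
        self.norms = nn.ModuleList([nn.BatchNorm2d(channels) for _ in range(n_iter)])
        self.n_iter, self.downsample_every = n_iter, downsample_every
        self.head = nn.Linear(channels, n_classes)

    def forward(self, x):  # x: (B, channels, H, W), channel count fixed throughout
        for t in range(self.n_iter):
            x = x + F.relu(self.norms[t](self.conv(x)))  # recursive reuse + shortcut
            if (t + 1) % self.downsample_every == 0:
                x = F.max_pool2d(x, 2)                   # periodic downsampling
        return self.head(x.mean(dim=(2, 3)))             # global average pool + classify
```

Because the convolution's weights are shared across all iterations, the parameter count stays nearly constant as depth grows, while the compute cost scales with the number of iterations, matching the trade-off the abstract describes.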

