Empirical Evaluation of the Effect of Optimization and Regularization Techniques on the Generalization Performance of Deep Convolutional Neural Network

2020 ◽  
Vol 10 (21) ◽  
pp. 7817
Author(s):  
Ivana Marin ◽  
Ana Kuzmanic Skelin ◽  
Tamara Grujic

The main goal of any classification or regression task is to obtain a model that will generalize well on new, previously unseen data. Due to the recent rise of deep learning and the many state-of-the-art results obtained with deep models, deep learning architectures have become some of the most widely used models today. To generalize well, a deep model needs to learn the training data well without overfitting, which implies that deep model optimization and regularization are correlated with generalization performance. In this work, we explore the effect of the chosen optimization algorithm and regularization techniques on the final generalization performance of models with the convolutional neural network (CNN) architecture, which is widely used in the field of computer vision. We give a detailed overview of optimization and regularization techniques, together with a comparative analysis of their performance with three CNNs on the CIFAR-10 and Fashion-MNIST image datasets.
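
As an illustration of the kind of comparison described above, the following sketch trains the same small Keras CNN on CIFAR-10 under different optimizers, with L2 weight decay, batch normalization, and dropout as regularizers. The architecture, hyperparameters, and optimizer list are illustrative assumptions, not the authors' exact experimental setup.

```python
# Minimal sketch: compare optimizers on one regularized CNN (not the paper's setup).
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

def build_cnn(weight_decay=1e-4, dropout_rate=0.3):
    """Small CNN with L2 weight decay, batch normalization and dropout."""
    reg = regularizers.l2(weight_decay)
    return models.Sequential([
        layers.Conv2D(32, 3, padding="same", activation="relu",
                      kernel_regularizer=reg, input_shape=(32, 32, 3)),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu", kernel_regularizer=reg),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(dropout_rate),
        layers.Dense(10, activation="softmax"),
    ])

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Train the same architecture under different optimizers to compare generalization.
for opt_name in ["sgd", "adam", "rmsprop"]:
    model = build_cnn()
    model.compile(optimizer=opt_name,
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(x_train, y_train, epochs=5, batch_size=128,
                        validation_data=(x_test, y_test), verbose=0)
    print(opt_name, "val accuracy:", history.history["val_accuracy"][-1])
```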

2021 ◽  
Vol 13 (19) ◽  
pp. 3859
Author(s):  
Joby M. Prince Czarnecki ◽  
Sathishkumar Samiappan ◽  
Meilun Zhou ◽  
Cary Daniel McCraine ◽  
Louis L. Wasson

The radiometric quality of remotely sensed imagery is crucial for precision agriculture applications because estimations of plant health rely on the underlying quality. Sky conditions, and specifically shadowing from clouds, are critical determinants in the quality of images that can be obtained from low-altitude sensing platforms. In this work, we first compare common deep learning approaches to classify sky conditions with regard to cloud shadows in agricultural fields using a visible spectrum camera. We then develop an artificial-intelligence-based edge computing system to fully automate the classification process. Training data consisting of 100 oblique angle images of the sky were provided to a convolutional neural network and two deep residual neural networks (ResNet18 and ResNet34) to facilitate learning two classes, namely (1) good image quality expected, and (2) degraded image quality expected. The expectation of quality stemmed from the sky condition (i.e., density, coverage, and thickness of clouds) present at the time of the image capture. These networks were tested using a set of 13,000 images. Our results demonstrated that ResNet18 and ResNet34 classifiers produced better classification accuracy when compared to a convolutional neural network classifier. The best overall accuracy was obtained by ResNet34, which was 92% accurate, with a Kappa statistic of 0.77. These results demonstrate a low-cost solution to quality control for future autonomous farming systems that will operate without human intervention and supervision.
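
A minimal sketch of how such a two-class sky-condition classifier could be fine-tuned with torchvision's ResNet18 is shown below; the directory layout (sky_images/good, sky_images/degraded), image size, and training hyperparameters are assumptions rather than the authors' configuration.

```python
# Hedged sketch: fine-tune ResNet18 for "good" vs. "degraded" expected image quality.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumes sky images are stored as sky_images/good/... and sky_images/degraded/...
train_set = datasets.ImageFolder("sky_images", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)  # replace the head for 2 classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```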


Author(s):  
Uzma Batool ◽  
Mohd Ibrahim Shapiai ◽  
Nordinah Ismail ◽  
Hilman Fauzi ◽  
Syahrizal Salleh

Silicon wafer defect data collected from fabrication facilities are intrinsically imbalanced because of the variable frequencies of defect types. Frequently occurring types will have more influence on the classification predictions if a model is trained on such skewed data. A fair classifier for such imbalanced data requires a mechanism to deal with type imbalance in order to avoid biased results. This study proposes a convolutional neural network for wafer map defect classification that employs oversampling as an imbalance-addressing technique. To give all classes equal participation in the classifier's training, data augmentation is employed to generate more samples for the minority classes. The proposed deep learning method was evaluated on a real wafer map defect dataset, and its classification results on the test set returned 97.91% accuracy. The results were compared with those of another deep-learning-based auto-encoder model, demonstrating that the proposed method is a potential approach for silicon wafer defect classification whose robustness needs to be investigated further.
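
The oversampling idea can be illustrated with the following sketch, which augments minority-class wafer maps (with shape-preserving flips and 180-degree rotations) until every defect class reaches the majority count. The helper functions and augmentation choices are hypothetical, not the exact scheme used in the study.

```python
# Minimal sketch of class balancing by augmentation-based oversampling.
import numpy as np

def augment(image):
    """Return simple shape-preserving variants of one wafer map."""
    return [np.fliplr(image), np.flipud(image), np.rot90(image, k=2)]

def oversample(images, labels):
    """Return a class-balanced dataset by augmenting minority classes."""
    images, labels = list(images), list(labels)
    counts = {c: labels.count(c) for c in set(labels)}
    target = max(counts.values())
    for c in counts:
        pool = [img for img, lab in zip(images, labels) if lab == c]
        i = 0
        while counts[c] < target:
            for aug in augment(pool[i % len(pool)]):
                if counts[c] >= target:
                    break
                images.append(aug)
                labels.append(c)
                counts[c] += 1
            i += 1
    return np.array(images), np.array(labels)
```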


2022 ◽  
pp. 1559-1575
Author(s):  
Mário Pereira Véstias

Machine learning is the study of algorithms and models that let computing systems do tasks based on pattern identification and inference. When it is difficult or infeasible to develop an algorithm for a particular task, machine learning algorithms can provide an output based on previous training data. A well-known machine learning approach is deep learning. The most recent deep learning models are based on artificial neural networks (ANN). There exist several types of artificial neural networks, including the feedforward neural network, the Kohonen self-organizing neural network, the recurrent neural network, the convolutional neural network, and the modular neural network, among others. This article focuses on convolutional neural networks, describing the model, the training and inference processes, and their applicability. It also gives an overview of the most used CNN models and what to expect from the next generation of CNN models.
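
A minimal Keras example of the building blocks discussed in the article, convolution for local feature extraction, pooling for spatial reduction, and dense layers for classification, is given below; the layer sizes and input shape are arbitrary.

```python
# Illustrative CNN: convolution -> pooling -> dense classification head.
from tensorflow.keras import layers, models

cnn = models.Sequential([
    layers.Conv2D(16, kernel_size=3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(pool_size=2),        # downsample feature maps
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),  # class probabilities
])
cnn.summary()
```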


2021 ◽  
Vol 8 (3) ◽  
pp. 619
Author(s):  
Candra Dewi ◽  
Andri Santoso ◽  
Indriati Indriati ◽  
Nadia Artha Dewi ◽  
Yoke Kusuma Arbawa

The growing number of people with diabetes is one of the factors behind the high number of people with diabetic retinopathy. One of the images used by ophthalmologists to identify diabetic retinopathy is a retinal photograph. In this research, diabetic retinopathy is recognized automatically using retinal fundus images and the Convolutional Neural Network (CNN) algorithm, a variant of the Deep Learning approach. The obstacle found in the recognition process is that the color of the retina tends to be yellowish red, so the RGB color space does not produce optimal accuracy. Therefore, this research tested various color spaces to obtain better results. Trials using 1000 images in the RGB, HSI, YUV, and L*a*b* color spaces gave suboptimal results on balanced data, where the best accuracy was still below 50%. However, the unbalanced data gave a fairly high accuracy of 83.53% on the training data in the YUV color space and 74.40% on the testing data in all color spaces.
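
The color-space preprocessing step can be sketched with OpenCV as below; the file name is illustrative, and HSV is used as a readily available stand-in for the HSI space mentioned in the abstract.

```python
# Hedged sketch: convert a retinal fundus photo to alternative color spaces
# before feeding it to the CNN; "fundus.jpg" is an illustrative file name.
import cv2

bgr = cv2.imread("fundus.jpg")                # OpenCV loads images as BGR
yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)    # YUV gave the best training accuracy
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)    # L*a*b* alternative
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)    # HSV as a stand-in for HSI
x = yuv.astype("float32") / 255.0             # normalized input for the CNN
```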


2020 ◽  
Vol 9 (05) ◽  
pp. 25052-25056
Author(s):  
Abhi Kadam ◽  
Anupama Mhatre ◽  
Sayali Redasani ◽  
Amit Nerurkar

Current lighting technologies extend the options for changing the appearance of rooms and closed spaces, thus creating ambiences with an affective meaning. Using intelligence, these ambiences may instantly be adapted to the needs of the room's occupant(s), possibly improving their well-being. In this paper, we actuate the lighting in our surroundings using mood detection. We analyze a person's mood through facial emotion recognition using a deep learning model, the Convolutional Neural Network (CNN). On recognizing the emotion, we actuate the surrounding lighting in accordance with the mood. Based on the implementation results, the system needs to be developed further by adding more specific data classes and training data.
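
A minimal sketch of the described pipeline, a trained emotion-recognition CNN whose prediction selects a lighting preset, might look as follows. The model file, label order, and RGB presets are hypothetical assumptions.

```python
# Hedged sketch: map a CNN-predicted emotion to a lighting preset.
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "happy", "neutral", "sad"]                       # assumed label order
LIGHT_PRESETS = {"angry": (0, 0, 255), "happy": (255, 200, 80),
                 "neutral": (255, 255, 255), "sad": (255, 120, 0)}    # (R, G, B)

model = load_model("emotion_cnn.h5")                                  # hypothetical trained CNN

def mood_to_light(face_gray48):
    """Predict the emotion from a 48x48 grayscale face crop and return an RGB preset."""
    x = face_gray48.astype("float32").reshape(1, 48, 48, 1) / 255.0
    emotion = EMOTIONS[int(np.argmax(model.predict(x, verbose=0)))]
    return emotion, LIGHT_PRESETS[emotion]
```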


2020 ◽  
Vol 32 (4) ◽  
pp. 731-737
Author(s):  
Akinari Onishi

A brain-computer interface (BCI) enables us to interact with the external world via electroencephalography (EEG) signals. Recently, deep learning methods have been applied to BCIs to reduce the time required for recording training data. However, more evidence is needed because comparisons are still lacking. To provide such evidence, this study proposed a deep learning method named the time-wise convolutional neural network (TWCNN) and applied it to a BCI dataset. In the evaluation, EEG data from one subject were classified using previously recorded EEG data from other subjects. As a result, TWCNN showed the highest accuracy, significantly higher than that of a typically used classifier. The results suggest that the deep learning method may help reduce the recording time of training data.
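
The time-wise convolution idea can be illustrated as below, with 1D kernels sliding along the time axis of an EEG epoch and the electrodes treated as input channels. This PyTorch sketch is an illustration only, not the authors' exact TWCNN architecture; channel counts, kernel sizes, and epoch length are assumptions.

```python
# Hedged sketch of a time-wise 1D CNN for EEG epochs (illustrative, not TWCNN itself).
import torch
import torch.nn as nn

class TimeWiseCNN(nn.Module):
    def __init__(self, n_channels=8, n_samples=256, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=11, padding=5),  # convolve over time
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=11, padding=5),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (n_samples // 16), n_classes)

    def forward(self, x):              # x: (batch, EEG channels, time samples)
        z = self.features(x)
        return self.classifier(z.flatten(1))

logits = TimeWiseCNN()(torch.randn(4, 8, 256))  # e.g. 4 epochs of 8-channel EEG
```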


Khazanah ◽  
2020 ◽  
Vol 12 (2) ◽  
Author(s):  
Xosya Salassa ◽  
Wais Al Qarni ◽  
Trional Novanza ◽  
Fahmi Guntara Diasa ◽  
...  

Indonesia is an agrarian country whose people mostly work in agriculture, a sector that contributes the third-largest share of GDP. On the other hand, the main problem in agriculture is the development of pests and diseases in crops. In some cases, crops are attacked by diseases whose symptoms are not obvious to farmers. For example, citrus plants attacked by CVPD initially do not show clear symptoms, making them difficult to distinguish from healthy plants. Given these problems, early detection and identification of plant diseases are the main factors in preventing and reducing their spread. The study used deep learning methods with the Convolutional Neural Network (CNN) algorithm. The dataset comes from PlantVillage, with a total of 20,639 leaf image files classified into their respective classes. The model architecture was designed by following the DenseNet121 architecture and changing its parameters to improve the accuracy. The image size is 64, the training data shape is (20639, 64, 64, 3), and the epoch values are 50, 100, and 150. The number of input layers used is 4, with shape (64, 64, 3), followed by DenseNet121 (1024), global average pooling 2D (1024), batch normalization (1024), dropout (1024), dense (256), batch normalization (256), and a final dense output layer (15). The experiments were run with the three epoch settings to find the best accuracy. Training for 50, 100, and 150 epochs produced an average model accuracy of 99.38% and an average loss of 0.019%, while testing for 50, 100, and 150 epochs gave an average accuracy of 95.16% and an average loss of 0.20%. With the applied algorithm, the resulting training accuracy is 99.58% and the testing accuracy is 96.41%; thus, this application is useful for accurately detecting plant diseases from leaf images.
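
The described classifier head can be reproduced almost directly in Keras, as sketched below: a DenseNet121 backbone on 64x64x3 inputs followed by global average pooling, batch normalization, dropout, a 256-unit dense layer, batch normalization, and a 15-class softmax output. The dropout rate, pretrained weights, and compile settings are assumptions.

```python
# Sketch of the DenseNet121-based classifier head described in the abstract.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

backbone = DenseNet121(include_top=False, weights="imagenet", input_shape=(64, 64, 3))

model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),         # -> (1024,)
    layers.BatchNormalization(),
    layers.Dropout(0.5),                     # rate is an assumption
    layers.Dense(256, activation="relu"),
    layers.BatchNormalization(),
    layers.Dense(15, activation="softmax"),  # one output per disease class
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```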


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Xieyi Chen ◽  
Dongyun Wang ◽  
Jinjun Shao ◽  
Jun Fan

To automatically detect plastic gasket defects, a set of plastic gasket defect visual detection devices based on GoogLeNet Inception-V2 transfer learning was designed and established in this study. The GoogLeNet Inception-V2 deep convolutional neural network (DCNN) was adopted to extract and classify the defect features of plastic gaskets to solve the problem of their numerous surface defects and the difficulty of extracting and classifying the features. Deep learning applications require a large amount of training data to avoid model overfitting, but there are few datasets of plastic gasket defects. To address this issue, data augmentation was applied to our dataset. Finally, the performance of the three convolutional neural networks was comprehensively compared. The results showed that the GoogLeNet Inception-V2 transfer learning model performed better in less time; that is, it achieved higher accuracy, reliability, and efficiency on the dataset used in this paper.
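
The transfer-learning-plus-augmentation recipe can be sketched in Keras as follows. Since keras.applications does not ship GoogLeNet Inception-V2, InceptionV3 is used here as a stand-in backbone; the directory name, image size, and augmentation parameters are assumptions.

```python
# Hedged sketch: frozen pretrained backbone + augmented defect images (InceptionV3 stand-in).
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(rescale=1.0 / 255, rotation_range=15,
                               horizontal_flip=True, zoom_range=0.1)
train_gen = augmenter.flow_from_directory("gasket_defects/train",
                                          target_size=(224, 224), batch_size=32)

backbone = InceptionV3(include_top=False, weights="imagenet",
                       input_shape=(224, 224, 3), pooling="avg")
backbone.trainable = False                      # transfer learning: freeze features

model = models.Sequential([
    backbone,
    layers.Dense(train_gen.num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_gen, epochs=10)
```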


2021 ◽  
pp. 147592172110239
Author(s):  
Ranting Cui ◽  
Guillermo Azuara ◽  
Francesco Lanza di Scalea ◽  
Eduardo Barrera

The detection and localization of structural damage in a stiffened skin-to-stringer composite panel typical of modern aircraft construction can be addressed by ultrasonic-guided wave transducer arrays. However, the geometrical and material complexities of this part make it quite difficult to utilize physics-based concepts of wave scattering. A data-driven deep learning (DL) approach based on the convolutional neural network (CNN) is used instead for this application. The DL technique automatically selects the most sensitive wave features based on the learned training data. In addition, the generalization abilities of the network allow for detection of damage that can be different from the training scenarios. This article describes a specific 1D-CNN algorithm that has been designed for this application, and it demonstrates its ability to image damage in key regions of the stiffened composite test panel, particularly the skin region, the stringer’s flange region, and the stringer’s cap region. Covering the stringer’s regions from guided wave transducers located solely on the skin is a particularly attractive feature of the proposed SHM approach for this kind of complex structure.
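
A hedged sketch of a 1D-CNN operating on raw guided-wave signals is shown below; the signal length, filter sizes, and two-class (damage vs. baseline) output are illustrative assumptions, not the authors' exact 1D-CNN design.

```python
# Hedged sketch: 1D-CNN over raw guided-wave waveforms for damage classification.
from tensorflow.keras import layers, models

signal_length = 2048          # assumed number of samples per waveform
model = models.Sequential([
    layers.Conv1D(16, kernel_size=64, strides=4, activation="relu",
                  input_shape=(signal_length, 1)),
    layers.MaxPooling1D(4),
    layers.Conv1D(32, kernel_size=16, activation="relu"),
    layers.MaxPooling1D(4),
    layers.GlobalAveragePooling1D(),
    layers.Dense(2, activation="softmax"),   # damage vs. baseline
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```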


Computation ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. 3
Author(s):  
Sima Sarv Ahrabi ◽  
Michele Scarpiniti ◽  
Enzo Baccarelli ◽  
Alireza Momenzadeh

In parallel with the vast medical research on clinical treatment of COVID-19, an important action for keeping the disease completely under control is careful monitoring of patients. The detection of COVID-19 relies most on viral tests; however, the study of X-rays is helpful because of their ready availability. Various studies employ Deep Learning (DL) paradigms aimed at reinforcing the radiography-based recognition of lung infection by COVID-19. In this regard, we compare the noteworthy approaches devoted to the binary classification of infected images using DL techniques, and we also propose a variant of a convolutional neural network (CNN) with optimized parameters that performs very well on a recent COVID-19 dataset. The proposed model's effectiveness is notable because of its uncomplicated design, in contrast to other presented models. In our approach, we randomly set several images of the utilized dataset aside as a hold-out set; the model detects most of the COVID-19 X-rays correctly, with an excellent overall accuracy of 99.8%. In addition, the results obtained by testing on different datasets with diverse characteristics (which, more specifically, are not used in the training process) demonstrate the effectiveness of the proposed approach, with an accuracy of up to 93%.
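
The hold-out evaluation step can be sketched as follows: a fraction of the X-ray dataset is set aside before training and the binary CNN is scored on it afterwards. The directory name, image size, split fraction, and network depth are assumptions, not the authors' optimized configuration.

```python
# Hedged sketch: hold-out split and binary CNN evaluation for X-ray classification.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "covid_xrays", validation_split=0.15, subset="training", seed=42,
    image_size=(224, 224), batch_size=32)
holdout_ds = tf.keras.utils.image_dataset_from_directory(
    "covid_xrays", validation_split=0.15, subset="validation", seed=42,
    image_size=(224, 224), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # COVID-19 vs. non-COVID
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
print("hold-out accuracy:", model.evaluate(holdout_ds)[1])
```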

