Oversampling Based on Data Augmentation in Convolutional Neural Network for Silicon Wafer Defect Classification

Author(s):  
Uzma Batool ◽  
Mohd Ibrahim Shapiai ◽  
Nordinah Ismail ◽  
Hilman Fauzi ◽  
Syahrizal Salleh

Silicon wafer defect data collected from fabrication facilities are intrinsically imbalanced because defect types occur at different frequencies. If a model is trained on such skewed data, frequently occurring types will have more influence on the classification predictions. A fair classifier for such imbalanced data therefore requires a mechanism to deal with type imbalance and avoid biased results. This study proposes a convolutional neural network for wafer map defect classification that employs oversampling to address the imbalance. To give all classes equal participation in the classifier's training, data augmentation is employed to generate additional samples for the minority classes. The proposed deep learning method was evaluated on a real wafer map defect dataset, achieving 97.91% accuracy on the test set. The results were compared with another deep-learning-based auto-encoder model, demonstrating that the proposed method is a potential approach for silicon wafer defect classification, although its robustness needs further investigation.
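The oversampling step can be sketched as follows, using simple flips and 90-degree rotations as label-preserving transforms; the paper does not specify its exact augmentation operations, so these and the toy data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """One random label-preserving transform: flip or 90-degree rotation."""
    op = rng.integers(4)
    if op == 0:
        return np.fliplr(img)
    if op == 1:
        return np.flipud(img)
    return np.rot90(img, k=op)  # op is 2 or 3

def oversample(images, labels):
    """Augment minority classes until every class matches the majority count."""
    labels = np.asarray(labels)
    counts = {c: int(np.sum(labels == c)) for c in np.unique(labels)}
    target = max(counts.values())
    out_x, out_y = list(images), list(labels)
    for c, n in counts.items():
        pool = [img for img, lab in zip(images, labels) if lab == c]
        for i in range(target - n):
            out_x.append(augment(pool[i % len(pool)]))
            out_y.append(c)
    return np.stack(out_x), np.array(out_y)

# toy wafer maps: class 0 has 6 samples, class 1 only 2
x = [rng.random((8, 8)) for _ in range(8)]
y = [0] * 6 + [1] * 2
bx, by = oversample(x, y)
print(bx.shape, np.bincount(by))  # both classes now have 6 samples
```

Augmenting only the minority pool, rather than duplicating samples verbatim, gives the classifier balanced classes without exact repeats.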

2020 ◽  
Vol 32 (4) ◽  
pp. 731-737
Author(s):  
Akinari Onishi

Brain-computer interfaces (BCI) enable us to interact with the external world via electroencephalography (EEG) signals. Recently, deep learning methods have been applied to BCIs to reduce the time required for recording training data. However, more evidence is required because of the lack of comparisons. To provide such evidence, this study proposed a deep learning method named time-wise convolutional neural network (TWCNN) and applied it to a BCI dataset. In the evaluation, EEG data from one subject were classified using a model trained on previously recorded EEG data from other subjects. TWCNN showed the highest accuracy, significantly higher than that of the typically used classifier. The results suggest that deep learning methods may be useful for reducing the recording time of training data.
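The cross-subject evaluation described above amounts to a leave-one-subject-out split: each fold trains on all subjects but one and tests on the held-out subject. A minimal sketch (the subject and trial counts are illustrative, not the dataset's):

```python
import numpy as np

def leave_one_subject_out(subject_ids):
    """Yield (subject, train_idx, test_idx): one subject held out per fold."""
    subject_ids = np.asarray(subject_ids)
    for s in np.unique(subject_ids):
        yield s, np.where(subject_ids != s)[0], np.where(subject_ids == s)[0]

# 3 subjects with 4 EEG trials each
ids = [1] * 4 + [2] * 4 + [3] * 4
folds = list(leave_one_subject_out(ids))
for s, train, test in folds:
    print(f"test subject {s}: train on {len(train)} trials, test on {len(test)}")
```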


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Xieyi Chen ◽  
Dongyun Wang ◽  
Jinjun Shao ◽  
Jun Fan

To automatically detect plastic gasket defects, a set of visual detection devices for plastic gasket defects based on GoogLeNet Inception-V2 transfer learning was designed and established in this study. The GoogLeNet Inception-V2 deep convolutional neural network (DCNN) was adopted to extract and classify the defect features of plastic gaskets, addressing the large number of surface defect types and the difficulty of extracting and classifying their features. Deep learning applications require a large amount of training data to avoid model overfitting, but there are few datasets of plastic gasket defects; to address this issue, data augmentation was applied to our dataset. Finally, the performance of the three convolutional neural networks was comprehensively compared. The results showed that the GoogLeNet Inception-V2 transfer learning model performed better in less time, i.e., with higher accuracy, reliability, and efficiency on the dataset used in this paper.
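Transfer learning of this kind keeps the pretrained feature extractor frozen and trains only a new classification head on the small defect dataset. A minimal NumPy sketch, with a frozen random projection standing in for Inception-V2 features (all names, dimensions, and data here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a pretrained backbone: frozen weights, never updated.
W_frozen = rng.standard_normal((64, 32)) / np.sqrt(64)

def features(x):
    return np.maximum(x @ W_frozen, 0.0)  # frozen ReLU feature extractor

def train_head(x, y, epochs=300, lr=0.1):
    """Train only a logistic-regression head on top of the frozen features."""
    f = features(x)
    w, b = np.zeros(f.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(f @ w + b)))  # sigmoid probabilities
        grad = p - y                            # gradient of the log-loss
        w -= lr * f.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# toy binary "defect vs. no defect" data
x = rng.standard_normal((40, 64))
y = (x[:, 0] > 0).astype(float)
w, b = train_head(x, y)
p = 1.0 / (1.0 + np.exp(-(features(x) @ w + b)))
acc = ((p > 0.5) == y.astype(bool)).mean()
print(round(acc, 2))
```

Because only the small head is trained, far less data is needed than for training the full backbone, which is the point of combining transfer learning with augmentation here.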


2021 ◽  
Vol 905 (1) ◽  
pp. 012018
Author(s):  
I Y Prayogi ◽  
Sandra ◽  
Y Hendrawan

Abstract The objective of this study is to classify the quality of dried clove flowers using a deep learning method with the Convolutional Neural Network (CNN) algorithm, and to perform a sensitivity analysis of the CNN hyperparameters to obtain the best model for the clove quality classification process. The quality of clove as a raw material in this study was determined according to SNI 3392-1994 by PT. Perkebunan Nusantara XII Pancusari Plantation, Malang, East Java, Indonesia. In total, 1,600 images of dried clove flowers were divided into 4 quality classes, each with 225 training images, 75 validation images, and 100 test images. The first step of this study was to build the CNN model architecture as the first model, which achieved 65.25% accuracy. The second step was to analyze the sensitivity of the CNN hyperparameters on the first model; the best value found for each hyperparameter was then carried forward to the next stage. After this hyperparameter tuning was carried out, the accuracy on the test data improved to 87.75%.
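The stage-wise tuning described above is a sequential (one-at-a-time) hyperparameter search: each hyperparameter is swept while the others are held at their current best, and the winner is carried into the next sweep. A sketch with a dummy scoring function (the search space and scores are hypothetical, not the paper's):

```python
# Hypothetical search space; the paper's actual hyperparameters differ.
space = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [16, 32, 64],
    "dropout": [0.2, 0.5],
}

def evaluate(cfg):
    """Stand-in for training the CNN and reading validation accuracy."""
    return ({1e-2: 0.60, 1e-3: 0.75, 1e-4: 0.70}[cfg["learning_rate"]]
            + {16: 0.05, 32: 0.08, 64: 0.06}[cfg["batch_size"]]
            + {0.2: 0.03, 0.5: 0.05}[cfg["dropout"]])

def sequential_search(space):
    """Tune one hyperparameter at a time, carrying each best value forward."""
    best = {name: values[0] for name, values in space.items()}
    for name, values in space.items():
        best[name] = max(values, key=lambda v: evaluate({**best, name: v}))
    return best, evaluate(best)

cfg, score = sequential_search(space)
print(cfg, round(score, 2))
```

This greedy scheme evaluates only 3 + 3 + 2 = 8 configurations instead of the 18 a full grid search would need, at the cost of possibly missing interactions between hyperparameters.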


2020 ◽  
Vol 8 (11) ◽  
pp. 924
Author(s):  
Guan Wei Thum ◽  
Sai Hong Tang ◽  
Siti Azfanizam Ahmad ◽  
Moath Alrifaey

Underwater cables or pipelines are commonly utilized elements in ocean research, marine engineering, power transmission, and communication-based activities. Maintaining their performance requires regularly conducted inspection. A vision system is commonly used by autonomous underwater vehicles (AUVs) to track and search for underwater cables. Traditional methods for this task rely on handcrafted features and shallow trainable architectures; however, such methods are subpar or even incapable of tracking underwater cables in fast-changing and complex underwater conditions. In contrast, deep learning methods can learn semantic, high-level, and deeper features, which makes them well suited to underwater cable tracking. In this study, several deep Convolutional Neural Network (CNN) models were proposed to classify underwater cable images obtained from a set of underwater images, with transfer learning and data augmentation applied to enhance classification accuracy. Following a comparison and discussion of the performance of these models, MobileNetV2 outperformed the other models, yielding the lowest computational time and the highest accuracy (93.5%) for classifying underwater cable images. Hence, the main contribution of this study is the development of a deep learning method for underwater cable image classification.
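MobileNetV2's low computational time comes largely from depthwise separable convolutions, which split a standard convolution into a per-channel depthwise pass plus a 1x1 pointwise pass. The multiply-add savings can be estimated as follows (the layer dimensions below are illustrative, not MobileNetV2's actual layers):

```python
def conv_costs(h, w, cin, cout, k=3):
    """Multiply-add counts: standard vs. depthwise separable convolution."""
    standard = h * w * cin * cout * k * k
    separable = h * w * cin * k * k + h * w * cin * cout  # depthwise + pointwise
    return standard, separable

std, sep = conv_costs(56, 56, 64, 128)
print(std, sep, round(std / sep, 1))  # separable is roughly 8x cheaper here
```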


2021 ◽  
Vol 11 (15) ◽  
pp. 7148
Author(s):  
Bedada Endale ◽  
Abera Tullu ◽  
Hayoung Shi ◽  
Beom-Soo Kang

Unmanned aerial vehicles (UAVs) are being widely utilized for various missions in both civilian and military sectors. Many of these missions require UAVs to perceive and understand the environments they navigate. This perception can be realized by training a computing machine to classify objects in the environment. One well-known machine training approach is supervised deep learning, which enables a machine to classify objects. However, supervised deep learning comes at a large cost in time and computational resources: collecting large input datasets, pre-training processes such as labeling training data, and the need for a high-performance computer for training are some of the challenges it poses. To address these setbacks, this study proposes mission-specific input data augmentation techniques and the design of a light-weight deep neural network architecture capable of real-time object classification. Semi-direct visual odometry (SVO) data of augmented images are used to train the network for object classification. Ten classes with 10,000 different images each were used as input data, of which 80% were used for training the network and the remaining 20% for network validation. For the optimization of the designed deep neural network, a sequential gradient descent algorithm was implemented. This algorithm has the advantage of handling redundancy in the data more efficiently than other algorithms.
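Sequential gradient descent updates the weights after every sample rather than after a full pass, which is why redundant samples add little extra work. A minimal sketch on a toy linear model (not the paper's network):

```python
import numpy as np

rng = np.random.default_rng(0)

def sequential_gd(x, y, lr=0.05, epochs=40):
    """Per-sample (sequential) gradient descent on squared error.

    Weights are updated immediately after each sample, so redundant
    samples contribute little compared with full-batch gradient descent.
    """
    w = np.zeros(x.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):  # visit samples in random order
            err = x[i] @ w - y[i]
            w -= lr * err * x[i]
    return w

# toy data generated by known weights [2, -1]
x = rng.standard_normal((100, 2))
y = x @ np.array([2.0, -1.0])
w = sequential_gd(x, y)
print(np.round(w, 2))  # converges close to [ 2. -1.]
```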


2021 ◽  
Vol 13 (19) ◽  
pp. 3859
Author(s):  
Joby M. Prince Czarnecki ◽  
Sathishkumar Samiappan ◽  
Meilun Zhou ◽  
Cary Daniel McCraine ◽  
Louis L. Wasson

The radiometric quality of remotely sensed imagery is crucial for precision agriculture applications because estimations of plant health rely on the underlying quality. Sky conditions, and specifically shadowing from clouds, are critical determinants in the quality of images that can be obtained from low-altitude sensing platforms. In this work, we first compare common deep learning approaches to classify sky conditions with regard to cloud shadows in agricultural fields using a visible spectrum camera. We then develop an artificial-intelligence-based edge computing system to fully automate the classification process. Training data consisting of 100 oblique angle images of the sky were provided to a convolutional neural network and two deep residual neural networks (ResNet18 and ResNet34) to facilitate learning two classes, namely (1) good image quality expected, and (2) degraded image quality expected. The expectation of quality stemmed from the sky condition (i.e., density, coverage, and thickness of clouds) present at the time of the image capture. These networks were tested using a set of 13,000 images. Our results demonstrated that ResNet18 and ResNet34 classifiers produced better classification accuracy when compared to a convolutional neural network classifier. The best overall accuracy was obtained by ResNet34, which was 92% accurate, with a Kappa statistic of 0.77. These results demonstrate a low-cost solution to quality control for future autonomous farming systems that will operate without human intervention and supervision.
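The Kappa statistic reported above corrects raw accuracy for agreement expected by chance, and can be computed directly from a confusion matrix. A sketch (the matrix below is illustrative, not the paper's results):

```python
import numpy as np

def cohens_kappa(cm):
    """Cohen's kappa from a confusion matrix (rows: true, cols: predicted)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                      # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2  # chance agreement
    return (po - pe) / (1 - pe)

# hypothetical 2-class matrix: good vs. degraded sky condition
cm = [[45, 5],
      [3, 47]]
kappa = cohens_kappa(cm)
print(round(kappa, 2))
```

A kappa of 0.77, as reported for ResNet34, indicates substantial agreement beyond chance even though raw accuracy alone was 92%.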


2022 ◽  
pp. 1559-1575
Author(s):  
Mário Pereira Véstias

Machine learning is the study of algorithms and models that enable computing systems to perform tasks based on pattern identification and inference. When it is difficult or infeasible to develop an algorithm for a particular task, machine learning algorithms can provide an output based on previous training data. A well-known family of machine learning methods is deep learning, and the most recent deep learning models are based on artificial neural networks (ANN). There are several types of artificial neural networks, including the feedforward neural network, the Kohonen self-organizing neural network, the recurrent neural network, the convolutional neural network, and the modular neural network, among others. This article focuses on convolutional neural networks, with a description of the model, the training and inference processes, and its applicability. It also gives an overview of the most used CNN models and what to expect from the next generation of CNN models.
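The core operation of a CNN layer is a small learned kernel slid over the input. A minimal single-channel sketch of the "valid" convolution (cross-correlation, as most frameworks implement it):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN frameworks)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16.0).reshape(4, 4)  # toy 4x4 input
edge = np.array([[1.0, -1.0]])         # horizontal difference kernel
fmap = conv2d(image, edge)
print(fmap.shape)  # (4, 3): output shrinks by kernel size minus one
```

In a full CNN this operation is repeated over many learned kernels and channels, with nonlinearities and pooling between layers.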


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 597 ◽  
Author(s):  
Joshua Dickey ◽  
Brett Borghetti ◽  
William Junek

The detection of seismic events at regional and teleseismic distances is critical to Nuclear Treaty Monitoring. Traditionally, detecting regional and teleseismic events has required the use of an expensive multi-instrument seismic array; however, in this work, we present DeepPick, a novel seismic detection algorithm capable of array-like detection performance from a single trace. We achieve this performance through three novel steps: First, a high-fidelity dataset is constructed by pairing array-beam catalog arrival-times with single-trace waveforms from the reference instrument of the array. Second, an idealized characteristic function is created, with exponential peaks aligned to the cataloged arrival times. Third, a deep temporal convolutional neural network is employed to learn the complex non-linear filters required to transform the single-trace waveforms into corresponding idealized characteristic functions. The training data consist of all arrivals in the International Seismological Centre Database for seven seismic arrays over a five-year window from 1 January 2010 to 1 January 2015, yielding a total training set of 608,362 detections. The test set consists of the same seven arrays over a one-year window from 1 January 2015 to 1 January 2016. We report our results by training the algorithm on six of the arrays and testing it on the seventh, so as to demonstrate the generalization and transportability of the technique to new stations. Detection performance against this test set is outstanding, yielding significant improvements in recall over existing techniques. Fixing a type-I error rate of 0.001, the algorithm achieves an overall recall (true positive rate) of 56% against the 141,095 array-beam arrivals in the test set, yielding 78,802 correct detections.
This is more than twice the 37,572 detections made by an STA/LTA detector over the same period, and represents a 35% improvement over the 58,515 detections made by a state-of-the-art kurtosis-based detector. Furthermore, DeepPick provides at least a 4 dB improvement in detector sensitivity across the board, and is more computationally efficient, with run-times an order of magnitude faster than either of the other techniques tested. These results demonstrate the potential of our algorithm to significantly enhance the effectiveness of the global treaty monitoring network.
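The idealized characteristic function in the second step places an exponential peak at each cataloged arrival time; the network then learns to map raw waveforms onto this target. A minimal sketch (the decay rate and sample counts are assumed values, not the paper's):

```python
import numpy as np

def characteristic_function(n_samples, arrivals, decay=0.05):
    """Idealized target trace: an exponential peak at each cataloged arrival."""
    t = np.arange(n_samples)
    cf = np.zeros(n_samples)
    for a in arrivals:
        # two-sided exponential peak, height 1 at the arrival sample
        cf = np.maximum(cf, np.exp(-decay * np.abs(t - a)))
    return cf

cf = characteristic_function(200, arrivals=[50, 140])
print(cf.argmax(), round(cf[50], 2))  # peak of height 1.0 at sample 50
```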

