Detection of solder paste defects with an optimization‐based deep learning model using image processing techniques

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Ali Sezer ◽  
Aytaç Altan

Purpose: In the production processes of electronic devices, production activities are interrupted by problems caused by soldering defects during the assembly of surface-mounted elements on printed circuit boards (PCBs), which increases production costs. In solder paste applications, defects in electronic cards are usually noticed only at the last stage of the production process. This reduces production efficiency and delays the delivery schedule of critical systems. To overcome these problems, this study proposes an optimization-based deep learning model that uses 2D signal processing methods.
Design/methodology/approach: An optimization-based deep learning model using image-processing techniques is proposed to detect solder paste defects on PCBs with high performance at an early stage. A convolutional neural network, one of the deep learning methods, is trained on the data set obtained for this study, and pad regions on the PCB are classified.
Findings: The six classes used in the study, which are common in the literature, are uncorrectable soldering, missing soldering, excess soldering, short circuit, undefined object and correct soldering. The validity of the model is tested on a set of 648 test samples.
Originality/value: The effect of image processing and optimization methods on model performance is examined. With the help of the proposed model, defective solder paste areas on PCBs are detected, and these regions are visualized by enclosing them in a frame.
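The abstract describes framing defective pad regions for visualization but gives no implementation details. The following pure-NumPy code is a hedged sketch of that last step only: it finds the bounding box of a binary defect mask and draws a 1-pixel frame around it. The function names and frame width are my own assumptions, not taken from the paper.

```python
import numpy as np

# The six classes reported in the study.
CLASSES = ["uncorrectable", "missing", "excess",
           "short_circuit", "undefined_object", "correct"]

def bounding_box(mask):
    """Return (top, left, bottom, right) of the nonzero region in a binary mask."""
    ys, xs = np.nonzero(mask)
    return ys.min(), xs.min(), ys.max(), xs.max()

def frame_defect(image, mask, value=255):
    """Draw a 1-pixel rectangular frame around the defective pad region."""
    out = image.copy()
    t, l, b, r = bounding_box(mask)
    out[t, l:r + 1] = value      # top edge
    out[b, l:r + 1] = value      # bottom edge
    out[t:b + 1, l] = value      # left edge
    out[t:b + 1, r] = value      # right edge
    return out
```

In practice the mask would come from the CNN's per-pad classification; here it is simply any nonzero region.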

2020 ◽  
Vol 8 (6) ◽  
pp. 5730-5737

Digital image processing is the application of computer algorithms to process, manipulate and interpret images. As a field, it plays an increasingly important role in many aspects of daily life. Even though image processing has accomplished a great deal on its own, research is now being conducted on combining it with deep learning (part of the broader family of machine learning) to achieve better performance in detecting and classifying objects in an image. Car license plate recognition is one of the most active research topics in image processing (computer vision), with a wide range of applications, since the license number is the primary and mandatory identifier of motor vehicles. License plates in Ethiopia have unique features such as Amharic characters and differing dimensions and plate formats. Although research has been conducted on Ethiopian license plate recognition (ELPR), it used conventional image processing techniques and never deep learning. The proposed research attempts to tackle the problem of ELPR with deep learning and image processing: TensorFlow will be used to build the deep learning model, and all image processing will be done with OpenCV-Python. At the end of this research, a deep learning model that recognizes Ethiopian license plates with better accuracy will have been built.
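The abstract names TensorFlow and OpenCV-Python but gives no pipeline details. Purely as an illustrative sketch, the following shows a classic preprocessing step often used before a character-recognition model: a vertical projection profile that splits a binarized plate image into per-character column spans. All names here are hypothetical; this is not the authors' code.

```python
import numpy as np

def segment_characters(binary_plate):
    """Split a binarized plate image (1 = ink) into per-character column
    spans using a vertical projection profile: columns with no ink pixels
    are treated as gaps between characters."""
    profile = binary_plate.sum(axis=0)       # ink pixels per column
    spans, start = [], None
    for x, count in enumerate(profile):
        if count > 0 and start is None:
            start = x                        # a character begins
        elif count == 0 and start is not None:
            spans.append((start, x))         # the character ends (exclusive)
            start = None
    if start is not None:
        spans.append((start, len(profile)))
    return spans
```

Each span would then be cropped and passed to the deep learning classifier, e.g. one trained on Amharic characters.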


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Veerraju Gampala ◽  
Praful Vijay Nandankar ◽  
M. Kathiravan ◽  
S. Karunakaran ◽  
Arun Reddy Nalla ◽  
...  

Purpose: The purpose of this paper is to analyze and build a deep learning model that can furnish statistics of COVID-19 and forecast pandemic outbreaks using the Kaggle open research COVID-19 data set. As governments keep COVID-19 data collection up to date, deep learning techniques can be used to predict future outbreaks of coronavirus. The existing long short-term memory (LSTM) model is fine-tuned to forecast the outbreak of COVID-19 with better accuracy, and an empirical data exploration with advanced visualization has been made to understand the outbreak of coronavirus.
Design/methodology/approach: This research work presents a fine-tuned LSTM deep learning model with three hidden layers of 200 LSTM unit cells each, the ReLU activation function, the Adam optimizer, mean square error as the loss function, 200 training epochs and finally one dense layer to predict one value at a time.
Findings: LSTM is found to be effective in forecasting future values; the fine-tuned LSTM model predicts accurate results when applied to the COVID-19 data set.
Originality/value: To the authors' knowledge, the fine-tuned LSTM model is developed and tested for the first time on a COVID-19 data set to forecast the outbreak of the pandemic.
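A model whose dense output layer predicts one value at a time implies a sliding-window preparation of the case-count series: each training sample is a fixed-length window of past values paired with the value that follows. A minimal NumPy sketch of that step (the `make_windows` name and the `lookback` parameter are my own, not from the paper):

```python
import numpy as np

def make_windows(series, lookback):
    """Turn a 1-D case-count series into (X, y) pairs for one-step-ahead
    forecasting: each row of X holds `lookback` consecutive values and
    y is the value that follows."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X), np.array(y)
```

The resulting X (reshaped to samples × timesteps × 1) would feed the described stack of three 200-unit LSTM layers trained with Adam on mean square error.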


2012 ◽  
Author(s):  
A. Robert Weiß ◽  
Uwe Adomeit ◽  
Philippe Chevalier ◽  
Stéphane Landeau ◽  
Piet Bijl ◽  
...  

Author(s):  
Ahmet Kayabasi ◽  
Kadir Sabanci ◽  
Abdurrahim Toktas

In this study, image processing techniques (IPTs) and a Sugeno-type neuro-fuzzy system (NFS) model are presented for classifying wheat grains into bread and durum types. Images of 200 wheat grains are taken with a high-resolution camera to generate the data set for the training and testing of the NFS model. Five dimensional features, namely length, width, area, perimeter and fullness, are acquired using the IPTs. The NFS model, fed with these dimensional parameters, is trained on 180 wheat grain samples and tested on the remaining 20. The proposed NFS model numerically calculates the outputs with a mean absolute error (MAE) of 0.0312 and classifies the grains with an accuracy of 100% in the testing process. These results show that the IPT-based NFS model can be successfully applied to the classification of wheat grains.
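The abstract lists the five shape features but not how they are computed. The following is a hedged sketch of one plausible extraction from a binary grain mask; in particular, the abstract does not define "fullness", so it is assumed here to mean area divided by bounding-box area.

```python
import numpy as np

def grain_features(mask):
    """Five shape features from a binary grain mask (1 = grain pixel):
    length and width from the bounding box, area as the pixel count,
    perimeter as the count of boundary pixels (4-connectivity), and
    fullness as area / bounding-box area (an assumption)."""
    ys, xs = np.nonzero(mask)
    length = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    area = mask.sum()
    # Interior pixels: foreground pixels whose four neighbours are all foreground.
    padded = np.pad(mask, 1)
    interior = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int(area - interior.sum())
    fullness = area / (length * width)
    return length, width, area, perimeter, fullness
```

These five numbers form the input vector that the NFS model maps to the bread/durum class.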


Cancers ◽  
2021 ◽  
Vol 14 (1) ◽  
pp. 12
Author(s):  
Jose M. Castillo T. ◽  
Muhammad Arif ◽  
Martijn P. A. Starmans ◽  
Wiro J. Niessen ◽  
Chris H. Bangma ◽  
...  

The computer-aided analysis of prostate multiparametric MRI (mpMRI) could improve significant-prostate-cancer (PCa) detection. Various deep-learning- and radiomics-based methods for significant-PCa segmentation or classification have been reported in the literature. To assess the generalizability of the performance of these methods, using various external data sets is crucial. While deep-learning and radiomics approaches have been compared on the same single-center data set, a comparison of their performances across data sets from different centers and different scanners is lacking. The goal of this study was to compare the performance of a deep-learning model with that of a radiomics model for significant-PCa diagnosis across various patient cohorts. We included the data from two consecutive patient cohorts from our own center (n = 371 patients) and two external sets, one a publicly available patient cohort (n = 195 patients) and the other containing data from patients from two hospitals (n = 79 patients). Using mpMRI, the radiologist tumor delineations and pathology reports were collected for all patients. During training, one of our patient cohorts (n = 271 patients) was used for both deep-learning- and radiomics-model development, and the three remaining cohorts (n = 374 patients) were kept as unseen test sets. The performances of the models were assessed in terms of the area under the receiver-operating-characteristic curve (AUC). Whereas the internal cross-validation showed a higher AUC for the deep-learning approach, the radiomics model obtained AUCs of 0.88, 0.91 and 0.65 on the independent test sets, compared to AUCs of 0.70, 0.73 and 0.44 for the deep-learning model. Our radiomics model, based on delineated regions, thus proved a more accurate tool for significant-PCa classification in the three unseen test sets than the fully automated deep-learning model.
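The comparison above rests entirely on the AUC. For reference, here is a self-contained sketch of the AUC computed as the rank statistic: the probability that a randomly chosen positive case is scored above a randomly chosen negative one, with ties counted as one half. This is the standard definition, not code from the study.

```python
import numpy as np

def auc(labels, scores):
    """AUC as a rank statistic: P(score of a random positive >
    score of a random negative), ties counted as 0.5."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

With per-patient significant-PCa probabilities from either model, this single number summarizes ranking quality on each unseen test set.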


2020 ◽  
Vol 39 (10) ◽  
pp. 734-741
Author(s):  
Sébastien Guillon ◽  
Frédéric Joncour ◽  
Pierre-Emmanuel Barrallon ◽  
Laurent Castanié

We propose new metrics to measure the performance of a deep learning model applied to seismic interpretation tasks such as fault and horizon extraction. Faults and horizons are thin geologic boundaries (1 pixel thick on the image) for which a small prediction error could lead to inappropriately large variations in common metrics (precision, recall, and intersection over union). Through two examples, we show how classical metrics could fail to indicate the true quality of fault or horizon extraction. Measuring the accuracy of reconstruction of thin objects or boundaries requires introducing a tolerance distance between ground truth and prediction images to manage the uncertainties inherent in their delineation. We therefore adapt our metrics by introducing a tolerance function and illustrate their ability to manage uncertainties in seismic interpretation. We compare classical and new metrics through different examples and demonstrate the robustness of our metrics. Finally, we show on a 3D West African data set how our metrics are used to tune an optimal deep learning model.
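A minimal sketch of the tolerance idea described above (not the paper's exact metric or tolerance function): a predicted boundary pixel counts as a hit when some ground-truth pixel lies within a tolerance distance, and symmetrically for recall. With tolerance zero this reduces to the classical pixel-wise metrics, which score a 1-pixel-shifted fault as a total miss.

```python
import numpy as np

def tolerant_precision_recall(truth, pred, tol):
    """Precision/recall for thin-object masks with a Euclidean distance
    tolerance: a predicted pixel is a hit if a ground-truth pixel lies
    within `tol`, and a truth pixel is recalled if a predicted pixel does."""
    t = np.argwhere(truth)
    p = np.argwhere(pred)
    if len(p) == 0 or len(t) == 0:
        return 0.0, 0.0
    # Pairwise distances between all truth and prediction pixels.
    d = np.linalg.norm(t[:, None, :] - p[None, :, :], axis=2)
    precision = (d.min(axis=0) <= tol).mean()  # each predicted pixel vs nearest truth
    recall = (d.min(axis=1) <= tol).mean()     # each truth pixel vs nearest prediction
    return float(precision), float(recall)
```

A horizon predicted one pixel below the ground truth scores 0/0 at `tol=0` but 1/1 at `tol=1`, which is the behavior the adapted metrics are designed to capture.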


2020 ◽  
Vol 27 (8) ◽  
pp. 1891-1912
Author(s):  
Hengqin Wu ◽  
Geoffrey Shen ◽  
Xue Lin ◽  
Minglei Li ◽  
Boyu Zhang ◽  
...  

Purpose: This study proposes an approach to solve the fundamental problem in using query-based methods (i.e. search engines and patent retrieval tools) to screen patents of information and communication technology in construction (ICTC). The fundamental problem is that ICTC incorporates various techniques and thus cannot be simply represented by man-made queries. To address this concern, this study develops a binary classifier by utilizing deep learning and NLP techniques to automatically identify whether a patent is relevant to ICTC, thus accurately screening a corpus of ICTC patents.
Design/methodology/approach: This study employs NLP techniques to convert the textual data of patents into numerical vectors. Then, a supervised deep learning model is developed to learn the relations between the input vectors and outputs.
Findings: The validation results indicate that (1) the proposed approach performs better in screening ICTC patents than traditional machine learning methods; (2) besides the United States Patent and Trademark Office (USPTO), which provides structured and well-written patents, the approach can also accurately screen patents from the Derwent Innovations Index (DIX), in which patents are written in different genres.
Practical implications: This study contributes a specific collection of ICTC patents, which is not provided by the patent offices.
Social implications: The proposed approach contributes an alternative manner of gathering a corpus of patents for domains like ICTC that neither exist as a searchable classification in patent offices nor are accurately represented by man-made queries.
Originality/value: A deep learning model with two layers of neurons is developed to learn the non-linear relations between the input features and outputs, providing better performance than traditional machine learning models. This study uses the advanced NLP techniques of lemmatization and part-of-speech (POS) tagging to process the textual data of ICTC patents.
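The approach converts patent text into numerical vectors before classification. As an illustrative sketch of that vectorization step only (a plain term-frequency encoding over already-tokenized, lemmatized text; the paper's exact NLP pipeline is not reproduced here, and all names are mine):

```python
import numpy as np

def build_vocab(docs):
    """Sorted vocabulary from tokenized (e.g. lemmatized) patent abstracts."""
    return sorted({tok for doc in docs for tok in doc})

def vectorize(doc, vocab):
    """Term-frequency vector: the numeric input fed to the classifier."""
    index = {tok: i for i, tok in enumerate(vocab)}
    v = np.zeros(len(vocab))
    for tok in doc:
        if tok in index:
            v[index[tok]] += 1
    return v
```

Each patent becomes a fixed-length vector; the two-layer network then maps such vectors to a relevant/not-relevant label for ICTC.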

