Product Inspection Methodology via Deep Learning: An Overview

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5039
Author(s):  
Tae-Hyun Kim ◽  
Hye-Rin Kim ◽  
Yeong-Jun Cho

In this study, we present a framework for product quality inspection based on deep learning techniques. First, we categorize several deep learning models that can be applied to product inspection systems and explain in detail the steps for building a deep-learning-based inspection system. Second, we address connection schemes that efficiently link deep learning models to product inspection systems. Finally, we propose an effective method for maintaining and enhancing a product inspection system in line with its improvement goals. Owing to these methods, the proposed system shows good maintainability and stability. All of the proposed methods are integrated into a unified framework, and we provide detailed explanations of each. To verify the effectiveness of the proposed system, we compare and analyze the performance of the methods in various test scenarios. We expect that our study will provide useful guidelines to readers who wish to implement deep-learning-based systems for product inspection.
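A minimal sketch of the kind of CNN-based pass/fail step such an inspection framework might wrap, not the authors' implementation; the model choice, class names and rejection threshold are illustrative assumptions.

```python
import torch
import torchvision.transforms as T
from torchvision.models import resnet18

CLASSES = ["ok", "defect"]  # hypothetical inspection labels

model = resnet18(num_classes=len(CLASSES))   # would be trained on product images
model.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def inspect(pil_image, reject_threshold=0.5):
    """Return the predicted label and defect probability for one product image."""
    x = preprocess(pil_image).unsqueeze(0)           # [1, 3, 224, 224]
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    defect_prob = probs[CLASSES.index("defect")].item()
    return ("defect" if defect_prob >= reject_threshold else "ok"), defect_prob
```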

2021 ◽  
Vol 22 (15) ◽  
pp. 7911
Author(s):  
Eugene Lin ◽  
Chieh-Hsin Lin ◽  
Hsien-Yuan Lane

A growing body of evidence suggests that deep learning approaches can serve as an essential cornerstone for the diagnosis and prediction of Alzheimer’s disease (AD). In light of the latest advancements in neuroimaging and genomics, numerous deep learning models are being exploited in recent research to distinguish AD from normal controls and/or from mild cognitive impairment. In this review, we focus on the latest developments in AD prediction using deep learning techniques in cooperation with the principles of neuroimaging and genomics. First, we describe various investigations that use deep learning algorithms to establish AD prediction from genomics or neuroimaging data. In particular, we delineate relevant integrative neuroimaging-genomics investigations that leverage deep learning methods to forecast AD by incorporating both neuroimaging and genomics data. Moreover, we outline the limitations of the recent deep learning investigations of AD with neuroimaging and genomics. Finally, we discuss challenges and directions for future research. The main novelty of this work is that we summarize the major points of these investigations and scrutinize their similarities and differences.
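A hedged sketch of the integrative set-up the review surveys: two branches encode neuroimaging-derived features and genomic features, are concatenated, and predict AD versus control. Layer sizes and input dimensions are illustrative assumptions, not taken from any cited study.

```python
import torch
import torch.nn as nn

class ImagingGenomicsFusion(nn.Module):
    def __init__(self, n_imaging_feats=200, n_genomic_feats=1000, n_classes=2):
        super().__init__()
        self.imaging = nn.Sequential(nn.Linear(n_imaging_feats, 64), nn.ReLU())
        self.genomic = nn.Sequential(nn.Linear(n_genomic_feats, 64), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, n_classes))

    def forward(self, imaging_x, genomic_x):
        # concatenate the two modality embeddings before classification
        fused = torch.cat([self.imaging(imaging_x), self.genomic(genomic_x)], dim=1)
        return self.head(fused)

# Forward pass on random tensors standing in for a small subject batch.
model = ImagingGenomicsFusion()
logits = model(torch.randn(4, 200), torch.randn(4, 1000))   # shape [4, 2]
```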


2021 ◽  
Vol 108 (Supplement_3) ◽  
Author(s):  
L F Sánchez Peralta ◽  
J F Ortega Morán ◽  
Cr L Saratxaga ◽  
J B Pagador ◽  
A Picón ◽  
...  

Abstract INTRODUCTION Deep learning techniques have significantly contributed to the field of medical image analysis. In the case of colorectal cancer, they have shown great utility for increasing the adenoma detection rate at colonoscopy, but a common validation methodology is still missing. In this study, we present preliminary efforts towards the definition of a validation framework. MATERIAL AND METHODS Different models based on different backbones and encoder-decoder architectures have been trained with a publicly available dataset that contains white-light and NBI colonoscopy videos, with 76 different lesions from colonoscopy procedures in 48 human patients. A computer-aided detection (CADe) demonstrator has been implemented to show the performance of the models. RESULTS The CADe demonstrator shows the areas detected as polyp by overlaying the predicted mask on the endoscopic image. It allows selecting the video to be used, among those from the test set. Although it only presents basic features such as play, pause and moving to the next video, it easily loads the model and allows for visualization of results. The demonstrator is accompanied by a set of metrics to be used depending on the target task: polyp detection, localization or segmentation. CONCLUSIONS The use of this CADe demonstrator, together with a publicly available dataset and predefined metrics, will allow for an easier and fairer comparison of methods. Further work is still required to validate the proposed framework.
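A minimal sketch of the kind of segmentation metrics such a validation framework could predefine: Dice and IoU between a predicted and a ground-truth binary polyp mask. The decision threshold mentioned in the comment is an assumption.

```python
import numpy as np

def dice(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks of equal shape."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def iou(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over union between two binary masks of equal shape."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)

# A frame could count as a correct polyp localization if, e.g., IoU exceeds 0.5.
```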


Agronomy ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 646
Author(s):  
Bini Darwin ◽  
Pamela Dharmaraj ◽  
Shajin Prince ◽  
Daniela Elena Popescu ◽  
Duraisamy Jude Hemanth

Precision agriculture is a crucial way to achieve greater yields by utilizing the natural resources of a diverse environment. The yield of a crop may vary from year to year depending on variations in climate, soil parameters and the fertilizers used. Automation in the agricultural industry moderates the usage of resources and can increase the quality of food in the post-pandemic world. Agricultural robots have been developed for crop seeding, monitoring, weed control, pest management and harvesting. Physical counting of fruitlets, flowers or fruits at various phases of growth is a labour-intensive as well as expensive procedure for crop yield estimation. Remote sensing technologies offer accuracy and reliability in crop yield prediction and estimation. Automated image analysis with computer vision and deep learning models provides precise field and yield maps. In this review, it has been observed that the application of deep learning techniques has provided better accuracy for smart farming. The crops considered in the study include fruits such as grapes, apples, citrus and tomatoes, and crops such as sugarcane, corn, soybean, cucumber, maize and wheat. The works surveyed in this paper are available as products for applications such as robot harvesting, weed detection and pest infestation management. The methods that made use of conventional deep learning techniques provided an average accuracy of 92.51%. This paper elucidates the diverse automation approaches for crop yield detection with virtual analysis and classifier approaches. Technical challenges and limitations of the deep learning techniques, as well as future investigations, are also surveyed. This work highlights the machine vision and deep learning models that need to be explored for improving automated precision farming, especially during this pandemic.
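A hedged sketch of one common yield-mapping step covered by the review: count per-image fruit detections from any trained detector that returns confidence scores, then scale counts to a rough plot-level estimate. The score threshold, weight per fruit and visibility correction are illustrative assumptions.

```python
from typing import Dict, List

def count_fruit(detections: Dict[str, List[float]], score_threshold: float = 0.5) -> int:
    """detections = {"scores": [...]} as produced by a trained fruit detector."""
    return sum(1 for s in detections["scores"] if s >= score_threshold)

def estimate_plot_yield(per_image_counts: List[int], kg_per_fruit: float = 0.15,
                        visibility_correction: float = 1.3) -> float:
    """Scale visible-fruit counts to an approximate plot yield in kilograms."""
    return sum(per_image_counts) * kg_per_fruit * visibility_correction

# Two of the three detections pass the threshold, giving roughly 0.39 kg.
print(estimate_plot_yield([count_fruit({"scores": [0.9, 0.7, 0.4]})]))
```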


Author(s):  
Vu Tuan Hai ◽  
Dang Thanh Vu ◽  
Huynh Ho Thi Mong Trinh ◽  
Pham The Bao

Recent advances in deep learning models have shown promising potential for object removal, the task of replacing undesired objects with appropriate pixel values using the known context. Deep-learning-based object removal is commonly modeled as image-to-image (Img2Img) translation or inpainting. Instead of dealing with a large context, this paper aims at a specific application of object removal: erasing braces traces from an image of teeth with braces (the braces2teeth problem). We solve the problem with three methods corresponding to different datasets. First, we use the CycleGAN model to deal with the fact that paired training data are not available. In the second case, we create pseudo-paired data to train the Pix2Pix model. In the last case, we combine GraphCut with a generative inpainting model to build a user-interactive tool that can improve the result when the user is not satisfied with previous results. To the best of our knowledge, this study is one of the first attempts to address the braces2teeth problem using deep learning techniques, and it can be applied in various fields, from health care to entertainment.
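A minimal sketch (assumed, not the authors' code) of the CycleGAN cycle-consistency idea used for the unpaired braces-to-teeth setting: two generators map braces→teeth and teeth→braces, and reconstructions are penalised with an L1 loss. The generators here are single-layer stand-ins.

```python
import torch
import torch.nn as nn

G_braces2teeth = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))  # stand-in generator
G_teeth2braces = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))  # stand-in generator
l1 = nn.L1Loss()

def cycle_consistency_loss(braces_batch, teeth_batch, lam=10.0):
    fake_teeth = G_braces2teeth(braces_batch)
    fake_braces = G_teeth2braces(teeth_batch)
    rec_braces = G_teeth2braces(fake_teeth)    # braces -> teeth -> braces
    rec_teeth = G_braces2teeth(fake_braces)    # teeth -> braces -> teeth
    return lam * (l1(rec_braces, braces_batch) + l1(rec_teeth, teeth_batch))

# Random tensors stand in for unpaired image batches during one training step.
loss = cycle_consistency_loss(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
```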


2021 ◽  
Author(s):  
Ramy Abdallah ◽  
Clare E. Bond ◽  
Robert W.H. Butler

Machine learning is being presented as a new solution for a wide range of geoscience problems. Primarily, machine learning has been used for 3D seismic data processing, seismic facies analysis and well-log data correlation. The rapid development of technology, open-source artificial intelligence libraries and the accessibility of affordable graphics processing units (GPUs) make the application of machine learning in geosciences increasingly tractable. However, the application of artificial intelligence in structural interpretation workflows of subsurface datasets is still ambiguous. This study aims to use machine learning techniques to classify images of folds and fold-thrust structures. Here we show that convolutional neural networks (CNNs), as supervised deep learning techniques, provide excellent algorithms to discriminate between geological image datasets. Four different image datasets have been used to train and test the machine learning models: a seismic character dataset with five classes (faults, folds, salt, flat layers and basement), fold types with three classes (buckle, chevron and conjugate), fault types with three classes (normal, reverse and thrust), and fold-thrust geometries with three classes (fault-bend fold, fault-propagation fold and detachment fold). These image datasets are used to investigate three machine learning models: a feedforward linear neural network model and two convolutional neural network models (a sequential model of 2D convolutional layers, and residual-block models, i.e. ResNet with 9, 34 and 50 layers). Validation and testing datasets form a critical part of assessing the models' performance accuracy. The ResNet model records the highest accuracy of the machine learning models tested. Our CNN image classification model analysis provides a framework for applying machine learning to increase structural interpretation efficiency, and shows that CNN classification models can be applied effectively to geoscience problems. The study provides a starting point for applying unsupervised machine learning approaches to subsurface structural interpretation workflows.
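A hedged sketch of the supervised CNN classification set-up described: fine-tune a ResNet on folders of labelled geological images. The dataset path and image size are assumptions, not the study's actual data layout.

```python
import torch.nn as nn
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
# Hypothetical folder-per-class layout, e.g. faults/, folds/, salt/, flat_layers/, basement/
train_ds = datasets.ImageFolder("seismic_character/train", transform=tfm)
train_dl = DataLoader(train_ds, batch_size=16, shuffle=True)

model = models.resnet34()                 # pretrained weights could be loaded instead
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))  # 5-class head
```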


2019 ◽  
Author(s):  
Ismael Araujo ◽  
Juan Gamboa ◽  
Adenilton Silva

Recognizing patterns that are usually imperceptible to human beings has been one of the main advantages of using machine learning algorithms. The use of deep learning techniques has been promising for classification problems, especially those related to image classification. The classification of gases detected by an artificial nose is another area where deep learning techniques can be used to seek classification improvements. Succeeding in such a classification task can bring many advantages for quality control, as well as for preventing accidents. In this work, we present some deep learning models created specifically for the task of gas classification.
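A minimal sketch of one plausible gas classifier over artificial-nose readings: a small 1D CNN over a window of multi-sensor samples. The sensor count, window length and number of gas classes are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

N_SENSORS, WINDOW, N_GASES = 8, 128, 4   # hypothetical e-nose configuration

model = nn.Sequential(
    nn.Conv1d(N_SENSORS, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, N_GASES),
)

# A batch of 16 sensor windows produces a [16, 4] tensor of class scores.
logits = model(torch.randn(16, N_SENSORS, WINDOW))
```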


2021 ◽  
Vol 40 ◽  
pp. 03030
Author(s):  
Mehdi Surani ◽  
Ramchandra Mangrulkar

Over the past years, the exponential growth of social media usage has given every individual the power to share their opinions freely. This has led to numerous threats, with users exploiting their freedom of speech by spreading hateful comments, using abusive language, carrying out personal attacks, and sometimes even cyberbullying. Determining abusive content is not a difficult task, and many social media platforms already have solutions available, but many are still searching for more efficient ways to overcome this issue. Traditional approaches use machine learning models to identify negative content posted on social media: shaming categories are explored, and content is assigned the corresponding label. Such categorization is easy to detect because the contextual language used is direct. However, the use of irony to mock or convey contempt is also a part of public shaming and must be considered when categorizing the shaming labels. In this research paper, various shaming types, namely toxic, severe toxic, obscene, threat, insult, identity hate and sarcasm, are predicted using deep learning approaches such as CNN and LSTM. These models have been studied alongside traditional models to determine which model gives the most accurate results.
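A hedged sketch of the LSTM variant compared in the paper: an embedding plus LSTM encoder with a sigmoid head over the seven shaming labels. The vocabulary size and layer dimensions are assumptions; a CNN variant would swap the LSTM for 1D convolutions over the embeddings.

```python
import torch
import torch.nn as nn

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate", "sarcasm"]

class ShamingLSTM(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, len(LABELS))

    def forward(self, token_ids):                 # [batch, seq_len] of token indices
        _, (h, _) = self.lstm(self.embed(token_ids))
        return torch.sigmoid(self.out(h[-1]))     # per-label probabilities (multi-label)

# Random token ids stand in for a batch of four tokenized comments.
probs = ShamingLSTM()(torch.randint(0, 20000, (4, 50)))   # shape [4, 7]
```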


2020 ◽  
Author(s):  
Vruddhi Shah ◽  
Rinkal Keniya ◽  
Akanksha Shridharani ◽  
Manav Punjabi ◽  
Jainam Shah ◽  
...  

Early diagnosis of coronavirus disease 2019 (COVID-19) is essential for controlling this pandemic. COVID-19 has been spreading rapidly all over the world, and no vaccine is available for this virus yet. Fast and accurate COVID-19 screening is possible using computed tomography (CT) scan images. The deep learning techniques used in the proposed method are based on a convolutional neural network (CNN). Our manuscript focuses on differentiating CT scan images of COVID-19 from non-COVID-19 CT scans using different deep learning techniques. A self-developed model named CTnet-10 was designed for COVID-19 diagnosis, achieving an accuracy of 82.1%. We also tested other models: DenseNet-169, VGG-16, ResNet-50, InceptionV3 and VGG-19. VGG-19 proved superior, with an accuracy of 94.52%, compared to all other deep learning models. Automated diagnosis of COVID-19 from CT scan images can be used by doctors as a quick and efficient method for COVID-19 screening.
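A minimal sketch of the transfer-learning comparison described: reuse a VGG-19 backbone and replace its final layer with a two-class head (COVID-19 vs. non-COVID-19). The weight initialisation and training details are assumptions.

```python
import torch.nn as nn
from torchvision import models

model = models.vgg19()                       # pretrained weights could be loaded instead
# Replace the last classifier layer (originally 1000 ImageNet classes) with a 2-class head.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

criterion = nn.CrossEntropyLoss()            # trained on labelled CT slices
```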


2020 ◽  
Vol 12 (10) ◽  
pp. 1581 ◽  
Author(s):  
Daniel Perez ◽  
Kazi Islam ◽  
Victoria Hill ◽  
Richard Zimmerman ◽  
Blake Schaeffer ◽  
...  

Coastal ecosystems are critically affected by seagrass, both economically and ecologically. However, reliable seagrass distribution information is lacking in nearly all parts of the world because of the excessive costs associated with its assessment. In this paper, we develop two deep learning models for automatic seagrass distribution quantification based on 8-band satellite imagery. Specifically, we implemented a deep capsule network (DCN) and a deep convolutional neural network (CNN) to assess seagrass distribution through regression. The DCN model first determines whether seagrass is present in the image through classification; if it is, the model then quantifies the seagrass through regression. During training, the regression and classification modules are jointly optimized to achieve end-to-end learning. The CNN model is trained strictly for regression on seagrass and non-seagrass patches. In addition, we propose a transfer learning approach to transfer knowledge from deep models trained at one location in order to perform seagrass quantification at a different location. We evaluate the proposed methods on three WorldView-2 satellite images taken of coastal areas in Florida. Experimental results show that the proposed DCN and CNN models performed similarly and achieved much better results than a linear regression model and a support vector machine. We also demonstrate that using transfer learning techniques for the quantification of seagrass significantly improved the results compared to directly applying the deep models to new locations.
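A hedged sketch of the CNN regression branch: a small convolutional network mapping an 8-band image patch to a seagrass-cover fraction. Patch size and layer widths are assumptions; the transfer learning step would fine-tune these weights on patches from the new location.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 8 spectral bands in
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid(),                 # cover fraction in [0, 1]
)

# Four random 32x32 multispectral patches produce four cover predictions.
cover = model(torch.randn(4, 8, 32, 32))
```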


Author(s):  
Chigozie Nwankpa ◽  
Solomon Eze ◽  
Winifred Ijomah ◽  
Anthony Gachagan ◽  
Stephen Marshall

Abstract Deep learning has emerged as a state-of-the-art learning technique across a wide range of applications, including image recognition, object detection and localisation, natural language processing, and prediction and forecasting systems. With such broad applicability, deep learning could be used in new areas of application, including remanufacturing. Remanufacturing is the process of taking used products through disassembly, inspection, cleaning, reconditioning, reassembly and testing to ascertain that their condition meets that of new products, with warranty. This process is complex and requires a good understanding of the respective stages for proper analysis. Inspection is a critical process in remanufacturing, as it guarantees the quality of the remanufactured products. It is currently an expensive manual operation that, in most cases, depends on operator expertise. This research investigates the application of deep learning algorithms to inspection in remanufacturing, towards automating the inspection process. This paper presents a novel vision-based inspection system based on a deep convolutional neural network (DCNN) for eight types of defects, including pitting, rust, cracks and combination faults. The material used for this feasibility study was a locally purchased 100 cm × 150 cm mild steel plate, captured using a 0.3-megapixel USB webcam. The performance of this preliminary study indicates that the DCNN can classify with up to 100% accuracy on validation data and above 96% accuracy on a live video feed, using 80% of the sample dataset for training and the remaining 20% for testing. Therefore, for remanufacturing parts inspection, the DCNN approach has high potential as a method that could surpass the current technologies used in the design of inspection systems. This research is the first to apply deep learning techniques to remanufacturing inspection. The proposed method offers the potential to eliminate expert judgement in inspection, save cost, increase throughput and improve precision. This preliminary study demonstrates that deep learning techniques have the potential to revolutionise inspection in remanufacturing. This research offers valuable insight into these opportunities, serving as a starting point for future applications of deep learning algorithms to remanufacturing.
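A hedged sketch of the DCNN defect-classification set-up: a small CNN over webcam frames with an 80/20 train/test split. The class folder layout, image size and network depth are assumptions rather than the authors' exact configuration.

```python
import torch.nn as nn
from torch.utils.data import random_split
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize((128, 128)), transforms.ToTensor()])
ds = datasets.ImageFolder("steel_defects", transform=tfm)   # hypothetical 8-class dataset
n_train = int(0.8 * len(ds))
train_ds, test_ds = random_split(ds, [n_train, len(ds) - n_train])  # 80/20 split

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 128 -> 64
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
    nn.Flatten(), nn.Linear(32 * 32 * 32, len(ds.classes)),       # defect-class scores
)
```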

