Integration of an IIoT Platform with a Deep Learning Based Computer Vision System for Seedling Quality Control Automation

Author(s):  
Erick Fiestas S. ◽  
Paulo Linares O. ◽  
Jorge Alva A. ◽  
Sixto Prado G.
Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 343
Author(s):  
Kim Bjerge ◽  
Jakob Bonde Nielsen ◽  
Martin Videbæk Sepstrup ◽  
Flemming Helsing-Nielsen ◽  
Toke Thomas Høye

Insect monitoring methods are typically very time-consuming and involve substantial investment in species identification following manual trapping in the field. Insect traps are often only serviced weekly, resulting in low temporal resolution of the monitoring data, which hampers the ecological interpretation. This paper presents a portable computer vision system capable of attracting and detecting live insects. More specifically, the paper proposes detection and classification of species by recording images of live individuals attracted to a light trap. An Automated Moth Trap (AMT) with multiple light sources and a camera was designed to attract and monitor live insects during twilight and night hours. A computer vision algorithm referred to as Moth Classification and Counting (MCC), based on deep learning analysis of the captured images, tracked and counted the number of insects and identified moth species. Observations over 48 nights resulted in the capture of more than 250,000 images with an average of 5675 images per night. A customized convolutional neural network was trained on 2000 labeled images of live moths represented by eight different classes, achieving a high validation F1-score of 0.93. The algorithm measured an average classification and tracking F1-score of 0.71 and a tracking detection rate of 0.79. Overall, the proposed computer vision system and algorithm showed promising results as a low-cost solution for non-destructive and automatic monitoring of moths.
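As an illustrative sketch (not the paper's implementation), the per-class F1-scores used to evaluate the moth classifier can be computed from true and predicted labels as follows; the class names and averaging scheme here are placeholders, not the eight species classes of the MCC algorithm.

```python
def f1_scores(y_true, y_pred, classes):
    """Per-class F1 for a multi-class classifier (sketch).

    For each class, precision and recall are computed from
    true-positive, false-positive and false-negative counts,
    then combined into F1 = 2*P*R / (P + R).
    """
    scores = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores[c] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores
```

A macro-averaged score (the mean over classes) is one common way to summarise such per-class values into a single validation figure.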


Author(s):  
Kartik Gupta ◽  
Cindy Grimm ◽  
Burak Sencer ◽  
Ravi Balasubramanian

This paper presents a computer vision system for evaluating the quality of deburring and edge breaking on aluminum and steel blocks. The technique produces both quantitative (size) and qualitative (quality) measures of the chamfering operation from images taken with an off-the-shelf camera. We demonstrate that the proposed computer vision system can detect edge chamfering geometry within a 1–2 mm range. The proposed technique requires neither precise calibration of the camera to the part nor specialized hardware beyond a macro lens; off-the-shelf components and a CAD model of the original part geometry are used for calibration. We also demonstrate the effectiveness of the proposed technique on edge breaking quality control.
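A minimal sketch of the kind of measurement involved: estimating a chamfer width from a one-dimensional intensity profile taken perpendicular to the edge, where the chamfer facet reflects light differently from the surrounding faces. The threshold and millimetre-per-pixel scale below are assumptions for illustration, not the paper's CAD-based calibration procedure.

```python
def chamfer_width_mm(profile, threshold, mm_per_pixel):
    """Estimate chamfer width from an intensity profile (sketch).

    Counts the longest contiguous run of pixels whose brightness
    exceeds `threshold` (the reflective chamfer band) and converts
    it to millimetres via the camera scale `mm_per_pixel`.
    """
    runs, run = [], 0
    for v in profile:
        if v >= threshold:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:
        runs.append(run)
    return max(runs, default=0) * mm_per_pixel
```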


2019 ◽  
Vol 2 (1) ◽  
Author(s):  
Serena Yeung ◽  
Francesca Rinaldo ◽  
Jeffrey Jopling ◽  
Bingbin Liu ◽  
Rishab Mehra ◽  
...  

2019 ◽  
Vol 8 (2) ◽  
pp. 1746-1750

Segmentation is an important stage in any computer vision system. It involves discarding the objects that are not of interest and extracting only the object of interest. Automated segmentation becomes very difficult in the presence of complex backgrounds and other challenges such as illumination variation and occlusion. In this project, we design an automated segmentation system that uses a deep learning algorithm to segment images with complex backgrounds.
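A standard way to score such a segmentation system is intersection-over-union between the predicted and ground-truth masks. The sketch below (an illustration, not part of the project described above) operates on flat binary masks.

```python
def iou(mask_pred, mask_true):
    """Intersection-over-union of two binary masks (flat 0/1 lists).

    Returns |pred AND true| / |pred OR true|; two empty masks
    are treated as a perfect match.
    """
    inter = sum(1 for p, t in zip(mask_pred, mask_true) if p and t)
    union = sum(1 for p, t in zip(mask_pred, mask_true) if p or t)
    return inter / union if union else 1.0
```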


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Núria Banús ◽  
Imma Boada ◽  
Pau Xiberta ◽  
Pol Toldrà ◽  
Narcís Bustins

Quality control is a key process designed to ensure that only products satisfying the defined quality requirements reach the end consumer or the next step in a production line. In the food industry, in the packaging step, many products are still evaluated by human operators. To automate the process and improve efficiency and effectiveness, computer vision and artificial intelligence techniques can be applied. This automation is challenging, since specific strategies designed according to the application scenario are required. Focusing on the quality control of the sealing and closure of matrix-shaped thermoforming food packages, the aim of the article is to propose a deep-learning-based solution designed to perform the quality control automatically while satisfying production cadence and ensuring 100% inline inspection of the products. In particular, the designed computer vision system and the image-based criteria defined to determine when a product has to be accepted or rejected are presented. In addition, the vision control software is described, with special emphasis on the different convolutional neural network (CNN) architectures that have been considered (ResNet18, ResNet50, VGG19 and DenseNet161, non-pre-trained and pre-trained on ImageNet) and on the specifically designed dataset. To test the solution, different experiments are carried out in the laboratory and also in a real scenario, concluding that the proposed CNN-based approach improves the efficiency and security of the quality control process. Optimal results are obtained with the pre-trained DenseNet161, achieving false positive rates that range from 0.03 to 0.30% and false negative rates that range from 0 to 0.07%, with a rejection rate between 0.64 and 5.09% of production, and being able to detect at least 99.93% of the sealing defects that occur in any production.
The modular design of our solution, together with the provided description, allows it to be adapted to similar scenarios and to new deep-learning models, preventing faulty products from reaching end consumers by removing them from the automated production line.
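The rates reported above can be computed from a confusion matrix over inspected packages. The sketch below assumes the quality-control convention that "positive" means a defect is flagged (the package is rejected); these definitions are inferred from the reported percentages, not taken from the article.

```python
def inline_inspection_rates(tp, fp, tn, fn):
    """Inspection rates from confusion counts (sketch).

    tp: defective packages correctly rejected
    fp: good packages wrongly rejected
    tn: good packages correctly accepted
    fn: defective packages wrongly accepted

    Returns (false positive rate over good items,
             false negative rate over defective items,
             overall rejection rate over all items).
    """
    total = tp + fp + tn + fn
    fpr = fp / (fp + tn) if fp + tn else 0.0
    fnr = fn / (fn + tp) if fn + tp else 0.0
    rejection = (tp + fp) / total if total else 0.0
    return fpr, fnr, rejection
```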


Author(s):  
M. Senthamil Selvi ◽  
K. Deepa ◽  
N. Saranya ◽  
S. Jansi Rani
