On line detection of defective apples using computer vision system combined with deep learning methods

2020, Vol 286, pp. 110102
Author(s): Shuxiang Fan, Jiangbo Li, Yunhe Zhang, Xi Tian, Qingyan Wang, ...
Sensors, 2021, Vol 21 (2), pp. 343
Author(s): Kim Bjerge, Jakob Bonde Nielsen, Martin Videbæk Sepstrup, Flemming Helsing-Nielsen, Toke Thomas Høye

Insect monitoring methods are typically very time-consuming and involve substantial investment in species identification following manual trapping in the field. Insect traps are often only serviced weekly, resulting in low temporal resolution of the monitoring data, which hampers ecological interpretation. This paper presents a portable computer vision system capable of attracting and detecting live insects. More specifically, the paper proposes detecting and classifying species by recording images of live individuals attracted to a light trap. An Automated Moth Trap (AMT) with multiple light sources and a camera was designed to attract and monitor live insects during twilight and night hours. A computer vision algorithm referred to as Moth Classification and Counting (MCC), based on deep learning analysis of the captured images, tracked and counted the number of insects and identified moth species. Observations over 48 nights resulted in the capture of more than 250,000 images, with an average of 5675 images per night. A customized convolutional neural network was trained on 2000 labeled images of live moths spanning eight different classes and achieved a high validation F1-score of 0.93. The algorithm achieved an average classification and tracking F1-score of 0.71 and a tracking detection rate of 0.79. Overall, the proposed computer vision system and algorithm showed promising results as a low-cost solution for non-destructive and automatic monitoring of moths.
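The abstract does not specify the architecture of the customized convolutional neural network, so the following is only a minimal PyTorch sketch of an eight-class image classifier of the kind described: a few convolution/pooling stages followed by a small fully connected head. The layer sizes and the 128x128 input resolution are assumptions for illustration, not the authors' MCC network.

```python
# Minimal sketch (not the authors' MCC implementation): a small CNN that maps
# cropped insect images to scores for eight moth/insect classes, as in the
# abstract. Input size and layer widths are assumed for illustration.
import torch
import torch.nn as nn

NUM_CLASSES = 8  # eight classes of live moths, per the abstract


class SmallMothCNN(nn.Module):
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 128 -> 64
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


model = SmallMothCNN()
dummy = torch.randn(1, 3, 128, 128)  # one fake 128x128 RGB crop of an insect
print(model(dummy).shape)            # torch.Size([1, 8]) -> per-class logits
```

In the pipeline described, such a classifier would be only one stage: detections are first localized and tracked across the nightly image sequence, and the per-species counts are then accumulated from the tracked individuals.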


2019, Vol 2 (1)
Author(s): Serena Yeung, Francesca Rinaldo, Jeffrey Jopling, Bingbin Liu, Rishab Mehra, ...

1993, Vol 115 (1), pp. 37-43
Author(s): Jong-Jin Park, A. Galip Ulsoy

An on-line flank wear estimation system, using the integrated method presented in Part 1 of the paper, is implemented in a laboratory environment, and its performance is evaluated through turning experiments. A computer vision system is developed using an image processing algorithm, a commercially available vision computer, and a microscopic lens. The developed algorithm is based on the difference between the intensity of the light reflected from the flank wear surface and that reflected from the background. This difference is significant, and an appropriate selection of the intensity threshold level yields an acceptable binary image of the flank wear. This image is used by the vision computer to calculate the flank wear. The flank wear model parameters that must be known a priori are determined through several preliminary experiments or from data available in the literature. Cutting conditions are selected to satisfy the assumptions made in the design of the adaptive observer presented in Part 1; the resulting conditions are typical of those used in finishing cutting operations. The integrated method is tested in turning experiments under both constant and time-varying cutting conditions, and it yields very accurate on-line estimation of flank wear development.
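The thresholding idea in this abstract translates almost directly into a few lines of image processing code. The sketch below is not the original 1993 implementation; it is a present-day Python/OpenCV approximation in which the bright wear land is separated from the darker background by a global intensity threshold and the wear width is read off the binary image column by column. The file name, threshold value, and pixel-to-millimetre scale are placeholders.

```python
# Minimal sketch of the thresholding approach described above (not the 1993
# system): segment the bright flank wear land from the darker background and
# estimate the wear width from the binary image.
import cv2
import numpy as np

PIXEL_SIZE_MM = 0.005   # assumed calibration: millimetres per pixel
THRESHOLD = 180         # assumed intensity threshold (0-255)

# Grayscale image of the tool flank region; file name is a placeholder.
image = cv2.imread("flank_region.png", cv2.IMREAD_GRAYSCALE)

# Binary image: wear land (bright) -> 255, background (dark) -> 0.
_, binary = cv2.threshold(image, THRESHOLD, 255, cv2.THRESH_BINARY)

# Wear width per image column, converted from pixels to millimetres; the
# column maximum approximates the maximum flank wear width.
wear_pixels_per_column = (binary > 0).sum(axis=0)
vb_max_mm = wear_pixels_per_column.max() * PIXEL_SIZE_MM
wear_area_mm2 = (binary > 0).sum() * PIXEL_SIZE_MM ** 2

print(f"Estimated maximum flank wear width: {vb_max_mm:.3f} mm")
print(f"Estimated wear land area: {wear_area_mm2:.4f} mm^2")
```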


2019, Vol 8 (2), pp. 1746-1750

Segmentation is an important stage in any computer vision system: it discards objects that are not of interest and extracts only the object of interest. Automated segmentation becomes very difficult in the presence of complex backgrounds and other challenges such as varying illumination and occlusion. In this project, we design an automated segmentation system based on a deep learning algorithm to segment images with complex backgrounds.
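The abstract does not name the deep learning algorithm, so the sketch below simply stands in a pretrained DeepLabV3 model from torchvision as one example of a segmenter that separates the object of interest from a complex background. The choice of network and the file name "scene.jpg" are assumptions for illustration, not the authors' system.

```python
# Minimal sketch, assuming a pretrained DeepLabV3 as a stand-in for the
# unspecified deep learning segmenter: produce a per-pixel mask and keep only
# the non-background pixels as the object of interest.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("scene.jpg").convert("RGB")   # placeholder input image
batch = preprocess(image).unsqueeze(0)           # shape: (1, 3, H, W)

with torch.no_grad():
    logits = model(batch)["out"]                 # shape: (1, 21, H, W)
mask = logits.argmax(dim=1).squeeze(0)           # per-pixel class labels

# Keep only the object of interest by treating class 0 as background.
foreground = mask != 0
print(f"Foreground covers {foreground.float().mean().item():.1%} of the image")
```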


Author(s): M. Senthamil Selvi, K. Deepa, N. Saranya, S. Jansi Rani
