U-Net-Based Foreign Object Detection Method Using Effective Image Acquisition System: A Case of Almond and Green Onion Flake Food Process

2021, Vol 13 (24), pp. 13834
Author(s): Guk-Jin Son, Dong-Hoon Kwak, Mi-Kyung Park, Young-Duk Kim, Hee-Chul Jung

Supervised deep learning-based foreign object detection algorithms are tedious, costly, and time-consuming to develop because they usually require large training datasets with annotations. These disadvantages often make them unsuitable for food quality evaluation and food manufacturing processes. Nevertheless, deep learning-based foreign object detection is an effective way to overcome the shortcomings of the conventional detection methods mainly used in food inspection. For example, color sorter machines cannot detect foreign objects whose color is similar to that of the food, and their performance is easily degraded by changes in illuminance. Therefore, to detect foreign objects, we use a deep learning-based foreign object detection algorithm (model). In this paper, we present a synthetic data generation method to efficiently acquire a deep learning training dataset that can be used for food quality evaluation and food manufacturing processes. Moreover, we perform data augmentation using color jitter on the synthetic dataset and show that this approach significantly improves the illumination invariance of models trained on synthetic data. The F1-score of the model trained on the synthetic almond dataset at an illumination intensity of 360 lux reached 0.82, similar to the F1-score of the model trained on the real dataset. Moreover, under changing illumination, the model trained on the real dataset combined with the synthetic dataset achieved a better F1-score than the model trained on the real dataset alone. In addition, compared with the traditional approach of detecting foreign objects with color sorter machines, the model trained on the synthetic dataset has clear advantages in accuracy and efficiency. These results indicate that the synthetic dataset not only competes with the real dataset but also complements it.
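
A minimal sketch of the color-jitter augmentation idea described above, using torchvision's ColorJitter transform; the parameter ranges, file name, and the use of torchvision are illustrative assumptions rather than the authors' exact pipeline.

```python
# Sketch: color-jitter augmentation on a synthetic training image.
# The jitter ranges and the file name are assumptions for illustration.
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    # Randomly perturb brightness/contrast/saturation/hue so the trained
    # model becomes less sensitive to changes in illuminance.
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.05),
    transforms.ToTensor(),
])

synthetic_image = Image.open("synthetic_almond_sample.png").convert("RGB")  # hypothetical file
augmented_tensor = augment(synthetic_image)  # shape: (3, H, W), values in [0, 1]
```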

Sensors, 2021, Vol 21 (16), pp. 5279
Author(s): Dong-Hoon Kwak, Guk-Jin Son, Mi-Kyung Park, Young-Duk Kim

The consumption of seaweed is increasing year by year worldwide, so foreign object inspection of seaweed is becoming increasingly important. Seaweed is mixed with various materials such as laver and Sargassum fusiforme, so its color varies even within the same sheet. In addition, its surface is uneven and greasy, which frequently causes diffuse reflections. For these reasons, detecting foreign objects in seaweed is difficult, and the accuracy of conventional foreign object detectors used at real manufacturing sites is below 80%. Real-time operation must also be considered: because seaweed is mass-produced, rapid inspection is essential. However, hyperspectral imaging techniques are generally not suitable for high-speed inspection. In this study, we overcome this limitation by using dimensionality reduction and simplified operations. To improve accuracy, the proposed algorithm is carried out in two stages. First, a subtraction method clearly distinguishes the seaweed from the conveyor belt and also detects some relatively easy-to-detect foreign objects. Second, a standardization inspection is performed based on the result of the subtraction step. Throughout, the proposed scheme relies on simple, lightweight calculations such as subtraction, division, and one-by-one matching, which achieves both high accuracy and low latency. In the performance evaluation, 60 normal seaweed samples and 60 samples containing foreign objects were used, and the proposed algorithm achieved 95% accuracy. Finally, by implementing the algorithm as a foreign object detection platform, we confirmed that real-time operation for rapid inspection is possible and that deployment at real manufacturing sites is feasible.
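
A rough sketch of the first-stage subtraction step described above, assuming a reference image of the empty conveyor belt is available; the threshold, channel averaging, and function names are illustrative assumptions, not the authors' implementation.

```python
# Sketch: subtract a reference image of the empty conveyor belt from the
# inspection image so the seaweed region (and strongly contrasting foreign
# objects) stand out. Threshold and channel handling are assumptions.
import numpy as np

def subtraction_mask(sample: np.ndarray, belt_reference: np.ndarray,
                     threshold: float = 0.15) -> np.ndarray:
    """Return a boolean mask of pixels that differ from the conveyor belt."""
    diff = np.abs(sample.astype(np.float32) - belt_reference.astype(np.float32))
    # Collapse spectral bands (or color channels) into a single difference score.
    score = diff.mean(axis=-1) / 255.0
    return score > threshold

# Usage: sample and belt_reference are (H, W, C) arrays from the imaging system.
# mask = subtraction_mask(sample, belt_reference)
```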


2021, Vol 7 (7), pp. 104
Author(s): Vladyslav Andriiashen, Robert van Liere, Tristan van Leeuwen, Kees Joost Batenburg

X-ray imaging is a widely used technique for non-destructive inspection of agricultural food products. One application of X-ray imaging is the autonomous, in-line detection of foreign objects in food samples. Examples of such inclusions are bone fragments in meat products, plastic and metal debris in fish, and fruit infestations. This article presents a processing methodology for unsupervised foreign object detection based on dual-energy X-ray absorptiometry (DEXA). A novel thickness correction model is introduced as a pre-processing technique for DEXA data. The aim of the model is to homogenize regions in the image that belong to the food product and to enhance contrast where a foreign object is present. In this way, the segmentation of the foreign object is more robust to noise and lack of contrast. The proposed methodology was applied to a dataset of 488 samples of meat products acquired from a conveyor belt. Approximately 60% of the samples contain foreign objects of different types and sizes, while the remaining samples are free of foreign objects. The results show that samples without foreign objects are correctly identified in 97% of cases and that the overall accuracy of foreign object detection reaches 95%.
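
As a hedged illustration of the dual-energy idea behind thickness correction, the sketch below computes a per-pixel ratio of log-attenuations from the low- and high-energy channels; for a single material this ratio is roughly independent of thickness, so the product region flattens while inclusions of a different material deviate. This is a generic DEXA ratio image, not the authors' specific correction model, and the flat-field values are assumptions.

```python
# Sketch: per-pixel ratio of log-attenuations from two X-ray energy channels.
# flat_low / flat_high are the unattenuated (flat-field) intensities.
import numpy as np

def dexa_ratio_image(low_kev: np.ndarray, high_kev: np.ndarray,
                     flat_low: float, flat_high: float,
                     eps: float = 1e-6) -> np.ndarray:
    """Ratio of log-attenuations; roughly thickness-independent for one material."""
    mu_low = -np.log(np.clip(low_kev / flat_low, eps, None))    # attenuation, low energy
    mu_high = -np.log(np.clip(high_kev / flat_high, eps, None))  # attenuation, high energy
    return mu_low / np.clip(mu_high, eps, None)

# Pixels whose ratio deviates strongly from the meat's characteristic value
# can then be segmented as foreign-object candidates.
```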


2020, Vol 10 (14), pp. 4744
Author(s): Hyukzae Lee, Jonghee Kim, Chanho Jung, Yongchan Park, Woong Park, ...

The arena fragmentation test (AFT) is one of the tests used to design an effective warhead. Conventionally, complex and expensive measuring equipment is used to test a warhead and measure important factors such as the size, velocity, and spatial distribution of fragments where they penetrate steel target plates. In this paper, instead of using specific sensors and equipment, we propose a deep learning-based object detection algorithm to detect fragments in the AFT. To this end, we acquired many high-speed videos and built an AFT image dataset with bounding boxes of warhead fragments. Our method fine-tunes an existing object detection network, Faster R-CNN (region-based convolutional neural network), on this dataset, with the network's anchor boxes modified for the task. We also employ a temporal filtering method, previously demonstrated as an effective non-fragment filtering scheme in our earlier image processing-based fragment detection approach, to capture only the first penetrating fragments among all detected fragments. We show that the performance of the proposed method is comparable to that of a sensor-based system under the same experimental conditions. A quantitative comparison with our earlier image processing-based method further demonstrates that deep learning significantly enhances performance on the AFT task; in other words, the proposed method outperforms the previous image processing-based method and produces outstanding results in locating exact fragment positions.
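
A hedged sketch of fine-tuning a Faster R-CNN with smaller anchor boxes, as described above, using the torchvision implementation; the anchor sizes, class count, and the choice of torchvision are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch: torchvision Faster R-CNN with smaller anchors for tiny fragments.
# Anchor sizes and the two-class setup (background + "fragment") are assumptions.
import torchvision
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Smaller anchors than the defaults, one size tuple per FPN level; keeping
# three aspect ratios per location leaves the pretrained RPN head compatible.
anchor_generator = AnchorGenerator(
    sizes=((8,), (16,), (32,), (64,), (128,)),
    aspect_ratios=((0.5, 1.0, 2.0),) * 5,
)

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.rpn.anchor_generator = anchor_generator

# Replace the box predictor for two classes: background and fragment.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)
```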


Sensors, 2018, Vol 18 (8), pp. 2484
Author(s): Weixing Zhang, Chandi Witharana, Weidong Li, Chuanrong Zhang, Xiaojiang Li, ...

Traditional methods of detecting and mapping utility poles are inefficient and costly because they rely on visual interpretation of high-quality data sources or intensive field inspection. The advent of deep learning for object detection provides an opportunity to detect utility poles from side-view optical images. In this study, we propose a deep learning-based method for automatically mapping roadside utility poles with crossarms (UPCs) from Google Street View (GSV) images. The method combines a state-of-the-art deep learning object detection algorithm (RetinaNet) with a modified brute-force line-of-bearing (LOB) measurement method to estimate the locations of detected roadside UPCs from GSV, where an LOB is the ray from the sensor location (the GSV mobile platform) towards the location of the target (here, a UPC). Experimental results indicate that: (1) both the average precision (AP) and the overall accuracy (OA) are around 0.78 when the intersection-over-union (IoU) threshold is greater than 0.3, based on testing 500 GSV images containing a total of 937 objects; and (2) around 2.6%, 47%, and 79% of the estimated utility pole locations fall within 1 m, 5 m, and 10 m buffer zones, respectively, around the reference locations. In general, this study indicates that most utility poles can be detected with deep learning even against a complex background, and that the LOB measurement method can estimate the locations of most UPCs.
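
A simplified sketch of the LOB idea described above: the bearing towards a detected pole is derived from the camera heading and the horizontal position of the bounding box in the panorama, and bearings from two camera positions are intersected in a local planar frame. The field-of-view handling and coordinate conventions are assumptions, and the paper's brute-force search over candidate intersections is omitted.

```python
# Sketch: bearing from a detection in a panorama, and intersection of two bearings.
import math

def bearing_to_target(camera_heading_deg: float, box_center_x: float,
                      image_width: float, fov_deg: float = 360.0) -> float:
    """Bearing (degrees, clockwise from north) towards a detected pole."""
    offset = (box_center_x / image_width - 0.5) * fov_deg
    return (camera_heading_deg + offset) % 360.0

def intersect_lobs(p1, b1_deg, p2, b2_deg):
    """Intersect two bearings from points p1, p2 (x east, y north, in meters)."""
    d1 = (math.sin(math.radians(b1_deg)), math.cos(math.radians(b1_deg)))
    d2 = (math.sin(math.radians(b2_deg)), math.cos(math.radians(b2_deg)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None  # nearly parallel bearings, no reliable intersection
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```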


2021, Vol 1995 (1), pp. 012046
Author(s): Meian Li, Haojie Zhu, Hao Chen, Lixia Xue, Tian Gao

2020, Vol 28 (S2)
Author(s): Asmida Ismail, Siti Anom Ahmad, Azura Che Soh, Mohd Khair Hassan, Hazreen Haizi Harith

The object detection system is a computer technology, related to image processing and computer vision, that detects instances of semantic objects of a certain class in digital images and videos. The system consists of two main processes: classification and detection. Once an object instance has been classified and detected, it is possible to obtain further information, including recognizing the specific instance, tracking the object over an image sequence, and extracting further information about the object and the scene. This paper presents a performance analysis of a deep learning object detector devised by combining a deep learning Convolutional Neural Network (CNN) for object classification with classic object detection algorithms. MiniVGGNet is the network architecture used to train the object classifier, and the data used for this purpose were collected from a specific indoor building environment. For object detection, sliding windows and image pyramids were used to localize and detect objects at different locations and scales, and non-maxima suppression (NMS) was used to obtain the final bounding box for each detected object. Based on the experimental results, the classification accuracy of the network is 80% to 90%, and the time for the system to detect an object is less than 15 s/frame. The experiments show that it is reasonable and efficient to combine a classic object detection method with a deep learning classification approach. The method works in specific use cases and effectively addresses the problem of inaccurate classification and detection of typical features.
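
A minimal sketch of the classic pipeline described above: an image pyramid, a sliding window feeding a CNN classifier, and non-maxima suppression over the resulting boxes. The window size, scales, and the placeholder classifier call are assumptions, not the paper's configuration.

```python
# Sketch: image pyramid + sliding window + greedy NMS.
import cv2
import numpy as np

def image_pyramid(image, scale=1.5, min_size=(64, 64)):
    """Yield progressively downscaled copies of the image."""
    while image.shape[0] >= min_size[1] and image.shape[1] >= min_size[0]:
        yield image
        new_w, new_h = int(image.shape[1] / scale), int(image.shape[0] / scale)
        image = cv2.resize(image, (new_w, new_h))

def sliding_windows(image, step=32, window=(64, 64)):
    """Yield (x, y, patch) for every window position in the image."""
    for y in range(0, image.shape[0] - window[1] + 1, step):
        for x in range(0, image.shape[1] - window[0] + 1, step):
            yield x, y, image[y:y + window[1], x:x + window[0]]

def nms(boxes, scores, iou_thresh=0.3):
    """Greedy non-maxima suppression; boxes is an (N, 4) array of (x1, y1, x2, y2)."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order) > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_rest = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_rest - inter + 1e-9)
        order = rest[iou < iou_thresh]
    return keep

# Usage sketch (classify_patch is a placeholder for the MiniVGGNet classifier):
# for scaled in image_pyramid(image):
#     for x, y, patch in sliding_windows(scaled):
#         score = classify_patch(patch)  # hypothetical classifier call
#         # collect (box, score) pairs, rescaled to original image coordinates
# keep = nms(np.array(boxes), np.array(scores))
```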


2021, Vol 163 (1), pp. 23
Author(s): Kaiming Cui, Junjie Liu, Fabo Feng, Jifeng Liu

Deep learning techniques have been well explored in the transiting exoplanet field; however, previous work has mainly focused on classification and inspection. In this work, we develop a novel detection algorithm based on a well-proven object detection framework from the computer vision field. By training the network on the light curves of confirmed Kepler exoplanets, our model yields about 90% precision and recall for identifying transits with a signal-to-noise ratio higher than 6 (with the confidence threshold set to 0.6). With a slightly lower confidence threshold, recall can exceed 95%. We also transfer the trained model to TESS data and obtain similar performance. The results of our algorithm match the intuition of human visual perception, which makes it useful for finding single-transit candidates. Moreover, the parameters of the output bounding boxes can also help to find multiplanet systems. Our network and detection functions are implemented in the Deep-Transit toolkit, an open-source Python package hosted on GitHub and PyPI.
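
Purely as an illustration (and not the Deep-Transit API), the sketch below rasterizes a 1D light curve into a 2D image so that a standard 2D object detector could place bounding boxes around transit events; the bin counts and normalization are arbitrary assumptions.

```python
# Sketch: rasterize (time, flux) samples into a 2D image for a 2D object detector.
import numpy as np

def light_curve_to_image(time: np.ndarray, flux: np.ndarray,
                         n_time_bins: int = 512, n_flux_bins: int = 128) -> np.ndarray:
    """Return an (n_flux_bins, n_time_bins) binary image of the light curve."""
    t_norm = (time - time.min()) / (np.ptp(time) + 1e-12)
    f_norm = (flux - flux.min()) / (np.ptp(flux) + 1e-12)
    t_idx = np.clip((t_norm * (n_time_bins - 1)).astype(int), 0, n_time_bins - 1)
    f_idx = np.clip((f_norm * (n_flux_bins - 1)).astype(int), 0, n_flux_bins - 1)
    image = np.zeros((n_flux_bins, n_time_bins), dtype=np.float32)
    image[n_flux_bins - 1 - f_idx, t_idx] = 1.0  # flip so higher flux sits at the top
    return image
```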

