Detection and Classification of Defective Hard Candies Based on Image Processing and Convolutional Neural Networks

Electronics, 2021, Vol. 10(16), p. 2017
Author(s): Jinya Wang, Zhenye Li, Qihang Chen, Kun Ding, Tingting Zhu, ...

Defective hard candies are usually produced due to inadequate feeding or insufficient cooling during the candy production process. The human-based inspection strategy needs to be brought up to date with the rapid developments in the confectionery industry. In this paper, a detection and classification method for defective hard candies based on convolutional neural networks (CNNs) is proposed. First, Li's thresholding method (threshold_li) is used to distinguish hard candies from the background. Second, a segmentation algorithm based on concave point detection and ellipse fitting is used to separate touching hard candies. Finally, a classification model based on CNNs is constructed for defective hard candies. According to the types of defective hard candies, 2552 hard candy samples were collected; 70% were used for model training, 15% for validation, and 15% for testing. Defective hard candy classification models based on CNNs (AlexNet, GoogLeNet, VGG16, ResNet-18, ResNet-34, ResNet-50, MobileNetV2, and MnasNet0_5) were constructed and tested. The results show that the classification performances of these deep learning models are similar, except for MnasNet0_5 (84.28% accuracy), with the ResNet-50-based classification model performing best (98.71%). This research provides a theoretical reference for the intelligent classification of granular products.
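A minimal sketch of this pipeline is given below, assuming OpenCV, scikit-image, and a torchvision ResNet-50 backbone; the concave-point splitting step is omitted, and the contour handling, defect-class count, and classifier head are illustrative assumptions rather than the authors' implementation.

```python
# Sketch only: Li thresholding for candy/background separation, ellipse fitting on
# candidate contours, and a ResNet-50 head for defect classification. The number of
# defect classes (7 here) and all processing choices are assumptions for illustration.
import cv2
import numpy as np
import torch.nn as nn
import torchvision.models as models
from skimage.filters import threshold_li

def segment_candies(bgr_image):
    """Separate candies from the background and fit an ellipse to each blob."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    mask = (gray > threshold_li(gray)).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    ellipses = [cv2.fitEllipse(c) for c in contours if len(c) >= 5]  # fitEllipse needs >= 5 points
    return mask, ellipses

# Classifier for the cropped candy patches (hypothetical 7 defect classes).
classifier = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
classifier.fc = nn.Linear(classifier.fc.in_features, 7)
```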


2019, Vol. 8(4), p. 160
Author(s): Bingxin Liu, Ying Li, Guannan Li, Anling Liu

Spectral characteristics play an important role in oil film classification, but the presence of too many bands can lead to information redundancy and reduced classification accuracy. In this study, a classification model combining spectral indices-based band selection (SIs) and a one-dimensional convolutional neural network was proposed to realize automatic oil film classification from hyperspectral remote sensing images. For comparison, minimum Redundancy Maximum Relevance (mRMR) was also tested for reducing the number of bands. A support vector machine (SVM), random forest (RF), and Hu's convolutional neural network (CNN) were trained and tested. The results show that the classification accuracy of the one-dimensional convolutional neural network (1D CNN) models surpassed that of the other machine learning algorithms, such as SVM and RF. The SIs + 1D CNN model produced a more accurate oil film distribution map in less time than the other models.
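The 1D CNN idea can be sketched as follows; the layer sizes, number of selected bands, and number of oil film classes are placeholders, not the architecture reported in the study.

```python
# Minimal 1D CNN sketch for per-pixel spectral classification. Band count (10) and
# class count (4) are illustrative assumptions.
import torch
import torch.nn as nn

class Spectral1DCNN(nn.Module):
    def __init__(self, n_bands=10, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):          # x: (batch, n_bands) reflectance vectors
        x = x.unsqueeze(1)         # -> (batch, 1, n_bands) for Conv1d
        return self.classifier(self.features(x).flatten(1))

logits = Spectral1DCNN()(torch.rand(8, 10))   # 8 pixels, 10 selected bands
```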



2020, Vol. 10(7), pp. 1707-1713
Author(s): Mingang Chen, Wenjie Chen, Wei Chen, Lizhi Cai, Gang Chai

Skin cancers are among the most common cancers in the world. Early detection and treatment of skin cancers can greatly improve patient survival rates. In this paper, a skin lesion classification system is developed with a deep convolutional neural network, ResNet50, which may help dermatologists recognize skin cancers earlier. We use ResNet50 as a pre-trained model and then train it on our skin lesion dataset by transfer learning. Image preprocessing and dataset balancing methods are used to increase the accuracy of the classification model. In classifying skin diseases, our model achieves an overall accuracy of 83.74% on nine-class skin lesions. The experimental results demonstrate the effectiveness of the ResNet50 model in fine-grained skin lesion classification and skin cancer recognition.
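A hedged sketch of this transfer-learning setup: a torchvision ResNet50 pretrained on ImageNet with its final layer replaced for nine lesion classes. The preprocessing, optimizer, and learning rate below are assumptions; the dataset and balancing strategy are not reproduced.

```python
# Transfer-learning sketch: ImageNet-pretrained ResNet50 with a nine-class head.
# Hyperparameters are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn
import torchvision.models as models
from torchvision import transforms

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 9)   # nine-class skin lesion head

# Typical preprocessing matching the ImageNet-pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```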



2020, Vol. 2020(10), pp. 28-1–28-7
Author(s): Kazuki Endo, Masayuki Tanaka, Masatoshi Okutomi

Classification of degraded images is important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research on image classification focuses only on clean images without any degradation. Some papers have proposed deep convolutional neural networks composed of an image restoration network and a classification network to classify degraded images. This paper proposes an alternative approach in which a degraded image and an additional degradation parameter are used for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. An estimation network for the degradation parameter is also incorporated for cases where the degradation parameters of degraded images are unknown. The experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained with degraded images only.
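One way to realize such a two-input network is sketched below: a CNN branch for the degraded image and a small MLP branch for a scalar degradation parameter, fused before the classification layer. The backbone choice, embedding sizes, and class count are assumptions, not the paper's exact architecture.

```python
# Sketch of a two-input classifier: image branch + degradation-parameter branch,
# concatenated before the final linear head. All sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class DegradationAwareClassifier(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                 # expose 512-d image features
        self.image_branch = backbone
        self.param_branch = nn.Sequential(          # embed the scalar degradation level
            nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU())
        self.head = nn.Linear(512 + 32, n_classes)

    def forward(self, image, degradation_param):
        f_img = self.image_branch(image)
        f_deg = self.param_branch(degradation_param.view(-1, 1))
        return self.head(torch.cat([f_img, f_deg], dim=1))

logits = DegradationAwareClassifier()(torch.rand(2, 3, 224, 224), torch.tensor([0.3, 0.8]))
```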







2021, Vol. 11(1)
Author(s): Adam Goodwin, Sanket Padmanabhan, Sanchit Hira, Margaret Glancey, Monet Slinowsky, ...

With over 3500 mosquito species described, accurate species identification of the few implicated in disease transmission is critical to mosquito-borne disease mitigation. Yet this task is hindered by limited global taxonomic expertise and by specimen damage consistent across common capture methods. Convolutional neural networks (CNNs) are promising with limited sets of species, but image database requirements restrict practical implementation. Using an image database of 2696 specimens from 67 mosquito species, we address the practical open-set problem with a detection algorithm for novel species. Closed-set classification of 16 known species achieved 97.04 ± 0.87% accuracy independently, and 89.07 ± 5.58% when cascaded with novelty detection. Closed-set classification of 39 species produces a macro F1-score of 86.07 ± 1.81%. This demonstrates an accurate, scalable, and practical computer vision solution for identifying wild-caught mosquitoes, suitable for implementation in biosurveillance and targeted vector control programs, without the need for extensive image database development for each new target region.
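As an illustration of cascading a closed-set classifier with novelty detection, the sketch below uses a simple softmax-confidence gate in front of a trained classifier; the paper's actual novelty detection algorithm and threshold are not reproduced here, so both are assumptions.

```python
# Illustrative open-set cascade: flag low-confidence predictions as novel species
# before accepting the closed-set label. Threshold value is an assumption.
import torch
import torch.nn.functional as F

@torch.no_grad()
def classify_with_novelty_gate(model, images, threshold=0.9):
    """Return a predicted class per image, or -1 when flagged as a novel species."""
    probs = F.softmax(model(images), dim=1)
    conf, pred = probs.max(dim=1)
    pred[conf < threshold] = -1      # low confidence -> treat as unknown species
    return pred
```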


