Detection and Classification of Canned Packaging Defects Using Convolutional Neural Network

Author(s):  
Rindi Kusumawardani ◽  
Putu Dana Karningsih

Packaging is an important aspect of a product’s identity. Good and attractive packaging can increase product competitiveness because it gives customers a perception of good product quality. A good packaging display is therefore necessary, which makes packaging quality inspection very important. Automated defect detection can help to reduce human error in the inspection process. A Convolutional Neural Network (CNN) is an approach that can be used to detect and classify packaging conditions. This paper presents an experiment comparing five network models, i.e. ShuffleNet, GoogLeNet, ResNet18, ResNet50, and ResNet101, with each network given the same parameters. The dataset consists of images of can packaging divided into three classes: No Defect, Minor Defect, and Major Defect. The experimental results show that the ResNet50 and ResNet101 architectures provided the best results for can defect classification among the tested models, with a testing accuracy of 95.56%. All five models achieved testing accuracy above 90%, so it can be concluded that every network model is suitable for detecting and classifying packaging defects in can products.
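The comparison protocol described above (five architectures trained with identical parameters, one shared three-class test set) comes down to ranking the models by testing accuracy. A minimal sketch of that evaluation step, with small hypothetical label and prediction lists standing in for the actual network outputs:

```python
# Rank candidate networks by testing accuracy on a shared 3-class test set.
# The labels and per-model predictions below are hypothetical stand-ins for
# the outputs of the trained networks; class indices follow the paper's labels.
CLASSES = ("No Defect", "Minor Defect", "Major Defect")

def accuracy(y_true, y_pred):
    """Fraction of test images whose predicted class matches the label."""
    assert len(y_true) == len(y_pred)
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [0, 0, 1, 1, 1, 2, 2, 2, 0, 1]          # ground-truth class indices
predictions = {                                   # hypothetical model outputs
    "ShuffleNet": [0, 0, 1, 2, 1, 2, 2, 2, 0, 1],
    "ResNet50":   [0, 0, 1, 1, 1, 2, 2, 2, 0, 1],
}

scores = {name: accuracy(y_true, pred) for name, pred in predictions.items()}
best = max(scores, key=scores.get)               # model with highest accuracy
```

In the paper's experiment the same ranking step is what singles out ResNet50 and ResNet101 at 95.56% testing accuracy.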

Author(s):  
A. S. M. Shafi ◽  
Mohammad Motiur Rahman

Gastrointestinal cancer is one of the leading causes of death across the world, and gastrointestinal polyps are considered precursors of this malignant cancer. To reduce the probability of cancer, early detection and removal of colorectal polyps should be considered. The most widely used diagnostic modality for colorectal polyps is video endoscopy, but diagnostic accuracy depends largely on the doctor's experience, which is crucial for detecting polyps in many cases. Computer-aided polyp detection is a promising way to reduce the polyp miss rate and thus improve diagnostic accuracy. The proposed method first distinguishes polyp from non-polyp frames and then performs automatic polyp classification on endoscopic video using color wavelet features, higher-order statistical texture features, and a Convolutional Neural Network (CNN). The Gray Level Run Length Matrix (GLRLM) provides higher-order statistical texture features in four directions (θ = 0°, 45°, 90°, 135°). The features are fed into a linear support vector machine (SVM) to train the classifier. The experimental results demonstrate that the proposed approach is promising and effective with a residual network architecture, achieving the best performance with accuracy, sensitivity, and specificity of 98.83%, 97.87%, and 99.13%, respectively, for the classification of colorectal polyps on standard public endoscopic video databases.
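The GLRLM step above counts, for each gray level, how many runs of each length occur along a scan direction; summary statistics over those counts (for example Short Run Emphasis) become texture features for the SVM. A minimal pure-Python sketch for the 0° and 90° directions only (the full method also uses 45° and 135°, plus color wavelet and CNN features):

```python
from collections import Counter

def glrlm(image, direction=0):
    """Gray Level Run Length Matrix as a Counter mapping
    (gray_level, run_length) -> number of runs.
    direction: 0 scans rows (0 degrees), 90 scans columns (90 degrees)."""
    lines = image if direction == 0 else [list(col) for col in zip(*image)]
    runs = Counter()
    for line in lines:
        i = 0
        while i < len(line):
            j = i
            while j < len(line) and line[j] == line[i]:
                j += 1                     # extend the run of equal pixels
            runs[(line[i], j - i)] += 1    # record (gray level, run length)
            i = j
    return runs

def short_run_emphasis(runs):
    """SRE: weights runs by 1/length^2, so fine textures score higher."""
    total = sum(runs.values())
    return sum(n / length ** 2 for (_, length), n in runs.items()) / total

img = [[1, 1, 2],
       [2, 2, 2],
       [1, 3, 3]]
runs0 = glrlm(img, direction=0)            # e.g. one run of gray 2, length 3
```

Each direction yields one matrix, and statistics such as SRE from all four matrices are concatenated into the feature vector the SVM trains on.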


2020 ◽  
Vol 43 (12) ◽  
Author(s):  
Sriram K. Vidyarthi ◽  
Samrendra K. Singh ◽  
Rakhee Tiwari ◽  
Hong‐Wei Xiao ◽  
Rewa Rai

2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Yinjie Xie ◽  
Wenxin Dai ◽  
Zhenxin Hu ◽  
Yijing Liu ◽  
Chuan Li ◽  
...  

Among the many improved convolutional neural network (CNN) architectures for optical image classification, only a few have been applied to synthetic aperture radar (SAR) automatic target recognition (ATR). One main reason is that directly transferring these advanced architectures from optical images to SAR images easily yields overfitting, owing to the limited SAR data set and the fewer features SAR images carry relative to optical images. Thus, based on the characteristics of the SAR image, we propose a novel deep convolutional neural network architecture named umbrella. Its framework consists of two alternating CNN-layer blocks. One block is a fusion of six 3-layer paths, used to extract diverse-level features from different convolution layers. The other block is composed of convolution and pooling layers, mainly utilized to reduce dimensions and extract hierarchical feature information. The combination of the two blocks extracts rich features from different spatial scales and simultaneously alleviates overfitting. The performance of the umbrella model was validated on the Moving and Stationary Target Acquisition and Recognition (MSTAR) benchmark data set. The architecture achieved higher than 99% accuracy for the classification of 10 target classes and higher than 96% accuracy for the classification of 8 variants of the T72 tank, even for targets at diverse positions. The accuracy of umbrella is superior to that of current networks applied to MSTAR classification. The results show that the umbrella architecture possesses very robust generalization capability and is promising for SAR-ATR.
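The fusion block described above can be pictured as several independent convolutional paths whose pooled outputs are concatenated into one multi-scale feature vector. A minimal NumPy sketch of that idea, not the paper's actual layer configuration: single-channel valid convolutions, random filters, two paths instead of six, and a global average pool per path, all sizes hypothetical.

```python
import numpy as np

def conv2d(x, kernel):
    """Single-channel 'valid' 2D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def path(x, kernels):
    """One 3-layer convolutional path: conv -> ReLU, repeated three times."""
    for k in kernels:
        x = np.maximum(conv2d(x, k), 0.0)
    return x

rng = np.random.default_rng(0)
image = rng.standard_normal((16, 16))         # stand-in for one SAR chip

# Fusion block sketch: independent 3-layer paths, global-average-pooled and
# concatenated -- the multi-scale feature fusion the umbrella block performs.
n_paths = 2                                   # the paper's block uses six
features = []
for _ in range(n_paths):
    kernels = [rng.standard_normal((3, 3)) for _ in range(3)]
    features.append(path(image, kernels).mean())   # global average pool
fused = np.array(features)                    # concatenated multi-path features
```

The alternating dimension-reduction block would then convolve and pool `fused`-style feature maps before the next fusion stage.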


2018 ◽  
Vol 339 ◽  
pp. 615-624 ◽  
Author(s):  
Shaohua Chen ◽  
Laurent A. Baumes ◽  
Aytekin Gel ◽  
Manogna Adepu ◽  
Heather Emady ◽  
...  

Stroke ◽  
2020 ◽  
Vol 51 (Suppl_1) ◽  
Author(s):  
Yichuan Liu ◽  
Brandon L Hancock ◽  
Tri Hoang ◽  
Mark R Etherton ◽  
Steven J Mocking ◽  
...  

Background: Fundamental advances in stroke care will require pooling imaging phenotype data from multiple centers to complement the current aggregation of genomic, environmental, and clinical information. Sharing clinically acquired MRI data from multiple hospitals is challenging due to the inherent heterogeneity of clinical data, where the same MRI series may be labeled differently depending on vendor and hospital. Furthermore, the de-identification process may remove data describing the MRI series, requiring human review. However, manually annotating MRI series is not only laborious and slow but also prone to human error. In this work, we present a recurrent convolutional neural network (RCNN) for automated classification of MRI series. Methods: We randomly selected 1000 subjects from the MRI-GENetics Interface Exploration study and partitioned them into 800 training, 100 validation, and 100 testing subjects. We categorized the MRI series into 24 groups (see Table). The RCNN used a modified AlexNet, pretrained on ImageNet photographs, to extract features from 2D slices. Since clinical MRI series are 3D or 4D, a gated recurrent unit (GRU) neural network was used to aggregate information from multiple 2D slices to make the final prediction. Results: We achieved a classification accuracy (correct/total cases) of 99.8%, 98.5%, and 97.5% on the training, validation, and testing sets, respectively. The averaged F1-score (percent overlap between predicted and actual cases) over all categories was 99.8%, 98.2%, and 94.4% on the training, validation, and testing sets. Conclusion: We showed that automated annotation of MRI series by repurposing deep-learning techniques developed for photographic image recognition is feasible. Such methods can facilitate high-throughput curation of MRI data acquired across multiple centers, enable scientifically productive collaboration among researchers, and ultimately enhance big data stroke research.
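The aggregation step in the RCNN, where per-slice feature vectors from the CNN are combined by a gated recurrent unit and the final hidden state is classified into one of the 24 series groups, can be sketched in NumPy as follows. Weights are random and dimensions are toy values; in the paper the slice features come from a pretrained, modified AlexNet.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x, W, U, b):
    """One GRU update: gates computed from slice feature x and state h.
    W, U, b each hold the update (z), reset (r), and candidate (n) params."""
    z = sigmoid(W["z"] @ x + U["z"] @ h + b["z"])        # update gate
    r = sigmoid(W["r"] @ x + U["r"] @ h + b["r"])        # reset gate
    n = np.tanh(W["n"] @ x + U["n"] @ (r * h) + b["n"])  # candidate state
    return (1.0 - z) * n + z * h

rng = np.random.default_rng(0)
feat_dim, hidden, n_classes = 8, 16, 24      # 24 MRI series groups
W = {g: rng.standard_normal((hidden, feat_dim)) * 0.1 for g in "zrn"}
U = {g: rng.standard_normal((hidden, hidden)) * 0.1 for g in "zrn"}
b = {g: np.zeros(hidden) for g in "zrn"}
W_out = rng.standard_normal((n_classes, hidden)) * 0.1

slices = rng.standard_normal((12, feat_dim))  # CNN features for 12 slices
h = np.zeros(hidden)
for x in slices:                              # aggregate across the 3D volume
    h = gru_step(h, x, W, U, b)

logits = W_out @ h                            # classify the final hidden state
probs = np.exp(logits - logits.max())
probs /= probs.sum()                          # softmax over the 24 groups
```

Because the recurrence consumes one slice at a time, the same weights handle series with any number of slices, which is what lets a 2D feature extractor cover 3D and 4D clinical data.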

