Classification of Adulterated Particle Images in Coconut Oil Using Deep Learning Approaches

2022 ◽  
Vol 12 (2) ◽  
pp. 656
Author(s):  
Attapon Palananda ◽  
Warangkhana Kimpan

In the production of coconut oil for consumption, cleanliness and safety are the first priorities for meeting the standard in Thailand. The presence of color, sediment, or impurities is an important element that affects consumers' or buyers' decision to buy coconut oil. Coconut oil contains impurities that are revealed during the process of compressing the coconut pulp to extract the oil. Therefore, the oil must be filtered by centrifugation and passed through a fine filter. When the filtration process is finished, staff inspect the turbidity of the coconut oil by examining its color with the naked eye; an acceptable sample should show only the color of the coconut oil itself. However, this method cannot detect small impurities: suspended particles that take time to settle and become sediment. Studies have shown that the turbidity of coconut oil can be measured by passing light through the oil and applying image processing techniques. This method makes it possible to detect impurities using a microscopic camera that photographs the coconut oil. This study proposes a method for detecting the impurities that cause turbidity in coconut oil using a deep learning approach, namely a convolutional neural network (CNN), to address the problem of impurity identification and image analysis. In the experiments, this paper used two coconut oil impurity datasets, PiCO_V1 and PiCO_V2, containing 1000 and 6861 images, respectively. A total of 10 CNN architectures were tested on these two datasets to determine which architecture achieved the best accuracy. The experimental results indicated that the MobileNetV2 architecture performed best, with the highest training accuracy (94.05%) and testing accuracy (80.20%).
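The abstract does not include implementation details; the following is a minimal sketch of how a MobileNetV2 classifier for oil impurity images could be set up with Keras transfer learning. The directory layout, image size, class count, and hyperparameters are illustrative assumptions, not values from the paper.

```python
# Hypothetical MobileNetV2 transfer-learning setup for coconut-oil impurity images.
# Dataset paths, image size, and hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)
NUM_CLASSES = 2  # e.g., clean oil vs. oil with suspended impurities (assumed)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "pico_dataset/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "pico_dataset/val", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained backbone first

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=20)
```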

2019 ◽  
Vol 9 (7) ◽  
pp. 1385 ◽  
Author(s):  
Luca Donati ◽  
Eleonora Iotti ◽  
Giulio Mordonini ◽  
Andrea Prati

Visual classification of commercial products is a branch of the wider fields of object detection and feature extraction in computer vision and, in particular, an important step in the creative workflow of the fashion industry. Automatically classifying garment features makes both designers and data experts aware of their overall production, which is fundamental for organizing marketing campaigns, avoiding duplicates, categorizing apparel products for e-commerce purposes, and so on. There are many different techniques for visual classification, ranging from standard image processing to machine learning approaches. This work, carried out in collaboration with Adidas AG™ by applying and testing the aforementioned approaches, describes a real-world study aimed at automatically recognizing and classifying logos, stripes, colors, and other features of clothing solely from final rendering images of the products. Specifically, both deep learning and image processing techniques, such as template matching, were used. The result is a novel system for image recognition and feature extraction that has high classification accuracy and is reliable and robust enough to be used by a company like Adidas. This paper describes the main problems and proposed solutions encountered in the development of this system, along with the experimental results on the Adidas AG™ dataset.
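The paper combines deep learning with classical template matching; the snippet below is a minimal OpenCV sketch of the template-matching step only. The file names and detection threshold are illustrative assumptions rather than details from the Adidas pipeline.

```python
# Hypothetical template-matching step for locating a logo in a product rendering.
# File names and the threshold are illustrative; the paper's pipeline is more elaborate.
import cv2

image = cv2.imread("product_render.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("logo_template.png", cv2.IMREAD_GRAYSCALE)

# Normalised cross-correlation is robust to moderate brightness changes.
result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)

THRESHOLD = 0.8  # assumed confidence cut-off
if max_val >= THRESHOLD:
    h, w = template.shape
    top_left = max_loc
    bottom_right = (top_left[0] + w, top_left[1] + h)
    print(f"Logo found at {top_left}-{bottom_right} with score {max_val:.2f}")
else:
    print("No logo detected above the threshold")
```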


Skin lesions are growths of unwanted cells on the uppermost layer of the skin. These lesions may contain cancerous cells, which can cause serious health issues for the patient and, in severe cases, lead to the patient's death. Dermatologists identify the type of skin cancer from images generated with a dermatoscope, a procedure known as dermatoscopy. Many previous studies have classified these dermatoscopic images using machine learning and deep learning solutions. Machine learning approaches use image processing techniques to locate the mole in a given image and then apply classifiers such as SVMs and random forests. With advances in the field of deep learning, various CNN-based classification methods have been proposed that achieve higher precision and accuracy. In this paper, we propose a CNN-based approach for image classification that achieves a best overall accuracy of 78.08% and good multiclass AUC for all classes on the HAM10000 dataset.
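The abstract does not specify the network architecture; as an illustration, a compact Keras CNN for the seven HAM10000 diagnostic classes might look like the following sketch, with layer sizes and metrics chosen as assumptions rather than the authors' configuration.

```python
# Illustrative CNN for 7-class HAM10000 lesion classification (not the paper's exact model).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7              # HAM10000 has seven diagnostic categories
INPUT_SHAPE = (128, 128, 3)  # assumed resized input

model = models.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Track multiclass (one-vs-rest) AUC alongside accuracy, matching the metrics reported above.
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(multi_label=True, name="auc")])
model.summary()
```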


Symmetry ◽  
2021 ◽  
Vol 13 (5) ◽  
pp. 750
Author(s):  
Carmelo Militello ◽  
Leonardo Rundo ◽  
Salvatore Vitabile ◽  
Vincenzo Conti

Biometric classification plays a key role in fingerprint characterization, especially in the identification process. In fact, reducing the number of comparisons in biometric recognition systems is essential when dealing with large-scale databases. Fingerprint classification aims to achieve this by splitting fingerprints into different categories. The general approach to fingerprint classification requires pre-processing techniques that are usually computationally expensive. Deep learning is emerging as a leading field that has been successfully applied to many areas, such as image processing. This work reports the performance of pre-trained Convolutional Neural Networks (CNNs), tested on two fingerprint databases, namely PolyU and NIST, and compares it with other results presented in the literature, in order to establish which type of classification yields the best performance in terms of precision and model efficiency among the approaches under examination, namely AlexNet, GoogLeNet, and ResNet. We present the first study that extensively compares the most widely used CNN architectures by classifying fingerprints into four, five, and eight classes. From the experimental results, the best performance was obtained on the PolyU database by all the tested CNN architectures, owing to the higher quality of its samples. To confirm the reliability of our study and the results obtained, a statistical analysis based on the McNemar test was performed.
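For the statistical comparison, the McNemar test contrasts two classifiers evaluated on the same test set through their disagreement counts. The sketch below shows how such a test could be run with statsmodels; the contingency counts are made up for illustration and are not results from the paper.

```python
# Illustrative McNemar test comparing two CNN classifiers on the same fingerprint test set.
# The contingency counts below are made-up numbers, not results from the paper.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Rows: classifier A correct / wrong; columns: classifier B correct / wrong.
table = np.array([[850, 40],   # both correct / only A correct
                  [25, 85]])   # only B correct / both wrong

result = mcnemar(table, exact=False, correction=True)  # chi-square with continuity correction
print(f"statistic = {result.statistic:.3f}, p-value = {result.pvalue:.4f}")
if result.pvalue < 0.05:
    print("The two CNNs' error patterns differ significantly")
else:
    print("No significant difference between the two CNNs")
```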


2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Different whiteboard image degradations greatly reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, different researchers have addressed the problem through various image enhancement techniques. Most state-of-the-art approaches apply common image processing techniques such as background/foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. In order to surmount such problems, the authors have proposed a deep learning based solution. They have contributed a new whiteboard image dataset and adopted two deep convolutional neural network architectures for whiteboard image quality enhancement applications. Their evaluations of the trained models demonstrated superior performance over the conventional methods.
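The abstract does not name the two adopted architectures; the sketch below shows a small convolutional encoder-decoder of the kind commonly used for image-to-image enhancement, purely as an illustration with assumed layer choices.

```python
# Illustrative encoder-decoder CNN for whiteboard image enhancement.
# All architecture choices are assumptions; the paper's two networks are not specified here.
import tensorflow as tf
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(None, None, 3))  # degraded whiteboard photo
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)            # downsample
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)   # upsample
outputs = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)               # enhanced image

model = models.Model(inputs, outputs)
# Pixel-wise reconstruction loss against clean reference whiteboard images.
model.compile(optimizer="adam", loss="mae")
model.summary()
```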


Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1459 ◽  
Author(s):  
Tamás Czimmermann ◽  
Gastone Ciuti ◽  
Mario Milazzo ◽  
Marcello Chiurazzi ◽  
Stefano Roccella ◽  
...  

This paper reviews automated visual-based defect detection approaches applicable to various materials, such as metals, ceramics, and textiles. In the first part of the paper, we present a general taxonomy of the different defects, which fall into two classes: visible defects (e.g., scratches, shape errors) and palpable defects (e.g., cracks, bumps). Then, we describe artificial visual processing techniques aimed at understanding the captured scene in a mathematical/logical way. We continue with a survey of textural defect detection based on statistical, structural, and other approaches. Finally, we report the state of the art in the detection and classification of defects through supervised and unsupervised classifiers and deep learning.
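Among the statistical texture approaches surveyed, grey-level co-occurrence matrix (GLCM) features are a common baseline; the sketch below shows how such features could be computed with scikit-image. The image path is a placeholder, and any threshold separating defective from sound patches would have to be learned from data.

```python
# Illustrative GLCM texture features for a surface patch (scikit-image).
# The image path is a placeholder and the input is assumed to be an RGB photograph.
import numpy as np
from skimage import io, color, img_as_ubyte
from skimage.feature import graycomatrix, graycoprops

patch = img_as_ubyte(color.rgb2gray(io.imread("surface_patch.png")))

# Co-occurrence matrix at distance 1 over four orientations.
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)  # these descriptors typically feed a classical classifier (e.g., an SVM)
```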


Healthcare ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1579
Author(s):  
Wansuk Choi ◽  
Seoyoon Heo

The purpose of this study was to classify ULTT videos through transfer learning with pre-trained deep learning models and to compare the performance of the models. We conducted transfer learning by incorporating pre-trained convolutional neural network (CNN) models into a deep learning pipeline implemented in Python. Videos on YouTube were processed, and 103,116 frames converted from the video clips were analyzed. In the modeling implementation, the steps of importing the required modules, performing the necessary data preprocessing for training, defining the model, compiling it, creating the model, and fitting it were applied in sequence. The comparative models were Xception, InceptionV3, DenseNet201, NASNetMobile, DenseNet121, VGG16, VGG19, and ResNet101, and fine-tuning was performed. They were trained in a high-performance computing environment, and validation accuracy and validation loss were measured as comparative indicators of performance. Relatively low validation loss and high validation accuracy were obtained with the Xception, InceptionV3, and DenseNet201 models, indicating excellent performance compared with the other models. On the other hand, VGG16, VGG19, and ResNet101 produced relatively high validation loss and low validation accuracy compared with the other models. There was a narrow range of difference between the validation accuracy and the validation loss of the Xception, InceptionV3, and DenseNet201 models. This study suggests that training applied with transfer learning can classify ULTT videos and that there is a difference in performance between models.
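The study's code is not reproduced in the abstract; a minimal Keras sketch of the described workflow (import modules, preprocess, define, compile, fit) with Xception as the pre-trained backbone is shown below. Directory names, frame size, class count, and training settings are assumptions.

```python
# Illustrative transfer learning with a pre-trained Xception backbone on video frames.
# Directory layout, frame size, and the number of ULTT classes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (299, 299)   # Xception's native input resolution
NUM_CLASSES = 4         # assumed number of ULTT categories

train_ds = tf.keras.utils.image_dataset_from_directory(
    "ultt_frames/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "ultt_frames/val", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.Xception(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = True   # fine-tuning, as described above

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),  # Xception expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
history = model.fit(train_ds, validation_data=val_ds, epochs=10)
# history.history["val_accuracy"] and ["val_loss"] are the comparison metrics used above.
```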


Cryptography ◽  
2021 ◽  
Vol 5 (4) ◽  
pp. 30
Author(s):  
Bang Yuan Chong ◽  
Iftekhar Salam

This paper studies the use of deep learning (DL) models under a known-plaintext scenario. The goal of the models is to predict the secret key of a cipher using DL techniques. We investigate the DL techniques against different ciphers, namely the Simplified Data Encryption Standard (S-DES), Speck, Simeck, and Katan. For S-DES, we examine the classification of the full key set, and the results are better than a random guess. However, we found it difficult to apply the same classification model beyond 2-round Speck. We also demonstrate that DL models trained under a known-plaintext scenario can successfully recover the random key of S-DES. However, the same method was less successful when applied to the modern ciphers Speck, Simeck, and Katan. The ciphers Simeck and Katan were further investigated using the DL models but with a text-based key. This application found linear approximations between the plaintext–ciphertext pairs and the text-based key.
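The abstract frames key recovery as supervised learning over known plaintext-ciphertext pairs; the sketch below illustrates such a setup for S-DES (10-bit keys) as multi-label bit prediction. The s_des_encrypt routine is a toy XOR placeholder here so the sketch runs; a real S-DES implementation would be substituted, and the network shape and data sizes are likewise assumptions.

```python
# Illustrative known-plaintext key-recovery setup for S-DES (10-bit key) as multi-label learning.
# The s_des_encrypt() below is a toy placeholder, NOT a real S-DES implementation.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def random_bits(n):
    return np.random.randint(0, 2, size=n).astype(np.float32)

def s_des_encrypt(plaintext, key):
    # Placeholder stand-in: XOR the 8-bit plaintext with the first 8 key bits.
    # Swap in an actual S-DES routine for a meaningful experiment.
    return np.logical_xor(plaintext, key[:8]).astype(np.float32)

def make_sample():
    key = random_bits(10)                       # 10-bit S-DES key
    plaintext = random_bits(8)                  # 8-bit block
    ciphertext = s_des_encrypt(plaintext, key)
    return np.concatenate([plaintext, ciphertext]), key

X, y = zip(*(make_sample() for _ in range(50000)))
X, y = np.array(X), np.array(y)

# Each output unit predicts one key bit (multi-label sigmoid head).
model = models.Sequential([
    layers.Input(shape=(16,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(10, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["binary_accuracy"])
model.fit(X, y, epochs=20, batch_size=128, validation_split=0.1)
```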


2020 ◽  
Author(s):  
Nicos Maglaveras ◽  
Georgios Petmezas ◽  
Vassilis Kilintzis ◽  
Leandros Stefanopoulos ◽  
Andreas Tzavelis ◽  
...  

BACKGROUND: Electrocardiogram (ECG) recording and interpretation is the most common method used for the diagnosis of cardiac arrhythmias; nonetheless, this process requires significant expertise and effort on the doctors' part. Automated ECG signal classification could be a useful technique for the accurate detection and classification of several types of arrhythmias within a short timeframe.
OBJECTIVE: To review current approaches using state-of-the-art CNNs and deep learning methodologies for arrhythmia detection via ECG feature classification, and to propose an optimised architecture capable of diagnosing different types of arrhythmia using publicly available annotated arrhythmia databases from the MIT-BIH databases at PhysioNet (physionet.org).
METHODS: A hybrid CNN-LSTM deep learning model is proposed to classify beats derived from two large ECG databases. The approach is proposed after a systematic review of current AI/DL methods applied to different types of arrhythmia diagnosis using the same public MIT-BIH databases. In the proposed architecture, the CNN part carries out feature extraction and dimensionality reduction, and the LSTM part performs classification of the encoded ECG beat signals.
RESULTS: In experimental studies conducted with the MIT-BIH Arrhythmia and MIT-BIH Atrial Fibrillation Databases, average accuracies of 96.82% and 96.65% were noted, respectively.
CONCLUSIONS: The proposed system can be used for arrhythmia diagnosis in clinical and mHealth applications, managing a number of prevalent arrhythmias such as VT, AFIB, and LBBB. The capability of CNNs to reduce the ECG beat signal's size and extract its main features can be effectively combined with the LSTMs' capability to learn the temporal dynamics of the input data for the accurate and automatic recognition of several types of cardiac arrhythmias.
CLINICALTRIAL: Not applicable.
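The proposed architecture is described as a CNN front end for feature extraction followed by an LSTM classifier; a minimal Keras sketch of such a hybrid is shown below. Beat length, layer sizes, and the number of arrhythmia classes are assumptions, not the authors' exact configuration.

```python
# Illustrative CNN-LSTM hybrid for single-lead ECG beat classification.
# Beat length, layer sizes, and class count are assumptions, not the paper's configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

BEAT_LEN = 280       # assumed number of samples per segmented ECG beat
NUM_CLASSES = 5      # assumed number of arrhythmia classes

inputs = layers.Input(shape=(BEAT_LEN, 1))

# CNN part: feature extraction and dimensionality reduction.
x = layers.Conv1D(32, 5, padding="same", activation="relu")(inputs)
x = layers.MaxPooling1D(2)(x)
x = layers.Conv1D(64, 5, padding="same", activation="relu")(x)
x = layers.MaxPooling1D(2)(x)

# LSTM part: classification of the encoded beat sequence.
x = layers.LSTM(64)(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```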

