Automated System for Grading Apples using Convolutional Neural Network

Fruit grading is a process that affects quality control in fruit-processing industries and determines how efficiently they serve production and society. However, these industries have suffered from a lack of standards in quality control, long grading times, and low product output because of the use of manual methods. To meet the increasing demand for quality fruit products, fruit-processing industries must consider automating their fruit grading process. Several algorithms have been proposed over the years for this purpose, but they relied on color and shape alone and could not handle large datasets, which resulted in slow recognition and low accuracy. To mitigate these flaws, we developed an automated system for grading and classifying apples using a Convolutional Neural Network (CNN), a technique widely used in image recognition and classification. Two models were built on a CNN with ResNet50 as the convolutional base, a process called transfer learning. The first model, the apple checker model (ACM), performs recognition with two output classes (apple and non-apple), while the apple grader model (AGM) classifies an image confirmed to be an apple into four output classes (spoiled, grade A, grade B, and grade C). A comparative evaluation of both models was conducted, and experimental results show that the ACM achieved a test accuracy of 100% while the AGM obtained a recognition rate of 99.89%. The developed system may be employed in food-processing industries and related real-life applications.
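The two-stage decision logic described in the abstract (ACM first, then AGM) can be sketched as follows. The class labels follow the abstract, but `acm_predict` and `agm_predict` are hypothetical stand-ins for the trained ResNet50-based models.

```python
# Sketch of the two-stage grading pipeline: the ACM decides apple vs.
# non-apple, and only confirmed apples are passed to the AGM for grading.
# The predict callables are hypothetical stand-ins for the trained models.

ACM_CLASSES = ["apple", "non-apple"]
AGM_CLASSES = ["spoiled", "grade A", "grade B", "grade C"]

def grade_image(image, acm_predict, agm_predict):
    """Return 'non-apple' or one of the four AGM grades."""
    if ACM_CLASSES[acm_predict(image)] == "non-apple":
        return "non-apple"
    return AGM_CLASSES[agm_predict(image)]

# Example with dummy predictors that ignore the pixel data:
result = grade_image(None, acm_predict=lambda img: 0, agm_predict=lambda img: 1)
# result == "grade A"
```

The pipeline structure keeps the two concerns separate: recognition errors and grading errors can then be measured independently, as in the comparison reported above.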

2020 ◽  
Vol 10 (2) ◽  
pp. 84 ◽  
Author(s):  
Atif Mehmood ◽  
Muazzam Maqsood ◽  
Muzaffar Bashir ◽  
Yang Shuyuan

Alzheimer’s disease (AD) can permanently damage memory cells, resulting in dementia. Diagnosing Alzheimer’s disease at an early stage remains a challenging task for researchers. Machine learning and deep convolutional neural network (CNN) based approaches are readily available to solve various problems related to brain image data analysis. In clinical research, magnetic resonance imaging (MRI) is used to diagnose AD. For accurate classification of dementia stages, highly discriminative features must be obtained from MRI images. Recently, advanced deep CNN-based models have successfully proved their accuracy. However, because the datasets contain a small number of image samples, over-fitting hinders the performance of deep learning approaches. In this research, we developed a Siamese convolutional neural network (SCNN) model inspired by VGG-16 (also called OxfordNet) to classify dementia stages. In our approach, we extend the insufficient and imbalanced data by using augmentation. Experiments are performed on the publicly available Open Access Series of Imaging Studies (OASIS) dataset; using the proposed approach, a test accuracy of 99.05% is achieved for the classification of dementia stages. We compared our model with state-of-the-art models and found that it outperformed them in performance, efficiency, and accuracy.
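The core Siamese idea behind the SCNN can be sketched in a few lines: two inputs pass through the *same* embedding function, and the distance between their embeddings drives the decision. The one-layer embedding below is a hypothetical stand-in for the shared VGG-16-style branch, not the model from the paper.

```python
import numpy as np

# Minimal Siamese sketch: a single shared branch embeds both inputs,
# and their Euclidean distance decides whether they match. The linear
# + ReLU embedding is an illustrative stand-in for a VGG-16-style branch.

def embed(x, W):
    """Shared branch: one linear layer + ReLU."""
    return np.maximum(W @ x, 0.0)

def siamese_distance(x1, x2, W):
    """Euclidean distance between the two shared embeddings."""
    return float(np.linalg.norm(embed(x1, W) - embed(x2, W)))

def same_class(x1, x2, W, margin=1.0):
    return siamese_distance(x1, x2, W) < margin

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
# An input is always at distance 0 from itself:
assert siamese_distance(x, x, W) == 0.0
```

Because the branch weights are shared, each training pair effectively supervises the same parameters twice, which is one reason Siamese setups are attractive when image samples are scarce, as described above.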


2019 ◽  
Vol 2019 ◽  
pp. 1-17 ◽  
Author(s):  
Sufian A. Badawi ◽  
Muhammad Moazam Fraz

Classification of the retinal vasculature into arterioles and venules (AV) is considered the first step in developing an automated system for analysing the association of vasculature biomarkers with disease prognosis. Most existing AV classification methods depend on accurate segmentation of the retinal blood vessels. Moreover, the unavailability of large-scale annotated data is a major hindrance to applying deep learning techniques to AV classification. This paper presents an encoder-decoder based fully convolutional neural network for classifying the retinal vasculature into arterioles and venules without the preliminary step of vessel segmentation. An optimized multi-loss function is used to learn the pixel-wise and segment-wise retinal vessel labels. The proposed method is trained and evaluated on DRIVE, AVRDB, and a newly created AV classification dataset, attaining 96%, 98%, and 97% accuracy, respectively. The new AV classification dataset comprises 700 annotated retinal images, which will offer researchers a benchmark for comparing AV classification results.
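A multi-loss of the kind described, combining a pixel-wise term with a segment-wise term, can be sketched as a weighted sum. The paper's exact segment-wise loss is not reproduced here; a mean per-segment cross-entropy is used as an illustrative stand-in.

```python
import numpy as np

# Sketch of a combined multi-loss: pixel-wise cross-entropy plus a
# segment-wise term (mean cross-entropy per vessel segment), mixed with
# a weight alpha. The segment-wise term is an illustrative stand-in.

def cross_entropy(p, y, eps=1e-12):
    """Mean cross-entropy: p[i] holds class probabilities, y[i] the true class."""
    return float(-np.mean(np.log(p[np.arange(len(y)), y] + eps)))

def multi_loss(p, y, segments, alpha=0.5):
    pixel_loss = cross_entropy(p, y)
    seg_losses = [cross_entropy(p[segments == s], y[segments == s])
                  for s in np.unique(segments)]
    return alpha * pixel_loss + (1 - alpha) * float(np.mean(seg_losses))

# With confident correct predictions, the combined loss is small:
p = np.array([[0.99, 0.01], [0.01, 0.99]])
y = np.array([0, 1])
segments = np.array([0, 1])
loss = multi_loss(p, y, segments)
```

Weighting the two terms lets training balance per-pixel accuracy against consistency of labels along each vessel segment.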


2021 ◽  
Author(s):  
Daniel J Delbarre ◽  
Luis Santos ◽  
Habib Ganjgahi ◽  
Neil Horner ◽  
Aaron McCoy ◽  
...  

Large scale neuroimaging datasets present unique challenges for automated processing pipelines. Motivated by a large-scale clinical trials dataset of Multiple Sclerosis (MS) with over 235,000 magnetic resonance imaging (MRI) scans, we consider the challenge of defacing - anonymisation to remove identifying features on the face and the ears. The defacing process must undergo quality control (QC) checks to ensure that the facial features have been adequately anonymised and that the brain tissue is left completely intact. Visual QC checks - particularly on a project of this scale - are time-consuming and can cause delays in preparing data for research. In this study, we have developed a convolutional neural network (CNN) that can assist with the QC of MRI defacing. Our CNN is able to distinguish between scans that are correctly defaced, and three sub-types of failures with high test accuracy (77%). Through applying visualisation techniques, we are able to verify that the CNN uses the same anatomical features as human scorers when selecting classifications. Due to the sensitive nature of the data, strict thresholds are applied so that only classifications with high confidence are accepted, and scans that are passed by the CNN undergo a time-efficient verification check. Integration of the network into the anonymisation pipeline has led to nearly half of all scans being classified by the CNN, resulting in a considerable reduction in the amount of time needed for manual QC checks, while maintaining high QC standards to protect patient identities.
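The strict-threshold acceptance rule described above can be sketched as follows: a scan's predicted class is accepted only when the network's top softmax probability clears a threshold, and everything else falls back to manual review. The class names and threshold values here are illustrative, not the ones used in the study.

```python
import numpy as np

# Sketch of confidence-thresholded QC: only high-confidence predictions
# are accepted; uncertain scans are routed to manual review. The labels
# and the 0.95 threshold are illustrative assumptions.

CLASSES = ["pass", "fail_type_1", "fail_type_2", "fail_type_3"]

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def qc_decision(logits, thresholds):
    p = softmax(np.asarray(logits, dtype=float))
    k = int(np.argmax(p))
    if p[k] >= thresholds[CLASSES[k]]:
        return CLASSES[k]
    return "manual_review"

thresholds = {c: 0.95 for c in CLASSES}
qc_decision([8.0, 0.0, 0.0, 0.0], thresholds)   # confident -> "pass"
qc_decision([1.0, 0.9, 0.0, 0.0], thresholds)   # uncertain -> "manual_review"
```

Per-class thresholds make it possible to be stricter about "pass" (where a mistake could leak identifying features) than about the failure sub-types.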


SinkrOn ◽  
2020 ◽  
Vol 5 (2) ◽  
pp. 199-207
Author(s):  
Mawaddah Harahap ◽  
Jefferson Jefferson ◽  
Surya Barti ◽  
Suprianto Samosir ◽  
Christi Andika Turnip

Malaria is a disease caused by Plasmodium parasites, which attack red blood cells. Malaria can be diagnosed by examining a patient's red blood cells under a microscope. The Convolutional Neural Network (CNN) is a rapidly developing deep learning method that is often used in image classification. The CNN process usually requires considerable resources, which is one of its weaknesses. In this study, the CNN architectures used to classify red blood cell images were LeNet-5 and DRNet. The data used were segmented images of red blood cells obtained from a secondary source. Before training, pre-processing and augmentation of the dataset were carried out. The LeNet-5 and DRNet models had 4 and 7 layers, and achieved test accuracies of 95% and 97.3%, respectively. From the test results, the LeNet-5 model was found to be more suitable for red blood cell classification: with only 4 layers, it reduces the resources needed to perform classification compared with previous studies while obtaining comparable accuracy.
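The resource argument above, that fewer layers mean a cheaper model, can be made concrete by counting trainable parameters per convolutional layer: `(kernel_h * kernel_w * in_channels + 1) * out_channels`. The layer shapes below are illustrative, not the exact 4- and 7-layer configurations used in the study.

```python
# Rough model-cost comparison via parameter counting. Each conv layer
# contributes (kh * kw * c_in + 1 bias) * c_out trainable parameters.
# The shapes are illustrative, not the study's actual configurations.

def conv_params(kh, kw, c_in, c_out):
    return (kh * kw * c_in + 1) * c_out

# A shallow LeNet-5-style stack vs. the same stack with two extra layers:
small_net = [conv_params(5, 5, 1, 6), conv_params(5, 5, 6, 16)]
deeper_net = small_net + [conv_params(3, 3, 16, 32), conv_params(3, 3, 32, 64)]

assert sum(deeper_net) > sum(small_net)  # more layers -> more parameters
```

Parameter count is only a proxy for cost (activations and FLOPs also matter), but it captures the trade-off the abstract describes: a shallower network trades a little accuracy for a much smaller footprint.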


2021 ◽  
Vol 6 (1) ◽  
pp. 90
Author(s):  
Muhammad Fathur Prayuda

The human face serves various functions, especially in expressing emotion. Each expression has a distinctive shape, which makes it possible to recognise the feeling currently being experienced; the appearance of a feeling is usually caused by an emotion. Research on the classification of emotions has been carried out using various methods. In this study, a Convolutional Neural Network (CNN) method was used as a classifier for sad and depressive emotions. The CNN method has the advantage of convolutional pre-processing, which allows it to extract hidden features from an image. The dataset used in this study came from the facial expression dataset image folders (fer2013), split for classification in a ratio of 60% training to 40% validation; the trained model achieved a total loss of 60% and a test accuracy of 68%.
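A 60/40 train/validation split of the kind described can be sketched as a seeded shuffle-and-cut; here the "dataset" is just a list of sample identifiers, since the fer2013 file layout is not reproduced from the study.

```python
import random

# Sketch of a reproducible 60/40 train/validation split. Samples are
# represented by identifiers; the fer2013 folder layout is assumed,
# not taken from the study.

def split_dataset(samples, train_ratio=0.6, seed=42):
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)   # seeded for reproducibility
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

train, val = split_dataset(list(range(100)))
assert len(train) == 60 and len(val) == 40
```

Shuffling before the cut matters when the source folders are ordered by class; otherwise the validation set could contain only the last classes.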


2020 ◽  
Vol 2020 (4) ◽  
pp. 4-14
Author(s):  
Vladimir Budak ◽  
Ekaterina Ilyina

The article proposes a classification of lenses with different symmetrical beam angles and offers a scale serving as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of axial light intensity on beam angle was obtained. Transfer training of a new deep convolutional neural network (CNN) based on the pre-trained GoogLeNet was performed using this collection. Grad-CAM analysis showed that the trained network correctly identifies the features of the objects. This work allows arbitrary spotlights to be classified with an accuracy of about 80%. Thus, a lighting designer can determine the class of a spotlight and the corresponding type of lens, with its technical parameters, using this new CNN-based model.
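The dependence of axial intensity on beam angle obtained in the article is empirical; for orientation, the textbook idealisation for a uniform beam is I = Φ / Ω, where Ω = 2π(1 − cos(θ/2)) is the beam's solid angle. The sketch below computes that idealised relation, not the curve fitted from the 788 measured devices.

```python
import math

# Idealised axial intensity of a uniform beam: I = flux / solid angle,
# with solid angle Omega = 2*pi*(1 - cos(theta/2)). This is a textbook
# approximation, not the empirical dependence fitted in the article.

def axial_intensity(flux_lm, beam_angle_deg):
    """Axial intensity (cd) of an ideal uniform beam of the given full angle."""
    half = math.radians(beam_angle_deg) / 2.0
    omega = 2.0 * math.pi * (1.0 - math.cos(half))  # solid angle, sr
    return flux_lm / omega

# Narrower beams concentrate the same flux into a higher axial intensity:
assert axial_intensity(1000, 10) > axial_intensity(1000, 60)
```

Real spotlights deviate from this uniform-beam model (non-flat profiles, spill light), which is why an empirical fit over many devices is the more useful reference for a lighting designer.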


Author(s):  
P.L. Nikolaev

This article deals with a method for binary classification of images containing small text. The classification is based on the fact that the text can have two orientations: it can be positioned horizontally and read from left to right, or it can be rotated 180 degrees, so that the image must be turned before the sign can be read. This type of text can be found on the covers of a variety of books, so when recognizing covers it is necessary to determine the direction of the text before recognizing the text itself. The article describes the development of a deep neural network for determining the text orientation in the context of book cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network operating on real data, are presented.
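Synthetic training pairs for this two-orientation task can be generated mechanically: every correctly oriented text image yields a second sample rotated by 180 degrees with the opposite label. The labelling convention (0 = upright, 1 = upside down) is an assumption for illustration.

```python
import numpy as np

# Sketch of synthetic pair generation for the orientation classifier:
# each upright image produces an upside-down copy with the other label.
# Label convention (0 = upright, 1 = upside down) is assumed here.

def make_pair(image):
    rotated = np.rot90(image, 2)  # two 90-degree turns = 180 degrees
    return [(image, 0), (rotated, 1)]

img = np.array([[1, 2],
                [3, 4]])
pair = make_pair(img)
# Rotating the rotated copy by 180 degrees again recovers the original:
assert np.array_equal(np.rot90(pair[1][0], 2), img)
```

This doubling keeps the two classes perfectly balanced by construction, which is convenient for a binary task trained purely on synthetic data.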


2020 ◽  
Vol 14 ◽  
Author(s):  
Lahari Tipirneni ◽  
Rizwan Patan

Abstract: Breast cancer causes millions of deaths all over the world every year and has become the most common type of cancer in women. Early detection helps achieve a better prognosis and increases the chance of survival. Automating classification using Computer-Aided Diagnosis (CAD) systems can make the diagnosis less prone to errors. Both multi-class and binary classification of breast cancer are challenging problems. A single convolutional neural network architecture extracts specific feature descriptors from images, which cannot represent all the different types of breast cancer; this leads to false positives in classification, which is undesirable in disease diagnosis. The current paper presents an ensemble convolutional neural network for multi-class and binary classification of breast cancer. The feature descriptors from each member network are combined to produce the final classification. In this paper, histopathological images are taken from the publicly available BreakHis dataset and classified into 8 classes. The proposed ensemble model performs better than the methods proposed in the literature, and the results show that it could be a viable approach for breast cancer classification.
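Descriptor-level ensembling of the kind described above can be sketched by concatenating the feature vectors from each member network and letting a single classifier make the final prediction. The member extractors and the linear classifier below are hypothetical stand-ins, not the paper's networks.

```python
import numpy as np

# Sketch of descriptor-level ensembling: each member network produces a
# feature vector, the vectors are concatenated, and one classifier makes
# the final decision. Extractors and classifier are illustrative stand-ins.

def ensemble_features(image, extractors):
    return np.concatenate([f(image) for f in extractors])

def classify(features, W, b):
    """Linear head over the combined descriptor; returns the class index."""
    return int(np.argmax(W @ features + b))

rng = np.random.default_rng(1)
extractors = [lambda img: rng.standard_normal(16) for _ in range(3)]
# Three 16-dim descriptors concatenate into one 48-dim combined vector:
feats = ensemble_features(None, extractors)
assert feats.shape == (48,)
```

Concatenation preserves each member's descriptor intact, so complementary features from the different architectures can all reach the final classifier, which is the motivation the abstract gives for ensembling.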

