An Automated Tomato Maturity Grading System Using Transfer Learning Based AlexNet

2021 ◽  
Vol 26 (2) ◽  
pp. 191-200
Author(s):  
Prasenjit Das ◽  
Jay Kant Pratap Singh Yadav ◽  
Arun Kumar Yadav

Tomato maturity classification is the process of classifying tomatoes according to their stage in the ripening life cycle. A tomato is green when it starts to grow, yellow at its pre-ripening stage, and red when fully ripened. Thus, a tomato maturity classification task can be performed based on the color of the tomatoes. Conventional manual, skill-based methods cannot meet the precise selection criteria of modern production management in the agriculture sector, since they are time-consuming and have poor accuracy. The automatic feature extraction behavior of deep learning networks is highly efficient in image classification and recognition tasks. Hence, this paper outlines an automated grading system for tomato maturity classification by color (red, green, yellow) using the pre-trained network AlexNet with transfer learning. The study aims to formulate a low-cost solution with the best possible performance and accuracy for tomato maturity grading. Results are reported in terms of accuracy, loss curves, and a confusion matrix. They show that the proposed model outperforms the other deep learning and machine learning (ML) techniques used for tomato classification tasks in recent years, obtaining 100% accuracy.

F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 1057
Author(s):  
Muhammad Nurmahir Mohamad Sehmi ◽  
Mohammad Faizal Ahmad Fauzi ◽  
Wan Siti Halimatul Munirah Wan Ahmad ◽  
Elaine Wan Ling Chan

Background: Pancreatic cancer is one of the deadliest forms of cancer. Cancer grades define how aggressively the cancer will spread and give doctors an indication for proper prognosis and treatment. The current method of pancreatic cancer grading, manual examination of the cancerous tissue following a biopsy, is time-consuming and often results in misdiagnosis and thus incorrect treatment. This paper presents an automated grading system for pancreatic cancer from pathology images, developed by comparing deep learning models on two different pathological stains.
Methods: A transfer-learning technique was adopted by testing the method on 14 different ImageNet pre-trained models. The models were fine-tuned on our dataset.
Results: DenseNet models proved the best at classifying the validation set, with up to 95.61% accuracy in grading pancreatic cancer despite the small sample set.
Conclusions: To the best of our knowledge, this is the first work on grading pancreatic cancer from pathology images. Previous works have focused either on detection only (benign or malignant) or on radiology images (computerized tomography [CT], magnetic resonance imaging [MRI], etc.). The proposed system can be very useful to pathologists in facilitating an automated or semi-automated cancer grading system, addressing the problems found in manual grading.


2021 ◽  
Author(s):  
Ghassan Mohammed Halawani

The main purpose of this project is to modify a convolutional neural network for image classification, based on a deep-learning framework. A transfer learning technique is applied through the MATLAB interface to AlexNet, retraining the parameters of the last two fully connected layers of AlexNet on a new dataset so that the network can classify thousands of images. First, the common general architecture of most neural networks and its benefits are presented. The mathematical models and the role of each part of the neural network are explained in detail. Second, different neural networks are compared in terms of architecture, application, and working method to highlight the strengths and weaknesses of each. The final part conducts a detailed study of one of the most powerful deep-learning networks in image classification, the convolutional neural network, and how it can be adapted to different classification tasks using the transfer learning technique in MATLAB.


2021 ◽  
Vol 30 (1) ◽  
pp. 1-18
Author(s):  
Yusuf Hendrawan ◽  
Shinta Widyaningtyas ◽  
Muchammad Riza Fauzy ◽  
Sucipto Sucipto ◽  
Retno Damayanti ◽  
...  

Luwak coffee (palm civet coffee) is known as one of the most expensive coffees in the world. To lower production costs, Indonesian producers and retailers often mix high-priced Luwak coffee green beans with regular coffee green beans. However, the absence of tools and methods to detect Luwak coffee counterfeiting makes the development of a sensing method urgent. This research aimed to detect and classify the purity of Luwak coffee green beans into four categories: very low (0-25%), low (25-50%), medium (50-75%), and high (75-100%). The classification method relied on a low-cost commercial visible-light camera and deep learning models. The research also compared the performance of four pre-trained convolutional neural network (CNN) models: SqueezeNet, GoogLeNet, ResNet-50, and AlexNet. A sensitivity analysis was performed by varying the CNN hyperparameters, namely the optimization technique (SGDm, Adam, RMSProp) and the initial learning rate (0.00005 and 0.0001). Training and validation identified GoogLeNet with the Adam optimizer and a learning rate of 0.0001 as the best CNN model, with 89.65% accuracy. In testing on separate sample data using a confusion matrix, the best CNN model was ResNet-50 with the RMSProp optimizer and a learning rate of 0.0001, with an average accuracy of up to 85.00%. The CNN model can later be used to establish a real-time, non-destructive, rapid, and precise purity detection system.
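The sensitivity analysis above sweeps a small grid of model, optimizer, and learning-rate settings. A minimal sketch of how such a grid can be enumerated is shown below; the `evaluate` function is a hypothetical placeholder for one training-and-validation run, not the authors' code.

```python
# Enumerate the 4-model x 3-optimizer x 2-learning-rate sweep
# described above and pick the best-scoring configuration.
from itertools import product

model_names = ["SqueezeNet", "GoogLeNet", "ResNet-50", "AlexNet"]
optimizers = ["SGDm", "Adam", "RMSProp"]
learning_rates = [0.00005, 0.0001]

def evaluate(model_name, optimizer, lr):
    # Placeholder: a real run would train the CNN with these settings
    # and return validation accuracy.
    return 0.0

grid = list(product(model_names, optimizers, learning_rates))
results = {cfg: evaluate(*cfg) for cfg in grid}
best_config = max(results, key=results.get)
```

With 4 models, 3 optimizers, and 2 learning rates, the sweep covers 24 configurations, each requiring a full training run.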


Author(s):  
Xia Yu ◽  
Tao Yang ◽  
Jingyi Lu ◽  
Yun Shen ◽  
Wei Lu ◽  
...  

Blood glucose (BG) prediction is an effective approach to avoiding hyper- and hypoglycemia and achieving intelligent glucose management for patients with type 1 or serious type 2 diabetes. Recent studies have tended to adopt deep learning networks to obtain improved prediction models and more accurate prediction results, which often require significant quantities of historical continuous glucose monitoring (CGM) data. For new patients with a limited historical dataset, however, it is difficult to establish an acceptable deep learning network for glucose prediction. Consequently, the goal of this study was to design a novel prediction framework with instance-based and network-based deep transfer learning for cross-subject glucose prediction based on segmented CGM time series. Taking the physiological diversity between subjects into consideration, dynamic time warping (DTW) was applied to determine the source-domain dataset that shared the greatest degree of similarity with a new subject. A network-based deep transfer learning method was then designed on the cross-domain dataset to obtain a personalized model with improved generalization capability. In a case study on a clinical dataset, the proposed deep transfer learning framework, given additional segmented data from other subjects, achieved more accurate glucose predictions for new subjects with type 2 diabetes.
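The DTW step above, which ranks candidate source-domain series by similarity to a new subject, can be sketched with the textbook dynamic-programming formulation. This is a generic DTW, not the authors' implementation, and the subject names and glucose values are illustrative.

```python
# Classic dynamic-programming DTW distance between two 1-D series,
# used here to rank candidate source-domain CGM segments by
# similarity to a new subject's segment.
def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = DTW distance between prefixes a[:i] and b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Rank hypothetical source subjects against a target CGM segment.
target = [5.0, 5.5, 6.1, 7.0]
sources = {"subject_A": [5.1, 5.6, 6.0, 7.1],
           "subject_B": [9.0, 9.5, 10.2, 11.0]}
best_source = min(sources, key=lambda s: dtw_distance(target, sources[s]))
```

Unlike a pointwise Euclidean distance, DTW allows small temporal misalignments between subjects' glucose curves, which is why it suits cross-subject matching.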


2021 ◽  
Vol 924 (1) ◽  
pp. 012022
Author(s):  
Y Hendrawan ◽  
B Rohmatulloh ◽  
I Prakoso ◽  
V Liana ◽  
M R Fauzy ◽  
...  

Tempe is a traditional food originating from Indonesia, made by fermenting soybeans with Rhizopus mold. The purpose of this study was to classify three quality levels of soybean tempe, i.e., fresh, consumable, and non-consumable, using convolutional neural network (CNN) based deep learning. Four types of pre-trained CNNs were used in this study: SqueezeNet, GoogLeNet, ResNet50, and AlexNet. The sensitivity analysis showed that the highest quality-classification accuracy for soybean tempe, 100%, was achieved by six configurations: AlexNet with the SGDm optimizer and a learning rate of 0.0001; GoogLeNet with Adam at 0.0001; GoogLeNet with RMSProp at 0.0001; ResNet50 with Adam at 0.00005; ResNet50 with Adam at 0.0001; and SqueezeNet with RMSProp at 0.0001. In further testing on the testing-set data, the classification accuracy based on the confusion matrix reached 98.33%. The combination of the CNN model and a low-cost commercial digital camera can later be used to detect the quality of soybean tempe, with the advantages of being non-destructive, rapid, accurate, low-cost, and real-time.
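Accuracy figures like the 98.33% above are read directly off the confusion matrix: correct predictions lie on the diagonal. A minimal sketch follows; the 3x3 counts are made up for illustration, not the paper's data.

```python
# Overall accuracy from a confusion matrix: the diagonal holds the
# correctly classified samples, the full matrix holds all samples.
def accuracy_from_confusion(matrix):
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

# Hypothetical counts for fresh / consumable / non-consumable classes.
cm = [[20,  0,  0],
      [ 1, 19,  0],
      [ 0,  0, 20]]
acc = accuracy_from_confusion(cm)  # 59/60, about 0.9833
```

The same matrix also yields per-class precision and recall, which is why papers report it alongside overall accuracy.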


Deep learning is a subset of machine learning, which is itself a subfield of AI. The features that differentiate deep learning networks from "canonical" feedforward multilayer networks are: more neurons than previous networks, more complex ways of connecting layers, a "Cambrian explosion" of computing power available for training, and automatic feature extraction. Deep learning can be defined as neural networks with a large number of parameters and layers in their fundamental architectures. Some of these architectures are convolutional neural networks (CNNs), recurrent neural networks, recursive neural networks, R-CNN (region-based CNN), Fast R-CNN, GoogLeNet, YOLO (You Only Look Once), single-shot detectors, SegNet, and GANs (generative adversarial networks). Different architectures work well with different types of datasets. Object detection is an important computer vision problem with a variety of applications; the tasks involved are classification, object localisation, and instance segmentation. This paper discusses how the different architectures are used to detect objects.
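Detectors such as YOLO and single-shot detectors mentioned above are conventionally scored by intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal sketch follows; the corner-coordinate box format and the 0.5 threshold are common conventions, not details from this paper.

```python
# Intersection-over-union for axis-aligned boxes given as
# (x1, y1, x2, y2). A detection is typically counted correct when
# IoU against the ground-truth box exceeds a threshold such as 0.5.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to zero when disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0
```

IoU underlies both evaluation metrics (mAP) and training-time target assignment in most of the detection architectures listed.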


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Priyanka Yadlapalli ◽  
D. Bhavana ◽  
Suryanarayana Gunnam

Purpose: Computed tomography (CT) scans can provide valuable information in the diagnosis of lung diseases. This work uses novel deep learning methods to detect the location of cancerous lung nodules. The majority of early investigations used CT, magnetic resonance, and mammography imaging, with a professional doctor analysing these images using appropriate procedures to discover and diagnose the various degrees of lung cancer. All of these methods of discovering and detecting cancer are time-consuming, expensive, and stressful for the patients. To address these issues, appropriate deep learning approaches were utilized for analyzing the medical images, which here consist of CT scan images.
Design/methodology/approach: Radiologists currently employ chest CT scans to detect lung cancer at an early stage. In certain situations, detection depends on the radiologists' perception, and lung melanoma can be incorrectly identified. Deep learning is a new, capable, and influential approach to predicting from medical images. In this paper, the authors employed deep transfer learning algorithms for intelligent classification of lung nodules. Convolutional neural networks (VGG16, VGG19, MobileNet, and DenseNet169) are used, with the input and output layers adapted to a chest CT scan image dataset.
Findings: The collection includes normal chest CT scan images as well as images from two kinds of lung cancer, squamous cell carcinoma and adenocarcinoma. According to the confusion matrix results, analyzed using Google Colaboratory, the VGG16 transfer learning technique has the highest lung cancer classification accuracy at 91.28%, followed by VGG19 with 89.39%, MobileNet with 85.60%, and DenseNet169 with 83.71%.
Originality/value: The proposed approach using VGG16 maximizes classification accuracy compared to VGG19, MobileNet, and DenseNet169. The results are validated by computing the confusion matrix for each network type.


Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 3142 ◽  
Author(s):  
Qizhen Zhou ◽  
Jianchun Xing ◽  
Wei Chen ◽  
Xuewei Zhang ◽  
Qiliang Yang

Gesture recognition is a key enabler for user-friendly human-computer interfaces (HCI). To bridge the human-computer barrier, numerous efforts have been devoted to designing accurate fine-grained gesture recognition systems. Recent advances in wireless sensing hold promise for a ubiquitous, non-invasive, and low-cost system built on existing Wi-Fi infrastructure. In this paper, we propose DeepNum, which enables fine-grained finger gesture recognition with only a pair of commercial Wi-Fi devices. The key insight of DeepNum is to incorporate the quintessence of deep learning-based image processing so as to better depict the influence induced by subtle finger movements. In particular, we make multiple efforts to transform sensitive Channel State Information (CSI) into depth radio images, including antenna selection, gesture segmentation, and image construction, followed by noisy-image purification using high-dimensional relations. To fulfill the restrictive size requirements of the deep learning model, we propose a novel region-selection method to constrain the image size and select qualified regions with dominant color and texture features. Finally, a 7-layer convolutional neural network (CNN) and a softmax function are adopted to achieve automatic feature extraction and accurate gesture classification. Experimental results demonstrate the excellent performance of DeepNum, which recognizes 10 finger gestures with an overall accuracy of 98% in three typical indoor scenarios.
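The softmax stage above turns the CNN's 10 raw class scores into probabilities, and the predicted gesture is the class with the highest probability. A minimal numerically stable sketch follows; the score values are illustrative, not taken from DeepNum.

```python
# Numerically stable softmax: subtracting the max score avoids
# overflow in exp() without changing the resulting probabilities.
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for 10 finger-gesture classes; the
# predicted gesture is the argmax of the probabilities.
probs = softmax([1.2, 0.3, 4.0, -1.0, 0.0, 0.5, 2.1, 0.1, -0.4, 0.9])
predicted = probs.index(max(probs))
```

Subtracting the maximum is the standard trick for large logits: the shifted exponentials stay bounded, while the ratios, and hence the probabilities, are unchanged.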

