Corn Disease Classification Using Transfer Learning and Convolutional Neural Network

2021 ◽  
Vol 9 (2) ◽  
pp. 211
Author(s):  
Faisal Dharma Adhinata ◽  
Gita Fadila Fitriana ◽  
Aditya Wijayanto ◽  
Muhammad Pajar Kharisma Putra

Indonesia is an agricultural country with abundant agricultural products. One of the crops used as a staple food in Indonesia is corn. Corn plants must be protected from diseases so that the quality of the harvest can be optimal. Early detection of disease in corn plants is needed so that farmers can provide treatment quickly and precisely. Previous research used machine learning techniques to solve this problem, but the results were not optimal because the amount of data used was small and insufficiently varied. We therefore propose a technique that can process a larger and more varied dataset, in the hope that the resulting system is more accurate than previous work. This research uses transfer learning for feature extraction combined with a Convolutional Neural Network as the classifier. We analysed the combination of DenseNet201 with either a Flatten or a Global Average Pooling layer. The experimental results show that DenseNet201 with a Global Average Pooling layer is more accurate than DenseNet201 with a Flatten layer. The accuracy obtained is 93%, which shows the proposed system to be more accurate than previous studies.
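The practical difference between the two heads is the size of the classifier they feed. A minimal arithmetic sketch, assuming DenseNet201's 7×7×1920 final feature map for 224×224 inputs and a hypothetical four-class corn-disease head:

```python
# DenseNet201 emits a 7x7x1920 feature map for 224x224 inputs.
h, w, c = 7, 7, 1920
n_classes = 4  # hypothetical: healthy plus three corn diseases

# Flatten head: one weight per spatial position, per channel, per class.
flatten_weights = h * w * c * n_classes

# Global Average Pooling head: spatial dimensions are averaged away first,
# so only one weight per channel per class remains.
gap_weights = c * n_classes

print(flatten_weights, gap_weights)  # 376320 7680
```

The roughly 49× smaller GAP head is one common explanation for its better generalisation on modest datasets: far fewer weights to overfit.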

Author(s):  
Aires Da Conceicao ◽  
Sheshang D. Degadwala

A self-driving vehicle is a vehicle that can drive by itself, that is, without human interaction. This system shows how a computer can learn the art of driving using machine learning techniques. These techniques include a lane-line tracker, robust feature extraction and a convolutional neural network.


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Josh Schaefferkoetter ◽  
Jianhua Yan ◽  
Claudia Ortega ◽  
Andrew Sertic ◽  
Eli Lechtman ◽  
...  

Abstract
Goal: PET is a relatively noisy process compared to other imaging modalities, and sparsity of acquisition data leads to noise in the images. Recent work has focused on machine learning techniques to improve PET images, and this study investigates a deep learning approach to improve the quality of reconstructed image volumes through denoising by a 3D convolutional neural network. Potential improvements were evaluated within a clinical context by physician performance in a reading task.
Methods: A wide range of controlled noise levels was emulated from a set of chest PET data in patients with lung cancer, and a convolutional neural network was trained to denoise the reconstructed images using the full-count reconstructions as the ground truth. The benefits over conventional Gaussian smoothing were quantified across all noise levels by observer performance in an image ranking and lesion detection task.
Results: The CNN-denoised images were generally ranked by the physicians equal to or better than the Gaussian-smoothed images at all count levels, with the largest effects observed in the lowest-count image sets. For the CNN-denoised images, overall lesion contrast recovery was 60% and 90% at the 1 and 20 million count levels, respectively. Notwithstanding the reduced lesion contrast recovery in noisy data, the CNN-denoised images also yielded better lesion detectability at low count levels. For example, at 1 million true counts, the average true positive detection rate was around 40% for the CNN-denoised images and 30% for the smoothed images.
Conclusion: Significant improvements were found for CNN denoising of very noisy images, and to some degree at all noise levels. However, the technique presented here offered limited benefit to detection performance at the count levels routinely encountered in the clinic.
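The training setup described here, low-count inputs supervised by full-count targets, can be illustrated with a toy sketch (not the authors' pipeline; the 16×16 count map and the 5% retention fraction are assumptions). Low-count acquisitions are commonly emulated by binomial thinning of the detected counts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "full-count" data: Poisson counts around a uniform activity map.
activity = np.full((16, 16), 100.0)
full_counts = rng.poisson(activity)

# Emulate a low-count acquisition by binomial thinning: each detected
# event is independently kept with probability p (here 5%).
p = 0.05
low_counts = rng.binomial(full_counts, p)

# Training pair for a denoising CNN: noisy input, full-count target.
# Rescaling by 1/p puts the input back in full-count units.
x, y = low_counts / p, full_counts
print(x.shape, y.shape)
```

Thinning preserves Poisson statistics, which is why it is a standard way to generate matched noisy/clean pairs from a single acquisition.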


Author(s):  
Nik Noor Akmal Abdul Hamid ◽  
Rabiatul Adawiya Razali ◽  
Zaidah Ibrahim

This paper presents a comparative study between Bag of Features (BoF), a conventional Convolutional Neural Network (CNN) and AlexNet for fruit recognition. Automatic fruit recognition can minimize human intervention in fruit harvesting operations, operation time and harvesting cost. On the other hand, this task is very challenging because of the similarities in shapes, colours and textures among various types of fruits. Thus, a robust technique that can produce good results is necessary. Due to the outstanding performance of deep learning methods like CNNs and pre-trained models like AlexNet in image recognition, this paper investigates the accuracy of a conventional CNN and AlexNet in recognizing thirty different types of fruits from a publicly available dataset. Besides that, the recognition performance of BoF is also examined, since it is one of the machine learning techniques that achieve good results in object recognition. The experimental results indicate that all three techniques produce excellent recognition accuracy. Furthermore, the conventional CNN achieves the fastest recognition compared to BoF and AlexNet.
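For readers unfamiliar with the BoF baseline, its core step is assigning each local descriptor to its nearest "visual word" in a learned codebook and histogramming the counts. A toy numpy sketch with an assumed three-word codebook and 2-D descriptors:

```python
import numpy as np

# Bag of Features in miniature: a codebook of "visual words" (normally
# learned by k-means over many descriptors) and a few toy descriptors.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # 3 visual words
descriptors = np.array([[0.1, 0.1], [0.9, 1.0], [0.1, 0.9], [0.0, 0.8]])

# Assign each descriptor to its nearest word, then count occurrences.
dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
words = dists.argmin(axis=1)
hist = np.bincount(words, minlength=len(codebook))
print(hist.tolist())  # [1, 1, 2]
```

The histogram `hist` is the fixed-length image representation that a classifier (e.g., an SVM) is then trained on.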


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Pankaj Kumar ◽  
Bhavna Bajpai ◽  
Deepak Omprakash Gupta ◽  
Dinesh C. Jain ◽  
S. Vimal

Purpose: The purpose of this study is to detect COVID-19 with the help of the DarkCovidNet architecture on patient images.
Design/methodology/approach: We used machine learning techniques with a convolutional neural network.
Findings: COVID-19 symptoms can be detected from patient CT scan images.
Originality/value: This paper presents a new architecture for detecting COVID-19 symptoms from patient computed tomography scan images.


Computers ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 46 ◽  
Author(s):  
Markus-Oliver Tamm ◽  
Yar Muhammad ◽  
Naveed Muhammad

Imagined speech is a relatively new electroencephalography (EEG) neuro-paradigm, which has seen little use in Brain-Computer Interface (BCI) applications. Imagined speech can be used to allow physically impaired patients to communicate and to use smart devices by imagining desired commands, then detecting and executing those commands in a smart device. The goal of this research is to verify previous classification attempts and then design a new, more efficient neural network that is noticeably less complex (fewer layers) yet achieves a comparable classification accuracy. The classifiers are designed to distinguish between EEG signal patterns corresponding to imagined speech of different vowels and words. This research uses a dataset in which 15 subjects imagine saying the five main vowels (a, e, i, o, u) and six different words. Two previous studies on imagined speech classification are verified, as they used the same dataset used here, and the replicated results are compared. The main goal of this study is to take the convolutional neural network (CNN) model proposed in one of the replicated studies and make it much simpler and less complex, while attempting to retain a similar accuracy. The pre-processing of the data is described, and a new CNN classifier with three different transfer learning methods is described and used to classify the EEG signals. Classification accuracy is used as the performance metric. The new proposed CNN, which uses half as many layers and less complex pre-processing methods, achieved a considerably lower accuracy, but still managed to outperform the initial model proposed by the authors of the dataset by a considerable margin. It is recommended that further studies investigating imagined speech classification use more data and more powerful machine learning techniques. Transfer learning proved beneficial and should be used to improve the effectiveness of neural networks.
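The abstract does not name its three transfer learning methods; as a generic illustration only, common variants differ in which pretrained layers stay frozen during retraining. The layer names and parameter counts below are hypothetical:

```python
# Hypothetical per-layer parameter counts for a small EEG CNN.
layers = {"conv1": 1_200, "conv2": 4_800, "dense_head": 900}

def trainable(frozen):
    """Parameters updated during retraining, given a set of frozen layers."""
    return sum(n for name, n in layers.items() if name not in frozen)

print(trainable(frozen={"conv1", "conv2"}))  # feature extraction: head only -> 900
print(trainable(frozen={"conv1"}))           # partial fine-tuning -> 5700
print(trainable(frozen=set()))               # full fine-tuning -> 6900
```

Freezing more layers reuses more of the pretrained representation and reduces the data needed to retrain, at the cost of flexibility on the new task.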


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Chanqin Quan ◽  
Lei Hua ◽  
Xiao Sun ◽  
Wenjun Bai

The plethora of biomedical relations embedded in medical logs (records) demands researchers' attention. Previous theoretical and practical work was restricted to traditional machine learning techniques. However, these methods are susceptible to the "vocabulary gap" and data sparseness, and their feature extraction cannot be automated. To address the aforementioned issues, in this work we propose a multichannel convolutional neural network (MCCNN) for automated biomedical relation extraction. The proposed model makes the following two contributions: (1) it enables the fusion of multiple (e.g., five) versions of word embeddings; (2) the need for manual feature engineering is obviated by automated feature learning with a convolutional neural network (CNN). We evaluated our model on two biomedical relation extraction tasks: drug-drug interaction (DDI) extraction and protein-protein interaction (PPI) extraction. For the DDI task, our system achieved an overall F-score of 70.2%, compared to 67.0% for the standard linear SVM-based system, on the DDIExtraction 2013 challenge dataset. For the PPI task, we evaluated our system on the AIMed and BioInfer PPI corpora; our system exceeded the state-of-the-art ensemble SVM system by 2.7% and 5.6% in F-score.
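The first contribution, fusing several embedding versions, amounts to stacking the embedding matrices as input channels, the same way an image CNN stacks RGB planes. A toy sketch with assumed sizes (the paper fuses five versions; the sequence length and embedding dimension here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
seq_len, dim, n_versions = 10, 50, 5  # toy sizes; five embedding versions

# One embedding matrix per version (e.g., trained on different corpora
# or with different algorithms), all for the same token sequence.
versions = [rng.normal(size=(seq_len, dim)) for _ in range(n_versions)]

# Stack them as input channels: each convolution filter then sees every
# version of each word position at once, letting the network fuse them.
multichannel = np.stack(versions, axis=0)  # (channels, seq_len, dim)
print(multichannel.shape)  # (5, 10, 50)
```

This channel view is what makes a standard 2-D convolution applicable without any manual feature engineering over the embeddings.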


A vast number of image processing and neural network approaches are currently being utilized in the analysis of various medical conditions. Malaria is a disease which can be diagnosed by examining blood smears, but when they are examined manually by a microscopist, the diagnosis can be error-prone because it depends upon the quality of the smear and the expertise of the microscopist. Among the various machine learning techniques, convolutional neural networks (CNNs) promise relatively higher accuracy. We propose an Optimized Step-Increase CNN (OSICNN) model to classify red blood cell images taken from thin blood smear samples as infected or non-infected with the malaria parasite. The proposed OSICNN model consists of four convolutional layers and shows comparable results when compared with other state-of-the-art models. The accuracy of identifying the parasite in red blood cells has been found to be 98.3% with the proposed model.
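The "step-increase" naming suggests filter counts that grow layer by layer. The sketch below walks toy shapes through four conv/pool stages; the 64×64 input, 3×3 kernels and 16→32→64→128 filter counts are all assumptions rather than the paper's published configuration:

```python
# Hypothetical shape walk through a four-conv-layer classifier in the
# spirit of OSICNN; sizes and filter counts are illustrative assumptions.
size, channels = 64, 3  # assumed 64x64 RGB red-blood-cell crop
for filters in (16, 32, 64, 128):
    size = (size - 3 + 1) // 2  # valid 3x3 conv, then 2x2 max-pooling
    channels = filters
    print(size, channels)
# The final feature map feeds a 2-way head: infected vs non-infected.
```

Each stage halves the spatial extent while increasing channel depth, the usual trade of spatial resolution for feature richness in image classifiers.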


Author(s):  
Aires Da Conceicao ◽  
Sheshang Degadwala

A self-driving vehicle is a vehicle that can drive by itself, that is, without human interaction. This system shows how a computer can learn the art of driving using machine learning techniques. For a car to achieve autonomy, it must reproduce the activities a human performs while driving. Those activities include control of the steering wheel. Different techniques exist to control the steering angle, and one of them is a CNN. In this article we show how a CNN can be used to predict the steering angle.
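Steering-angle prediction is a regression rather than a classification: the network ends in a single output unit instead of a softmax. A minimal sketch of such a head, where the ±25° steering limit and the tanh squashing are assumptions, not details from the article:

```python
import math

def steering_angle(features, weights, bias, max_deg=25.0):
    """Map the CNN's final feature vector to one bounded steering angle.

    A single linear unit produces a scalar, and tanh squashes it into
    the assumed +/-25 degree steering range.
    """
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return max_deg * math.tanh(z)

# Toy feature vector and weights, purely for illustration.
angle = steering_angle([0.2, -0.5, 0.1], [1.0, 0.4, -2.0], 0.0)
print(round(angle, 2))  # -4.93
```

Training then minimises a regression loss (typically mean squared error) between the predicted and the recorded human steering angle.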

