Image-Based Hot Pepper Disease and Pest Diagnosis Using Transfer Learning and Fine-Tuning

2021 ◽  
Vol 12 ◽  
Author(s):  
Yeong Hyeon Gu ◽  
Helin Yin ◽  
Dong Jin ◽  
Jong-Han Park ◽  
Seong Joon Yoo

Past studies of plant disease and pest recognition used classification methods that presented a single recognition result to the user. Unfortunately, an incorrect recognition result may be output, which can lead to further crop damage. To address this issue, there is a need for a system that suggests several candidate results and allows the user to make the final decision. In this study, we propose a method for diagnosing plant diseases and identifying pests using deep features based on transfer learning. To extract deep features, we employ pre-trained VGG and ResNet50 architectures based on the ImageNet dataset, and output disease and pest images similar to a query image via a k-nearest-neighbor algorithm. In this study, we use a total of 23,868 images of 19 types of hot-pepper diseases and pests, for which the proposed model achieves accuracies of 96.02% and 99.61%, respectively. We also measure the effects of fine-tuning and distance metrics. The results show that the use of fine-tuning-based deep features increases accuracy by approximately 0.7–7.38%, and that the Bray–Curtis distance yields accuracy approximately 0.65–1.51% higher than the Euclidean distance.
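
A minimal sketch of the retrieval idea described above, assuming TensorFlow/Keras and scikit-learn; the VGG16 backbone, the Bray–Curtis metric, and the placeholder arrays are illustrative stand-ins rather than the authors' exact pipeline.

```python
# Sketch: extract deep features with a pre-trained VGG16 backbone and
# retrieve the most similar disease/pest images with k-NN (Bray-Curtis).
# Data loading is replaced by random placeholder arrays.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.neighbors import NearestNeighbors

# Pre-trained backbone without the classification head; global average
# pooling turns the feature maps into a single 512-d vector per image.
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg")

def deep_features(images):
    """images: float array of shape (n, 224, 224, 3), RGB, 0-255 range."""
    return backbone.predict(preprocess_input(images.copy()), verbose=0)

gallery_images = np.random.rand(100, 224, 224, 3) * 255.0  # placeholder gallery
query_images = np.random.rand(5, 224, 224, 3) * 255.0      # placeholder queries

gallery_feats = deep_features(gallery_images)
query_feats = deep_features(query_images)

# Bray-Curtis distance (reported above as slightly better than Euclidean).
knn = NearestNeighbors(n_neighbors=5, metric="braycurtis").fit(gallery_feats)
distances, candidate_idx = knn.kneighbors(query_feats)
print(candidate_idx)  # indices of candidate images to show the user
```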

2021 ◽  
Vol 11 (1) ◽  
pp. 491-508
Author(s):  
Monika Lamba ◽  
Yogita Gigras ◽  
Anuradha Dhull

Detection of plant disease plays a crucial role in better understanding the economy of India in terms of agricultural productivity. Early recognition and categorization of diseases in plants is crucial, as disease can adversely affect the growth and development of a species. Numerous machine learning methods, such as SVM (support vector machine), random forest, KNN (k-nearest neighbor), Naïve Bayes, and decision tree, have been exploited for the recognition, discovery, and categorization of plant diseases; however, the advancement of machine learning through DL (deep learning) is expected to have tremendous potential for enhancing accuracy. This paper proposes a model for plant disease detection comprising Auto-Color Correlogram as the image filter and DL with different activation functions as the classifier. The proposed model is implemented on four different datasets to solve binary and multiclass subcategories of plant diseases. Using the proposed model, better results are achieved, with 99.4% accuracy and 99.9% sensitivity for the binary class and 99.2% accuracy for the multiclass case. The proposed model is shown to outperform other approaches, namely LibSVM, SMO (sequential minimal optimization), and DL with the softmax and softsign activation functions, in terms of F-measure, recall, MCC (Matthews correlation coefficient), specificity, and sensitivity.
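
A minimal Keras sketch of the comparison idea (a small dense classifier trained with different hidden-activation functions over precomputed image descriptors); the 64-dimensional features, layer widths, and random labels are assumptions, not the authors' Weka-based configuration.

```python
# Sketch: compare hidden activation functions for a small dense classifier
# over precomputed image features (e.g., colour-correlogram descriptors).
import numpy as np
from tensorflow.keras import layers, models

X = np.random.rand(500, 64)              # hypothetical 64-d descriptor vectors
y = np.random.randint(0, 2, size=500)    # binary healthy/diseased labels

def build_classifier(activation):
    model = models.Sequential([
        layers.Input(shape=(64,)),
        layers.Dense(32, activation=activation),
        layers.Dense(16, activation=activation),
        layers.Dense(1, activation="sigmoid"),  # binary output
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

for act in ["relu", "tanh", "softsign"]:
    clf = build_classifier(act)
    history = clf.fit(X, y, epochs=5, validation_split=0.2, verbose=0)
    print(act, history.history["val_accuracy"][-1])
```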


Agriculture ◽  
2020 ◽  
Vol 10 (10) ◽  
pp. 439 ◽  
Author(s):  
Helin Yin ◽  
Yeong Hyeon Gu ◽  
Chang-Jin Park ◽  
Jong-Han Park ◽  
Seong Joon Yoo

The use of conventional classification techniques to recognize diseases and pests can lead to incorrect judgments on whether crops are diseased or not. Additionally, hot pepper diseases such as “anthracnose” and “bacterial spot” can be misjudged, leading to incorrect disease recognition. To address these issues, multi-recognition methods, such as Google Cloud Vision, suggest multiple disease candidates and allow the user to make the final decision. Similarity-based image search techniques, along with multi-recognition, can also be used for this purpose. Content-based image retrieval techniques have been used in several conventional similarity-based image searches, using descriptors to extract features such as image color and edges. In this study, we use eight pre-trained deep learning models (VGG16, VGG19, ResNet50, etc.) to extract deep features from images. We conducted experiments using 28,011 images of 34 types of hot pepper diseases and pests. Disease and pest images similar to a query image were retrieved by applying the k-nearest-neighbor method to the deep features. From top-1 to top-5, when using the deep features based on the ResNet50 model, we achieved recognition accuracies of approximately 88.38–93.88% for diseases and approximately 95.38–98.42% for pests. The deep features extracted from the VGG16 and VGG19 models recorded the second and third highest performances, respectively. In the top-10 results, the deep features extracted from the ResNet50 model achieved accuracies of 85.6% and 93.62% for diseases and pests, respectively. Compared with a simple convolutional neural network (CNN) classification model, the proposed method recorded 8.62% higher accuracy for diseases and 14.86% higher accuracy for pests.
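
A sketch of how top-k recognition accuracy over retrieved neighbours might be computed once deep features have been extracted; the feature dimensionality, label counts, and random arrays below are placeholders, not the study's data.

```python
# Sketch: top-k accuracy of similarity search over pre-extracted deep
# features. A query counts as correct if any of its k nearest gallery
# images shares the query's disease/pest label.
import numpy as np
from sklearn.neighbors import NearestNeighbors

gallery_feats = np.random.rand(1000, 2048)       # e.g., ResNet50 features
gallery_labels = np.random.randint(0, 34, 1000)  # 34 disease/pest classes
query_feats = np.random.rand(200, 2048)
query_labels = np.random.randint(0, 34, 200)

nn = NearestNeighbors(n_neighbors=10).fit(gallery_feats)
_, idx = nn.kneighbors(query_feats)              # (200, 10) neighbour indices

for k in (1, 5, 10):
    hits = [(gallery_labels[idx[i, :k]] == query_labels[i]).any()
            for i in range(len(query_labels))]
    print(f"top-{k} accuracy: {np.mean(hits):.3f}")
```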


Author(s):  
Priti Bansal ◽  
Sumit Kumar ◽  
Ritesh Srivastava ◽  
Saksham Agarwal

The deadliest form of skin cancer is melanoma, which is curable if detected in time. Detecting melanoma using biopsy is a painful and time-consuming task, so medical experts use alternate means to diagnose it by extracting features from skin lesion images. Medical image diagnosis requires intelligent systems. Many intelligent systems based on image processing and machine learning have been proposed by researchers in the past to detect different kinds of diseases, and these are successfully used by healthcare organisations worldwide. Intelligent systems to detect melanoma from skin lesion images are also evolving, with the aim of improving the accuracy of melanoma detection, and feature extraction plays a critical role in them. In this paper, a model is proposed in which features are extracted using a convolutional neural network (CNN) with transfer learning, and a hierarchical classifier consisting of random forest (RF), k-nearest neighbor (KNN), and AdaBoost is used to detect melanoma from the extracted features. Experimental results show the effectiveness of the proposed model.
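
A sketch of combining RF, KNN, and AdaBoost over CNN-extracted features. The paper's exact hierarchical scheme is not detailed above, so a scikit-learn stacking ensemble is used here as an illustrative stand-in; the 512-d features and labels are placeholders.

```python
# Sketch: stacking RF, KNN and AdaBoost over CNN-extracted lesion features.
import numpy as np
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              StackingClassifier)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X = np.random.rand(600, 512)          # hypothetical CNN feature vectors
y = np.random.randint(0, 2, 600)      # melanoma vs. benign labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ensemble = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("ada", AdaBoostClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(),
)
ensemble.fit(X_tr, y_tr)
print("test accuracy:", ensemble.score(X_te, y_te))
```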


2020 ◽  
Vol 9 (2) ◽  
pp. 100-110
Author(s):  
Ahmad Mustafid ◽  
Muhammad Murah Pamuji ◽  
Siti Helmiyah

Deep learning is an essential technique for classification problems in machine learning, based on artificial neural networks. A general issue in deep learning is that it is data-hungry, requiring a plethora of data to train a model. Wayang is a shadow puppet theater art from Indonesia, especially in Javanese culture, and it has several characters that are difficult to distinguish. In this paper, we propose steps and techniques for classifying these characters and handling the small-dataset issue by using model selection, transfer learning, and fine-tuning to obtain efficient and precise accuracy for our classification problem. The research used 50 images per class for a total of 24 wayang character classes. We collected and implemented various architectures, from early deep learning models to the latest state-of-the-art proposals. The transfer learning and fine-tuning methods showed a significant increase in both training and validation accuracy. Using transfer learning, it was possible to design a deep learning model with good classifiers within a short time on a small dataset. Both EfficientNetB0 and MobileNetV3-Small reached 100% training accuracy, with validation accuracies of 98.33% and 98.75%, respectively.
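
A minimal sketch of the two-stage transfer-learning-then-fine-tuning recipe on a small dataset, assuming TensorFlow/Keras; the input size, learning rates, and omitted `train_ds`/`val_ds` pipelines are assumptions, with only the 24-class output taken from the text above.

```python
# Sketch: train a new head on a frozen ImageNet backbone, then unfreeze and
# fine-tune the whole network at a much lower learning rate.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.EfficientNetB0(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(224, 224, 3))
base.trainable = False                       # stage 1: frozen backbone

model = models.Sequential([
    base,
    layers.Dropout(0.2),
    layers.Dense(24, activation="softmax"),  # 24 wayang character classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # hypothetical datasets

# Stage 2: fine-tune all layers with a smaller learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```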


2021 ◽  
Vol 7 ◽  
pp. e557
Author(s):  
Priyal Sobti ◽  
Anand Nayyar ◽  
Niharika ◽  
Preeti Nagrath

Convolutional neural networks are widely used for image classification, typically through pretraining on ImageNet followed by fine-tuning, whereby features are adapted to the target task. ImageNet is a large database consisting of 15 million images belonging to 22,000 categories; images collected from the Web are labeled by human annotators using the Amazon Mechanical Turk crowd-sourcing tool. ImageNet is useful for transfer learning because of the sheer volume of its dataset and the number of object classes available. Transfer learning using pretrained models helps build computer vision models in an accurate and inexpensive manner: models that have been pretrained on substantial datasets are reused and repurposed for new requirements. Scene recognition is a widely used application of computer vision in many communities and industries, such as tourism. This study demonstrates multilabel scene classification using five architectures, namely VGG16, VGG19, ResNet50, InceptionV3, and Xception, with the ImageNet weights available in the Keras library, and comprehensively compares their performance. Finally, EnsemV3X is presented. The proposed model, with a reduced number of parameters, is superior to the state-of-the-art Inception and Xception models, demonstrating an accuracy of 91%.
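
The exact EnsemV3X architecture is not reproduced here; the following sketch only illustrates the underlying ensembling idea by averaging the predictions of ImageNet-pretrained InceptionV3 and Xception backbones fitted with new classification heads. The number of scene classes, the placeholder batch, and the omitted training step are assumptions.

```python
# Sketch: prediction-averaging ensemble of two ImageNet-pretrained backbones
# with new scene-classification heads (training omitted).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 6                               # hypothetical number of scene labels

def scene_model(backbone_fn):
    base = backbone_fn(weights="imagenet", include_top=False, pooling="avg",
                       input_shape=(299, 299, 3))
    base.trainable = False
    return models.Sequential([base, layers.Dense(NUM_CLASSES, activation="softmax")])

inception = scene_model(tf.keras.applications.InceptionV3)
xception = scene_model(tf.keras.applications.Xception)
# ... train both heads on the scene dataset ...

images = np.random.rand(4, 299, 299, 3)       # placeholder image batch
avg_probs = (inception.predict(images, verbose=0) +
             xception.predict(images, verbose=0)) / 2.0
print(avg_probs.argmax(axis=1))               # ensemble class predictions
```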


2019 ◽  
Vol 3 (Special Issue on First SACEE'19) ◽  
pp. 165-172
Author(s):  
Vincenzo Bianco ◽  
Giorgio Monti ◽  
Nicola Pio Belfiore

The use of friction pendulum devices has recently attracted the attention of both academic and professional engineers for the protection of structures in seismic areas. Although the effectiveness of these devices has been shown by experimental testing carried out worldwide, many aspects still need to be investigated for further improvement and optimisation. A thermo-mechanical model of a double friction pendulum device, based on the most recent modelling techniques adopted in multibody dynamics, is presented in this paper. The proposed model is based on the observation that sliding may not take place as ideally as indicated in the literature. On the contrary, the fulfilment of geometrical compatibility between the constitutive bodies during an earthquake suggests a very peculiar dynamic behaviour composed of a continuous alternation of sticking and slipping phases. The process of fine-tuning the selected modelling strategy, among those available to date, is also described.
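
The device model itself is not reproduced here; the generic sketch below only illustrates the stick-slip alternation mentioned above, for a single mass with Coulomb friction on a base whose velocity is prescribed. All parameter values are arbitrary assumptions.

```python
# Generic stick-slip illustration (not the authors' device model): a mass on a
# moving base, restrained by a spring, with Coulomb friction at the interface.
# Explicit time stepping shows the alternation of sticking and slipping phases.
import numpy as np

m, k, mu, g = 1.0, 50.0, 0.1, 9.81           # mass, stiffness, friction coefficient
dt, steps = 1e-3, 5000
x, v = 0.0, 0.0                              # mass displacement and velocity
stick_count = slip_count = 0

for i in range(steps):
    t = i * dt
    base_v = 0.5 * np.cos(2 * np.pi * t)               # prescribed base velocity
    base_a = -0.5 * 2 * np.pi * np.sin(2 * np.pi * t)  # its time derivative
    spring = -k * x
    needed = m * base_a - spring     # friction force required to keep sticking
    if abs(v - base_v) < 1e-3 and abs(needed) <= mu * m * g:
        v = base_v                   # stick: mass follows the base exactly
        stick_count += 1
    else:
        friction = -mu * m * g * np.sign(v - base_v)   # slip: kinetic friction
        v += dt * (spring + friction) / m
        slip_count += 1
    x += dt * v

print(f"stick steps: {stick_count}, slip steps: {slip_count}")
```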


Sensors ◽  
2019 ◽  
Vol 19 (22) ◽  
pp. 4850 ◽  
Author(s):  
Carlos S. Pereira ◽  
Raul Morais ◽  
Manuel J. C. S. Reis

Frequently, the vineyards in the Douro Region present multiple grape varieties per parcel and even per row. An automatic algorithm for grape variety identification is proposed as an integrated software component that can be applied, for example, to a robotic harvesting system. However, some issues and constraints in its development were highlighted, namely images captured in a natural environment, a low volume of images, high similarity among images of different grape varieties, leaf senescence, and significant changes in grapevine leaf and bunch images across harvest seasons, mainly due to adverse climatic conditions, diseases, and the presence of pesticides. In this paper, the performance of transfer learning and fine-tuning techniques based on the AlexNet architecture was evaluated when applied to the identification of grape varieties. Two natural vineyard image datasets were captured in different geographical locations and harvest seasons. To generate different datasets for training and classification, several image processing methods, including a proposed four-corners-in-one image warping algorithm, were used. The experimental results, obtained from an AlexNet-based transfer learning scheme trained on the image dataset pre-processed through the four-corners-in-one method, achieved a test accuracy of 77.30%. Applying this classifier model, an accuracy of 89.75% was reached on the popular Flavia leaf dataset. The results obtained by the proposed approach are promising and encouraging for helping Douro wine growers in the automatic task of identifying grape varieties.
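
A sketch of AlexNet-based transfer learning using torchvision; the number of varieties, optimizer settings, placeholder batch, and freezing choice are assumptions, and the paper's four-corners-in-one pre-processing is not reproduced.

```python
# Sketch: AlexNet transfer learning for grape-variety identification.
import torch
import torch.nn as nn
from torchvision import models

NUM_VARIETIES = 6                      # hypothetical number of grape varieties

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in model.features.parameters():  # freeze the convolutional feature extractor
    p.requires_grad = False
# replace the final fully connected layer with a new variety classifier
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_VARIETIES)

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# one illustrative training step on a placeholder batch
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, NUM_VARIETIES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```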


Author(s):  
Irfan Ullah Khan ◽  
Nida Aslam ◽  
Malak Aljabri ◽  
Sumayh S. Aljameel ◽  
Mariam Moataz Aly Kamaleldin ◽  
...  

The COVID-19 outbreak is currently one of the biggest challenges facing countries around the world, and millions of people have lost their lives due to COVID-19. Accurate early detection and identification of severe COVID-19 cases can therefore reduce the mortality rate and the likelihood of further complications. Machine Learning (ML) and Deep Learning (DL) models have been shown to be effective in the detection and diagnosis of several diseases, including COVID-19. This study used ML algorithms, such as Decision Tree (DT), Logistic Regression (LR), Random Forest (RF), Extreme Gradient Boosting (XGBoost), and K-Nearest Neighbor (KNN), as well as a DL model (six layers with ReLU activations and a sigmoid output layer), to predict mortality in COVID-19 cases. Models were trained on confirmed COVID-19 patients from 146 countries. A comparative analysis was performed among the ML and DL models using a reduced feature set. The best results were achieved by the proposed DL model, with an accuracy of 0.97. Experimental results reveal the significance of the proposed model over the baseline study in the literature with the reduced feature set.
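
A minimal Keras sketch of the described DL architecture (six ReLU layers and a sigmoid output) for binary mortality prediction; the layer widths, feature count, and random data are placeholders, not the study's dataset or tuned configuration.

```python
# Sketch: six ReLU dense layers followed by a sigmoid output for binary
# mortality prediction from tabular COVID-19 features.
import numpy as np
from tensorflow.keras import layers, models

NUM_FEATURES = 10                      # hypothetical size of the reduced feature set

model = models.Sequential(
    [layers.Input(shape=(NUM_FEATURES,))]
    + [layers.Dense(32, activation="relu") for _ in range(6)]  # six ReLU layers
    + [layers.Dense(1, activation="sigmoid")]                  # mortality probability
)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# placeholder data standing in for patient records
X = np.random.rand(1000, NUM_FEATURES)
y = np.random.randint(0, 2, 1000)
model.fit(X, y, epochs=3, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))
```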


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Young Jae Kim ◽  
Jang Pyo Bae ◽  
Jun-Won Chung ◽  
Dong Kyun Park ◽  
Kwang Gi Kim ◽  
...  

Colorectal cancer is known to occur in the gastrointestinal tract and is the third most common of the 27 major types of cancer in South Korea and worldwide. Colorectal polyps are known to increase the risk of developing colorectal cancer, and detected polyps need to be resected to reduce that risk. This research improved the performance of polyp classification through fine-tuning of a Network-in-Network (NIN) after applying a model pre-trained on the ImageNet database. Random shuffling was performed 20 times on 1000 colonoscopy images, with each shuffle divided into 800 training images and 200 test images, and accuracy was evaluated on the 200 test images in each of the 20 experiments. Three compared methods were constructed from AlexNet by transferring weights trained on three different state-of-the-art databases; a plain AlexNet-based method without transfer learning was also compared. The accuracy of the proposed method was statistically significantly higher than that of the four other state-of-the-art methods and showed an 18.9% improvement over the plain AlexNet-based method. The area under the curve was approximately 0.930 ± 0.020, and the recall rate was 0.929 ± 0.029. Given its high recall rate and accuracy, such an automatic algorithm can assist endoscopists in identifying adenomatous polyps and enable the timely resection of polyps at an early stage.
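
A sketch of the evaluation protocol described above (20 random shuffles of 1000 images into 800 training and 200 test samples, reporting mean and standard deviation of test accuracy); the classifier is a generic scikit-learn placeholder, not the fine-tuned Network-in-Network, and the feature arrays are random stand-ins.

```python
# Sketch: 20 repeated random 800/200 splits with accuracy mean and std.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X = np.random.rand(1000, 256)            # placeholder image features
y = np.random.randint(0, 2, 1000)        # polyp class labels

accuracies = []
for seed in range(20):                    # 20 random shuffles
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=800, test_size=200, shuffle=True, random_state=seed)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    accuracies.append(accuracy_score(y_te, clf.predict(X_te)))

print(f"accuracy: {np.mean(accuracies):.3f} ± {np.std(accuracies):.3f}")
```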

