Comparison of Image Texture Based Supervised Learning Classifiers for Strawberry Powdery Mildew Detection

2019 ◽  
Vol 1 (3) ◽  
pp. 434-452 ◽  
Author(s):  
Chang ◽  
Mahmud ◽  
Shin ◽  
Nguyen-Quang ◽  
Price ◽  
...  

Strawberry is an important fruit crop in Canada, but powdery mildew (PM) causes about 30–70% yield loss. Detecting PM with an image texture-based system is beneficial, as it identifies the symptoms at an earlier stage and reduces labour-intensive manual monitoring of crop fields. This paper presents an image texture-based disease detection algorithm using supervised classifiers. Three sites in Great Village, Nova Scotia, Canada were selected for collecting the leaf image data. Images were taken under an artificial cloud condition with a Digital Single Lens Reflex (DSLR) camera as red-green-blue (RGB) raw data throughout the 2017–2018 summer seasons. Three supervised classifiers, artificial neural network (ANN), support vector machine (SVM), and k-nearest neighbors (kNN), were evaluated for disease detection. A total of 40 textural features were extracted using a colour co-occurrence matrix (CCM). The collected feature data were normalized and then used for training and for internal, external, and cross-validation of the developed classifiers. The results of this study revealed that the highest overall classification accuracy was 93.81%, obtained with the ANN classifier, and the lowest overall accuracy was 78.80%, obtained with the kNN classifier. The ANN classifier also achieved the lowest Root Mean Square Error (RMSE) of 0.004 and Mean Absolute Error (MAE) of 0.003, with 99.99% accuracy during internal validation and accuracies of 87.41%, 88.95%, and 95.04% during external validation on three different fields. Overall, the results demonstrated that an image texture-based ANN classifier was able to classify PM disease more accurately at early stages of disease development.
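
As a rough illustration of this kind of pipeline (not the authors' implementation), the sketch below computes co-occurrence texture features per colour channel with scikit-image, normalizes them, and compares ANN, SVM, and kNN classifiers with scikit-learn. The CCM settings, feature count, and toy leaf data are placeholder assumptions.

```python
# Minimal sketch: per-channel co-occurrence texture features + ANN/SVM/kNN comparison.
# The feature set, CCM parameters, and data are illustrative, not the paper's.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def ccm_features(rgb, levels=32):
    """Texture statistics from a co-occurrence matrix of each RGB channel."""
    feats = []
    for ch in range(3):
        band = (rgb[..., ch] / 256 * levels).astype(np.uint8)
        glcm = graycomatrix(band, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        for prop in ("contrast", "homogeneity", "energy", "correlation"):
            feats.extend(graycoprops(glcm, prop).ravel())
    return np.array(feats)

# Toy data standing in for healthy vs. PM-infected leaf patches.
rng = np.random.default_rng(1)
X = np.array([ccm_features(rng.integers(0, 256, (64, 64, 3))) for _ in range(60)])
y = rng.integers(0, 2, size=60)

classifiers = {
    "ANN": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=1),
    "SVM": SVC(kernel="rbf"),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    acc = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```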

Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4064
Author(s):  
Wenna Xu ◽  
Xinping Deng ◽  
Shanxin Guo ◽  
Jinsong Chen ◽  
Luyi Sun ◽  
...  

Accurate and efficient extraction of cultivated land data is of great significance for agricultural resource monitoring and national food security. Deep-learning-based classification of remote-sensing images overcomes two difficulties of traditional learning methods (e.g., support vector machine (SVM), K-nearest neighbors (KNN), and random forest (RF)) when extracting cultivated land: (1) limited performance when extracting a single land-cover type with high intra-class spectral variation, such as cultivated land with both vegetation and non-vegetation cover, and (2) limited generalization ability when a model trained on a large dataset is applied to different locations. However, the “pooling” process in most deep convolutional networks, which enlarges the receptive field of the kernel by downscaling the feature maps, leads to significant detail loss in the output, including edges, gradients, and image texture details. To solve this problem, this study proposes a new end-to-end extraction algorithm, a high-resolution U-Net (HRU-Net), that preserves image details by improving the skip connection structure and the loss function of the original U-Net. The proposed HRU-Net was tested in Xinjiang, China to extract cultivated land from Landsat Thematic Mapper (TM) images. The results showed that the HRU-Net achieved better performance (Acc: 92.81%; kappa: 0.81; F1-score: 0.90) than U-Net++ (Acc: 91.74%; kappa: 0.79; F1-score: 0.89), the original U-Net (Acc: 89.83%; kappa: 0.74; F1-score: 0.86), and the random forest model (Acc: 76.13%; kappa: 0.48; F1-score: 0.69). The robustness of the proposed model to intra-class spectral variation and the accuracy of its edge details were also compared, showing that the HRU-Net obtained more accurate edge details and was less affected by intra-class spectral variation. The proposed model can be further applied to other land cover types that have more spectral diversity and require finer extraction detail.
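
The detail-preserving role of skip connections that HRU-Net builds on can be illustrated with a minimal U-Net-style network in PyTorch. This is a generic sketch, not the HRU-Net architecture; its channel sizes and assumed six-band input are arbitrary, and the improved skip connections and loss function of the paper are not reproduced.

```python
# Generic U-Net-style sketch: encoder features are concatenated back into the decoder
# (skip connections), which is what carries fine spatial detail past the pooling steps.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=6, n_classes=2):  # 6 input bands is an assumption
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

x = torch.randn(1, 6, 128, 128)          # toy multispectral patch
print(TinyUNet()(x).shape)               # torch.Size([1, 2, 128, 128])
```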


2012 ◽  
Vol 2012 ◽  
pp. 1-13 ◽  
Author(s):  
N. Gargouri ◽  
A. Dammak Masmoudi ◽  
D. Sellami Masmoudi ◽  
R. Abid

During the last decade, several works have dealt with computer-aided diagnosis (CAD) of masses in digital mammograms. Generally, the main difficulty remains the detection of masses. This work proposes an efficient methodology for mass detection based on a new local feature extraction. The local binary pattern (LBP) operator and its variants proposed by Ojala are a powerful tool for texture classification; however, it has been proved that such operators on their own are not able to model the texture of masses. In this paper we propose a new local pattern model named gray level and local difference (GLLD), which takes into consideration absolute gray-level values as well as local differences as local binary features. Artificial neural networks (ANN), support vector machine (SVM), and k-nearest neighbors (kNN) are then used to classify masses from non-masses, with the ANN classifier showing the best performance. We used 1000 regions of interest (ROIs) obtained from the Digital Database for Screening Mammography (DDSM). The area under the curve of the proposed approach was found to be Az = 0.95 for the mass detection step. A comparative study with previous approaches shows that our approach offers the best performance.
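
For context, the baseline that GLLD extends can be sketched as follows: standard LBP histogram features (Ojala's operator) extracted with scikit-image and fed to an ANN. The GLLD descriptor itself is not reproduced here, and the ROI data and classifier settings are illustrative.

```python
# Baseline LBP histogram features + ANN classification (not the paper's GLLD descriptor).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neural_network import MLPClassifier

def lbp_histogram(gray, P=8, R=1.0):
    # "Uniform" LBP codes take P + 2 distinct values; a normalized histogram
    # of those codes is the texture feature vector.
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
X = np.array([lbp_histogram(rng.random((64, 64))) for _ in range(100)])  # toy ROIs
y = rng.integers(0, 2, size=100)                                         # mass / non-mass
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print(clf.score(X, y))
```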


This paper deals with a simple but efficient method for detection of deadly malignant melanoma using optimized hand-crafted feature sets selected by three alternative metaheuristic algorithms, namely Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO) and Simulated Annealing (SA). A total of 1898 features relating to lesion shape, color and texture are extracted from each of the 170 non-dermoscopic camera images of the popular MED-NODE dataset. This large feature set is then optimized, with the number of features reduced to a single-digit range using the metaheuristic algorithms as feature selectors. Two well-known supervised classifiers, the Support Vector Machine (SVM) and the Artificial Neural Network (ANN), are used to classify malignant and benign lesions. The best classification accuracy found by this method is 87.69%, obtained with only 7 features selected by PSO and the ANN classifier, which is far better than the results reported in the literature so far.
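
A wrapper-style metaheuristic feature selector of this kind can be sketched with simulated annealing (one of the three algorithms named above), scoring each candidate feature subset by SVM cross-validation. The synthetic data (scaled down from 1898 features), the single-flip neighbourhood move, and the cooling schedule are illustrative assumptions, not the paper's configuration.

```python
# Simulated-annealing wrapper feature selection scored by SVM cross-validation (sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=170, n_features=60, n_informative=8, random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=5).mean()

mask = rng.random(X.shape[1]) < 0.2            # random initial feature subset
cur_fit = fitness(mask)
best_mask, best_fit = mask.copy(), cur_fit
temp = 0.05
for step in range(150):
    cand = mask.copy()
    cand[rng.integers(X.shape[1])] ^= True     # flip one feature in or out
    f = fitness(cand)
    # Always accept improvements; accept worse subsets with temperature-dependent probability.
    if f > cur_fit or rng.random() < np.exp((f - cur_fit) / temp):
        mask, cur_fit = cand, f
        if f > best_fit:
            best_mask, best_fit = cand.copy(), f
    temp *= 0.97                               # geometric cooling
print(f"{best_mask.sum()} features selected, CV accuracy {best_fit:.3f}")
```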


Author(s):  
Mohammad Farid Naufal ◽  
Selvia Ferdiana Kusuma ◽  
Zefanya Ardya Prayuska ◽  
Ang Alexander Yoshua ◽  
Yohanes Albert Lauwoto ◽  
...  

Background: The COVID-19 pandemic remains a problem in 2021. Health protocols are needed to prevent the spread of the virus, including wearing a face mask. Manually enforcing the wearing of face masks is tiring, but AI can be used to classify images for face mask detection. There are many image classification algorithms for face mask detection, yet there are still no studies that compare their performance. Objective: This study aims to compare classical machine learning classification algorithms, namely k-nearest neighbors (KNN) and support vector machine (SVM), with a widely used deep learning algorithm for image classification, the convolutional neural network (CNN), for face mask detection. Methods: This study uses 5-fold and 3-fold cross-validation to assess the performance of KNN, SVM, and CNN in face mask detection. Results: CNN has the best average performance, with an accuracy of 0.9683 and an average execution time of 2,507.802 seconds for classifying 3,725 images of faces with masks and 3,828 images of faces without masks. Conclusion: For a large amount of image data, KNN and SVM can be used as temporary algorithms in face mask detection due to their faster execution times, while a CNN is trained to form a classification model. In this case, it is advisable to use CNN for classification because it has better performance than KNN and SVM. In the future, the classification model can be implemented in an automatic alert system to detect and warn people who are not wearing face masks.
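
The classical half of such a comparison (accuracy versus execution time) can be sketched with scikit-learn; the CNN branch and the actual face mask images are not reproduced, and the flattened synthetic "images" below are a stand-in only.

```python
# Cross-validated accuracy and timing for KNN vs. SVM on flattened image-like vectors.
import numpy as np
from sklearn.model_selection import cross_validate
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((500, 32 * 32))        # flattened 32x32 grayscale "images" (toy)
y = rng.integers(0, 2, size=500)      # with-mask / without-mask labels (toy)

for name, clf in {"KNN": KNeighborsClassifier(5), "SVM": SVC(kernel="rbf")}.items():
    res = cross_validate(clf, X, y, cv=5)
    print(f"{name}: acc={res['test_score'].mean():.3f}, "
          f"fit={res['fit_time'].sum():.2f}s, score={res['score_time'].sum():.2f}s")
```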


Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3707 ◽  
Author(s):  
Xianlei Long ◽  
Shenhua Hu ◽  
Yiming Hu ◽  
Qingyi Gu ◽  
Idaku Ishii

An ultra-high-speed object detection algorithm based on the Histogram of Oriented Gradients (HOG) and a Support Vector Machine (SVM), designed for hardware implementation at 10,000 frames per second (FPS) under complex backgrounds, is proposed. The algorithm is implemented on a field-programmable gate array (FPGA) in a high-speed vision platform in which 64 pixels are input per clock cycle. The high pixel parallelism of the vision platform limits its performance, as it is difficult to reduce the stride between detection windows below 16 pixels, which introduces a non-negligible deviation in object detection. In addition, limited by the transmission bandwidth, only one frame in every four can be transmitted to the PC for post-processing; that is, 75% of the image information is wasted. To overcome these problems, a multi-frame information fusion model is proposed in this paper. Image data and synchronization signals are first regenerated according to the image frame numbers. The maximum HOG feature value and the corresponding coordinates of each frame are stored at the bottom of the image, together with those of adjacent frames. Compensated detections are then obtained through information fusion using the confidence of consecutive frames. Several experiments were conducted to demonstrate the performance of the proposed algorithm. The evaluation results show that the deviation is reduced with the proposed method compared with the existing one.
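
A software-level sketch of the HOG + linear SVM detection core is shown below; the FPGA-specific pixel parallelism, window strides, and multi-frame fusion are not modelled, and the window data and HOG parameters are illustrative.

```python
# HOG descriptor per detection window + linear SVM classifier (software sketch only).
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_vector(gray):
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

rng = np.random.default_rng(0)
X = np.array([hog_vector(rng.random((64, 64))) for _ in range(80)])  # toy 64x64 windows
y = rng.integers(0, 2, size=80)                                      # object / background
clf = LinearSVC().fit(X, y)
print(clf.score(X, y))
```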


Algorithms ◽  
2021 ◽  
Vol 14 (7) ◽  
pp. 201
Author(s):  
Charlyn Nayve Villavicencio ◽  
Julio Jerison Escudero Macrohon ◽  
Xavier Alphonse Inbaraj ◽  
Jyh-Horng Jeng ◽  
Jer-Guang Hsieh

Early diagnosis is crucial to prevent the development of a disease that may endanger human lives. COVID-19, a contagious disease that has mutated into several variants, has become a global pandemic that demands to be diagnosed as soon as possible. With the use of technology, the available information concerning COVID-19 increases each day, and useful information can be extracted from massive data through data mining. In this study, the authors utilized several supervised machine learning algorithms to build a model that analyzes and predicts the presence of COVID-19 using the COVID-19 Symptoms and Presence dataset from Kaggle. The J48 decision tree, Random Forest, Support Vector Machine, K-Nearest Neighbors and Naïve Bayes algorithms were applied through the WEKA machine learning software. Each model’s performance was evaluated using 10-fold cross-validation and compared according to major accuracy measures: correctly or incorrectly classified instances, kappa, mean absolute error, and time taken to build the model. The results show that the Support Vector Machine using the Pearson VII universal kernel outperforms the other algorithms, attaining 98.81% accuracy and a mean absolute error of 0.012.
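
The same style of comparison can be sketched outside WEKA with scikit-learn and 10-fold cross-validation. The Kaggle dataset is replaced by a synthetic binary symptom table, and WEKA's Pearson VII universal (PUK) kernel has no direct scikit-learn equivalent, so an RBF kernel stands in for it here.

```python
# 10-fold CV comparison of decision tree, random forest, SVM, KNN, and naive Bayes (sketch).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import BernoulliNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 20))       # 20 yes/no symptom indicators (toy)
y = (X[:, :4].sum(axis=1) > 1).astype(int)   # toy "COVID-19 presence" label

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM (RBF)": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(5),
    "Naive Bayes": BernoulliNB(),
}
for name, model in models.items():
    print(f"{name}: {cross_val_score(model, X, y, cv=10).mean():.4f}")
```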


Air pollution has a serious impact on human health. It occurs because of natural and man-made factors. The major contribution of this research is that it provides a comparison between different methodologies and techniques of mathematical and machine learning models. The process began with integrating data from different sources at different time intervals. The preprocessing phase resulted in two different datasets: a one-hour and a five-minute dataset. Next, we established a forecasting model for particulate matter PM2.5, which is one of the most prevalent air pollutants and whose concentration affects air quality. Additionally, we completed a multivariate analysis to predict the PM2.5 value and check the effects of other air pollutants, traffic, and weather. The algorithms used are support vector regression, k-nearest neighbors, and decision tree models. The results showed that, for the one-hour dataset, support vector regression has the lowest root-mean-square error (RMSE) and mean absolute error (MAE) of the three algorithms. For the five-minute dataset, we found that the auto-regression model showed the lowest RMSE and MAE; however, this model only predicts short-term PM2.5.
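
The multivariate comparison on the hourly dataset can be sketched as below: SVR, k-nearest neighbours, and a decision tree regressor scored by RMSE and MAE. The predictors and the PM2.5-like target are synthetic stand-ins, not the study's data.

```python
# Compare SVR, KNN, and decision tree regression by RMSE and MAE (synthetic data).
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.random((1000, 6))                                   # pollutants, traffic, weather (toy)
y = 40 * X[:, 0] + 10 * X[:, 1] + rng.normal(0, 3, 1000)    # toy PM2.5 target
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model in {"SVR": SVR(), "KNN": KNeighborsRegressor(5),
                    "Decision Tree": DecisionTreeRegressor(random_state=0)}.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    mae = mean_absolute_error(y_te, pred)
    print(f"{name}: RMSE={rmse:.2f}, MAE={mae:.2f}")
```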


2021 ◽  
pp. 1-17
Author(s):  
B. Janakiramaiah ◽  
G. Kalyani ◽  
L.V. Narasimha Prasad ◽  
A. Karuna ◽  
M. Krishna

Horticulture crops play a crucial part in the Indian economy by creating employment and supplying raw materials to different food processing industries. Mangoes are one of the major horticultural crops. Infections in mango trees caused by various climatic and fungal factors are common and reduce the quality and quantity of the mangoes; the most common such diseases are anthracnose and powdery mildew. In recent years, different variants of deep learning architectures have been proposed for detection and classification problems in the agricultural domain. Convolutional Neural Network (CNN) based architectures have performed amazingly well for disease detection in plants but at the same time lack rotational or spatial invariance. A relatively new neural architecture called the Capsule Network (CapsNet) addresses these limitations of CNN architectures. Hence, in this work, a variant of CapsNet called Multilevel CapsNet is introduced to classify mango leaves infected by anthracnose and powdery mildew. The proposed architecture is validated on a dataset of mango leaves collected in the natural environment, comprising both healthy and infected leaf images. The test results confirm the high accuracy of the proposed framework for the classification of mango leaf diseases, achieving 98.5% accuracy. The results also show the higher precision of the proposed Multilevel CapsNet model when compared with other classification algorithms such as the Support Vector Machine (SVM) and CNNs.
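
A glimpse of the capsule idea is the CapsNet "squash" nonlinearity, which preserves a capsule vector's orientation while mapping its length into [0, 1) so the length can act as an existence probability. This is a generic CapsNet building block in PyTorch, not the Multilevel CapsNet of the paper, and the tensor shapes below are illustrative.

```python
# CapsNet "squash" nonlinearity (generic building block, illustrative shapes).
import torch

def squash(s, dim=-1, eps=1e-8):
    # Keep the vector's direction; scale its length to ||s||^2 / (1 + ||s||^2).
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * s / torch.sqrt(sq_norm + eps)

caps = torch.randn(2, 10, 16)              # (batch, capsules, capsule dimension)
print(squash(caps).norm(dim=-1).max())     # all output lengths are below 1
```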


Author(s):  
Yanjun Sun ◽  
Xuanjing Shen ◽  
Changming Liu ◽  
Yongzhe Zhao

With the rapid development of digital phones, digital image forensics has become increasingly important. Recaptured images in particular pose a serious threat, because their emergence makes existing digital image forensics algorithms invalid, so an effective detection algorithm is needed to identify them. In this paper, a new detection algorithm for recaptured images is presented based on the gray-level co-occurrence matrix, built on an analysis of the differences between real and recaptured images. To analyze these differences, a new image evaluation model, called the image variance ratio, is put forward in this paper. First, the proposed algorithm extracts high-frequency and low-frequency information from images by the wavelet transform and calculates the corresponding gray-level co-occurrence matrices. Second, features of the gray-level co-occurrence matrices are extracted. Finally, the image is classified as real or recaptured by a support vector machine according to these features. The experimental results show that the proposed algorithm can not only effectively identify recaptured images obtained from different media but also achieves a better identification rate.
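
The general pipeline described above (wavelet subbands, then GLCM texture features, then SVM) can be sketched as follows. The subband handling, GLCM settings, and toy data are assumptions, and the paper's image variance ratio measure is not reproduced.

```python
# Sketch: wavelet subbands -> GLCM texture features -> SVM classification.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(band, levels=64):
    # Rescale a wavelet subband to integer gray levels before building the GLCM.
    band = band - band.min()
    band = (band / (band.max() + 1e-8) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(band, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def image_features(gray_img):
    cA, (cH, cV, cD) = pywt.dwt2(gray_img, "haar")   # low- and high-frequency subbands
    return np.hstack([glcm_features(b) for b in (cA, cH, cV, cD)])

# Toy usage with random "images"; labels 0 = real, 1 = recaptured (illustrative only).
rng = np.random.default_rng(0)
X = np.array([image_features(rng.random((128, 128))) for _ in range(40)])
y = rng.integers(0, 2, size=40)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```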


Author(s):  
Aliyu Muhammad Abdu ◽  
Musa Mohd Mokji ◽  
Usman Ullah Sheikh

Image-based plant disease detection is among the essential activities in precision agriculture for observing incidence and measuring the severity of variability in crops. 70% to 80% of the variabilities are attributed to diseases caused by pathogens, and 60% to 70% of these appear on the leaves, as compared to the stem and fruits. This work provides a comparative analysis, through model implementation, of two renowned machine learning models, the support vector machine (SVM) and deep learning (DL), for plant disease detection using leaf image data. Until recently, most of these image processing techniques had been, and some still are, exploiting what some consider "shallow" machine learning architectures. The DL network is fast becoming the benchmark for research in the field of image recognition and pattern analysis. Regardless, there is a lack of studies concerning its application to plant leaf disease detection. Thus, both models were implemented in this research on a large plant leaf disease image dataset, using standard settings and considering the three crucial factors of architecture, computational power, and amount of training data, to compare the two. The results indicate the scenarios in which each model performs best in this context and, within a particular domain of factors, suggest improvements and which model would be preferred. It is also envisaged that this research will provide meaningful insight into the critical current and future role of machine learning in food security.

