Few-shot cotton leaf spots disease classification based on metric learning

Author(s):  
Xihuizi Liang

Abstract Background: Cotton diseases seriously affect the yield and quality of cotton. The type of pest or disease affecting cotton can be determined from the disease spots on the cotton leaves. This paper presents a small-sample (few-shot) learning framework for the cotton leaf disease spot classification task, built with deep learning techniques on a metric learning approach, so that cotton diseases can be prevented and controlled in a timely manner. First, disease spots in cotton leaf disease images are segmented by different methods: a support vector machine (SVM) method and threshold segmentation are compared, and the more suitable one is discussed. With the segmented disease spot images as input, a disease spot dataset is established, and the cotton leaf disease spots are classified using a classical convolutional neural network classifier; the structure and framework of the convolutional neural network are designed, and the relevant parameter settings and detailed network configuration are analyzed according to the experimental environment. The features of two different images are extracted by a parallel two-way convolutional neural network with weight sharing. The network then uses a loss function to learn a metric space in which similar leaf samples are close to each other and dissimilar leaf samples are far apart. Results: To classify cotton leaf spots with small-sample learning, this paper constructs a metric-based learning method to extract cotton leaf spot features and classify the leaves. In the leaf spot extraction stage, the spots are segmented by threshold segmentation and by SVM, and the two methods are compared. In the leaf spot classification stage, the structural framework of the leaf spot feature extractor and feature classifier is constructed, and the overall framework is built on the idea of a two-way parallel convolutional neural network. Several proven convolutional neural network feature extractors, such as VGG, DenseNet, and ResNet, are used for feature extraction, combined within the small-sample classification framework, and compared. Experiments show that, with the proposed optimizer, classification accuracy improves by nearly 7.7% on average across different numbers of samples. S-DenseNet achieves the highest accuracy: when n is 5, 10, 15, and 20, the accuracy is 58.63%, 84.41%, 92.51%, and 91.75%, respectively, and the average accuracy is improved by nearly 7.7% compared with DenseNet. Conclusions: To address the degradation of classification accuracy caused by the small number of samples in few-shot training tasks, a spatial structure optimizer (SSO) acting on the training process is proposed.
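
As a rough illustration of the two-way weight-sharing network and metric-space loss described above, the following sketch (in PyTorch, with an assumed small backbone, embedding size, and contrastive-style margin loss rather than the paper's exact configuration) shows how the same embedder is applied to both images and how same-class pairs are pulled together while different-class pairs are pushed apart:

```python
# Minimal sketch of a two-way weight-sharing ("Siamese") metric-learning setup.
# Backbone, embedding size, and margin are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpotEmbedder(nn.Module):
    """Small CNN that maps a segmented leaf-spot image to an embedding vector."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def contrastive_loss(z1, z2, same_class, margin: float = 1.0):
    """Pull same-class embeddings together, push different-class ones apart."""
    dist = F.pairwise_distance(z1, z2)
    pos = same_class * dist.pow(2)
    neg = (1 - same_class) * F.relu(margin - dist).pow(2)
    return (pos + neg).mean()

# Both branches share the same weights: one module is simply applied twice.
embedder = SpotEmbedder()
img_a, img_b = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)
labels_equal = torch.randint(0, 2, (8,)).float()   # 1 = same class, 0 = different
loss = contrastive_loss(embedder(img_a), embedder(img_b), labels_equal)
loss.backward()
```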

Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Xihuizi Liang

Abstract Background Cotton diseases seriously affect the yield and quality of cotton. The type of pest or disease affecting cotton can be determined from the disease spots on the cotton leaves. This paper presents a few-shot learning framework for the cotton leaf disease spot classification task, which can help prevent and control cotton diseases in a timely manner. First, disease spots in cotton leaf disease images are segmented by different methods: a support vector machine (SVM) method and threshold segmentation are compared, and the more suitable one is discussed. Then, with the segmented disease spot images as input, a disease spot dataset is established, and the cotton leaf disease spots are classified using a classical convolutional neural network classifier whose structure and framework are designed for the task. Finally, the features of two different images are extracted by a parallel two-way convolutional neural network with weight sharing, and the network uses a loss function to learn a metric space in which similar leaf samples are close to each other and dissimilar leaf samples are far apart. In summary, this work can be regarded as a significant reference and a benchmark for follow-up studies of few-shot learning tasks in the agricultural field. Results To classify cotton leaf spots with small-sample learning, a metric-based learning method was developed to extract cotton leaf spot features and classify the diseased leaves. Threshold segmentation and SVM were compared for leaf spot extraction. The results show that both methods extract the leaf spots well; SVM takes more time, but the spots it extracts are much more suitable for classification, because the SVM method retains more information about the leaf spot, such as color, shape, and texture, which helps in classifying the spot. In the leaf spot classification stage, a two-way parallel convolutional neural network was established to build the leaf spot feature extractor, and a feature classifier was constructed. After establishing the metric space, KNN was used as the spot classifier. For the construction of the convolutional neural networks, commonly used models, including VGG, DenseNet, and ResNet, were selected for comparison, and a spatial structure optimizer (SSO) was introduced for local optimization of the model. Experiments show that DenseNet achieves the highest classification accuracy of the three networks, and the classification accuracy of S-DenseNet is 7.7% higher than DenseNet on average across different numbers of samples. Conclusions As the number of samples increases, the accuracy of DenseNet and ResNet both improve, and with SSO each of these neural networks achieves better performance. However, the extent of the improvement varies; DenseNet with SSO improved most noticeably.
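
The classification stage described above can be illustrated with a small sketch: once leaf spots are embedded in the learned metric space, a k-nearest-neighbour classifier assigns the disease class. The embedding vectors below are random stand-ins for CNN outputs, and the class count, shot count, and k are illustrative assumptions:

```python
# Hedged sketch of KNN classification in a learned metric space.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_classes, n_shots, embed_dim = 4, 5, 128            # assumed "n-shot" support set sizes
support_embeddings = rng.normal(size=(n_classes * n_shots, embed_dim))
support_labels = np.repeat(np.arange(n_classes), n_shots)
query_embeddings = rng.normal(size=(10, embed_dim))  # embeddings of unlabelled leaf spots

knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean")
knn.fit(support_embeddings, support_labels)
predictions = knn.predict(query_embeddings)           # predicted disease class per query
```

In practice the support and query embeddings would come from the trained two-way CNN rather than a random generator, so the quality of the metric space directly determines the KNN decision.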


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Maohua Guo ◽  
Jinlong Fei ◽  
Yitong Meng

Using website fingerprinting (WF) technologies, local eavesdroppers can track the specific website visited by a user by inspecting the encrypted traffic between the user and the Tor network entry node. The recent triplet fingerprinting (TF) technique demonstrated the feasibility of small-sample WF attacks. Previous methods concentrate only on extracting the overall features of website traffic while ignoring the importance of local website fingerprinting characteristics for small-sample WF attacks. Thus, this paper proposes a deep nearest neighbor website fingerprinting (DNNF) attack technology. Deep local fingerprinting features of websites are extracted via a convolutional neural network (CNN), and a k-nearest neighbor (k-NN) classifier is then used for prediction. When a website provides only 20 samples, the accuracy reaches 96.2%. We also found that the DNNF method copes well with transfer learning and concept drift problems compared to traditional methods. Compared to the TF method, the classification accuracy of the proposed method is improved by 2%–5%, and it drops by only 3% when classifying data collected from the same websites two months later. These experiments show that DNNF is a more flexible, efficient, and robust website fingerprinting attack technology, and that local website fingerprinting features are particularly important for small-sample WF attacks.
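
A hedged sketch of the DNNF idea follows: a 1D convolutional network extracts local fingerprint features from a packet-direction trace, and a k-NN classifier labels the website from the resulting embeddings. The sequence length, layer sizes, and k are assumptions for illustration, not the paper's settings:

```python
# Sketch: 1D CNN embedding of traffic traces + k-NN classification.
import numpy as np
import torch
import torch.nn as nn
from sklearn.neighbors import KNeighborsClassifier

class TraceEmbedder(nn.Module):
    """Maps a packet-direction sequence to a fixed-length embedding."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=8, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x):          # x: (batch, 1, sequence_length)
        return self.net(x)

model = TraceEmbedder()
# Packet-direction traces (+1 outgoing, -1 incoming), 20 labelled samples per site.
traces = torch.sign(torch.randn(40, 1, 5000))
labels = np.repeat([0, 1], 20)

with torch.no_grad():
    embeddings = model(traces).numpy()
knn = KNeighborsClassifier(n_neighbors=5).fit(embeddings, labels)
print(knn.predict(embeddings[:3]))   # predicted website label per trace
```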


2018 ◽  
Vol 89 (17) ◽  
pp. 3539-3555 ◽  
Author(s):  
Bing Wei ◽  
Kuangrong Hao ◽  
Xue-song Tang ◽  
Yongsheng Ding

The convolutional neural network (CNN) has recently achieved great breakthroughs in many computer vision tasks. However, its application to fabric texture defect classification has not been thoroughly researched. To this end, this paper investigates such an application based on the CNN model. Since the CNN cannot achieve good classification accuracy with small sample sizes, a new method combining compressive sensing and the convolutional neural network (CS-CNN) is proposed. Specifically, the compressive sampling theorem is used to compress and augment the data in small-sample settings; the CNN then classifies the features directly from the compressive samples; finally, test data are used to verify the classification performance of the method. The experimental results demonstrate that, in comparison with state-of-the-art methods in terms of running time, the CS-CNN approach can effectively improve classification accuracy on fabric defect samples, even with a small number of defect samples.
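
The compressive-sampling step can be sketched as a random Gaussian measurement matrix projecting each flattened fabric image to a much shorter measurement vector, which is then fed to the classifier instead of the raw pixels. The image size and compression ratio below are assumptions:

```python
# Sketch of compressive sampling with a random Gaussian measurement matrix.
import numpy as np

rng = np.random.default_rng(42)
image_size, n_measurements = 64 * 64, 512            # ~12.5% compression ratio (assumed)

# Measurement matrix Phi, scaled so y = Phi @ x roughly preserves signal energy.
phi = rng.normal(scale=1.0 / np.sqrt(n_measurements), size=(n_measurements, image_size))

def compressive_sample(image: np.ndarray) -> np.ndarray:
    """Compress one grayscale fabric image into a fixed-length measurement vector."""
    return phi @ image.reshape(-1)

# Drawing several independent measurement matrices is one way to augment a small sample set.
fabric_image = rng.random((64, 64))
measurement = compressive_sample(fabric_image)        # shape: (512,)
```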


Author(s):  
Wanli Wang ◽  
Botao Zhang ◽  
Kaiqi Wu ◽  
Sergey A Chepinskiy ◽  
Anton A Zhilenkov ◽  
...  

In this paper, a hybrid method based on deep learning is proposed to visually classify terrains encountered by mobile robots. Considering the limited computing resources on mobile robots and the requirement for high classification accuracy, the proposed hybrid method combines a convolutional neural network with a support vector machine to keep classification accuracy high while improving efficiency. The key idea is that the convolutional neural network performs a multi-class classification while the support vector machine simultaneously performs a two-class classification. The two-class classification performed by the support vector machine targets the one kind of terrain that users are most concerned with. The results of the two classifications are consolidated to obtain the final classification result. The convolutional neural network used in this method is modified for on-board use on mobile robots; to enhance efficiency, it has a simple architecture. The convolutional neural network and the support vector machine are trained and tested using RGB images of six kinds of common terrain. Experimental results demonstrate that this method can help robots classify terrains accurately and efficiently. Therefore, the proposed method has significant potential for on-board use on mobile robots.
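
One plausible way to consolidate the two classifiers' outputs (the exact rule is not specified in the abstract, so the logic below is an assumption) is to let the specialised SVM override the CNN for the single critical terrain class:

```python
# Hypothetical consolidation of the CNN's multi-class output with the SVM's binary decision.
import numpy as np

def consolidate(cnn_probs: np.ndarray, svm_is_critical: bool,
                critical_class: int = 0) -> int:
    """Combine the CNN prediction with the SVM's verdict on the critical terrain."""
    cnn_class = int(np.argmax(cnn_probs))
    if svm_is_critical:
        return critical_class                    # trust the specialised SVM for the critical terrain
    if cnn_class == critical_class:
        return int(np.argsort(cnn_probs)[-2])    # SVM says "not critical": take the CNN's second choice
    return cnn_class

# Example: CNN hesitates between grass (class 0, critical) and gravel (class 2).
print(consolidate(np.array([0.40, 0.15, 0.35, 0.10]), svm_is_critical=False))  # -> 2
```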


2021 ◽  
Vol 13 (3) ◽  
pp. 335
Author(s):  
Yuhao Qing ◽  
Wenyi Liu

In recent years, image classification on hyperspectral imagery using deep learning algorithms has attained good results. Spurred by that finding, and to further improve deep learning classification accuracy, we propose a multi-scale residual convolutional neural network fused with an efficient channel attention network (MRA-NET) that is appropriate for hyperspectral image classification. The proposed technique comprises a multi-stage architecture: first, the spectral information of the hyperspectral image is reduced to a low-dimensional tensor using a principal component analysis (PCA) scheme; the constructed low-dimensional image is then input to our proposed ECA-NET deep network, which exploits the advantages of its core components, i.e., the multi-scale residual structure and attention mechanisms. We evaluate the performance of the proposed MRA-NET on three publicly available hyperspectral datasets and show that its overall classification accuracy is 99.82%, 99.81%, and 99.37%, respectively, which is higher than that of current networks such as the 3D convolutional neural network (CNN), the three-dimensional residual convolution structure (RES-3D-CNN), and the space–spectrum joint deep network (SSRN).
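
The PCA preprocessing stage can be sketched as reducing the spectral dimension of the hyperspectral cube to a handful of principal components before the reduced cube is passed to the network; the cube size and the number of retained components below are illustrative assumptions:

```python
# Sketch of PCA reduction along the spectral dimension of a hyperspectral cube.
import numpy as np
from sklearn.decomposition import PCA

height, width, bands, n_components = 145, 145, 200, 30   # assumed cube dimensions
cube = np.random.rand(height, width, bands)

# Treat every pixel as one sample whose features are its spectral bands.
pixels = cube.reshape(-1, bands)
reduced = PCA(n_components=n_components).fit_transform(pixels)
reduced_cube = reduced.reshape(height, width, n_components)  # low-dimensional input to the network
```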


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Saad Albawi ◽  
Oguz Bayat ◽  
Saad Al-Azawi ◽  
Osman N. Ucan

Recently, social touch gesture recognition has been considered an important topic for the touch modality, which can lead to highly efficient and realistic human-robot interaction. In this paper, a deep convolutional neural network is used to implement a social touch recognition system that works on raw input samples (sensor data) only. Touch gesture recognition is performed on a dataset previously recorded from numerous subjects performing varying social gestures. This dataset, dubbed the Corpus of Social Touch, was collected with touches performed on a mannequin arm. A leave-one-subject-out cross-validation method is used to evaluate system performance. The proposed method can recognize gestures in nearly real time after acquiring a minimum number of frames (the average frame length ranged from 0.2% to 4.19% of the original frame lengths) with a classification accuracy of 63.7%. The achieved classification accuracy is competitive with existing algorithms. Furthermore, the proposed system outperforms other classification algorithms in terms of classification ratio and touch recognition time, without data preprocessing, on the same dataset.
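
The leave-one-subject-out protocol can be sketched as follows: the classifier is trained on all subjects but one and tested on the held-out subject, repeated over every subject. The data shapes, class count, and the simple classifier below are placeholders, not the paper's CNN:

```python
# Sketch of leave-one-subject-out cross-validation with a placeholder classifier.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((300, 50))                 # 300 touch-gesture samples, 50 features (assumed)
y = rng.integers(0, 14, size=300)         # 14 gesture classes (assumed)
subjects = rng.integers(0, 10, size=300)  # which subject produced each sample

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = LogisticRegression(max_iter=500).fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))
print(f"mean leave-one-subject-out accuracy: {np.mean(scores):.3f}")
```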

