Image Recognition of Rapeseed Pests Based on Random Forest Classifier

Author(s):  
Li Zhu ◽  
Minghu Wu ◽  
Xiangkui Wan ◽  
Nan Zhao ◽  
Wei Xiong

Rapeseed pests reduce rapeseed yield. Accurate identification of rapeseed pests is the foundation for choosing the optimal treatment time and applying pesticides in a targeted way. Manual recognition is labour-intensive and highly subjective. This paper proposes an image recognition method for rapeseed pests based on color characteristics. The GrabCut algorithm is adopted to segment the foreground from the pest image, and small-area noise is filtered out. The benchmark image is obtained from the minimum enclosing rectangle of the rapeseed pest. Two types of color feature description are adopted: one is the third-order color moments of the normalized H/S channels; the other is the cross-matching index calculated by reverse projection of the color histogram. A multi-dimensional vector extracted from the color features of the benchmark image is used to train a random forest classifier. Recognition results are obtained by feeding the color features of the image under test to the trained random forest classifier. Experiments showed that the proposed method can accurately identify five kinds of rapeseed pests, namely erythema, cabbage caterpillar, Colaphellus bowringii Baly, flea beetle and aphid, with a recognition rate of 96%.
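The feature/classifier stage of a pipeline like this (third-order color moments of the H/S channels feeding a random forest) can be sketched as follows. This is only an illustrative sketch: the GrabCut segmentation and histogram back-projection steps are omitted, and the images and labels are random stand-ins for real benchmark pest images.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def color_moments(channel):
    """First three statistical moments (mean, std, skewness) of one channel."""
    mean = channel.mean()
    std = channel.std()
    skew = np.cbrt(((channel - mean) ** 3).mean())
    return [mean, std, skew]

def hs_moment_features(hs_image):
    """Concatenate the three color moments of the normalized H and S channels."""
    h = hs_image[..., 0] / 179.0   # OpenCV-style hue range 0-179
    s = hs_image[..., 1] / 255.0   # saturation range 0-255
    return np.array(color_moments(h) + color_moments(s))

# Hypothetical training data: 20 benchmark images, five pest classes.
rng = np.random.default_rng(0)
images = rng.integers(0, 180, size=(20, 32, 32, 2)).astype(float)
labels = rng.integers(0, 5, size=20)

X = np.stack([hs_moment_features(im) for im in images])  # 6-D feature vectors
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
pred = clf.predict(X[:1])  # class of the first (already seen) image
```

In a real pipeline, `images` would be the H/S planes of the GrabCut-cropped minimum enclosing rectangles, and the cross-matching index from histogram back-projection would be appended to each 6-dimensional moment vector.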

2020 ◽  
Vol 17 (9) ◽  
pp. 4654-4659
Author(s):  
Kamlesh Kumari ◽  
Sanjeev Rana

The main aim of this paper is to improve the recognition rate of offline signature verification systems. In our research, the decisions of three classifiers, i.e., Multilayer Perceptron, Random Forest and Naive Bayes, are combined using a voting classifier to determine the output. The software used for this research is WEKA and Matlab. The performance of this approach is tested on the CEDAR dataset for a writer-dependent model. The overall recognition rate for the whole dataset of 55 users is 91.25%; for 45 of the users, the recognition rate is above 85%.
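The combination scheme described above, majority voting over the three classifier decisions, can be sketched with scikit-learn's `VotingClassifier` (the paper itself used WEKA/Matlab; the features and labels here are synthetic stand-ins for signature features):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

# Hypothetical signature features: genuine (1) vs. forged (0).
X, y = make_classification(n_samples=200, n_features=12, random_state=0)

voter = VotingClassifier(
    estimators=[
        ("mlp", MLPClassifier(max_iter=500, random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("nb", GaussianNB()),
    ],
    voting="hard",  # majority vote over the three classifier decisions
)
voter.fit(X, y)
acc = voter.score(X, y)  # training-set accuracy of the combined decision
```

`voting="hard"` takes the majority of the three predicted labels; `voting="soft"` would instead average the predicted class probabilities, which is an alternative when all base classifiers expose calibrated probabilities.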


2018 ◽  
Vol 10 (5) ◽  
pp. 1-12
Author(s):  
B. Nassih ◽  
A. Amine ◽  
M. Ngadi ◽  
D. Naji ◽  
N. Hmina

2019 ◽  
Vol 13 (2) ◽  
pp. 136-141 ◽  
Author(s):  
Abhisek Sethy ◽  
Prashanta Kumar Patra ◽  
Deepak Ranjan Nayak

Background: In the past decades, handwritten character recognition has received considerable attention from researchers across the globe because of its wide range of applications in daily life. The literature shows limited study of the various handwritten Indian scripts, and Odia is one of them. We reviewed some of the patents relating to handwritten character recognition. Methods: This paper deals with the development of an automatic recognition system for offline handwritten Odia characters. Prior to feature extraction, preprocessing is performed on the character images. For feature extraction, the gray level co-occurrence matrix (GLCM) is first computed from all the sub-bands of the two-dimensional discrete wavelet transform (2D DWT); thereafter, feature descriptors such as energy, entropy, correlation, homogeneity, and contrast are calculated from the GLCMs and termed the primary feature vector. To further reduce the feature space and generate more relevant features, principal component analysis (PCA) is employed. Because of their several salient features, random forest (RF) and K-nearest neighbor (K-NN) have become a significant choice for pattern classification tasks, and therefore both RF and K-NN are applied separately in this study to classify the character images. Results: All experiments were performed on a system with Windows 8 (64-bit) and an Intel(R) i7-4770 CPU @ 3.40 GHz. Simulations were conducted in Matlab 2014a on a standard database named the NIT Rourkela Odia Database. Conclusion: The proposed system has been validated on a standard database. The simulation results under a 10-fold cross-validation scenario demonstrate that the proposed system achieves better accuracy than existing methods while requiring the fewest features.
The recognition rates using the RF and K-NN classifiers are found to be 94.6% and 96.4%, respectively.
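The GLCM descriptors named in the abstract (energy, entropy, correlation, homogeneity, contrast) followed by PCA and a K-NN classifier can be sketched in plain NumPy/scikit-learn. This sketch skips the 2D-DWT step (the paper computes GLCMs per wavelet sub-band), uses a single horizontal pixel offset, and feeds random stand-in images, so it illustrates the descriptor math rather than the full system:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def glcm(q, levels=8):
    """Normalized co-occurrence matrix for horizontally adjacent pixel pairs."""
    m = np.zeros((levels, levels))
    np.add.at(m, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    return m / m.sum()

def glcm_descriptors(m):
    """Energy, entropy, correlation, homogeneity, and contrast of a GLCM."""
    i, j = np.indices(m.shape)
    mu_i, mu_j = (i * m).sum(), (j * m).sum()
    si = np.sqrt((((i - mu_i) ** 2) * m).sum())
    sj = np.sqrt((((j - mu_j) ** 2) * m).sum())
    return np.array([
        (m ** 2).sum(),                                   # energy
        -(m[m > 0] * np.log2(m[m > 0])).sum(),            # entropy
        (((i - mu_i) * (j - mu_j) * m).sum()) / (si * sj + 1e-12),  # correlation
        (m / (1 + np.abs(i - j))).sum(),                  # homogeneity
        (m * (i - j) ** 2).sum(),                         # contrast
    ])

# Hypothetical character images already quantized to 8 gray levels.
rng = np.random.default_rng(0)
imgs = rng.integers(0, 8, size=(30, 28, 28))
labels = rng.integers(0, 3, size=30)

X = np.stack([glcm_descriptors(glcm(im)) for im in imgs])
X = PCA(n_components=3).fit_transform(X)  # reduce the primary feature vector
knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
```

In the paper's setting, the primary feature vector would concatenate these five descriptors over all DWT sub-bands before PCA, and an RF classifier would be trained on the same reduced features for comparison.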


Author(s):  
Carlos Domenick Morales-Molina ◽  
Diego Santamaria-Guerrero ◽  
Gabriel Sanchez-Perez ◽  
Hector Perez-Meana ◽  
Aldo Hernandez-Suarez

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Elisabeth Sartoretti ◽  
Thomas Sartoretti ◽  
Michael Wyss ◽  
Carolin Reischauer ◽  
Luuk van Smoorenburg ◽  
...  

Abstract We sought to evaluate the utility of radiomics for Amide Proton Transfer weighted (APTw) imaging by assessing its value in differentiating brain metastases from high- and low-grade glial brain tumors. We retrospectively identified 48 treatment-naïve patients (10 WHO grade 2, 1 WHO grade 3, 10 WHO grade 4 primary glial brain tumors and 27 metastases) with either primary glial brain tumors or metastases who had undergone APTw MR imaging. After image analysis with radiomics feature extraction and post-processing, machine learning algorithms (a multilayer perceptron; a random forest classifier) with stratified tenfold cross-validation were trained on the features and used to differentiate the brain neoplasms. The multilayer perceptron achieved an AUC of 0.836 (receiver operating characteristic curve) in differentiating primary glial brain tumors from metastases. The random forest classifier achieved an AUC of 0.868 in differentiating WHO grade 4 from WHO grade 2/3 primary glial brain tumors. For the differentiation of WHO grade 4 tumors from grade 2/3 tumors and metastases, an average AUC of 0.797 was achieved. Our results indicate that the use of radiomics for APTw imaging is feasible and that the differentiation of primary glial brain tumors from metastases is achievable with a high degree of accuracy.
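The evaluation protocol in the abstract, stratified tenfold cross-validation of a multilayer perceptron and a random forest scored by AUC, can be sketched as follows; the radiomics feature extraction itself is omitted (tools such as pyradiomics are typically used), and the feature matrix here is a synthetic stand-in:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier

# Hypothetical radiomics features: primary glial tumor (0) vs. metastasis (1).
X, y = make_classification(n_samples=48, n_features=20, random_state=0)

# Stratified tenfold CV keeps the class ratio in each fold, which matters
# for a small, imbalanced cohort like 21 gliomas vs. 27 metastases.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

rf_auc = cross_val_score(RandomForestClassifier(random_state=0),
                         X, y, cv=cv, scoring="roc_auc").mean()
mlp_auc = cross_val_score(MLPClassifier(max_iter=1000, random_state=0),
                          X, y, cv=cv, scoring="roc_auc").mean()
```

Averaging `roc_auc` over the ten folds gives the kind of AUC figures reported above; with only 48 patients, the per-fold variance of these estimates is worth reporting alongside the mean.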


Author(s):  
K. J. Paprottka ◽  
S. Kleiner ◽  
C. Preibisch ◽  
F. Kofler ◽  
F. Schmidt-Graf ◽  
...  

Abstract Purpose To evaluate the diagnostic accuracy of fully automated analysis of multimodal imaging data using [18F]-FET-PET and MRI (including amide proton transfer-weighted (APTw) imaging and dynamic-susceptibility-contrast (DSC) perfusion) in the differentiation of tumor progression from treatment-related changes in patients with glioma. Material and methods At suspected tumor progression, MRI and [18F]-FET-PET data as part of a retrospective analysis of an observational cohort of 66 patients/74 scans (51 glioblastoma and 23 lower-grade glioma, 8 patients included at two different time points) were automatically segmented into necrosis, FLAIR-hyperintense, and contrast-enhancing areas using an ensemble of deep learning algorithms. In parallel, the previous MR exam was processed in a similar way to subtract preexisting tumor areas and focus on progressive tumor only. Within these progressive areas, intensity statistics were automatically extracted from [18F]-FET-PET, APTw, and DSC-derived cerebral-blood-volume (CBV) maps and used to train a Random Forest classifier with threefold cross-validation. To evaluate the contribution of the imaging modalities to the classifier’s performance, impurity-based importance measures were collected. Classifier performance was compared with radiology reports and interdisciplinary tumor board assessments. Results In 57/74 cases (77%), tumor progression was confirmed histopathologically (39 cases) or via follow-up imaging (18 cases), while the remaining 17 cases were diagnosed as treatment-related changes.
The classification accuracy of the Random Forest classifier was 0.86, 95% CI 0.77–0.93 (sensitivity 0.91, 95% CI 0.81–0.97; specificity 0.71, 95% CI 0.44–0.9), significantly above the no-information rate of 0.77 (p = 0.03), and higher than an accuracy of 0.82 for MRI (95% CI 0.72–0.9), 0.81 for [18F]-FET-PET (95% CI 0.7–0.89), and 0.81 for expert consensus (95% CI 0.7–0.89), although these differences were not statistically significant (p > 0.1 for all comparisons, McNemar test). [18F]-FET-PET hot-spot volume was the single most important variable, with relevant contributions from all imaging modalities. Conclusion Automated, joint image analysis of [18F]-FET-PET and the advanced MR imaging techniques APTw and DSC perfusion is a promising tool for objective response assessment in gliomas.
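The classifier stage described above, a Random Forest with threefold cross-validation plus impurity-based importances for ranking the modalities, can be sketched as follows. The per-modality intensity statistics and their names here are hypothetical stand-ins for the automatically extracted [18F]-FET-PET/APTw/CBV features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict

# Hypothetical intensity statistics per scan: progression (1) vs.
# treatment-related change (0); three stand-in statistics per modality.
X, y = make_classification(n_samples=74, n_features=9, random_state=0)
feature_names = [f"{mod}_{stat}"
                 for mod in ("fet", "aptw", "cbv")
                 for stat in ("mean", "p90", "volume")]

# Threefold stratified CV: each scan is predicted by a model
# that never saw it during training.
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
pred = cross_val_predict(RandomForestClassifier(random_state=0), X, y, cv=cv)
accuracy = (pred == y).mean()

# Impurity-based importances indicate each feature's (and hence each
# modality's) contribution to the classifier's decisions.
rf = RandomForestClassifier(random_state=0).fit(X, y)
ranking = sorted(zip(feature_names, rf.feature_importances_),
                 key=lambda t: -t[1])
```

Note that impurity-based importances can be biased toward high-cardinality features; permutation importance on held-out folds is a common cross-check when ranking modalities this way.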

