Discrimination of Common Ragweed (Ambrosia artemisiifolia) and Mugwort (Artemisia vulgaris) Based on Bag of Visual Words Model

2017 ◽  
Vol 31 (2) ◽  
pp. 310-319 ◽  
Author(s):  
Anton Ustyuzhanin ◽  
Karl-Heinz Dammer ◽  
Antje Giebel ◽  
Cornelia Weltzien ◽  
Michael Schirrmann

Common ragweed is a plant species causing allergic and asthmatic symptoms in humans. To control its propagation, an early identification system is needed. However, due to its similar appearance to mugwort, proper differentiation between these two weed species is important. Therefore, we propose a method to discriminate common ragweed and mugwort leaves based on digital images using bag of visual words (BoVW). BoVW is an object-based image classification approach that has gained acceptance in many areas of science. We compared speeded-up robust features (SURF) and grid sampling for keypoint selection. The image vocabulary was built using K-means clustering, and the image classifier was trained using support vector machines. To check the robustness of the classifier, specific model runs were conducted with and without damaged leaves in the training dataset. The results showed that the BoVW model allows the discrimination between common ragweed and mugwort leaves with high accuracy. Based on SURF keypoints, with 50% of the 788 images in total as training data, we achieved 100% correct recognition of the two plant species. Grid sampling resulted in slightly lower recognition accuracy (98 to 99%). In addition, the classification based on SURF was up to 31 times faster.
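The descriptor-vocabulary-SVM pipeline described above can be sketched as follows. This is a minimal illustration using synthetic stand-in descriptors (real SURF extraction would need an image library), so every name, dimension, and parameter value here is an illustrative assumption, not the paper's implementation:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for SURF: each "image" yields a set of 64-D local descriptors;
# class 0 and class 1 are drawn from shifted distributions.
def fake_descriptors(label, n=40, dim=64):
    return rng.normal(loc=label * 1.5, scale=1.0, size=(n, dim))

images = [(fake_descriptors(c), c) for c in (0, 1) for _ in range(20)]

# 1) Build the visual vocabulary with K-means over all local descriptors.
all_desc = np.vstack([d for d, _ in images])
k = 16
vocab = KMeans(n_clusters=k, n_init=5, random_state=0).fit(all_desc)

# 2) Encode each image as a normalized histogram of visual words.
def bovw_histogram(desc):
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

X = np.array([bovw_histogram(d) for d, _ in images])
y = np.array([c for _, c in images])

# 3) Train a support vector machine on the histograms.
clf = SVC(kernel="linear").fit(X, y)
train_acc = clf.score(X, y)
```

With real images, the descriptor step would be replaced by SURF (or grid-sampled) keypoint extraction; the vocabulary and SVM stages stay the same.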

Technologies ◽  
2019 ◽  
Vol 7 (1) ◽  
pp. 20 ◽  
Author(s):  
Evaggelos Spyrou ◽  
Rozalia Nikopoulou ◽  
Ioannis Vernikos ◽  
Phivos Mylonas

Monitoring and understanding a human's emotional state plays a key role in current and forthcoming computational technologies. At the same time, such monitoring and analysis should be as unobtrusive as possible, since the digital world has been smoothly adopted into everyday life. In this framework, and within the domain of assessing humans' affective state during educational training, the most common approach is to use sensory equipment that allows observation without any direct contact. Thus, in this work we focus on human emotion recognition from audio stimuli (i.e., human speech) using a novel approach based on a computer-vision-inspired methodology, namely the bag-of-visual-words method, applied to spectrograms of audio segments. The spectrogram serves as a visual representation of the audio segment and can be analyzed with well-known computer vision techniques: construction of a visual vocabulary, extraction of speeded-up robust features (SURF), quantization into a set of visual words, and construction of image histograms. As a last step, support vector machine (SVM) classifiers are trained on this representation. Finally, to further generalize the proposed approach, we use publicly available datasets in several human languages to perform cross-language experiments, covering both acted and real-life speech.
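The key trick above, treating a spectrogram as an image, can be sketched as follows. The signal, window lengths, and patch size are illustrative assumptions (a real pipeline would start from recorded speech), but the output is a 2-D array whose patches can feed a visual-vocabulary pipeline exactly like photographic descriptors:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
# Synthetic stand-in for a speech segment: a drifting tone plus noise.
signal = (np.sin(2 * np.pi * (200 + 300 * t) * t)
          + 0.1 * np.random.default_rng(0).normal(size=t.size))

# Spectrogram as a 2-D array that can be treated like an image.
freqs, times, Sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=128)
log_S = 10 * np.log10(Sxx + 1e-10)  # log scale, as commonly visualized

# Cut the spectrogram "image" into fixed-size patches; the flattened
# patches play the role of local image descriptors for the BoVW stage.
patch = 8
patches = np.array([
    log_S[i:i + patch, j:j + patch].ravel()
    for i in range(0, log_S.shape[0] - patch + 1, patch)
    for j in range(0, log_S.shape[1] - patch + 1, patch)
])
```

From here, vocabulary construction, quantization, and SVM training proceed as in any image-based BoVW setup.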


Online monitoring of electrical substations with computer vision relies on image processing algorithms for visual analysis. This paper presents the classification of ceramic and glass insulators using a Bag of Visual Words model and the detection of these insulators by point feature matching. The training image dataset is used for categorization by forming a visual vocabulary, while a new unlabeled image from the test dataset is classified by applying a nearest-neighbor method to its feature descriptors. For detection, we use speeded-up robust features (SURF) to locate the insulator in a cluttered scene image. Matching is performed between the test and reference images, and the decision is made based on shared features. We conducted experiments on insulators to verify the effectiveness of the proposed method, which can be used in security, surveillance, and inspection systems.
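The nearest-neighbor classification step on feature descriptors can be sketched as below. The reference descriptors are random stand-ins for real SURF descriptors from labeled insulator images, and the class names and sizes are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reference descriptors for two insulator types (stand-ins
# for SURF descriptors extracted from labeled training images).
ceramic_ref = rng.normal(0.0, 1.0, size=(50, 64))
glass_ref = rng.normal(2.0, 1.0, size=(50, 64))
refs = np.vstack([ceramic_ref, glass_ref])
labels = np.array(["ceramic"] * 50 + ["glass"] * 50)

def nearest_neighbor_label(query):
    """Classify one query descriptor by its closest reference descriptor."""
    dists = np.linalg.norm(refs - query, axis=1)
    return labels[np.argmin(dists)]

# A test descriptor drawn near the "glass" cluster.
query = rng.normal(2.0, 1.0, size=64)
pred = nearest_neighbor_label(query)
```

In practice each test image yields many descriptors, and the image-level decision aggregates their per-descriptor matches.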


Weed Science ◽  
1977 ◽  
Vol 25 (5) ◽  
pp. 390-395 ◽  
Author(s):  
W.G. Steinert ◽  
J.F. Stritzke

Differences in the phytotoxicity of tebuthiuron (N-[5-(1,1-dimethylethyl)-1,3,4-thiadiazol-2-yl]-N,N′-dimethylurea) to nine plant species were observed on the basis of calculated GR50 values. Japanese brome (Bromus japonicus Thunb.) with a GR50 value of 0.016 ppmw was the most susceptible, and corn (Zea mays L. ‘Gold Rush’) with a GR50 value of 0.436 ppmw the least susceptible. There was some growth suppression with foliar application, but primary activity on all species was attributed to root uptake. The most significant translocation of labeled tebuthiuron was to the tops of common ragweed (Ambrosia artemisiifolia L.) plants treated through the nutrient solution, where 24.5% of the total amount recovered was detected after 24 h. Only 7.3% of the total amount recovered was detected in the tops of rye (Secale cereale L. ‘Elbon’) plants with the same treatment. With both species, more than 90% of the radioactivity recovered following foliar treatments remained in the treated leaf after 24 h. Less than 5.5% of the recovered activity for both species was in the tops, less than 3% in the roots, and less than 1.5% in the nutrient solution.
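A GR50 value (the dose giving 50% growth reduction) is typically estimated by fitting a dose-response curve. The sketch below fits a log-logistic model to synthetic data; the functional form, doses, and noise are illustrative assumptions, not the study's data or its actual fitting method:

```python
import numpy as np
from scipy.optimize import curve_fit

# Log-logistic dose-response: percent of untreated growth at dose x (ppmw).
# gr50 is the dose halving growth; b controls the slope.
def log_logistic(x, gr50, b):
    return 100.0 / (1.0 + (x / gr50) ** b)

doses = np.array([0.004, 0.008, 0.016, 0.032, 0.064, 0.128])
# Synthetic responses generated from GR50 = 0.016 ppmw with slight noise.
rng = np.random.default_rng(0)
obs = log_logistic(doses, 0.016, 1.5) + rng.normal(0, 1.0, size=doses.size)

# Fit the curve; gr50_hat recovers the dose at 50% growth reduction.
(gr50_hat, b_hat), _ = curve_fit(
    log_logistic, doses, obs,
    p0=[0.02, 1.0], bounds=([1e-6, 0.1], [1.0, 10.0]))
```

The fitted `gr50_hat` is the quantity the study reports per species (e.g. 0.016 ppmw for Japanese brome).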


2018 ◽  
Vol 10 (10) ◽  
pp. 1530 ◽  
Author(s):  
Michael Pflanz ◽  
Henning Nordmeyer ◽  
Michael Schirrmann

Weed detection from aerial images is a major challenge in generating field maps for site-specific plant protection. The requirements can be met with low-altitude flights of unmanned aerial vehicles (UAV), which provide ground resolutions adequate for differentiating even single weeds accurately. The following study proposed and tested an image classifier based on a Bag of Visual Words (BoVW) framework for mapping weed species, using a small unmanned aircraft system (UAS) with a commercial camera on board at low flying altitudes. The image classifier was trained with support vector machines after building a visual dictionary of local features from many collected UAS images. A window-based processing of the models was used for mapping weed occurrences in the UAS imagery. The UAS flight campaign was carried out over a weed-infested wheat field, and images were acquired at flight altitudes between 1 and 6 m. From the UAS images, 25,452 weed plants were annotated at species level, along with wheat and soil as background classes, for training and validation of the models. The results showed that the BoVW model allowed the discrimination of single plants with high accuracy for Matricaria recutita L. (88.60%), Papaver rhoeas L. (89.08%), Viola arvensis M. (87.93%), and winter wheat (94.09%) within the generated maps. Regarding site-specific weed control, the classified UAS images would enable the selection of the right herbicide based on the distribution of the predicted weed species.
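The window-based mapping step can be sketched as follows. A simulated brightness rule stands in for the trained BoVW+SVM classifier, and the image, window size, and threshold are illustrative assumptions; in the real pipeline each window would be encoded as a visual-word histogram and passed to the classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "UAS image": a bright square mimics a weed patch on darker soil.
image = rng.normal(0.2, 0.05, size=(64, 64))
image[20:36, 24:40] += 0.6  # simulated weed patch

win, step = 16, 8
weed_map = []
for i in range(0, image.shape[0] - win + 1, step):
    row = []
    for j in range(0, image.shape[1] - win + 1, step):
        window = image[i:i + win, j:j + win]
        # Placeholder decision standing in for clf.predict(histogram).
        row.append(1 if window.mean() > 0.4 else 0)
    weed_map.append(row)
weed_map = np.array(weed_map)  # coarse occurrence map over the image
```

Stitching such per-window predictions across all UAS images, georeferenced by flight position, yields the species distribution maps described above.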


Author(s):  
Yuanyuan Zuo ◽  
Bo Zhang

The sparse representation based classification algorithm has been used to solve the problem of human face recognition, but the image databases have been restricted to frontal faces with only slight illumination and expression changes. This paper applies the sparse representation based algorithm to generic image classification, with a certain degree of intra-class variation and background clutter. Experiments are conducted with the sparse representation based algorithm and support vector machine (SVM) classifiers on 25 object categories selected from the Caltech101 dataset. Experimental results show that, without time-consuming parameter optimization, the sparse representation based algorithm achieves performance comparable to SVM. The experiments also demonstrate that the algorithm is robust to a certain degree of background clutter and intra-class variation with bag-of-visual-words representations. The sparse representation based algorithm can thus be applied to generic image classification tasks when appropriate image features are used.
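The classify-by-residual idea behind sparse representation based classification can be sketched with a toy dictionary. Here an l1 solver from scikit-learn stands in for whatever sparse coder the paper uses, and all data, dimensions, and parameters are synthetic assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy dictionary: training samples stacked per class as columns; the two
# classes lie near different directions in feature space.
d, n_per_class = 30, 10
base0, base1 = rng.normal(size=d), rng.normal(size=d)
A0 = base0[:, None] + 0.1 * rng.normal(size=(d, n_per_class))
A1 = base1[:, None] + 0.1 * rng.normal(size=(d, n_per_class))
A = np.hstack([A0, A1])
A /= np.linalg.norm(A, axis=0)  # unit-norm columns, as SRC assumes

# A test sample from class 1.
y = base1 + 0.1 * rng.normal(size=d)

# Sparse coding: y ~ A x with few nonzero coefficients (l1 relaxation).
x = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(A, y).coef_

# Classify by per-class reconstruction residual: the class whose training
# columns reconstruct y best wins.
res0 = np.linalg.norm(y - A[:, :n_per_class] @ x[:n_per_class])
res1 = np.linalg.norm(y - A[:, n_per_class:] @ x[n_per_class:])
pred = 0 if res0 < res1 else 1
```

With bag-of-visual-words features, the columns of `A` would be the training images' word histograms rather than raw pixels.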


2018 ◽  
pp. 1381-1390
Author(s):  
Vandana M. Ladwani

The support vector machine is a powerful machine learning algorithm used in numerous applications. It generates a decision boundary between two classes that is characterized by a special subset of the training data called support vectors. Its advantage over the perceptron is that it produces a unique, maximum-margin decision boundary. The kernelized version learns much faster because the data transformation is implicit. Object recognition using a multiclass SVM is discussed in the chapter; the experiment uses histograms of visual words and a multiclass SVM for image classification.
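The multiclass setup on visual-word histograms can be sketched as below; scikit-learn's `LinearSVC` trains one-vs-rest max-margin classifiers internally. The vocabulary size, class structure, and histogram data are all illustrative assumptions:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
k, n_classes, n_per_class = 32, 3, 30

# Hypothetical visual-word histograms: each class prefers different words.
X, y = [], []
for c in range(n_classes):
    weights = np.ones(k)
    weights[c * 10:(c + 1) * 10] = 6.0  # class-specific word preference
    counts = rng.multinomial(200, weights / weights.sum(), size=n_per_class)
    X.append(counts / 200.0)  # normalized histograms
    y += [c] * n_per_class
X, y = np.vstack(X), np.array(y)

# One-vs-rest linear max-margin classifiers over the histograms.
clf = LinearSVC(C=1.0).fit(X, y)
acc = clf.score(X, y)
```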




Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Huadong Sun ◽  
Xu Zhang ◽  
Xiaowei Han ◽  
Xuesong Jin ◽  
Zhijie Zhao

With the increasing scale of e-commerce, the complexity of image content makes commodity image classification a great challenge. Image feature extraction often determines the quality of the final classification results. At present, image feature extraction mainly involves low-level visual features and intermediate semantic features. The intermediate semantics of an image act as a bridge between its low-level features and its high-level semantics, which can narrow the semantic gap to a certain extent and offers strong robustness. As a typical intermediate semantic representation, the bag-of-visual-words (BoVW) model has received extensive attention in image classification. However, the traditional BoVW model loses the location information of local features, and its local feature descriptors mainly capture the texture and shape of local regions while lacking any expression of color information. Therefore, this paper presents an improved bag-of-visual-words model with three improvements: (1) multiscale local region extraction; (2) local feature description by speeded-up robust features (SURF) and a color vector angle histogram (CVAH); and (3) a diagonal concentric rectangular pattern. Experimental results show that the three improvements to the BoVW model are complementary: compared with the traditional BoVW and with BoVW adopting SURF + SPM, the classification accuracy of the improved BoVW increases by 3.60% and 2.33%, respectively.
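The color vector angle underlying a CVAH measures the angle between two pixels' RGB vectors, which is insensitive to uniform brightness scaling; this formulation follows the general color-vector-angle literature, and the paper's exact CVAH construction may differ:

```python
import numpy as np

def color_vector_angle(p, q):
    """Angle (degrees) between two RGB color vectors; brightness-invariant."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    cos = np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Same hue at double brightness -> angle near 0 (brightness-invariant).
a = color_vector_angle([100, 50, 25], [200, 100, 50])
# Orthogonal hues (pure red vs. pure blue) -> 90 degrees.
b = color_vector_angle([255, 0, 0], [0, 0, 255])
```

A CVAH would then histogram these angles between each pixel and its neighbors over a local region, complementing the texture-oriented SURF descriptor with color structure.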

