Weed Mapping with UAS Imagery and a Bag of Visual Words Based Image Classifier

2018 ◽  
Vol 10 (10) ◽  
pp. 1530 ◽  
Author(s):  
Michael Pflanz ◽  
Henning Nordmeyer ◽  
Michael Schirrmann

Weed detection from aerial images is a major challenge for generating field maps for site-specific plant protection. The requirements may be met by low-altitude flights of unmanned aerial vehicles (UAV), which provide ground resolutions adequate for differentiating even single weeds accurately. This study proposed and tested an image classifier based on a Bag of Visual Words (BoVW) framework for mapping weed species, using a small unmanned aircraft system (UAS) with a commercial camera on board at low flying altitudes. The image classifier was trained with support vector machines after building a visual dictionary of local features from many collected UAS images. Window-based processing of the models was used for mapping weed occurrences in the UAS imagery. The UAS flight campaign was carried out over a weed-infested wheat field, and images were acquired at flight altitudes between 1 and 6 m. From the UAS images, 25,452 weed plants were annotated at species level, along with wheat and soil as background classes, for training and validation of the models. The results showed that the BoVW model allowed the discrimination of single plants with high accuracy for Matricaria recutita L. (88.60%), Papaver rhoeas L. (89.08%), Viola arvensis M. (87.93%), and winter wheat (94.09%) within the generated maps. Regarding site-specific weed control, the classified UAS images would enable the selection of the right herbicide based on the distribution of the predicted weed species.
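
For readers unfamiliar with the BoVW pipeline the paper builds on, the following minimal Python sketch shows the generic steps: extract local features, cluster them into a visual dictionary, encode each image as a histogram of visual words, and train an SVM. SIFT stands in for the paper's local features (the exact extractor is not specified in the abstract), and the vocabulary size and SVM settings are illustrative assumptions, not the authors' configuration.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

# Generic BoVW pipeline: local features -> visual dictionary ->
# per-image word histograms -> SVM. SIFT is a stand-in descriptor;
# vocabulary size and classifier settings are illustrative.

def extract_descriptors(images):
    sift = cv2.SIFT_create()
    all_desc = []
    for img in images:  # images assumed to be BGR arrays
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        _, desc = sift.detectAndCompute(gray, None)
        if desc is not None:
            all_desc.append(desc)
    return all_desc

def build_vocabulary(desc_list, k=200):
    # Cluster all training descriptors into k visual words.
    kmeans = KMeans(n_clusters=k, n_init=10, random_state=0)
    kmeans.fit(np.vstack(desc_list))
    return kmeans

def bovw_histogram(desc, kmeans):
    # Quantize descriptors to visual words, then build a normalized histogram.
    words = kmeans.predict(desc)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Hypothetical usage (train_images / train_labels are assumed to exist):
# desc_list = extract_descriptors(train_images)
# vocab = build_vocabulary(desc_list)
# X = np.array([bovw_histogram(d, vocab) for d in desc_list])
# clf = SVC(kernel="rbf").fit(X, train_labels)
```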

2010 ◽  
Vol 7 (2) ◽  
pp. 366-370 ◽  
Author(s):  
Sheng Xu ◽  
Tao Fang ◽  
Deren Li ◽  
Shiwei Wang

2017 ◽  
Vol 31 (2) ◽  
pp. 310-319 ◽  
Author(s):  
Anton Ustyuzhanin ◽  
Karl-Heinz Dammer ◽  
Antje Giebel ◽  
Cornelia Weltzien ◽  
Michael Schirrmann

Common ragweed is a plant species that causes allergic and asthmatic symptoms in humans. To control its propagation, an early identification system is needed. However, because of its similar appearance to mugwort, proper differentiation between these two weed species is important. We therefore propose a method to discriminate common ragweed and mugwort leaves in digital images using bag of visual words (BoVW), an object-based image classification approach that has gained acceptance in many areas of science. We compared speeded-up robust features (SURF) and grid sampling for keypoint selection. The image vocabulary was built using k-means clustering, and the image classifier was trained using support vector machines. To check the robustness of the classifier, specific model runs were conducted with and without damaged leaves in the training dataset. The results showed that the BoVW model allows the discrimination between common ragweed and mugwort leaves with high accuracy. Based on SURF keypoints, with 50% of the 788 images in total as training data, we achieved 100% correct recognition of the two plant species. Grid sampling resulted in slightly lower recognition accuracy (98 to 99%). In addition, classification based on SURF was up to 31 times faster.
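
The SURF-versus-grid-sampling comparison comes down to how keypoints are chosen before descriptors are computed. The sketch below contrasts the two strategies. SURF itself lives in OpenCV's non-free contrib module and may not be available, so SIFT is used here as a stand-in descriptor; the grid step and patch size are illustrative assumptions.

```python
import cv2

# Two keypoint-selection strategies: detector-driven (SURF in the paper)
# vs. dense grid sampling. SIFT stands in for SURF, which requires
# opencv-contrib built with the non-free module.

def detector_keypoints(gray):
    # Interest-point detection: keypoints land on salient structures.
    sift = cv2.SIFT_create()
    return sift.detect(gray, None)

def grid_keypoints(gray, step=16, size=16):
    # Dense sampling: one keypoint every `step` pixels at a fixed scale.
    h, w = gray.shape
    return [cv2.KeyPoint(float(x), float(y), size)
            for y in range(step // 2, h, step)
            for x in range(step // 2, w, step)]

def describe(gray, keypoints):
    sift = cv2.SIFT_create()
    _, desc = sift.compute(gray, keypoints)
    return desc

# Hypothetical usage on a leaf image ("leaf.png" is an assumed path):
# gray = cv2.imread("leaf.png", cv2.IMREAD_GRAYSCALE)
# desc_detector = describe(gray, detector_keypoints(gray))
# desc_grid = describe(gray, grid_keypoints(gray))
```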


Technologies ◽  
2019 ◽  
Vol 7 (1) ◽  
pp. 20 ◽  
Author(s):  
Evaggelos Spyrou ◽  
Rozalia Nikopoulou ◽  
Ioannis Vernikos ◽  
Phivos Mylonas

Monitoring and understanding a human's emotional state plays a key role in current and forthcoming computational technologies. At the same time, this monitoring and analysis should be as unobtrusive as possible, since the digital world has been smoothly adopted into everyday activities. In this framework, and within the domain of assessing humans' affective state during educational training, the most popular approach is to use sensory equipment that allows observation without any kind of direct contact. In this work, we therefore focus on human emotion recognition from audio stimuli (i.e., human speech) using a novel approach inspired by computer vision, namely the bag-of-visual-words method, applied to spectrograms of audio segments. A spectrogram is treated as the visual representation of the audio segment and is analyzed with well-known traditional computer vision techniques: construction of a visual vocabulary, extraction of speeded-up robust features (SURF), quantization into a set of visual words, and image histogram construction. As a last step, support vector machine (SVM) classifiers are trained on the resulting histograms. Finally, to further generalize the proposed approach, we use publicly available datasets in several human languages to perform cross-language experiments, on both acted and real-life speech.
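
A minimal sketch of the first step of this pipeline, turning an audio segment into a grayscale spectrogram image that the usual visual-feature machinery can consume, assuming SciPy for the spectrogram; the window, overlap, and scaling parameters are illustrative, as the paper's exact settings are not given in the abstract.

```python
import numpy as np
from scipy.signal import spectrogram

# Convert an audio segment into an 8-bit grayscale spectrogram image,
# which can then be fed to a BoVW pipeline like the one sketched above.

def audio_to_spectrogram_image(signal, sample_rate):
    freqs, times, sxx = spectrogram(signal, fs=sample_rate,
                                    nperseg=512, noverlap=256)
    # Log-power scaling, then normalization to [0, 255].
    log_sxx = 10.0 * np.log10(sxx + 1e-10)
    norm = (log_sxx - log_sxx.min()) / (np.ptp(log_sxx) + 1e-10)
    return (norm * 255).astype(np.uint8)

# Example on a synthetic 1 s, 16 kHz frequency sweep:
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
chirp = np.sin(2 * np.pi * (200 + 1800 * t) * t)
img = audio_to_spectrogram_image(chirp, sr)
print(img.shape, img.dtype)  # (257, 61) uint8
```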


Author(s):  
Yuanyuan Zuo ◽  
Bo Zhang

The sparse representation based classification algorithm has been used to solve the problem of human face recognition, but the image databases involved have been restricted to human frontal faces with only slight illumination and expression changes. This paper applies the sparse representation based algorithm to generic image classification, with a certain degree of intra-class variation and background clutter. Experiments are conducted with the sparse representation based algorithm and support vector machine (SVM) classifiers on 25 object categories selected from the Caltech101 dataset. Experimental results show that, without time-consuming parameter optimization, the sparse representation based algorithm achieves performance comparable to SVM. The experiments also demonstrate that, with bag-of-visual-words representations, the algorithm is robust to a certain degree of background clutter and intra-class variation. The sparse representation based algorithm can thus be applied to generic image classification tasks when an appropriate image feature is used.
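
As a rough illustration of sparse representation based classification, the sketch below codes a test vector as a sparse combination of all training vectors and assigns the class whose atoms leave the smallest reconstruction residual. Orthogonal matching pursuit stands in for the l1 solver typically used in this literature, and the data are synthetic; this is a sketch of the general technique, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# Sparse-representation classification (SRC): code the test sample over a
# dictionary whose columns are training samples, then pick the class whose
# coefficients reconstruct the sample with the smallest residual.

def src_classify(X_train, y_train, x_test, n_nonzero=10):
    # Columns of D are l2-normalized training samples (e.g. BoVW histograms).
    D = X_train.T / (np.linalg.norm(X_train, axis=1) + 1e-10)
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False)
    omp.fit(D, x_test)
    code = omp.coef_
    best_class, best_res = None, np.inf
    for c in np.unique(y_train):
        part = np.where(y_train == c, code, 0.0)  # keep class-c coefficients
        res = np.linalg.norm(x_test - D @ part)
        if res < best_res:
            best_class, best_res = c, res
    return best_class

# Tiny synthetic check: two well-separated classes in 20 dimensions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 20)), rng.normal(5, 1, (30, 20))])
y = np.array([0] * 30 + [1] * 30)
print(src_classify(X, y, X[3] + rng.normal(0, 0.1, 20)))  # expected: 0
```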


Sensors ◽  
2019 ◽  
Vol 19 (12) ◽  
pp. 2790 ◽  
Author(s):  
Saima Nazir ◽  
Muhammad Haroon Yousaf ◽  
Jean-Christophe Nebel ◽  
Sergio A. Velastin

Human action recognition (HAR) has emerged as a core research domain for video understanding and analysis, attracting many researchers. Although significant results have been achieved in simple scenarios, HAR is still a challenging task due to issues with view independence, occlusion, and inter-class variation in realistic scenarios. Previous research efforts have widely used the classical bag of visual words approach and its variations. In this paper, we propose a Dynamic Spatio-Temporal Bag of Expressions (D-STBoE) model for human action recognition that retains the strengths of the classical bag of visual words approach. Expressions are formed based on the density of a spatio-temporal cube around a visual word. To handle inter-class variation, we use class-specific visual word representations for visual expression generation. In contrast to the Bag of Expressions (BoE) model, the formation of visual expressions is based on the density of spatio-temporal cubes built around each visual word, since constructing neighborhoods with a fixed number of neighbors can include non-relevant information, making a visual expression less discriminative under occlusion and changing viewpoints. The proposed approach thus makes the model more robust to the occlusion and viewpoint changes present in realistic scenarios. Furthermore, we train a multi-class support vector machine (SVM) to classify bags of expressions into action classes. Comprehensive experiments on four publicly available datasets (KTH, UCF Sports, UCF11, and UCF50) show that the proposed model outperforms existing state-of-the-art human action recognition methods in terms of accuracy, achieving 99.21%, 98.60%, 96.94%, and 94.10%, respectively.
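
The density criterion at the heart of D-STBoE can be illustrated with a toy computation: instead of taking a fixed number of nearest neighbors, count the visual-word occurrences that fall inside a fixed spatio-temporal cube around a given occurrence. The cube half-sizes and synthetic data below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Density of a spatio-temporal cube: for an occurrence at (x, y, t),
# count other occurrences within fixed spatial and temporal half-sizes,
# rather than forcing a fixed neighbor count.

def cube_density(points, center, half_xy=20.0, half_t=10.0):
    # points: (n, 3) array of (x, y, t) visual-word occurrences.
    in_x = np.abs(points[:, 0] - center[0]) <= half_xy
    in_y = np.abs(points[:, 1] - center[1]) <= half_xy
    in_t = np.abs(points[:, 2] - center[2]) <= half_t
    return int(np.sum(in_x & in_y & in_t)) - 1  # exclude the center itself

rng = np.random.default_rng(1)
occ = rng.uniform(0, 100, (200, 3))  # synthetic (x, y, t) occurrences
print(cube_density(occ, occ[0]))
```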


Author(s):  
Nebojša Nikolić ◽  
Davide Rizzo ◽  
Elisa Marraccini ◽  
Alicia Ayerdi Gotor ◽  
Pietro Mattivi ◽  
...  

Highlights:
- The efficacy of UAVs and emergence prediction models for weed control has been confirmed.
- Combining time-specific and site-specific weed control provides optimal results.
- Use of timely prescription maps can substantially reduce herbicide use.

Remote sensing using unmanned aerial vehicles (UAVs) for weed detection is a valuable asset in agriculture and is widely used for site-specific weed control. Alongside site-specific methods, time-specific weed control is another critical aspect of precision weed control, where different models can be used to determine the time of weed species emergence. In this study, site-specific and time-specific weed control methods were combined to explore their collective benefits for precision weed control. Using the AlertInf weed emergence prediction model, the cumulative emergence of Sorghum halepense was calculated, and the best date for the UAV survey was selected as the day when emergence was predicted to reach 96%. The survey was executed using a UAV with visible-range sensors, resulting in an orthophoto with a resolution of 3 cm, allowing for good weed detection. The orthophoto was post-processed using two separate methods, an artificial neural network (ANN) and the visible atmospherically resistant index (VARI), to discriminate between the weeds, the crop, and the soil. Finally, a model was applied to create prescription maps with different cell sizes (0.25 m², 2 m², and 3 m²) and three decision-making thresholds based on the share of pixels identified as weeds (>1%, >5%, and >10%), as shown in the sketch below. Additionally, the potential savings in herbicide use were assessed using two herbicides (Equip and Titus Mais Extra) as examples. The results show that both classification methods have a high overall accuracy, 98.6% for the ANN and 98.1% for VARI, with the ANN giving much better user/producer accuracy and Cohen's kappa (k=83.7 for the ANN and k=72 for VARI). The reduction of the area to be sprayed ranged from 65.29% to 93.35% using VARI and from 42.43% to 87.82% using the ANN. The potential reduction in herbicide use was found to depend on the area. For the Equip herbicide, the reduction ranged from 1.32 L/ha to 0.28 L/ha with the ANN and from 0.80 L/ha to 0.15 L/ha with VARI; for the Titus Mais Extra herbicide, it ranged from 46.06 g/ha to 8.19 g/ha with the ANN and from 27.77 g/ha to 5.32 g/ha with VARI. These preliminary results indicate that combining site-specific and time-specific weed control has the potential to significantly reduce herbicide use, with direct benefits for the environment and on-farm variable costs. Further field studies are needed to validate these results.
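
The prescription-map step can be illustrated with a small sketch: tile the binary weed mask into square cells and mark a cell for spraying when its weed-pixel fraction exceeds the chosen threshold. At the stated 3 cm resolution, a 0.25 m² cell is roughly 17 x 17 pixels; the cell size, threshold, and synthetic mask below are illustrative assumptions, not the study's actual data.

```python
import numpy as np

# Build a per-cell spray decision from a binary weed mask: a cell is
# sprayed when the fraction of weed pixels inside it exceeds `threshold`.

def prescription_map(weed_mask, cell_px=17, threshold=0.01):
    rows = weed_mask.shape[0] // cell_px
    cols = weed_mask.shape[1] // cell_px
    spray = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            cell = weed_mask[r*cell_px:(r+1)*cell_px,
                             c*cell_px:(c+1)*cell_px]
            spray[r, c] = cell.mean() > threshold
    return spray

rng = np.random.default_rng(2)
mask = rng.random((170, 170)) < 0.02  # synthetic ~2% weed cover
spray = prescription_map(mask)
print("sprayed fraction:", spray.mean())
```

Multiplying the sprayed fraction by the full-field application rate then yields the kind of per-hectare herbicide savings reported above.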

