Automatic Classification of Cotton Root Rot Disease Based on UAV Remote Sensing

2020, Vol. 12 (8), pp. 1310
Author(s): Tianyi Wang, J. Alex Thomasson, Chenghai Yang, Thomas Isakeit, Robert L. Nichols

Cotton root rot (CRR) is a persistent soilborne fungal disease that is devastating to cotton in the southwestern U.S. and Mexico. Research has shown that CRR can be prevented or at least mitigated by applying a fungicide at planting, but the fungicide should be applied precisely to minimize the quantity of product used and the treatment cost. The CRR-infested areas within a field are consistent from year to year, so it is possible to apply the fungicide only at locations where CRR is manifest, thus minimizing the amount of fungicide applied across the field. Previous studies have shown that remote sensing (RS) from manned aircraft is an effective means of delineating CRR-infested field areas. Applying various classification methods to moderate-resolution (1.0 m/pixel) RS images has recently become the conventional way to delineate CRR-infested areas. In this research, an unmanned aerial vehicle (UAV) was used to collect high-resolution RS images in three Texas fields known to be infested with CRR. Supervised, unsupervised, and combined unsupervised classification methods were evaluated for differentiating CRR-infested zones from healthy zones of cotton plants. Two new automated classification methods that take advantage of the high resolution inherent in UAV RS images were also evaluated. The results indicated that the new automated methods were up to 8.89% better than conventional classification methods in overall accuracy. One of these new methods, an automated method combining k-means segmentation with morphological opening and closing, provided the best results, with an overall accuracy of 88.5% and the lowest errors of omission (11.44%) and commission (16.13%) of all methods considered.
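The best-performing pipeline above, k-means segmentation followed by morphological opening and closing, can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code: the `classify_crr` name, the darkest-cluster heuristic, and the 3x3 structuring element are all assumptions.

```python
import numpy as np
from scipy import ndimage

def classify_crr(image, k=2, n_iter=20):
    """Cluster pixels with k-means, take the darkest cluster as the
    CRR mask, then clean it with morphological opening and closing."""
    h, w, bands = image.shape
    pixels = image.reshape(-1, bands).astype(float)

    # Deterministic init: seed centers at evenly spaced brightness quantiles.
    order = np.argsort(pixels.mean(axis=1))
    centers = pixels[order[np.linspace(0, len(order) - 1, k).astype(int)]]

    # Minimal k-means (Lloyd's algorithm) on the pixel spectra.
    for _ in range(n_iter):
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)

    # Assumption: infested zones form the darkest cluster (bare soil and
    # dead plants reflect less than healthy canopy in these bands).
    crr_cluster = centers.mean(axis=1).argmin()
    mask = (labels == crr_cluster).reshape(h, w)

    # Opening removes speckle false positives; closing fills small holes.
    structure = np.ones((3, 3), dtype=bool)
    mask = ndimage.binary_opening(mask, structure=structure)
    mask = ndimage.binary_closing(mask, structure=structure)
    return mask
```

The morphological cleanup is what exploits the high UAV resolution: isolated misclassified pixels are removed while contiguous infested patches survive.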

2020, Vol. 12 (15), pp. 2453
Author(s): Tianyi Wang, J. Alex Thomasson, Thomas Isakeit, Chenghai Yang, Robert L. Nichols

Cotton root rot (CRR), caused by the fungus Phymatotrichopsis omnivora, is a destructive cotton disease that mainly affects the crop in Texas. Flutriafol fungicide applied at or soon after planting has been proven effective at protecting cotton plants from CRR infection. Previous research has indicated that CRR recurs in the same regions of a field as in past years. CRR-infected plants can be detected with aerial remote sensing (RS). As unmanned aerial vehicles (UAVs) have been introduced into agricultural RS, the spatial resolution of farm images has increased significantly, making plant-by-plant (PBP) CRR classification possible. An unsupervised PBP classification algorithm based on the superpixel concept was developed to delineate CRR-infested areas at roughly the single-plant level. Five-band multispectral data were collected with a UAV to test the method. The results indicated that the single-plant-level classification achieved overall accuracy as high as 95.94%. Compared to regional classifications, PBP classification performed better in overall accuracy, kappa coefficient, errors of commission, and errors of omission. The single-plant fungicide application was also effective in preventing CRR.
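A crude sketch of the plant-by-plant idea: the paper uses superpixels, but as a stand-in this sketch tiles the image into fixed grid cells (each roughly one plant at high resolution), averages NDVI per cell, and flags low-vigor cells. The `pbp_classify` name, the cell size, and the NDVI threshold are illustrative assumptions, not the authors' algorithm; only the NDVI formula itself is standard.

```python
import numpy as np

def pbp_classify(red, nir, cell=8, ndvi_thresh=0.4):
    """Tile the field into small cells (crude stand-ins for superpixels),
    average NDVI per cell, and flag low-NDVI cells as CRR-infested."""
    ndvi = (nir - red) / (nir + red + 1e-9)  # standard NDVI, guarded divide
    rows, cols = ndvi.shape[0] // cell, ndvi.shape[1] // cell
    infested = np.zeros((rows, cols), dtype=bool)
    for i in range(rows):
        for j in range(cols):
            block = ndvi[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            infested[i, j] = block.mean() < ndvi_thresh  # True = infested
    return infested
```

Deciding per cell rather than per pixel is what makes the output actionable for single-plant fungicide application: each flagged cell maps to one treatment location.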


2020, Vol. 12 (3), pp. 759
Author(s): Jūratė Sužiedelytė Visockienė, Eglė Tumelienė, Vida Maliene

Heracleum sosnowskyi (H. sosnowskyi) is an invasive plant widespread in Lithuania and other countries, and it causes many problems. The damage caused by the plant's population is many-sided: it threatens the biodiversity of the land, poses a risk to human health, and causes considerable economic losses. To find effective and comprehensive measures against this invasive plant, it is very important to identify the places and areas where H. sosnowskyi grows, carry out a detailed analysis, and monitor its spread rather than leaving the process to chance. In this paper, a remote sensing methodology was proposed to identify territories covered with H. sosnowskyi plants (land-cover classification). Two categories of land-cover classification were used: supervised (human-guided) and unsupervised (computed by software). In the supervised method, the mean spectral signature of H. sosnowskyi was calculated for the classification of the RGB image, and based on this, the unsupervised classification was performed by the software. The combination of both classification methods, performed in steps, yielded better results than either method alone. The application of the authors' proposed methodology was demonstrated in a Lithuanian case study discussed in this paper.
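The supervised step, classifying an RGB image against the mean spectral signature of H. sosnowskyi, might look like the following minimum-distance sketch. The function names and the distance threshold are hypothetical; the paper's exact procedure is not reproduced here.

```python
import numpy as np

def mean_signature(image, training_mask):
    """Mean RGB signature of the training pixels marked in `training_mask`."""
    return image[training_mask].mean(axis=0)

def classify_by_signature(image, signature, max_dist=30.0):
    """Flag pixels whose Euclidean distance to the target species' mean
    signature is below `max_dist` (a hypothetical threshold)."""
    dist = np.linalg.norm(image.astype(float) - signature, axis=-1)
    return dist < max_dist
```

The resulting binary map could then seed an unsupervised pass, mirroring the stepwise combination the abstract describes.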


2020, Vol. 14 (03)
Author(s): Tianyi Wang, J. Alex Thomasson, Chenghai Yang, Thomas Isakeit, Robert L. Nichols, ...

2019
Author(s): Tianyi Wang, John Alex Thomasson, Chenghai Yang, Thomas Isakeit

2015, Vol. 9 (1), pp. 096013
Author(s): Huaibo Song, Chenghai Yang, Jian Zhang, Dongjian He, John Alex Thomasson

2019, Vol. 11 (1), pp. 69
Author(s): Zachary L. Langford, Jitendra Kumar, Forrest M. Hoffman, Amy L. Breen, Colleen M. Iversen

Land cover datasets are essential for modeling and analysis of Arctic ecosystem structure and function and for understanding land–atmosphere interactions at high spatial resolutions. However, most Arctic land cover products are generated at a coarse resolution, often limited by cloud cover, polar darkness, and poor availability of high-resolution imagery. A multi-sensor remote sensing-based deep learning approach was developed for generating high-resolution (5 m) vegetation maps for the western Alaskan Arctic on the Seward Peninsula, Alaska. The fusion of hyperspectral, multispectral, and terrain datasets was performed using unsupervised and supervised classification techniques over a ∼343 km² area, and a high-resolution (5 m) vegetation classification map was generated. An unsupervised technique was developed to classify high-dimensional remote sensing datasets into cohesive clusters. We employed a quantitative method to add supervision to the unlabeled clusters, producing a fully labeled vegetation map. We then developed convolutional neural networks (CNNs) using the multi-sensor fusion datasets to map vegetation distributions using both the original classes and the classes produced by the unsupervised classification method. To validate the resulting CNN maps, vegetation observations were collected at 30 field plots during the summer of 2016, and the resulting vegetation products were evaluated against these observations for accuracy. Our analysis indicates that the CNN models based on the labels produced by the unsupervised classification method provided the most accurate mapping of vegetation types, increasing the validation score (i.e., precision) from 0.53 to 0.83 when evaluated against field vegetation observations.
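One plausible reading of "adding supervision to the unlabeled clusters" is a majority vote: each unsupervised cluster inherits the vegetation class that occurs most often among the field plots falling inside it. This is an assumed interpretation for illustration only; `label_clusters` and its inputs are not from the paper.

```python
import numpy as np

def label_clusters(cluster_map, plot_rows, plot_cols, plot_labels, n_clusters):
    """Assign each unsupervised cluster the vegetation class most common
    among the field plots inside it (majority vote). Clusters containing
    no plots stay -1 (unlabeled)."""
    cluster_to_class = np.full(n_clusters, -1)
    for c in range(n_clusters):
        # Collect labels of all plots that land in cluster c.
        hits = [lab for r, col, lab in zip(plot_rows, plot_cols, plot_labels)
                if cluster_map[r, col] == c]
        if hits:
            vals, counts = np.unique(hits, return_counts=True)
            cluster_to_class[c] = vals[counts.argmax()]
    # Broadcast the per-cluster class back over the map.
    return cluster_to_class[cluster_map]
```

The labeled map produced this way could then serve as training targets for the CNNs, which is consistent with the workflow the abstract outlines.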

