Recognition of Ziziphus lotus through Aerial Imaging and Deep Transfer Learning Approach

2021, Vol. 2021, pp. 1-10
Author(s): Ahsan Bin Tufail, Inam Ullah, Rahim Khan, Luqman Ali, Adnan Yousaf, ...

There is a growing demand for the detection of endangered plant species through machine learning approaches. Ziziphus lotus is an endangered deciduous plant species in the buckthorn family (Rhamnaceae) native to Southern Europe. Traditional methods such as object-based image analysis have achieved good recognition rates, but they are slow and require substantial human intervention. Transfer learning-based methods also have numerous data-analysis applications in a variety of Internet of Things systems. In this work, we analyze the potential of convolutional neural networks to recognize and detect the Ziziphus lotus plant in remote sensing images. We fine-tuned the Inception v3, Xception, and Inception-ResNet v2 architectures for binary classification into a Ziziphus lotus class and a bare soil and vegetation class. The achieved results are promising and demonstrate the superior performance of deep learning algorithms over traditional object-based approaches.
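To make the fine-tuning step concrete, the following is a minimal sketch (not the authors' code) of transfer learning with one of the named backbones, Inception v3, in TensorFlow/Keras; the tile directory layout, image size, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: fine-tune a pretrained Inception v3 backbone for the binary
# task (Ziziphus lotus vs. bare soil/vegetation). Directory names, image size,
# and hyperparameters are illustrative assumptions, not values from the paper.
import tensorflow as tf

IMG_SIZE = (299, 299)  # native input size of Inception-style backbones

# Pretrained backbone without its ImageNet classification head, frozen at first.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,)
)
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # binary output
model = tf.keras.Model(inputs, outputs)

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# Hypothetical tile layout: tiles/train/<class_name>/*.png and tiles/val/<class_name>/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "tiles/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary"
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "tiles/val", image_size=IMG_SIZE, batch_size=32, label_mode="binary"
)
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Swapping the backbone for Xception or Inception-ResNet v2 would only change the `applications` constructor and the matching `preprocess_input` call; a second stage that unfreezes the top backbone layers at a lower learning rate is a common refinement.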

2021, Vol. 193 (2)
Author(s): Jens Oldeland, Rasmus Revermann, Jona Luther-Mosebach, Tillmann Buttschardt, Jan R. K. Lehmann

Plant species that negatively affect their environment by encroachment require constant management and monitoring through field surveys. Drones have been suggested to support field surveyors, allowing more accurate mapping with just-in-time aerial imagery. Furthermore, object-based image analysis tools could increase the accuracy of species maps. However, only a few studies compare species distribution maps resulting from traditional field surveys with those from object-based image analysis of drone imagery. We acquired drone imagery for a saltmarsh area (18 ha) on the Hallig Nordstrandischmoor (Germany) with patches of Elymus athericus, a tall grass which encroaches on higher parts of saltmarshes. A field survey was conducted afterwards using the drone orthoimagery as a baseline. We used object-based image analysis (OBIA) to segment CIR imagery into polygons, which were classified into eight land cover classes. Finally, we compared polygons of the field-based and OBIA-based maps visually and for location, area, and overlap, before and after post-processing. OBIA-based classification yielded good results (kappa = 0.937) and agreed in general with the field-based maps (field = 6.29 ha, drone = 6.22 ha with E. athericus dominance). Post-processing revealed 0.31 ha of misclassified polygons, often related to water runnels or shadows, leaving 5.91 ha of E. athericus cover. Overlap of both polygon maps was only 70%, owing to many small patches identified where E. athericus was absent. In sum, drones can greatly support field surveys in plant species monitoring by enabling accurate species maps from just-in-time, very-high-resolution imagery.
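To illustrate the map-comparison step, here is a minimal GeoPandas sketch (not the study's code) that overlays a field-based polygon layer with an OBIA-derived layer and reports class area and overlap; the file names, the class attribute, and a projected CRS in metres are all assumptions.

```python
# Minimal sketch: compare field-survey and OBIA-derived polygon maps for one
# target class (area in hectares and spatial overlap). File and column names
# are hypothetical; geometries are assumed to use a projected CRS in metres.
import geopandas as gpd

field = gpd.read_file("field_survey.gpkg")        # field-based species map
obia = gpd.read_file("obia_classification.gpkg")  # drone/OBIA-based map
obia = obia.to_crs(field.crs)                     # align coordinate systems

# Restrict both layers to the target class, e.g. Elymus athericus dominance.
field_ea = field[field["class"] == "Elymus athericus"]
obia_ea = obia[obia["class"] == "Elymus athericus"]

area_field = field_ea.geometry.area.sum() / 10_000  # m^2 -> ha
area_obia = obia_ea.geometry.area.sum() / 10_000

# Spatial overlap: intersect the two layers and sum the intersection area.
overlap = gpd.overlay(field_ea, obia_ea, how="intersection")
area_overlap = overlap.geometry.area.sum() / 10_000

print(f"field: {area_field:.2f} ha, OBIA: {area_obia:.2f} ha, "
      f"overlap: {area_overlap:.2f} ha "
      f"({100 * area_overlap / area_field:.1f}% of the field-mapped area)")
```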


2020, Vol. 12 (11), pp. 1772
Author(s): Brian Alan Johnson, Lei Ma

Image segmentation and geographic object-based image analysis (GEOBIA) were proposed around the turn of the century as a means to analyze high-spatial-resolution remote sensing images. Since then, object-based approaches have been used to analyze a wide range of images for numerous applications. In this Editorial, we present some highlights of image segmentation and GEOBIA research from the last two years (2018–2019), including a Special Issue published in the journal Remote Sensing. As a final contribution of this special issue, we have shared the views of 45 other researchers (corresponding authors of published papers on GEOBIA in 2018–2019) on the current state and future priorities of this field, gathered through an online survey. Most researchers surveyed acknowledged that image segmentation/GEOBIA approaches have achieved a high level of maturity, although the need for more free user-friendly software and tools, further automation, better integration with new machine-learning approaches (including deep learning), and more suitable accuracy assessment methods was frequently pointed out.


2014, Vol. 84, pp. 107-119
Author(s): Markus Diesing, Sophie L. Green, David Stephens, R. Murray Lark, Heather A. Stewart, ...

Symmetry, 2021, Vol. 13 (2), pp. 356
Author(s): Shubham Mahajan, Akshay Raina, Xiao-Zhi Gao, Amit Kant Pandit

Plant species recognition from visual data has always been a challenging task for Artificial Intelligence (AI) researchers, owing to complications such as the enormous amount of data to be processed across a vast number of floral species. Many parts of a plant can serve as feature sources for an AI-based model, but features derived from leaves are considered more significant for the task than those from flowers, stems, etc., primarily because of their easy accessibility. With this notion, we propose a plant species recognition model based on morphological features extracted from leaf images, using a support vector machine (SVM) with the adaptive boosting (AdaBoost) technique. The proposed framework includes pre-processing, feature extraction, and classification into one of the species. Morphological features such as centroid, major axis length, minor axis length, solidity, perimeter, and orientation are extracted from the digital images of various categories of leaves. In addition, transfer learning, as suggested by some previous studies, has also been used in the feature extraction process. Various classifiers, such as kNN, decision trees, and multilayer perceptrons (with and without AdaBoost), are evaluated on the open-source dataset FLAVIA to verify the robustness of our approach against other classifier frameworks. Our study also demonstrates the advantage of 10-fold cross-validation over other dataset partitioning strategies, achieving a precision rate of 95.85%.
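A minimal sketch of the described pipeline (morphological leaf features, an SVM boosted with AdaBoost, 10-fold cross-validation) might look as follows in Python with scikit-image and scikit-learn; the mask-loading helper, file paths, and hyperparameters are hypothetical placeholders rather than the authors' implementation.

```python
# Minimal sketch: shape descriptors from binary leaf masks, classified with an
# AdaBoost-boosted SVM and scored by 10-fold cross-validation. The FLAVIA
# loader below is a hypothetical placeholder, not part of any library.
import numpy as np
from skimage import io, measure
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def leaf_features(mask_path):
    """Morphological descriptors of the largest region in a binary leaf mask."""
    mask = io.imread(mask_path) > 0
    region = max(measure.regionprops(measure.label(mask)), key=lambda r: r.area)
    cy, cx = region.centroid
    return [cy, cx, region.major_axis_length, region.minor_axis_length,
            region.solidity, region.perimeter, region.orientation]

# Hypothetical loader returning mask file paths and integer species labels.
paths, labels = load_flavia_masks_and_labels()  # placeholder, define for your data
X = StandardScaler().fit_transform([leaf_features(p) for p in paths])
y = np.array(labels)

clf = AdaBoostClassifier(
    estimator=SVC(kernel="linear", probability=True),  # SVM base learner
    n_estimators=50,
)
scores = cross_val_score(clf, X, y, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```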


2021, Vol. 13 (4), pp. 830
Author(s): Adam R. Benjamin, Amr Abd-Elrahman, Lyn A. Gettys, Hartwig H. Hochmair, Kyle Thayer

This study investigates the use of unmanned aerial systems (UAS) mapping for monitoring the efficacy of invasive aquatic vegetation (AV) management on a floating-leaved AV species, Nymphoides cristata (crested floating heart, CFH). The study site consists of 48 treatment plots (TPs). Based on six flights over two days at three different flight altitudes using both a multispectral and an RGB sensor, accuracy assessment of the final object-based image analysis (OBIA)-derived classified images yielded overall accuracies ranging from 89.6% to 95.4%. The multispectral sensor was significantly more accurate than the RGB sensor at measuring CFH areal coverage within each TP only at the highest multispectral spatial resolution (2.7 cm/pix at 40 m altitude). When measuring the change in AV community area between the day of treatment and two weeks after treatment, there was no significant difference between the temporal area change from the reference datasets and the area changes derived from either the RGB or the multispectral sensor. Thus, water resource managers need to weigh the small gains in accuracy from multispectral sensors against other operational considerations, such as the additional processing time due to increased file sizes, higher equipment costs, and longer flight durations in the field.
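For reference, the overall accuracy reported above is simply the share of correctly classified assessment samples; the short sketch below computes it from a confusion matrix using toy placeholder labels (the class names are illustrative, not the study's classification scheme).

```python
# Toy sketch: overall accuracy from reference vs. classified labels at
# accuracy-assessment points. Labels below are placeholders, not study data.
import numpy as np
from sklearn.metrics import confusion_matrix

reference  = np.array(["CFH", "CFH", "water", "other_AV", "CFH", "water"])
classified = np.array(["CFH", "other_AV", "water", "other_AV", "CFH", "water"])

cm = confusion_matrix(reference, classified, labels=["CFH", "other_AV", "water"])
overall_accuracy = np.trace(cm) / cm.sum()  # correct predictions / all samples
print(cm)
print(f"Overall accuracy: {overall_accuracy:.1%}")  # 83.3% for this toy data
```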

