Estimating Maize-Leaf Coverage in Field Conditions by Applying a Machine Learning Algorithm to UAV Remote Sensing Images

2019
Vol 9 (11)
pp. 2389
Author(s):
Chengquan Zhou
Hongbao Ye
Zhifu Xu
Jun Hu
Xiaoyan Shi
...  

Leaf coverage is an indicator of plant growth rate and predicted yield, and thus it is crucial to plant-breeding research. Robust image segmentation of leaf coverage from remote-sensing images acquired by unmanned aerial vehicles (UAVs) in varying environments can be directly used for large-scale coverage estimation, and is a key component of high-throughput field phenotyping. We therefore propose a machine-learning-based image-segmentation method to extract relatively accurate coverage information from the orthophoto generated after preprocessing. The image-analysis pipeline, which includes dataset augmentation, background removal, classifier training, and noise reduction, generates a set of binary masks from which leaf coverage is obtained. We compare the proposed method with three conventional methods (Hue-Saturation-Value thresholding, an edge-detection-based algorithm, and random forest) and a state-of-the-art deep-learning method, DeepLabv3+. The proposed method improves indicators such as Qseg, Sr, Es, and mIOU by 15% to 30%. The experimental results show that the approach is less limited by radiation conditions and that the protocol can easily be implemented for extensive sampling at low cost. We therefore recommend using red-green-blue (RGB)-based technology, in addition to conventional equipment, for acquiring the leaf coverage of agricultural crops.
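As a rough illustration of the kind of baseline the comparison includes, the sketch below computes leaf coverage as the fraction of vegetation pixels obtained from a simple Hue-Saturation-Value threshold, followed by a morphological opening for noise reduction. The HSV bounds, kernel size, and file name are illustrative assumptions, not values taken from the paper.

```python
import cv2
import numpy as np

def leaf_coverage_hsv(image_path, lower=(35, 40, 40), upper=(85, 255, 255)):
    """Estimate leaf coverage as the fraction of green pixels in an RGB image.

    The HSV bounds are illustrative defaults for green vegetation, not the
    thresholds used in the paper.
    """
    bgr = cv2.imread(image_path)                      # OpenCV loads images as BGR
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower, np.uint8), np.array(upper, np.uint8))
    # Simple noise reduction with a morphological opening
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return float(np.count_nonzero(mask)) / mask.size  # coverage in [0, 1]

# Example (hypothetical file name): coverage = leaf_coverage_hsv("orthophoto_tile.png")
```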

2021
Vol 13 (8)
pp. 1541
Author(s):
Marco Piragnolo
Francesco Pirotti
Carlo Zanrosso
Emanuele Lingua
Stefano Grigolato

This paper reports a semi-automated workflow for detecting and quantifying forest damage from windthrow in an Alpine region, in particular from the Vaia storm of October 2018. A web-GIS platform allows users to select the damaged area by drawing polygons; several vegetation indices (VIs) are automatically calculated from remote sensing data (Sentinel-2A) and tested, using cross-validation with ground-truth data, to identify those most suitable for quantifying forest damage. Results show that the mean values of NDVI and NDMI decreased in the damaged areas and have a strong negative correlation with severity. RGI behaves in the opposite way to NDVI and NDMI, as it highlights the red component of the land surface. In all cases, the variance of the VIs increases after the event by between 0.03 and 0.15. Understorey not damaged by the windthrow, if it accounts for 40% or more of the total cover in the area, significantly undermines the sensitivity of the VIs for detecting and predicting severity. Using aggregate statistics (average and standard deviation) of the VIs over polygons as input to a machine learning algorithm, i.e., random forest, yields a severity regression with a root mean square error (RMSE) of 9.96, on a severity scale of 0–100, using an ensemble of area averages and standard deviations of the NDVI, NDMI, and RGI indices. The results show that combining more than one VI can significantly improve the estimation of severity and that web-GIS tools can support decisions with selected VIs. The reported results prove that Sentinel-2 imagery can be deployed and analysed via web tools to estimate forest damage severity and that VIs can be used via machine learning to predict damage severity, with careful evaluation of the effect of understorey in each situation.
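As a minimal sketch of the index-plus-regression idea described above, the code below computes NDVI, NDMI, and a red-green index from per-polygon Sentinel-2 band arrays, aggregates each index into a mean and standard deviation, and feeds the resulting features to a random forest regressor scored by RMSE. The band variables, the red/green formulation of RGI, and the model settings are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

def indices(red, green, nir, swir):
    """Compute NDVI, NDMI, and a red-green index (RGI) from Sentinel-2 band arrays.
    RGI is taken here as the red/green ratio; the paper's exact formulation may differ."""
    ndvi = (nir - red) / (nir + red + 1e-9)
    ndmi = (nir - swir) / (nir + swir + 1e-9)
    rgi = red / (green + 1e-9)
    return ndvi, ndmi, rgi

def polygon_features(red, green, nir, swir):
    """Aggregate each index over one polygon into its mean and standard deviation."""
    feats = []
    for vi in indices(red, green, nir, swir):
        feats += [float(np.mean(vi)), float(np.std(vi))]
    return feats  # 6 features per polygon

# X: one row of aggregated VI statistics per polygon; y: ground-truth severity (0-100).
# X_train, X_test, y_train, y_test are assumed to be prepared elsewhere.
# model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_train, y_train)
# rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
```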


2019
Vol 11 (12)
pp. 1500
Author(s):
Ning Yang
Diyou Liu
Quanlong Feng
Quan Xiong
Lin Zhang
...  

Large-scale crop mapping provides important information for agricultural applications. However, it is a challenging task because of the inconsistent availability of remote sensing data, caused by irregular time series and limited image coverage, together with the low spatial resolution of the classification results. In this study, we propose a new, efficient grid-based method to address the inconsistent availability of high-medium resolution images for large-scale crop classification. First, we propose partitioning the remote sensing data into grids to solve the problem of temporal inconsistency. Then, a parallel computing technique is introduced to improve calculation efficiency at the grid scale. Experiments were designed to evaluate the applicability of this method to different high-medium spatial resolution remote sensing images and different machine learning algorithms, and to compare the results with those of the widely used non-parallel approach. The computational experiments showed that the proposed method successfully identifies large-scale crop distribution using common high-medium resolution remote sensing images (GF-1 WFV and Sentinel-2) and common machine learning classifiers (random forest and support vector machine). Finally, we mapped the croplands of Heilongjiang Province in 2015, 2016, and 2017 using a random forest classifier with time-series GF-1 WFV spectral features, the enhanced vegetation index (EVI), and the normalized difference water index (NDWI). The accuracy was assessed using a confusion matrix: classification accuracy reached 88%, 82%, and 85% in 2015, 2016, and 2017, respectively. In addition, with the help of parallel computing, the calculation speed improved by at least seven-fold. This indicates that using a grid framework to partition the data for classification is feasible for crop mapping over large areas and has great application potential.
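The gridding in the paper mainly addresses inconsistent image availability, but the parallel-computation step can be illustrated with a simple sketch: partition a feature stack into spatial tiles and classify each tile in a separate process with an already-fitted random forest. Tile size, worker count, and the assumption of integer class labels are illustrative choices, not the authors' implementation.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from sklearn.ensemble import RandomForestClassifier

def split_into_grids(features, grid_size=512):
    """Yield (row, col, block) tiles from a (H, W, bands) feature stack."""
    h, w, _ = features.shape
    for r in range(0, h, grid_size):
        for c in range(0, w, grid_size):
            yield r, c, features[r:r + grid_size, c:c + grid_size, :]

def classify_grid(args):
    """Classify one tile; the model must be picklable (a fitted RandomForestClassifier is)."""
    r, c, block, model = args
    h, w, bands = block.shape
    labels = model.predict(block.reshape(-1, bands)).reshape(h, w)  # integer class labels assumed
    return r, c, labels

def classify_parallel(features, model, grid_size=512, workers=4):
    """Run per-grid classification in parallel processes and mosaic the results.
    Call this under an `if __name__ == "__main__":` guard on spawn-based platforms."""
    out = np.zeros(features.shape[:2], dtype=np.int32)
    tasks = [(r, c, b, model) for r, c, b in split_into_grids(features, grid_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for r, c, labels in pool.map(classify_grid, tasks):
            out[r:r + labels.shape[0], c:c + labels.shape[1]] = labels
    return out
```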


Author(s):  
Ye Lv
Guofeng Wang
Xiangyun Hu

At present, remote sensing is one of the most effective technologies for acquiring information about the Earth's surface, and it is very useful for geo-information updating and related applications. Extracting roads from remote sensing images is one of the biggest demands of rapid urban development and has therefore become a topic of intense research. Roads in high-resolution images are complex and their patterns vary widely, which makes road extraction difficult. In this paper, a machine-learning-based strategy is presented. The strategy combines geometric, radiometric, topological, and texture features. Because high-resolution remote sensing images cover large areas of landscape, road extraction is slow; therefore, road regions of interest (ROIs) are first detected using Hough line detection and a buffering method to narrow down the search area. As roads in high-resolution images normally appear as ribbons, mean-shift and watershed segmentation methods are used to extract road segments. A Real AdaBoost supervised machine learning algorithm is then used to pick out the segments that match road patterns. Finally, geometric shape analysis and morphological methods are used to prune and restore the whole road area and to detect road centerlines.
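A minimal sketch of the ROI step described above: detect straight-line candidates with a probabilistic Hough transform on an edge map and buffer them into a coarse road search mask. All thresholds and the buffer width are illustrative assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

def road_roi_mask(gray, buffer_px=15):
    """Build a coarse road region-of-interest mask by buffering Hough line candidates.
    All thresholds below are illustrative, not the paper's settings."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    mask = np.zeros_like(gray)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            # Drawing with a thick stroke acts as a simple buffer around each line
            cv2.line(mask, (x1, y1), (x2, y2), 255, thickness=2 * buffer_px)
    return mask  # later segmentation and AdaBoost classification run inside this mask
```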


2018
Vol 4 (4)
pp. 7
Author(s):
Rakesh Tripathi
Neelesh Gupta

Information extraction is a very challenging task because remote sensing images are complicated and can be influenced by many factors. The information that can be derived from a remote sensing image depends largely on the quality of the image segmentation. Image segmentation is an important processing step in most image, video, and computer vision applications, and extensive research has produced many different approaches and algorithms. Labeling different parts of an image remains a challenging aspect of image processing. Segmentation, considered one of the main steps in image processing, divides a digital image into multiple regions so that they can be analyzed; it is also used to distinguish different objects in an image. Researchers have developed several segmentation techniques to make images easier to evaluate, and various algorithms for automating the segmentation process have been proposed, tested, and evaluated to find those best suited to different types of images. In this paper, a review of basic image segmentation techniques for satellite images is presented.
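As a concrete example of one of the basic techniques such a review typically covers, the sketch below segments a satellite image into k spectral clusters with k-means. The cluster count, termination criteria, and file handling are arbitrary illustrative choices.

```python
import cv2
import numpy as np

def kmeans_segmentation(image_path, k=4):
    """Segment a satellite image into k spectral clusters with k-means,
    one of the basic techniques reviewed in papers like this one."""
    img = cv2.imread(image_path)
    pixels = img.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    # Map each pixel to its cluster center for a visualizable segmentation
    segmented = centers[labels.flatten()].astype(np.uint8).reshape(img.shape)
    return labels.reshape(img.shape[:2]), segmented
```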


Author(s):  
Y. Lin
T. Zhang
K. Qian
G. Xie
J. Cai

Abstract. Automatic classification of remote sensing images is the key technology for extracting the rich geo-information they contain and for monitoring dynamic changes in land use and the ecological environment. Remote sensing images carry a large amount of information in many dimensions, so how to classify them and extract their information has become a crucial issue in remote sensing science. With the development of neural network theory, many scholars have studied the application of neural network models to remote sensing image classification, but some problems of artificial neural network methods remain unsolved. In this study, considering large-scale land classification for medium-resolution, multi-spectral remote sensing imagery, an improved machine learning algorithm for remote sensing classification, based on the extreme learning machine (ELM), was developed via regularization theory. The improved algorithm is well suited to post-classification change monitoring of features in large-scale imagery. Our main goal is to evaluate the performance of the ELM with A-optimal design regularization (referred to here as A-optimal RELM), so the accuracy and efficiency of the A-optimal RELM algorithm for remote sensing image classification are compared with those of the support vector machine (SVM) and the original ELM. The experimental results show that A-optimal RELM performs best on the test images, with overall accuracies of 97.27% and 95.03%. Moreover, A-optimal RELM is better at distinguishing similar and easily confused terrestrial object pixels.
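For orientation, the sketch below implements a plain ridge-regularized extreme learning machine: a random hidden layer followed by a closed-form, L2-penalized solution for the output weights. It stands in for the general ELM-with-regularization idea only; the A-optimal design regularization evaluated in the paper is not reproduced here, and the hidden-layer size and penalty are assumed values.

```python
import numpy as np

class RegularizedELM:
    """Single-hidden-layer extreme learning machine with an L2 (ridge) penalty.
    This stands in for the paper's A-optimal regularization, which is not reproduced here."""

    def __init__(self, n_hidden=500, reg=1e-2, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)           # random feature map

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                      # one-hot targets from integer labels
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Ridge solution for the output weights: (H^T H + reg*I)^-1 H^T T
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ T)
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# Example: y_pred = RegularizedELM(n_hidden=1000).fit(X_train, y_train).predict(X_test)
```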


2021
Vol 13 (20)
pp. 4128
Author(s):
Jinwen Xu
Yi Qiang

Quantitative assessment of community resilience is a challenge due to the lack of empirical data about human dynamics in disasters. To fill this data gap, this study explores the utility of nighttime lights (NTL) remote sensing images for assessing community recovery and resilience in natural disasters. Specifically, it uses the newly released NASA moonlight-adjusted SNPP-VIIRS daily images to analyze spatiotemporal changes in NTL radiance during Hurricane Sandy (2012). Based on the conceptual framework of the recovery trajectory, NTL disturbance and recovery during the hurricane were calculated at different spatial units and analyzed using spatial analysis tools. Regression analysis was applied to explore the relations between the observed NTL changes and explanatory variables such as wind speed, housing damage, land cover, and Twitter keywords. The results indicate potential drivers of NTL changes and urban-rural disparities in disaster impacts and recovery. This study shows that NTL remote sensing images are a low-cost instrument for collecting near-real-time, large-scale, and high-resolution human dynamics data in disasters, providing novel insight into community recovery and resilience. The uncovered spatial disparities in community recovery can help improve the disaster awareness and preparation of local communities and promote resilience against future disasters. The systematic documentation of the analysis workflow provides a reference for future research applying SNPP-VIIRS daily images.
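A minimal sketch of the recovery-trajectory metrics described above, assuming a daily NTL radiance series per spatial unit: disturbance is taken as the relative drop below a pre-event baseline and recovery as the number of days until radiance returns toward that baseline. The window length and 95% recovery threshold are illustrative assumptions, not the study's definitions.

```python
import numpy as np

def ntl_recovery_metrics(radiance, event_idx, baseline_days=30, recovery_frac=0.95):
    """Compute simple disturbance/recovery metrics from a daily NTL radiance series.

    radiance: 1-D array of daily moonlight-adjusted radiance for one spatial unit.
    event_idx: index of the day the hurricane made landfall.
    The window length and the 95% recovery threshold are illustrative assumptions.
    """
    baseline = np.nanmean(radiance[max(0, event_idx - baseline_days):event_idx])
    post = radiance[event_idx:]
    disturbance = (baseline - np.nanmin(post)) / baseline          # relative post-event drop
    recovered = np.where(post >= recovery_frac * baseline)[0]
    recovery_days = int(recovered[0]) if recovered.size else None  # None = not yet recovered
    return disturbance, recovery_days
```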


Author(s):  
Xiaochuan Tang
Mingzhe Liu
Hao Zhong
Yuanzhen Ju
Weile Li
...  

Landslide recognition is widely used in natural disaster risk management. Traditional landslide recognition is conducted mainly by geologists, which is accurate but inefficient. This article introduces multiple instance learning (MIL) to perform automatic landslide recognition. An end-to-end deep convolutional neural network is proposed, referred to as Multiple Instance Learning-based Landslide classification (MILL). First, MILL uses a large-scale remote sensing image classification dataset to pre-train networks for landslide feature extraction. Second, MILL extracts instances and assigns instance labels without pixel-level annotations. Third, MILL uses a new channel attention-based MIL pooling function to map instance-level labels to a bag-level label. We apply MILL to detect landslides in a loess area. Experimental results demonstrate that MILL is effective in identifying landslides in remote sensing images.
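The sketch below shows a generic attention-based MIL pooling layer that maps a bag of instance embeddings to a single bag-level prediction. It is a standard attention-pooling formulation used only to illustrate the idea; the paper's channel attention-based pooling is not reproduced, and the feature dimension and class count shown are assumptions.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Generic attention-based MIL pooling: instance embeddings -> bag embedding -> bag label.
    A standard attention-pooling sketch, not the paper's exact channel-attention design."""

    def __init__(self, feat_dim=512, hidden_dim=128, n_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, instances):                                   # (n_instances, feat_dim)
        weights = torch.softmax(self.attention(instances), dim=0)   # attention over instances
        bag = (weights * instances).sum(dim=0)                      # weighted bag embedding
        return self.classifier(bag), weights.squeeze(-1)

# Example: logits, attn = AttentionMILPooling()(torch.randn(32, 512))
```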


2021
Vol 4 (1)
Author(s):
Peter M. Maloca
Philipp L. Müller
Aaron Y. Lee
Adnan Tufail
Konstantinos Balaskas
...  

Machine learning has greatly facilitated the analysis of medical data, but its internal operations usually remain opaque. To better comprehend these opaque procedures, a convolutional neural network for optical coherence tomography image segmentation was enhanced with a Traceable Relevance Explainability (T-REX) technique. The proposed application was based on three components: ground truth generation by multiple graders, calculation of Hamming distances among the graders and the machine learning algorithm, and a smart data visualization ('neural recording'). An overall average variability of 1.75% between the human graders and the algorithm was found, slightly lower than the 2.02% variability among the human graders. The ambiguity in the ground truth had a noteworthy impact on the machine learning results, which could be visualized. The convolutional neural network balanced between graders and allowed for modifiable predictions dependent on the compartment. Using the proposed T-REX setup, machine learning processes could be rendered more transparent and understandable, possibly leading to optimized applications.
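A minimal sketch of the pairwise comparison described above: the disagreement between two binary segmentation masks is measured as the fraction of differing pixels (a normalized Hamming distance) and averaged over all grader-grader and grader-algorithm pairs. The dictionary keys and mask names are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def hamming_fraction(mask_a, mask_b):
    """Fraction of pixels on which two binary segmentation masks disagree."""
    return float(np.mean(mask_a.astype(bool) != mask_b.astype(bool)))

def pairwise_variability(masks):
    """Pairwise Hamming fractions among a dict of graders' (and the model's) masks."""
    pairs = combinations(masks.keys(), 2)
    return {f"{a} vs {b}": hamming_fraction(masks[a], masks[b]) for a, b in pairs}

# Example with hypothetical names:
# variability = pairwise_variability({"grader1": m1, "grader2": m2, "model": m_cnn})
```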

