On the uncertainty of stream networks derived from elevation data: the error propagation approach

2010 ◽  
Vol 14 (7) ◽  
pp. 1153-1165 ◽  
Author(s):  
T. Hengl ◽  
G. B. M. Heuvelink ◽  
E. E. van Loon

Abstract. DEM error propagation methodology is extended to the derivation of vector-based objects (stream networks) using geostatistical simulations. First, point-sampled elevations are used to fit a variogram model. Next, 100 DEM realizations are generated using conditional sequential Gaussian simulation; the stream network map is extracted for each of these realizations, and the collection of stream networks is analyzed to quantify the error propagation. At each grid cell, the probability of the occurrence of a stream and the propagated error are estimated. The method is illustrated using two small data sets: Baranja Hill (30 m grid cell size; 16 512 pixels; 6367 sampled elevations) and Zlatibor (30 m grid cell size; 15 000 pixels; 2051 sampled elevations). All computations are run in R, the open-source software for statistical computing: the geoR package is used to fit the variogram, the gstat package is used to run sequential Gaussian simulation, and streams are extracted with the open-source GIS SAGA via the RSAGA library. The resulting stream error map (the information entropy of a Bernoulli trial) clearly depicts areas where the extracted stream network is least precise, usually areas of low local relief that are slightly convex (0–10 difference from the mean value). In both cases, significant parts of the study area (17.3% for Baranja Hill; 6.2% for Zlatibor) show a high error (H>0.5) in locating streams. By correlating the propagated uncertainty of the derived stream network with various land surface parameters, the sampling of height measurements can be optimized so that delineated streams satisfy the required accuracy level. Such an error propagation tool should become standard functionality in any modern GIS. A remaining issue to be tackled is the computational burden of geostatistical simulations: the framework is at present limited to small data sets with several hundred points. Scripts and data sets used in this article are available on-line via the www.geomorphometry.org website and can easily be adapted to any similar case study.
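As an illustration of the workflow this abstract describes, a minimal R sketch follows, using the same gstat package named above for variogram fitting and conditional sequential Gaussian simulation. The file name, column names, and initial variogram parameters are assumptions for illustration only; the stream extraction step in SAGA via RSAGA is indicated by a comment rather than implemented.

    library(sp)     # spatial data classes
    library(gstat)  # variogram fitting and sequential Gaussian simulation

    # Point-sampled elevations; a text file with x, y, z columns is assumed
    pts <- read.table("elevations.txt", header = TRUE)
    coordinates(pts) <- ~x + y

    # Step 1: fit a variogram model (initial psill/range/nugget are placeholders)
    v <- variogram(z ~ 1, pts)
    v.fit <- fit.variogram(v, vgm(psill = 100, model = "Exp", range = 500, nugget = 1))

    # Step 2: 100 conditional sequential Gaussian simulations on a 30 m grid
    bb <- bbox(pts)
    grd <- expand.grid(x = seq(bb[1, 1], bb[1, 2], by = 30),
                       y = seq(bb[2, 1], bb[2, 2], by = 30))
    coordinates(grd) <- ~x + y
    gridded(grd) <- TRUE
    sims <- krige(z ~ 1, pts, grd, model = v.fit, nsim = 100, nmax = 40)

    # Step 3: extract a stream network from each realization (in SAGA via RSAGA),
    # then per cell let p be the fraction of realizations containing a stream.
    # The propagated error is the information entropy of a Bernoulli trial:
    bernoulli_entropy <- function(p)
      ifelse(p == 0 | p == 1, 0, -p * log2(p) - (1 - p) * log2(1 - p))

Cells with H>0.5 under this entropy measure correspond to the high-error areas reported in the abstract.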


Author(s):  
Jianping Ju ◽  
Hong Zheng ◽  
Xiaohang Xu ◽  
Zhongyuan Guo ◽  
Zhaohui Zheng ◽  
...  

Abstract. Although convolutional neural networks have achieved success in image classification, challenges remain in machine-vision-based quality sorting of agricultural products, such as jujube defect detection. The performance of jujube defect detection depends mainly on the feature extraction and the classifier used. Owing to the diversity of jujube materials and the variability of the testing environment, traditional manually extracted features often fail to meet the requirements of practical application. In this paper, a jujube sorting model for small data sets, based on a convolutional neural network and transfer learning, is proposed to meet the practical demands of jujube defect detection. First, the original images collected from an actual jujube sorting production line were pre-processed and augmented to establish a data set covering five categories of jujube defects. The original CNN model was then improved by embedding an SE (squeeze-and-excitation) module and by replacing the softmax loss function with the triplet loss and center loss functions. Finally, a model pre-trained on the ImageNet data set was trained on the jujube defects data set, so that the pre-trained parameters could fit the parameter distribution of the jujube defect images, completing the transfer of the model and enabling detection and classification of jujube defects. Classification results were analyzed through accuracy and confusion matrices against comparison models and visualized with heatmaps. The experimental results show that the SE-ResNet50-CL model improves the fine-grained classification of jujube defects, reaching a test accuracy of 94.15%. The model is stable and achieves high recognition accuracy in complex environments.
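A hedged sketch of the architecture change the abstract describes, written with the keras interface for R; this is not the authors' code, and the SE reduction ratio, input size, and layer placement are assumptions. The triplet/center loss replacement is noted in a comment rather than implemented.

    library(keras)

    # Squeeze-and-Excitation block: channel-wise recalibration of a feature map.
    # ResNet50's final convolutional block outputs 2048 channels.
    se_block <- function(x, channels = 2048, ratio = 16) {
      w <- x %>%
        layer_global_average_pooling_2d() %>%                         # squeeze
        layer_dense(units = channels %/% ratio, activation = "relu") %>%
        layer_dense(units = channels, activation = "sigmoid") %>%     # excite
        layer_reshape(c(1, 1, channels))
      layer_multiply(list(x, w))                                      # rescale
    }

    # ResNet50 pre-trained on ImageNet, classification head removed
    base <- application_resnet50(weights = "imagenet", include_top = FALSE,
                                 input_shape = c(224, 224, 3))

    # SE recalibration plus a 5-class head for the jujube defect categories;
    # in the paper the softmax loss is further replaced by triplet + center loss
    out <- base$output %>%
      se_block() %>%
      layer_global_average_pooling_2d() %>%
      layer_dense(units = 5, activation = "softmax")

    model <- keras_model(inputs = base$input, outputs = out)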


2020 ◽  
Vol 98 (Supplement_4) ◽  
pp. 8-9
Author(s):  
Zahra Karimi ◽  
Brian Sullivan ◽  
Mohsen Jafarikia

Abstract. Previous studies have shown that the accuracy of the Genomic Estimated Breeding Value (GEBV) as a predictor of future performance is higher than that of the traditional Estimated Breeding Value (EBV). The purpose of this study was to estimate the potential advantage of selection on GEBV for litter size (LS), compared to selection on EBV, in the Canadian swine dam line breeds. The study included 236 Landrace and 210 Yorkshire gilts born in 2017 that had their first farrowing after 2017. GEBV and EBV for LS were calculated with data available at the end of 2017 (GEBV2017 and EBV2017, respectively). The de-regressed EBV for LS in July 2019 (dEBV2019) was used as an adjusted phenotype. The average dEBV2019 for the top 40% of sows ranked on GEBV2017 was compared to the average dEBV2019 for the top 40% ranked on EBV2017. The standard error of the estimated difference for each breed was estimated by comparing the average dEBV2019 for repeated random samples of two sets of 40% of the gilts. In comparison to the top 40% ranked on EBV2017, ranking on GEBV2017 resulted in an extra 0.45 (±0.29) and 0.37 (±0.25) piglets born per litter in Landrace and Yorkshire replacement gilts, respectively. The estimated Type I errors of the GEBV2017 gain over EBV2017 were 6% and 7% in Landrace and Yorkshire, respectively. Selecting both replacement boars and replacement gilts on GEBV instead of EBV could translate into an increased annual genetic gain of 0.3 extra piglets per litter, which would more than double the rate of gain observed with typical EBV-based selection. The permutation test used for validation in this study appears effective with relatively small data sets and could be applied to other traits, other species, and other prediction methods.
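A minimal sketch of the permutation-style comparison described above, assuming a hypothetical data frame gilts with columns gebv2017, ebv2017, and debv2019; the 40% cutoff mirrors the abstract, but the code is illustrative, not the authors' implementation.

    # Rank animals and take the top 40% by a given criterion
    top40 <- function(x) order(x, decreasing = TRUE)[seq_len(round(0.4 * length(x)))]

    # Observed gain: top 40% by GEBV2017 vs top 40% by EBV2017, scored on dEBV2019
    gain_obs <- mean(gilts$debv2019[top40(gilts$gebv2017)]) -
                mean(gilts$debv2019[top40(gilts$ebv2017)])

    # Null distribution: difference between two random 40% subsets of the gilts
    n <- nrow(gilts)
    k <- round(0.4 * n)
    gain_null <- replicate(10000, {
      mean(gilts$debv2019[sample(n, k)]) - mean(gilts$debv2019[sample(n, k)])
    })

    se_diff <- sd(gain_null)               # standard error of the estimated difference
    type1   <- mean(gain_null >= gain_obs) # estimated Type I error of the observed gain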


Author(s):  
Jungeui Hong ◽  
Elizabeth A. Cudney ◽  
Genichi Taguchi ◽  
Rajesh Jugulum ◽  
Kioumars Paryani ◽  
...  

The Mahalanobis-Taguchi System is a diagnostic and predictive method for analyzing patterns in multivariate cases. The goal of this study is to compare the ability of the Mahalanobis-Taguchi System and a neural network to discriminate using small data sets. We examine discriminant ability as a function of data set size using an application area where reliable data are publicly available: the Wisconsin Breast Cancer data set, with nine attributes and one class variable.
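The discriminant core of MTS is the Mahalanobis distance measured against a reference ("normal") group; a hedged base-R illustration follows, with the data frame name, column layout, and cutoff as assumptions.

    # The reference (normal) group defines the Mahalanobis space; here the benign
    # cases of a hypothetical 'wbc' data frame with nine attribute columns
    ref <- subset(wbc, class == "benign")[, 1:9]
    mu  <- colMeans(ref)
    S   <- cov(ref)

    # Scaled Mahalanobis distance of each observation from the reference space
    # (MTS conventionally divides the squared distance by the number of variables)
    md2 <- mahalanobis(wbc[, 1:9], center = mu, cov = S) / 9

    # Observations far from the normal space are flagged as abnormal; in MTS the
    # threshold and the useful attribute subset are tuned with orthogonal arrays
    abnormal <- md2 > 3   # illustrative cutoff only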


2021 ◽  
Vol 13 (4) ◽  
pp. 753 ◽  
Author(s):  
Francesco Mancini ◽  
Francesca Grassi ◽  
Nicola Cenni

This paper discusses a full interferometry processing chain based on dual-orbit Sentinel-1A and Sentinel-1B (S1) synthetic aperture radar data and a combination of open-source routines from the Sentinel Application Platform (SNAP), the Stanford Method for Persistent Scatterers (StaMPS), and additional routines introduced by the authors. These are used to produce vertical and East-West horizontal velocity maps over a study area in the south-western sector of the Po Plain (Italy), where land subsidence is recognized. Long time series of displacements from a cluster of continuous Global Navigation Satellite System (GNSS) stations are processed to provide a global reference frame for the line-of-sight-projected velocities and to validate the velocity maps after the decomposition analysis. We introduce the main theoretical aspects of error propagation analysis for the proposed methodology and quantify the uncertainty of the validation analysis at relevant points. The combined SNAP-StaMPS workflow is shown to be a reliable tool for S1 data processing. Based on the validation procedure, the workflow yields decomposed velocity maps with an accuracy of 2 mm/yr and expected uncertainty levels below 2 mm/yr. Slant-oriented and decomposed velocity maps provide new insights into the ground deformation phenomena affecting the study area, which arise from a combination of natural and anthropogenic sources.
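For reference, the two-orbit decomposition this abstract relies on can be written as a small per-pixel linear system. The sketch below uses one common sign convention (ground-to-satellite line-of-sight unit vector, North component neglected because of its poor sensitivity); conventions differ between processing chains, so this is illustrative rather than the authors' exact formulation.

\[
\begin{pmatrix} v_{\mathrm{LOS}}^{\mathrm{asc}} \\ v_{\mathrm{LOS}}^{\mathrm{desc}} \end{pmatrix}
=
\begin{pmatrix} -\sin\theta_{a}\cos\alpha_{a} & \cos\theta_{a} \\ -\sin\theta_{d}\cos\alpha_{d} & \cos\theta_{d} \end{pmatrix}
\begin{pmatrix} V_{E} \\ V_{U} \end{pmatrix}
\]

where \(\theta\) is the local incidence angle and \(\alpha\) the satellite heading azimuth for the ascending (a) and descending (d) passes. Inverting the 2x2 system per pixel yields the East-West (\(V_E\)) and vertical (\(V_U\)) velocity maps, and propagating the LOS velocity variances through the inverse gives uncertainty levels of the kind quoted above.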


1996 ◽  
Vol 26 (8) ◽  
pp. 1416-1425 ◽  
Author(s):  
Pete Bettinger ◽  
Gay A. Bradshaw ◽  
George W. Weaver

The effects of geographic information system (GIS) data conversion on several polygon- and landscape-level indices were evaluated using a GIS vegetation coverage from eastern Oregon, U.S.A. A vector–raster–vector conversion process was used to examine changes in the GIS data. This process is widely used for data input (digital scanning of vector maps) and somewhat less widely used for data conversion (output of GIS data to specific formats). Most measures were sensitive to the grid cell size used in the conversion process. At the polygon level, conversion with grid cell sizes of 3.05, 6.10, and 10 m produced relatively small changes to the original polygons in terms of ln(polygon area), ln(polygon perimeter), and 1/(fractal dimension). When grid cell size increased to 20 and 30 m, however, polygons were significantly different (p < 0.05) according to these polygon-level indices. At the landscape level, the number of polygons, the polygon size coefficient of variation (CV), and edge density increased, while mean polygon size and an interspersion and juxtaposition index (IJI) decreased. The youngest and oldest age-class polygons followed the trends of the overall landscape only in terms of the number of polygons, mean polygon size, CV, and IJI. One major side effect of the conversion process was that many small polygons were produced in and around narrow areas of the original polygons. An alleviation process (referred to as the dissolving process) was used to dissolve the boundaries between similarly attributed polygons. With the dissolving process, the rate of change of the landscape-level indices slowed; although the number of polygons and the CV still increased with larger grid cell sizes, the increase was smaller than without it. Mean polygon size, edge density, and fractal dimension decreased after the dissolving process was applied. Trends for the youngest and oldest age-class polygons were similar to those for the total landscape, except that IJI was greater for these age classes than for the total landscape.
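A hedged modern re-creation of the vector–raster–vector loop, using the R terra package (tooling that postdates this 1996 study); the shapefile name and attribute field are hypothetical, and the dissolve step stands in for the study's dissolving process.

    library(terra)

    v <- vect("veg_coverage.shp")              # hypothetical vegetation coverage
    for (cell in c(3.05, 6.10, 10, 20, 30)) {  # grid cell sizes from the study
      r  <- rast(ext(v), resolution = cell, crs = crs(v))
      rr <- rasterize(v, r, field = "age_class")
      # Back to vector. dissolve = TRUE merges adjacent same-valued cells,
      # analogous to dissolving boundaries between similarly attributed polygons
      v2 <- as.polygons(rr, dissolve = TRUE)
      cat(cell, "m grid:", nrow(v2), "polygons\n")
    }

Comparing polygon counts, areas, and perimeters of v2 across cell sizes reproduces the kind of sensitivity analysis the abstract reports.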


2018 ◽  
Vol 121 (16) ◽  
Author(s):  
Wei-Chia Chen ◽  
Ammar Tareen ◽  
Justin B. Kinney
