Horizontal accuracy assessment of a novel algorithm for approximating a surface to a DEM

2021 · Vol 4 · pp. 1-5
Author(s): Domingo Barrera, María José Ibáñez, Salah Eddargani, Rocio Romero, Francisco J. Ariza-López, ...

Abstract. This study evaluates the horizontal positional accuracy of a new algorithm that defines a surface approximating DEM data by means of a spline function. The algorithm allows the surface to be evaluated at any point of its domain of definition, and other parameters of interest, such as slopes and orientations, to be estimated analytically. To evaluate the accuracy achieved by the algorithm, we use a 2 m × 2 m reference DEM (DEMref), from which derived DEMs (DEMder) are obtained at 4 m × 4 m, 8 m × 8 m and 16 m × 16 m. For each DEMder, its spline approximant is calculated and evaluated at the same points occupied by the DEMref cells, yielding a resampled 2 m × 2 m DEM (DEMrem). The horizontal accuracy is obtained by computing the area between homologous contour lines derived from DEMref and DEMrem, respectively. The planimetric errors of the proposed algorithm are observed to be very small, even in flat areas, where major differences could be expected. This algorithm could therefore be used to evaluate the horizontal positional accuracy of a DEM product (DEMpro) of lower resolution and from a different producing source than the higher-resolution DEMref.
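
The abstract does not reproduce the spline construction itself; the following is a minimal sketch of the resample-and-evaluate idea only, assuming a regular-grid DEM and using SciPy's bicubic RectBivariateSpline as a stand-in for the authors' approximant. The grid spacings match those in the study, but the synthetic terrain is invented.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Hypothetical coarse DEM on a regular 16 m grid (stand-in for DEMder).
x = np.arange(0.0, 1024.0, 16.0)   # easting of cell centres (m)
y = np.arange(0.0, 1024.0, 16.0)   # northing of cell centres (m)
z = 50.0 * np.sin(x[None, :] / 200.0) * np.cos(y[:, None] / 150.0)

# Fit a bicubic spline surface to the coarse DEM.
surface = RectBivariateSpline(y, x, z, kx=3, ky=3)

# Evaluate the surface on the 2 m grid of the reference DEM -> DEMrem.
x_fine = np.arange(0.0, 1024.0, 2.0)
y_fine = np.arange(0.0, 1024.0, 2.0)
dem_rem = surface(y_fine, x_fine)

# Because the surface is analytic, slope and orientation follow from
# the first partial derivatives instead of finite differences.
dz_de = surface(y_fine, x_fine, dy=1)   # derivative along easting
dz_dn = surface(y_fine, x_fine, dx=1)   # derivative along northing
slope_deg = np.degrees(np.arctan(np.hypot(dz_de, dz_dn)))
```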

Author(s): M. A. Brovelli, M. Minghini, M. E. Molinari, M. Molteni

In recent years there has been a remarkable flourishing of spatial data products released under open licenses. Researchers and professionals are extensively exploiting open geodata for many applications, which, in turn, include decision-making results and other (derived) geospatial datasets among their outputs. Despite the traditional availability of metadata, a question arises about the actual quality of open geodata, as their declared quality is typically taken for granted without any systematic assessment. The present work investigates the case study of Milan Municipality (Northern Italy). A wide set of open geodata, released by national, regional and local authoritative entities, is available for this area. A comprehensive cataloguing operation is first performed, identifying 1061 geospatial open datasets from Italian providers that differ widely in terms of license, format, scale, content, and release date. Among the many quality parameters for geospatial data, the work focuses on positional accuracy. An example of positional accuracy assessment is described for an openly licensed orthophoto through comparison with the official, up-to-date, and large-scale vector cartography of Milan. The comparison is run according to the guidelines provided by ISO and shows that the positional accuracy declared by the orthophoto provider does not correspond to reality. Similar results are found from analyses on other datasets (not presented here). Implications are twofold: raising awareness of the risks of using open geodata while taking their quality for granted, and highlighting the need for open geodata providers to introduce or refine mechanisms for data quality control.


2019 · Vol 9 (2) · pp. 178-185
Author(s): Raad A. Kattan, Farsat H. Abdulrahman

In this study, the geometric accuracy of four different maps covering three sectors of Duhok city was assessed. The maps were produced in different periods and with different techniques. One set consisted of paper-plotted maps, which had to be georeferenced. The other three maps were digitally plotted with reference to the global coordinate system UTM/WGS-84/Zone 38 N. A total of 51 points were identified on the reference map, the master plan of Duhok city prepared by the General Directorate of Urban Planning, Kurdistan Region, Iraq, in collaboration with the German company Ingenieurburo Vossing. This reference map is an official document, certified and checked by the Ministry of Planning of the Kurdistan Region to have a positional accuracy of ±1.5 cm. The same points were searched for and identified on the other three maps. Discrepancies in the Easting and Northing of these points were calculated, resulting in a mean discrepancy of 2.29 m, with a maximum value of 8.5 m in one instance. The maximum standard deviation in dE and dN was 3.8 m. These values are reasonably acceptable, considering that the maps were prepared using different techniques and to variable accuracy standards.
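
A minimal sketch of the discrepancy statistics reported above, assuming homologous point coordinates have already been extracted from the reference and evaluated maps; the three coordinate pairs below are invented placeholders for the 51 real points.

```python
import numpy as np

# Hypothetical UTM coordinates (m) of homologous points: reference map
# vs. one of the evaluated maps. Real inputs would hold 51 points.
ref = np.array([[412050.2, 4072311.8],
                [412380.9, 4072590.1],
                [412701.4, 4072105.6]])
evl = np.array([[412052.6, 4072309.5],
                [412377.1, 4072593.0],
                [412699.0, 4072108.2]])

dE = evl[:, 0] - ref[:, 0]        # discrepancy in Easting
dN = evl[:, 1] - ref[:, 1]        # discrepancy in Northing
d = np.hypot(dE, dN)              # planimetric discrepancy per point

print(f"mean discrepancy: {d.mean():.2f} m, max: {d.max():.2f} m")
print(f"std dev dE: {dE.std(ddof=1):.2f} m, dN: {dN.std(ddof=1):.2f} m")
```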


Geographies · 2021 · Vol 1 (2) · pp. 143-165
Author(s): Jianyu Gu, Russell G. Congalton

Pixels, blocks (i.e., groupings of pixels), and polygons are the fundamental choices for use as assessment units when validating per-pixel image classification. Previous research conducted by the authors of this paper focused on analyzing the impact of positional accuracy when using a single pixel for thematic accuracy assessment. The research described here provides a similar analysis, but with blocks of contiguous pixels chosen as the assessment unit for thematic validation. The goal of this analysis was to assess the impact of positional errors on the thematic assessment. Factors including block size, labeling threshold, landscape characteristics, spatial scale, and classification scheme were also considered. The results demonstrated that using blocks as an assessment unit reduced the thematic errors caused by positional errors to under 10% for most global land-cover mapping projects and most remote-sensing applications achieving half-pixel registration. The larger the block size, the more the positional error was reduced. However, there are practical limitations to the size of the block. More classes in a classification scheme and higher heterogeneity increased the positional effect. The choice of labeling threshold depends on the spatial scale and landscape characteristics, balancing the number of abandoned units against the positional impact. This research suggests using a block of pixels as the assessment unit in thematic accuracy assessment in future applications.
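
A toy sketch of the block-based mechanics described above, under assumed values (3 × 3 blocks, a 0.5 labeling threshold, random maps): each block takes its dominant class as its label, and blocks whose dominant class falls below the threshold are abandoned, mirroring the trade-off between abandoned units and positional impact.

```python
import numpy as np

def block_label(window: np.ndarray, threshold: float):
    """Label a block by its dominant class, or None (abandoned unit)
    when that class covers less than `threshold` of the block."""
    classes, counts = np.unique(window, return_counts=True)
    top = counts.argmax()
    return classes[top] if counts[top] / window.size >= threshold else None

rng = np.random.default_rng(0)
classified = rng.integers(0, 3, (6, 6))   # invented classified map
reference  = rng.integers(0, 3, (6, 6))   # invented reference map

n, thr, agree, used = 3, 0.5, 0, 0
for i in range(0, classified.shape[0], n):
    for j in range(0, classified.shape[1], n):
        c = block_label(classified[i:i + n, j:j + n], thr)
        r = block_label(reference[i:i + n, j:j + n], thr)
        if c is None or r is None:
            continue                       # abandoned unit
        used += 1
        agree += int(c == r)
print(f"thematic agreement: {agree} of {used} usable blocks")
```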


Sensors · 2020 · Vol 20 (18) · pp. 5423
Author(s): José A. Moreno-Ruiz, José R. García-Lázaro, Manuel Arbelo, Manuel Cantón-Garbín

This paper presents an accuracy assessment of the main global-scale Burned Area (BA) products derived from daily images of the Moderate Resolution Imaging Spectroradiometer (MODIS), Fire_CCI 5.1 and MCD64A1 C6, as well as the previous versions of both products (Fire_CCI 4.1 and MCD45A1 C5). The exercise was conducted on the boreal region of Alaska during the period 2000–2017. All the BA polygons registered by the Alaska Fire Service were used as reference data. Both new versions doubled the annual BA estimate compared to the previous versions (66% for Fire_CCI 5.1 versus 35% for v4.1, and 63% for MCD64A1 C6 versus 28% for C5), reducing the omission error (OE) by almost one half (39% versus 67% for Fire_CCI and 48% versus 74% for MCD) and slightly increasing the commission error (CE) (7.5% versus 7% for Fire_CCI and 18% versus 7% for MCD). The Fire_CCI 5.1 product (CE = 7.5%, OE = 39%) presented the best results in terms of positional accuracy, outperforming MCD64A1 C6 (CE = 18%, OE = 48%). These results suggest that Fire_CCI 5.1 could be suitable for users who employ standard BA products in geoinformatics analysis for wildfire management, especially in boreal regions. The Pareto boundary analysis, performed on an annual basis, showed that there is still theoretical room to improve the MODIS sensor-based BA algorithms.
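
Commission and omission errors of this kind reduce to area ratios between the mapped and reference burned areas. A minimal sketch using Shapely rectangles as invented stand-ins for a BA-product footprint and an Alaska Fire Service reference polygon:

```python
from shapely.geometry import box

mapped = box(0, 0, 10, 10)      # invented BA-product burned polygon
reference = box(3, 0, 14, 10)   # invented reference (AFS) polygon

# CE: fraction of the mapped area that is not truly burned.
ce = mapped.difference(reference).area / mapped.area
# OE: fraction of the reference burned area that the product missed.
oe = reference.difference(mapped).area / reference.area
print(f"CE = {ce:.0%}, OE = {oe:.0%}")   # CE = 30%, OE = 36%
```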


2019 · Vol 8 (12) · pp. 552
Author(s): Juan José Ruiz-Lendínez, Francisco Javier Ariza-López, Manuel Antonio Ureña-Cámara

Point-based standard methodologies (PBSM) suggest using 'at least 20' check points to assess the positional accuracy of a given spatial dataset. However, the reason for setting the number of check points as low as 20 is not elaborated upon in the original documents provided by the mapping agencies which developed these methodologies. By means of theoretical analysis and experimental tests, several authors and studies have demonstrated that this limited number of points is clearly insufficient. Using the point-based methodology for the automatic positional accuracy assessment of spatial data developed in our previous study (Ruiz-Lendínez et al., 2017), and specifically a subset of check points obtained from the application of this methodology to two urban spatial datasets, the variability of National Standard for Spatial Data Accuracy (NSSDA) estimations has been analyzed as a function of sample size. The results show that the variability of NSSDA estimations decreases as the number of check points increases, and also that these estimations tend to underestimate accuracy. Finally, the graphical representation of the results can be employed to give guidance on the recommended sample size when PBSMs are used.
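
For context, the NSSDA statistic is the FGDC horizontal accuracy at 95% confidence, computed as 1.7308 × RMSE_r with RMSE_r = sqrt(RMSE_x² + RMSE_y²) (valid when RMSE_x ≈ RMSE_y). The sketch below resamples an invented error population at several sample sizes to illustrate the kind of variability the study analyzes; the normal error model is an assumption, not the study's data.

```python
import numpy as np

def nssda(dx: np.ndarray, dy: np.ndarray) -> float:
    """FGDC/NSSDA horizontal accuracy at 95% confidence,
    assuming RMSE_x and RMSE_y are approximately equal."""
    rmse_r = np.sqrt(np.mean(dx ** 2) + np.mean(dy ** 2))
    return 1.7308 * rmse_r

rng = np.random.default_rng(0)
dx_pop = rng.normal(0.0, 1.2, 5000)   # invented check-point errors (m)
dy_pop = rng.normal(0.0, 1.2, 5000)

for n in (20, 50, 100, 500):
    idx = rng.choice(dx_pop.size, size=(200, n))   # 200 samples of size n
    est = [nssda(dx_pop[i], dy_pop[i]) for i in idx]
    print(f"n={n:4d}  mean NSSDA={np.mean(est):.2f} m  "
          f"spread (std)={np.std(est):.2f} m")
```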


2019 · Vol 25 (1)
Author(s): Leandro Luiz Silva de França, Alex de Lima Teodoro da Penha, João Alberto Batista de Carvalho

Abstract. This paper presents a comparative study between absolute and relative methods for assessing the altimetric positional accuracy of Digital Elevation Models (DEMs). As the theoretical basis of this research, the definitions of accuracy (exactness) and precision, as well as the concepts related to absolute and relative positional accuracy, were explored. In the case study, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Shuttle Radar Topography Mission (SRTM) DEMs were used. In the analysis of absolute accuracy, 6,568 ground control points from a GNSS orbital survey, collected through the relative static method, were used. For the relative accuracy, a reference DEM with a spatial resolution of 5 meters, generated by a stereophotogrammetric process for the Mapping Project of Bahia (Brazil), was used. It was concluded that, provided the accuracy of the reference DEM is better than that of the two evaluated DEMs, the PEC-PCD classification results of the relative evaluation are equal to or better than the absolute evaluation results, with the advantage of being able to examine the entire pixel population of the evaluated models. This makes it possible to identify outliers, distortions, and displacements, and to delimit affected regions, which is much less feasible with a limited set of control points.
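
A schematic contrast of the two routes, with invented synthetic heights standing in for the ASTER/SRTM data: the absolute assessment samples a small set of control points, while the relative assessment differences every pixel against a reference DEM and can therefore flag outliers.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.uniform(100.0, 900.0, 10_000)            # "true" terrain heights (m)
dem_eval = truth + rng.normal(0.0, 6.0, truth.size)  # evaluated DEM

# Absolute assessment: a small set of GNSS control points.
ctrl = rng.choice(truth.size, 300, replace=False)
rmse_abs = np.sqrt(np.mean((dem_eval[ctrl] - truth[ctrl]) ** 2))

# Relative assessment: pixel-by-pixel difference against a more
# accurate reference DEM, exposing the full error population.
dem_ref = truth + rng.normal(0.0, 1.0, truth.size)
diff = dem_eval - dem_ref
rmse_rel = np.sqrt(np.mean(diff ** 2))
outliers = np.abs(diff) > 3.0 * diff.std()

print(f"absolute RMSE (300 control points): {rmse_abs:.2f} m")
print(f"relative RMSE (all pixels): {rmse_rel:.2f} m, "
      f"outliers flagged: {outliers.sum()}")
```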


2021 · Vol 10 (7) · pp. 430
Author(s): Juan J. Ruiz-Lendínez, Manuel A. Ureña-Cámara, José L. Mesa-Mingorance, Francisco J. Quesada-Real

There are many studies related to Imagery Segmentation (IS) in the field of Geographic Information (GI). However, none of them address the assessment of IS results from a positional perspective. In a field in which the positional aspect is critical, it seems reasonable to think that the quality associated with this aspect must be controlled. This paper presents an automatic positional accuracy assessment (PAA) method for assessing this quality component of the regions obtained by applying a textural segmentation algorithm to a Very High Resolution (VHR) aerial image. The method is based on comparing the ideal segmentation with the computed segmentation by counting their differences, and it therefore shares its conceptual principles with the automatic procedures used to evaluate the positional accuracy of GI. As in any PAA method, two key aspects related to the sample were addressed: (i) its size, specifically its influence on the uncertainty of the estimated accuracy values, and (ii) its categorization. Although the results obtained must be taken with caution, they make it clear that automatic PAA procedures, which are mainly applied to the positional quality assessment of cartography, are valid for assessing the positional accuracy achieved by other types of processes, such as the IS process presented in this study.
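
As a toy illustration of counting differences between an ideal and a computed segmentation (both label arrays are invented):

```python
import numpy as np

# Invented region labels: an "ideal" (reference) segmentation and the
# segmentation computed by the algorithm, over the same 4 x 4 image.
ideal = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 1, 1],
                  [2, 2, 1, 1]])
computed = np.array([[0, 0, 0, 1],
                     [0, 0, 1, 1],
                     [2, 2, 1, 1],
                     [2, 2, 2, 1]])

mismatch = ideal != computed   # pixels where the two segmentations disagree
print(f"disagreeing pixels: {mismatch.sum()} of {mismatch.size} "
      f"({mismatch.mean():.0%})")
```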


Author(s): S. Jovanovic, D. Jovanovic, G. Bratic, M. A. Brovelli

<p><strong>Abstract.</strong> Roads are one of the most important infrastructural objects for each country. Slow development of third world countries is partially influenced by missing roads. Therefore, United Nation (UN) enlisted them inside the ninth Sustainable Development Goal (SDG) whose achievement highly relies on geospatial data. Since the authoritative data for the majority of developing countries are incomplete and unavailable, the focus of this study is on free data. The conveyed research, explained in this paper, was divided in two parts. The first one refers to completeness and positional accuracy assessment of three different road data sets (freely available). The second part was focused only on OpenStreetMap (OSM) since it showed the best results in the previous stage. Thus, OSM was used to compute (in the second part of the research) and analyse the road accessibility rate within the buffer zone of two kilometers from human settlements. To locate human settlements, raster data, representing land covers were used. Results are pointing where the infrastructure is not mapped or is not present. The complete work was done using Free and Open Source Software, which is important, since the proposed procedure can be implemented by anyone.</p>


2020 · Vol 11 (87)
Author(s): Mariana Yurkiv, Yuliia Holubinka, Andrii Hoba, ...

The article assesses the accuracy of the 1878 plan of Lviv, published by Artaria & Co as a separate sheet from the administrative map of the Austrian cartographer and engineer Karl Richter van Kummersberg. This cartographic work was compiled on the basis of the Second Military Topographic Survey, carried out in the Austrian Empire during 1855-1863, and it occupies an important place in the study of the architecture and urban planning of Lviv in Austrian times, before the great construction changes of the nineteenth century. Analysing the accuracy of old plans of Lviv is an important aspect of studying these works: it allows their geometric properties to be assessed and yields valuable information about the methods used to create and process them, making it possible to compare the cartographic, documentary and semantic value of the old plans. The accuracy assessment methodology is based on the transformation and geometric analysis of sets of identical points on the old plan and on a reference map. Sets of control points are used to bring the two cartographic products into a common coordinate system; a four-parameter Helmert transformation is used for this purpose. Identical points should be distributed over the entire area, ideally evenly, so that the resulting transformation key is global in character. From the transformation key, multiquadric interpolation is performed to construct a continuous surface from the discrete data. Its results make it possible to visualize the errors of the old plan graphically, in the form of displacement vectors and isolines of scale and rotation, which significantly speeds up and simplifies the study of the accuracy of old plans. In addition, a value characterizing the positional accuracy of the old plan was obtained using the method of least squares. All calculations and constructions were performed in the MapAnalyst software. The presented technique can be used for similar research on other cartographic works, and the numerical results and graphical visualizations obtained can be used to compare old plans with one another.
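
A minimal sketch of estimating the four-parameter (2-D similarity) Helmert transformation by least squares, with invented point sets; this mirrors the mathematical model, not MapAnalyst's implementation. The residuals correspond to the displacement vectors the study visualizes.

```python
import numpy as np

def helmert_4param(src: np.ndarray, dst: np.ndarray):
    """Least-squares 2-D Helmert (similarity) transform:
    dst ~ [[a, -b], [b, a]] @ src + [tx, ty]."""
    n = src.shape[0]
    A = np.zeros((2 * n, 4))
    A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)])
    obs = dst.reshape(-1)                      # interleaved x, y observations
    (a, b, tx, ty), *_ = np.linalg.lstsq(A, obs, rcond=None)
    scale = np.hypot(a, b)                     # a = s*cos(theta), b = s*sin(theta)
    rotation_deg = np.degrees(np.arctan2(b, a))
    residuals = (A @ np.array([a, b, tx, ty]) - obs).reshape(-1, 2)
    return scale, rotation_deg, (tx, ty), residuals

# Invented identical points: old-plan coordinates vs. reference coordinates.
rng = np.random.default_rng(42)
plan = np.array([[10.0, 12.0], [85.5, 14.2], [47.3, 66.1], [12.9, 80.0]])
ref = plan * 1.02 + np.array([5.0, -3.0]) + rng.normal(0.0, 0.2, plan.shape)

s, rot, t, v = helmert_4param(plan, ref)
print(f"scale={s:.4f}  rotation={rot:.3f} deg  shift=({t[0]:.2f}, {t[1]:.2f})")
```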

