Study of NSSDA Variability by Means of Automatic Positional Accuracy Assessment Methods

2019 ◽  
Vol 8 (12) ◽  
pp. 552 ◽  
Author(s):  
Juan José Ruiz-Lendínez ◽  
Francisco Javier Ariza-López ◽  
Manuel Antonio Ureña-Cámara

Point-based standard methodologies (PBSM) suggest using 'at least 20' check points to assess the positional accuracy of a spatial dataset. However, the reason for limiting the number of check points to 20 is not elaborated upon in the original documents provided by the mapping agencies that developed these methodologies. By means of theoretical analysis and experimental tests, several authors and studies have demonstrated that this limited number of points is clearly insufficient. Using the point-based methodology for the automatic positional accuracy assessment of spatial data developed in our previous study (Ruiz-Lendínez et al. 2017), and specifically a subset of check points obtained from the application of this methodology to two urban spatial datasets, the variability of National Standard for Spatial Data Accuracy (NSSDA) estimations has been analyzed according to sample size. The results show that the variability of NSSDA estimations decreases as the number of check points increases, and also that these estimations tend to underestimate accuracy. Finally, the graphical representation of the results can be employed to give guidance on the recommended sample size when PBSMs are used.
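The sample-size effect described above can be illustrated with a minimal Python sketch. It uses the published FGDC NSSDA horizontal statistic (1.7308 × RMSE_r, valid when the x- and y-RMSE are approximately equal) and a simple resampling loop; the function names and the resampling scheme are illustrative, not the authors' actual procedure.

```python
import math
import random

def nssda_horizontal(errors_xy):
    """FGDC NSSDA horizontal accuracy statistic: 1.7308 * RMSE_r,
    where errors_xy are (dx, dy) check-point discrepancies.
    Valid when RMSE_x and RMSE_y are approximately equal."""
    n = len(errors_xy)
    rmse_r = math.sqrt(sum(dx * dx + dy * dy for dx, dy in errors_xy) / n)
    return 1.7308 * rmse_r

def nssda_spread(errors_xy, sample_size, trials=1000, seed=42):
    """(min, max) of NSSDA estimates over random subsamples of the
    check points, showing how variability shrinks as sample_size grows."""
    rng = random.Random(seed)
    estimates = [nssda_horizontal(rng.sample(errors_xy, sample_size))
                 for _ in range(trials)]
    return min(estimates), max(estimates)
```

Running `nssda_spread` with sample sizes of 20 and 200 over the same pool of check points typically shows a much wider estimate range at n = 20, consistent with the abstract's conclusion.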

Author(s):  
M. A. Brovelli ◽  
M. Minghini ◽  
M. E. Molinari ◽  
M. Molteni

In recent years there has been a remarkable flourishing of spatial data products released under open licenses. Researchers and professionals are extensively exploiting open geodata for many applications, which in turn include decision-making results and other (derived) geospatial datasets among their outputs. Despite the traditional availability of metadata, a question arises about the actual quality of open geodata, as their declared quality is typically taken for granted without any systematic assessment. The present work investigates the case study of Milan Municipality (Northern Italy). A wide set of open geodata, released by national, regional and local authoritative entities, is available for this area. A comprehensive cataloguing operation is first performed, identifying 1061 geospatial open datasets from Italian providers which differ widely in terms of license, format, scale, content, and release date. Among the many quality parameters for geospatial data, the work focuses on positional accuracy. An example of positional accuracy assessment is described for an openly-licensed orthophoto through comparison with the official, up-to-date, and large-scale vector cartography of Milan. The comparison is run according to the guidelines provided by ISO and shows that the positional accuracy declared by the orthophoto provider does not correspond to reality. Similar results are found from analyses on other datasets (not presented here). Implications are twofold: raising awareness of the risks of using open geodata while taking their quality for granted; and highlighting the need for open geodata providers to introduce or refine mechanisms for data quality control.


2017 ◽  
Vol 23 (3) ◽  
pp. 445-460 ◽  
Author(s):  
Marcelo Antonio Nero ◽  
Jorge Pimentel Cintra ◽  
Gilberlan de Freitas Ferreira ◽  
Túllio Áullus Jó Pereira ◽  
Thaísa Santos Faria

Abstract: In many countries, positional accuracy control by points in cartography or spatial data is performed by comparing the coordinates of well-defined points against the same points from a more accurate source. Usually, each country defines a maximum number of points that may present error values above a pre-established threshold. In many cases, the standards set the sample size at 20 points, with no further consideration, and fix this threshold at 10% of the sample. However, the sample size (n), considering the statistical risk, especially when the percentage of outliers is around 10%, carries both a producer risk (rejecting a good map) and a user risk (accepting a bad map). This article analyzes this issue and allows the sample size to be defined while accounting for both the producer's and the user's risk. As a tool, a program developed by the authors defines the sample size according to the risk that the producer or user can, or wants to, assume. This analysis uses 600 control points, each with a known error. We performed the simulations with a sample size of 20 points (n) and calculated the associated risk. We then varied n, using smaller and larger sizes, calculating for each situation the associated risk for both the user and the producer. The computer program draws the operational (risk) curves, which consider three parameters: the number of control points; the number of iterations used to create the curves; and the percentage of control points above the threshold, which can follow the Brazilian standard or the parameters of other countries. Several graphs and tables created with different parameters are presented, leading to better decisions for both the user and the producer, and opening possibilities for further simulations and research in the future.
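The acceptance rule described above (n = 20 sampled points, at most 10% allowed above the error threshold) lends itself to a simple Monte Carlo sketch of the producer's and user's risk. The function below is a hypothetical illustration, not the authors' program; `pass_flags` marks which of the 600 control points are within tolerance.

```python
import random

def acceptance_rate(pass_flags, n, max_fail_frac=0.10, trials=5000, seed=1):
    """Monte Carlo estimate of the probability that a random sample of n
    control points is accepted, i.e. at most max_fail_frac of the sample
    exceeds the error threshold (pass_flags[i] is True when point i is
    within tolerance)."""
    rng = random.Random(seed)
    allowed = int(max_fail_frac * n)
    accepted = 0
    for _ in range(trials):
        sample = rng.sample(pass_flags, n)
        if sample.count(False) <= allowed:
            accepted += 1
    return accepted / trials

# Producer risk = 1 - acceptance_rate for a genuinely good map;
# user risk     =     acceptance_rate for a genuinely bad map.
```

With 5% of the 600 points defective, a 20-point sample is accepted most of the time; with 25% defective, acceptance becomes unlikely but not negligible, which is exactly the risk trade-off the risk curves make explicit.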


2018 ◽  
Vol 7 (6) ◽  
pp. 200 ◽  
Author(s):  
Francisco Ariza-López ◽  
Juan Ruiz-Lendínez ◽  
Manuel Ureña-Cámara

2021 ◽  
Vol 10 (5) ◽  
pp. 289 ◽  
Author(s):  
Juan José Ruiz-Lendínez ◽  
Francisco Javier Ariza-López ◽  
Manuel Antonio Ureña-Cámara

The continuous development of machine learning procedures and of new ways of mapping based on the integration of spatial data from heterogeneous sources have resulted in the automation of many processes associated with cartographic production, such as positional accuracy assessment (PAA). The automation of the PAA of spatial data is based on automated matching procedures between corresponding spatial objects (usually building polygons) from two geospatial databases (GDB), which in turn rely on quantifying the similarity between these objects. Therefore, assessing the capabilities of these automated matching procedures is key to making automation a fully operational solution in PAA processes. The present study was developed in response to the need to explore the scope of these capabilities by means of a comparison with human capabilities. Thus, using a genetic algorithm (GA) and a group of human experts, two experiments were carried out: (i) comparing the similarity values between building polygons assigned by both, and (ii) comparing the matching procedure developed in both cases. The results showed that the GA-expert agreement was very high, with a mean agreement percentage of 93.3% (experiment 1) and 98.8% (experiment 2). These results confirm the capability of machine-based procedures, and specifically of GAs, to carry out matching tasks.
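Polygon matching of the kind described above rests on a quantitative similarity measure between building outlines. As a minimal, hypothetical proxy (the GA in the study uses a richer multi-criteria measure), the symmetric Hausdorff distance between the two vertex sets gives a single number that is small when the outlines are geometrically close:

```python
import math

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (e.g. polygon
    vertices as (x, y) tuples). A small value means every vertex of each
    outline lies near some vertex of the other; 0 means identical sets."""
    def directed(xs, ys):
        return max(min(math.dist(p, q) for q in ys) for p in xs)
    return max(directed(a, b), directed(b, a))
```

A matcher could then pair each polygon in one GDB with the candidate in the other GDB that minimizes this distance, accepting the pair only below some tolerance.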


2014 ◽  
Vol 49 (2) ◽  
pp. 101-106 ◽  
Author(s):  
Ashraf Farah ◽  
Dafer Algarni

ABSTRACT Google Earth is a virtual globe, map, and geographical information program provided by Google. It maps the Earth by superimposing images obtained from satellite imagery and aerial photography onto a 3D globe. With millions of users all around the globe, Google Earth® has become a primary source of spatial data and information for private and public decision-support systems, as well as for many forms of social interaction. Many users, mostly in developing countries, also use it for surveying applications, which raises questions about the positional accuracy of the Google Earth program. This research presents a small-scale assessment study of the positional accuracy of Google Earth® imagery in Riyadh, capital of the Kingdom of Saudi Arabia (KSA). The results show that the RMSE of the Google Earth® imagery is 2.18 m for the horizontal coordinates and 1.51 m for the height coordinates.
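RMSE figures like those above come from straightforward formulas over coordinate differences between the imagery and surveyed reference points. A minimal sketch, with illustrative function names (dE/dN are easting/northing differences, dh height differences):

```python
import math

def rmse(diffs):
    """Root-mean-square of a list of 1-D coordinate differences,
    e.g. height discrepancies between imagery and reference points."""
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def horizontal_rmse(pairs):
    """Horizontal (radial) RMSE from (dE, dN) differences between
    imagery-derived and surveyed reference coordinates."""
    return math.sqrt(sum(de * de + dn * dn for de, dn in pairs) / len(pairs))
```

For example, a single check point off by 3 m in easting and 4 m in northing contributes a radial error of 5 m.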



2020 ◽  
Vol 26 (2) ◽  
pp. 70-84 ◽  
Author(s):  
Mervat S. Jasem ◽  
Odey AL-Hamadani

OpenStreetMap (OSM) is the most common example of an online volunteered mapping application. Most such platforms consist of open-source spatial data collected by non-expert volunteers using different data collection methods. The OSM project aims to provide a free digital map of the whole world. The heterogeneity of the data collection methods makes the accuracy of OSM databases unreliable, so they must be treated with caution in any engineering application. This study aims to assess the horizontal positional accuracy of three spatial data sources: the OSM road network database, a high-resolution satellite image (SI), and a high-resolution aerial photo (AP) of Baghdad city, with respect to an analogue formal road network dataset obtained from the Mayoralty of Baghdad (MB). The U.S. National Standard for Spatial Data Accuracy (NSSDA) methodology was applied to measure the degree of agreement between each data source and the formal (MB) dataset in terms of horizontal positional accuracy by computing RMSE and NSSDA values. The study concluded that none of the three data sources agrees with the MB dataset in terms of positional accuracy at either study site, AL-Aadhamiyah or AL-Kadhumiyah.


Author(s):  
Z. G. Sisay ◽  
T. Besha ◽  
B. Gessesse

This study used in-situ GPS data to validate the accuracy of the horizontal coordinates and the orientation of linear features of an orthophoto and line map for Bahir Dar city. The GPS data were processed using GAMIT/GLOBK and Leica Geo Office (LGO) in a least-squares sense, with a tie to local and regional GPS reference stations, to predict horizontal coordinates at five checkpoints. A Real-Time Kinematic GPS measurement technique was used to collect the coordinates of a road centerline to test the accuracy associated with the orientation of the photogrammetric line map. The accuracy of the orthophoto was evaluated by comparison with the in-situ GPS coordinates: using GPS coordinates from GAMIT/GLOBK, it is in good agreement, with a root mean square error (RMSE) of 12.45 cm in the x- and 13.97 cm in the y-coordinates, and 6.06 cm at the 95% confidence level. When the GPS data are processed with LGO and tied to the local GPS network, the horizontal coordinates of the orthophoto agree with the in-situ GPS coordinates at an accuracy of 16.71 cm and 18.98 cm in the x- and y-directions respectively, and 11.07 cm at the 95% confidence level. Similarly, the linear feature fits the in-situ GPS measurements well: the GPS coordinates of the road centerline deviate from the corresponding line-map coordinates by a mean value of 9.18 cm in the x-direction and −14.96 cm in the y-direction. Therefore, it can be concluded that the accuracy of the orthophoto and line map is within the national standard error budget (~25 cm).

