Positional Accuracy of Spatial Data: Non-Normal Distributions and a Critique of the National Standard for Spatial Data Accuracy

2008 ◽  
Vol 12 (1) ◽  
pp. 103-130 ◽  
Author(s):  
Paul A Zandbergen


2019 ◽  
Vol 8 (12) ◽  
pp. 552 ◽  
Author(s):  
Juan José Ruiz-Lendínez ◽  
Francisco Javier Ariza-López ◽  
Manuel Antonio Ureña-Cámara

Point-based standard methodologies (PBSM) suggest using ‘at least 20’ check points to assess the positional accuracy of a given spatial dataset. However, the reason behind this figure of 20 check points is not elaborated upon in the original documents provided by the mapping agencies that developed these methodologies. By means of theoretical analysis and experimental tests, several authors have demonstrated that this limited number of points is clearly insufficient. Using the point-based methodology for the automatic positional accuracy assessment of spatial data developed in our previous study (Ruiz-Lendínez et al., 2017), and specifically a subset of check points obtained from applying this methodology to two urban spatial datasets, the variability of National Standard for Spatial Data Accuracy (NSSDA) estimations has been analyzed as a function of sample size. The results show that the variability of NSSDA estimations decreases as the number of check points increases, and also that these estimations tend to underestimate accuracy. Finally, the graphical representation of the results can be used to give some guidance on the recommended sample size when PBSMs are applied.
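
A minimal way to reproduce this kind of experiment is to draw repeated random subsets of check points from a larger pool and recompute the NSSDA statistic for each draw; the spread of the estimates then illustrates how the variability shrinks as the sample grows. The sketch below is only illustrative: the pool of radial errors is synthetic (not the data of the study), and it uses the standard NSSDA horizontal formula (1.7308 × RMSE_r), which assumes roughly equal, zero-mean normal error components in x and y.

```python
import numpy as np

def nssda_horizontal(radial_errors):
    """NSSDA horizontal accuracy at 95% confidence: 1.7308 * RMSE_r
    (valid when the x and y error components are similar and normal)."""
    rmse_r = np.sqrt(np.mean(np.square(radial_errors)))
    return 1.7308 * rmse_r

rng = np.random.default_rng(42)
# Synthetic pool of radial errors (metres) standing in for a large set of
# automatically matched check points.
dx = rng.normal(0.0, 0.85, size=2000)
dy = rng.normal(0.0, 0.85, size=2000)
pool = np.hypot(dx, dy)

for n in (20, 50, 100, 300):
    estimates = [nssda_horizontal(rng.choice(pool, size=n, replace=False))
                 for _ in range(1000)]
    print(f"n={n:>3}: mean NSSDA = {np.mean(estimates):.2f} m, "
          f"spread (std) = {np.std(estimates):.2f} m")
```

Running the sketch shows the standard deviation of the NSSDA estimates falling steadily as n grows, which is the pattern the abstract describes.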


2020 ◽  
Vol 26 (2) ◽  
pp. 70-84 ◽  
Author(s):  
Mervat S. Jasem ◽  
Odey AL-Hamadani

OpenStreetMap (OSM) is the most prominent example of an online volunteered mapping application. Most of these platforms consist of open-source spatial data collected by non-expert volunteers using different data collection methods. The OSM project aims to provide a free digital map of the whole world. The heterogeneity of data collection methods makes the accuracy of OSM databases unreliable, so they must be treated with caution in any engineering application. This study aims to assess the horizontal positional accuracy of three spatial data sources for Baghdad city: the OSM road network database, a high-resolution Satellite Image (SI), and a high-resolution Aerial Photo (AP). Each is compared with an analogue formal road network dataset obtained from the Mayoralty of Baghdad (MB). The U.S. National Standard for Spatial Data Accuracy (NSSDA) methodology was applied to measure the degree of agreement between each data source and the formal dataset (MB) in terms of horizontal positional accuracy by computing RMSE and NSSDA values. The study concluded that none of the three data sources agrees with the MB dataset in terms of positional accuracy in either study site, AL-Aadhamiyah or AL-Kadhumiyah.
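
The RMSE and NSSDA values mentioned above are computed from paired coordinates of homologous points in the tested source and the reference dataset. The snippet below is a hedged illustration of that standard computation; the coordinate arrays are placeholders, not data from the study.

```python
import numpy as np

# Placeholder homologous points: columns are easting, northing (metres).
tested    = np.array([[435012.3, 3694021.7], [435220.9, 3694310.2], [434987.4, 3693855.0]])
reference = np.array([[435011.1, 3694020.2], [435222.4, 3694309.0], [434989.0, 3693856.9]])

dx = tested[:, 0] - reference[:, 0]
dy = tested[:, 1] - reference[:, 1]

rmse_x = np.sqrt(np.mean(dx**2))
rmse_y = np.sqrt(np.mean(dy**2))
rmse_r = np.sqrt(rmse_x**2 + rmse_y**2)

# NSSDA horizontal accuracy at the 95% confidence level (equal-axis approximation).
nssda = 1.7308 * rmse_r
print(f"RMSE_x = {rmse_x:.2f} m, RMSE_y = {rmse_y:.2f} m, NSSDA = {nssda:.2f} m")
```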


2021 ◽  
pp. 33-45
Author(s):  
Francisco Javier Ariza-López ◽  
Juan Francisco Reinoso-Gordo

Traditionally, the evaluation of the vertical accuracy of digital elevation models (DEMs) has been carried out by applying standards (e.g. the National Standard for Spatial Data Accuracy) based on sampling points in the reference model (S1) and in the model under evaluation (S2). These standards have two drawbacks: 1) the points used in the evaluations are scarce compared with the total surface of a DEM and therefore leave a large part of the terrain unevaluated; 2) the evaluation of a surface element is performed by comparing point elements, when it seems more appropriate to evaluate by comparing surfaces. Both drawbacks can be overcome by using buffering methods on surfaces. In this work, the single buffering method (MOS) and the double buffering method (MOD) on surfaces are presented for the first time in the context of the vertical accuracy assessment of DEMs. The study was carried out on synthetic data that allow a predetermined study situation to be established. It has been shown that both methods allow the detection of outliers and biases when evaluating S2. In addition, observed distribution functions can be produced, eliminating the need to assume a normality hypothesis for the discrepancies.
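
One concrete way to obtain the "observed distribution functions" mentioned above, without assuming normality of the discrepancies, is to build the empirical CDF of the elevation differences between the evaluated model (S2) and the reference (S1) and read accuracy thresholds directly from its quantiles. The sketch below works on synthetic rasters and only illustrates that idea; it does not reproduce the single or double buffering methods themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic DEMs: reference (S1) and evaluated model (S2) on the same grid.
s1 = rng.normal(500.0, 25.0, size=(200, 200))
s2 = s1 + rng.normal(0.3, 0.8, size=s1.shape)   # bias + noise
s2[:10, :10] += 15.0                            # a block of outliers

diff = (s2 - s1).ravel()
abs_diff = np.sort(np.abs(diff))
ecdf = np.arange(1, abs_diff.size + 1) / abs_diff.size

# Accuracy value that 95% of the cells satisfy, read from the empirical CDF
# (no normality hypothesis required).
q95 = abs_diff[np.searchsorted(ecdf, 0.95)]
print(f"95th percentile of |S2 - S1|: {q95:.2f} m")
```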


2017 ◽  
pp. 305-328
Author(s):  
Earl F. Burkholder
Keyword(s):  

2016 ◽  
Vol 11 (3) ◽  
Author(s):  
Xiaohui Xu ◽  
Hui Hu ◽  
Sandie Ha ◽  
Daikwon Han

It is well known that the conventional, automated geocoding method based on self-reported residential addresses has many issues. We developed a smartphone-assisted, aerial-image-based method that uses the Google Maps application programming interface as a spatial data collection tool during the birth registration process. In this pilot study, we tested whether the smartphone-assisted method provides more accurate geographic information than the automated geocoding method in the scenario where both methods can geocode the address. We randomly selected 100 well-geocoded addresses among women who gave birth in Alachua County, Florida in 2012. We compared geocodes generated by three methods: i) the smartphone-assisted aerial image-based method; ii) the conventional, automated geocoding method; and iii) the global positioning system (GPS). We used the GPS data as the reference. The automated geocoding method yielded positional errors larger than 100 m for 29.3% of addresses, while all addresses geocoded by the smartphone-assisted method had errors of less than 100 m. The positional errors of the automated geocoding method were greater for apartments/condominiums than for other dwellings, and for rural addresses compared with urban ones. We conclude that the smartphone-assisted method is a promising approach for prospective spatial data collection, as it improves positional accuracy.
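
The positional errors reported above are straight-line distances between each geocode and its GPS reference point. A common way to compute such distances from latitude/longitude pairs is the haversine formula, sketched below; the coordinates are made up, and 100 m is the threshold used in the study.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 lat/lon points."""
    r = 6371000.0  # mean Earth radius, metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical geocode vs GPS reference for one address.
error = haversine_m(29.6516, -82.3248, 29.6523, -82.3239)
print(f"positional error: {error:.1f} m", "(> 100 m)" if error > 100 else "(<= 100 m)")
```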


Author(s):  
M. A. Brovelli ◽  
M. Minghini ◽  
M. E. Molinari ◽  
M. Molteni

In the past few years there has been a remarkable flourishing of spatial data products released with open licenses. Researchers and professionals are extensively exploiting open geodata for many applications which, in turn, include decision-making results and other (derived) geospatial datasets among their outputs. Despite the traditional availability of metadata, a question arises about the actual quality of open geodata, as their declared quality is typically taken for granted without any systematic assessment. The present work investigates the case study of the Milan Municipality (Northern Italy). A wide set of open geodata is available for this area, released by national, regional and local authoritative entities. A comprehensive cataloguing operation is first performed, identifying 1061 geospatial open datasets from Italian providers that differ greatly in terms of license, format, scale, content, and release date. Among the many quality parameters for geospatial data, the work focuses on positional accuracy. An example of positional accuracy assessment is described for an openly-licensed orthophoto through comparison with the official, up-to-date, large-scale vector cartography of Milan. The comparison is run according to the guidelines provided by ISO and shows that the positional accuracy declared by the orthophoto provider does not correspond to reality. Similar results are found from analyses on other datasets (not presented here). The implications are twofold: raising awareness of the risks of using open geodata while taking their quality for granted; and highlighting the need for open geodata providers to introduce or refine mechanisms for data quality control.


Author(s):  
M. Eshghi ◽  
A. A. Alesheikh

Recent advances in spatial data collection technologies and online services have dramatically increased the contribution of ordinary people to producing, sharing, and using geographic information. The collection of spatial data by citizens, and its dissemination on the internet, has led to a huge source of spatial data termed Volunteered Geographic Information (VGI) by Mike Goodchild. Although VGI has produced previously unavailable data assets and enriched existing ones, its quality can be highly variable and is often challenged. This presents several challenges to potential end users who are concerned about the validation and quality assurance of the collected data. Almost all existing research on assessing the quality of VGI data relies either on a) comparing the VGI data with accurate official data, or b), in cases where there is no access to correct data, on finding an alternative way to determine the quality of the VGI data. This paper attempts to develop a useful method to reach this goal. In this process, the positional accuracy of linear features in the OSM data of Tehran, Iran, has been analyzed.
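
A widely used line-based alternative to point sampling for this kind of linear-feature assessment is the buffer-overlay idea: buffer the reference road by a tolerance and measure the proportion of the tested (OSM) line that falls inside that buffer. The sketch below uses shapely with made-up coordinates; it illustrates the general technique under that assumption, not the exact procedure of this paper.

```python
from shapely.geometry import LineString

# Hypothetical homologous road segments (projected coordinates, metres).
osm_line = LineString([(0, 0), (50, 4), (100, 3), (150, 8)])
ref_line = LineString([(0, 1), (50, 2), (100, 5), (150, 6)])

tolerance = 5.0  # metres
buffer_zone = ref_line.buffer(tolerance)

# Share of the OSM line length lying within the tolerance of the reference road.
inside = osm_line.intersection(buffer_zone).length
share = inside / osm_line.length
print(f"{share:.1%} of the OSM segment lies within {tolerance} m of the reference road")
```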


2017 ◽  
Vol 23 (3) ◽  
pp. 445-460 ◽  
Author(s):  
Marcelo Antonio Nero ◽  
Jorge Pimentel Cintra ◽  
Gilberlan de Freitas Ferreira ◽  
Túllio Áullus Jó Pereira ◽  
Thaísa Santos Faria

Abstract: In many countries, point-based positional accuracy control of cartographic or spatial data corresponds to the comparison between the coordinates of a set of well-defined points and the coordinates of the same points taken from a more accurate source. Usually, each country determines the maximum number of points that may present error values above a pre-established threshold. In many cases, the standards set the sample size at 20 points, without further consideration, and fix this threshold at 10% of the sample. However, the sample size (n), considered in terms of statistical risk, especially when the percentage of outliers is around 10%, entails both a producer risk (rejecting a good map) and a user risk (accepting a bad map). This article analyzes this issue and allows the sample size to be defined considering the risks of both the producer and the user. As a tool, a program we developed allows the sample size to be defined according to the risk that the producer/user can or wants to assume. This analysis uses 600 control points, each with a known error. We performed simulations with a sample size of 20 points (n) and calculated the associated risk. We then changed the value of n, using smaller and larger sizes, and calculated for each situation the associated risk both for the user and for the producer. The computer program draws the operational (risk) curves, which consider three parameters: the number of control points; the number of iterations used to create the curves; and the percentage of control points above the threshold, which can follow the Brazilian standard or the parameters of other countries. Several graphs and tables created with different parameters are presented, supporting better decisions both for the user and for the producer and opening possibilities for further simulations and research in the future.
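
The risk computation described above can be sketched as a small Monte Carlo simulation: given a pool of control points with known errors and a pass criterion (at most 10% of the n sampled points may exceed the threshold), repeated random samples give the probability of rejecting a good map (producer risk) or accepting a bad one (user risk). The pool below is synthetic and the threshold is illustrative; this is not the authors' program.

```python
import numpy as np

rng = np.random.default_rng(7)

def acceptance_rate(errors, n, threshold, max_share=0.10, trials=5000):
    """Probability that a random sample of n control points passes the control
    (share of points above the threshold is at most max_share)."""
    passes = 0
    for _ in range(trials):
        sample = rng.choice(errors, size=n, replace=False)
        if np.mean(sample > threshold) <= max_share:
            passes += 1
    return passes / trials

threshold = 1.0  # positional tolerance (metres), illustrative
good_map = np.abs(rng.normal(0.0, 0.51, size=600))   # roughly 5% of points above threshold
bad_map  = np.abs(rng.normal(0.0, 0.78, size=600))   # roughly 20% of points above threshold

for n in (10, 20, 40, 80):
    producer_risk = 1 - acceptance_rate(good_map, n, threshold)  # rejecting a good map
    user_risk = acceptance_rate(bad_map, n, threshold)            # accepting a bad map
    print(f"n={n:>2}: producer risk = {producer_risk:.2f}, user risk = {user_risk:.2f}")
```

Plotting these two risks against n gives operational curves of the kind the abstract describes.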


Author(s):  
Itai Kloog ◽  
Lara Kaufman ◽  
Kees de Hoogh

There is increasing awareness of the importance of spatial data in epidemiology and exposure assessment (EA) studies. Most studies use governmental and ordnance surveys, which are often expensive and sparsely updated, while in most developing countries there are often no official geo-spatial data sources at all. OpenStreetMap (OSM) is an open-source Volunteered Geographic Information (VGI) mapping project. Yet very few environmental epidemiological and EA studies have used OSM as a source of road data. Since VGI data is neither commercial nor governmental, the validity of OSM is often questioned. We investigate the robustness and validity of OSM data for use in epidemiological and EA studies. We compared OSM and Governmental Major Road Data (GRD) in three different regions: Massachusetts, USA; Bern, Switzerland; and Beer-Sheva, southern Israel. The comparison was done by calculating data completeness, positional accuracy, and EA using traditional exposure methods. We found that OSM data is fairly complete and accurate in all regions. The results in all regions were robust, with Massachusetts showing the best fit (R² = 0.93). Results in Bern (R² = 0.78) and Beer-Sheva (R² = 0.77) were only slightly lower. We conclude by suggesting that OSM data can be used reliably in environmental assessment studies.

