Full Positional Accuracy Analysis of Spatial Data by Means of Circular Statistics

2010, Vol 14 (4), pp. 421-434
Author(s): María-Eugenia Polo, Ángel M. Felicísimo

2010, Vol 76 (11), pp. 1275-1286
Author(s): A. Cuartero, A.M. Felicísimo, M.E. Polo, A. Caro, P.G. Rodríguez

2021, Vol ahead-of-print (ahead-of-print)
Author(s): Prafullata Kiran Auradkar, Atharva Raykar, Ishitha Agarwal, Dinkar Sitaram, Manavalan R.

Purpose: The purpose of this paper is to convert real-world raster data into vector format and to evaluate the loss of accuracy in the conversion process. Open-source Geographic Information System (GIS) software is used for the conversion, and system resource utilization is measured for both the conversion and the accuracy analysis methods. Shape complexity attributes are analyzed in correlation with the observed conversion errors.
Design/methodology/approach: The paper empirically evaluates the challenges and overheads involved in the format conversion algorithms available in open-source GIS, using real-world land use and land cover (LULC) map data of India. Across the different LULC categories, geometric errors of varying density were observed in the Quantum GIS (QGIS) algorithm. The area extents of the original raster data were compared with the vector forms, and shape attributes such as the average number of vertices and shape irregularity were evaluated to explore possible correlations.
Findings: The results indicate that the Geographic Resources Analysis Support System (GRASS) provides a near error-free conversion algorithm. At the same time, its overall conversion time and system resource utilization were favorable compared with the QGIS algorithm. The larger vector files were generalized and the resulting loss of accuracy was tested.
Research limitations/implications: A complete shape complexity analysis could not be achieved, as the weight factor for the irregularity of the shapes has to be varied according to the demography as well as the LULC category.
Practical implications: Because of the higher system resource requirements of the topological checker tool, positional accuracy checks for the converted objects could not be completed.
Originality/value: This paper addresses the need for accuracy analysis of real-world spatial data conversions from raster to vector format, along with the experimental setup challenges and the impact of shape complexity.
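
The raster-to-vector conversion and the area-extent check described above can be reproduced in outline with open-source Python GIS libraries. The following is a minimal sketch, not the paper's pipeline: it assumes an 8-bit categorical GeoTIFF named lulc.tif in a metric projection (the file name and class encoding are hypothetical), vectorizes it with rasterio, and compares per-class areas before and after conversion.

```python
# Sketch: polygonize a categorical raster and compare per-class areas
# between the raster cells and the resulting vector polygons.
from collections import defaultdict

import numpy as np
import rasterio
from rasterio.features import shapes
from shapely.geometry import shape

with rasterio.open("lulc.tif") as src:          # hypothetical input file
    band = src.read(1)
    transform = src.transform
    pixel_area = abs(transform.a * transform.e)  # cell width * cell height

# Area per LULC class counted directly from the raster cells.
raster_area = {int(v): int(c) * pixel_area
               for v, c in zip(*np.unique(band, return_counts=True))}

# Vectorize: each contiguous region of equal value becomes one polygon.
vector_area = defaultdict(float)
for geom, value in shapes(band, transform=transform):
    vector_area[int(value)] += shape(geom).area

# Any discrepancy indicates loss (or gain) introduced by the conversion.
for cls in sorted(raster_area):
    diff = vector_area[cls] - raster_area[cls]
    print(f"class {cls}: raster {raster_area[cls]:.1f} m2, "
          f"vector {vector_area[cls]:.1f} m2, diff {diff:.3f} m2")
```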


2016, Vol 11 (3)
Author(s): Xiaohui Xu, Hui Hu, Sandie Ha, Daikwon Han

It is well known that the conventional, automated geocoding method based on self-reported residential addresses has many issues. We developed a smartphone-assisted, aerial image-based method that uses the Google Maps application programming interface as a spatial data collection tool during the birth registration process. In this pilot study, we tested whether the smartphone-assisted method provides more accurate geographic information than the automated geocoding method in the scenario where both methods can geocode the address. We randomly selected 100 well-geocoded addresses among women who gave birth in Alachua County, Florida in 2012. We compared geocodes generated by three geocoding methods: i) the smartphone-assisted aerial image-based method; ii) the conventional, automated geocoding method; and iii) the global positioning system (GPS). We used the GPS data as the reference. The automated geocoding method yielded positional errors larger than 100 m for 29.3% of addresses, while all addresses geocoded by the smartphone-assisted method had errors smaller than 100 m. The positional errors of the automated geocoding method were greater for apartments/condominiums than for other dwellings, and greater for rural addresses than for urban ones. We conclude that the smartphone-assisted method is a promising approach for prospective spatial data collection, as it improves positional accuracy.
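
The core comparison in this study reduces to measuring the distance between each geocode and its GPS reference and tallying how many exceed 100 m. A minimal sketch of that calculation, with invented coordinates standing in for the study data:

```python
# Sketch: positional error of geocodes against GPS reference points,
# and the share of addresses with errors above 100 m.
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

# (geocoded lat, geocoded lon, GPS lat, GPS lon) per address -- dummy values
records = [
    (29.6516, -82.3248, 29.6520, -82.3251),
    (29.6800, -82.3500, 29.6789, -82.3520),
]

errors = [haversine_m(glat, glon, rlat, rlon) for glat, glon, rlat, rlon in records]
over_100 = sum(e > 100 for e in errors)
print(f"median error: {sorted(errors)[len(errors) // 2]:.1f} m")
print(f"errors > 100 m: {over_100 / len(errors):.1%}")
```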


Author(s): M. A. Brovelli, M. Minghini, M. E. Molinari, M. Molteni

In recent years there has been a remarkable flourishing of spatial data products released under open licenses. Researchers and professionals are extensively exploiting open geodata for many applications, which, in turn, include decision-making results and other (derived) geospatial datasets among their outputs. Despite the traditional availability of metadata, a question arises about the actual quality of open geodata, as their declared quality is typically taken for granted without any systematic assessment. The present work investigates the case study of the Milan Municipality (Northern Italy). A wide set of open geodata released by national, regional and local authoritative entities is available for this area. A comprehensive cataloguing operation was first performed, identifying 1061 open geospatial datasets from Italian providers, which differ greatly in terms of license, format, scale, content, and release date. Among the many quality parameters for geospatial data, the work focuses on positional accuracy. An example of positional accuracy assessment is described for an openly licensed orthophoto through comparison with the official, up-to-date, large-scale vector cartography of Milan. The comparison was run according to the guidelines provided by ISO and shows that the positional accuracy declared by the orthophoto provider is not met in practice. Similar results were found in analyses of other datasets (not presented here). The implications are twofold: raising awareness of the risks of using open geodata while taking their quality for granted, and highlighting the need for open geodata providers to introduce or refine mechanisms for data quality control.
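
An assessment of this kind boils down to measuring check points on the orthophoto and on the reference cartography and comparing the resulting RMSE with the accuracy declared in the metadata. The sketch below illustrates that arithmetic only; the coordinate pairs and the declared accuracy value are placeholders, not the study's data.

```python
# Sketch: planimetric RMSE of check points measured on an orthophoto
# versus a more accurate reference cartography, compared with the
# provider's declared accuracy.
import math

# (x_ortho, y_ortho, x_reference, y_reference) in metres, projected CRS
check_points = [
    (514210.4, 5034120.7, 514210.9, 5034121.1),
    (514350.2, 5034055.3, 514349.6, 5034054.8),
    (514490.8, 5034210.0, 514491.5, 5034210.6),
]

dx = [xo - xr for xo, yo, xr, yr in check_points]
dy = [yo - yr for xo, yo, xr, yr in check_points]
rmse_x = math.sqrt(sum(d * d for d in dx) / len(dx))
rmse_y = math.sqrt(sum(d * d for d in dy) / len(dy))
rmse_xy = math.sqrt(rmse_x ** 2 + rmse_y ** 2)

declared_accuracy_m = 1.0  # placeholder for the value stated in the metadata
print(f"RMSE_x = {rmse_x:.2f} m, RMSE_y = {rmse_y:.2f} m, RMSE_xy = {rmse_xy:.2f} m")
print("declared accuracy met" if rmse_xy <= declared_accuracy_m else "declared accuracy NOT met")
```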


Author(s): M. Eshghi, A. A. Alesheikh

Recent advances in spatial data collection technologies and online services have dramatically increased the contribution of ordinary people to producing, sharing, and using geographic information. The collection of spatial data by citizens, together with its dissemination on the internet, has led to a huge source of spatial data that Michael Goodchild termed Volunteered Geographic Information (VGI). Although VGI has produced previously unavailable data assets and enriched existing ones, its quality can be highly variable and is often questioned. This presents several challenges to potential end users who are concerned about the validation and quality assurance of the collected data. Almost all existing research on identifying accurate VGI data follows one of two approaches: a) comparing the VGI data with accurate official data, or b) where there is no access to correct reference data, seeking an alternative way to determine the quality of the VGI data. In this paper we attempt to develop a useful method to reach this goal. In this process, the positional accuracy of linear features in the OpenStreetMap (OSM) data of Tehran, Iran, is analyzed.
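
The abstract does not spell out the comparison technique. A widely used approach for the positional accuracy of linear features is the buffer-overlay method (Goodchild and Hunter, 1997): measure what share of the tested line falls within a buffer of width w around the reference line. The sketch below illustrates that idea with shapely and made-up coordinates; it is not necessarily the method used in this paper.

```python
# Sketch: buffer-overlay comparison of an OSM line against a reference line.
from shapely.geometry import LineString

reference = LineString([(0, 0), (100, 0), (200, 10)])    # e.g. official road axis
osm_line = LineString([(0, 2), (100, 3), (200, 14)])      # e.g. OSM counterpart

for w in (2.0, 5.0, 10.0):                                 # buffer widths in metres
    inside = osm_line.intersection(reference.buffer(w)).length
    print(f"buffer {w:>4.1f} m: {inside / osm_line.length:.1%} of the OSM line inside")
```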


2014, Vol 8 (1), pp. 382-386
Author(s): Y. Guo, Y. P. Jin, M. Jiang, B. W. Luo

The positional accuracy of the disc storing mechanism of a benthic drill is what guarantees long-hole coring in the deep sea. To address the lack of positional accuracy analysis for this mechanism, a mathematical model of its positional accuracy is established using the complex-vector and matrix analysis method. An analytical formula for the crank rotation positional accuracy is obtained through a rotational position analysis of the crank of the disc storing mechanism, which is driven by a hydraulic cylinder. Using the Newton-Raphson method to iteratively solve the resulting nonlinear equations, the variation of the rotational position error of the disc storing mechanism with the cylinder length is obtained, providing a theoretical basis for the tolerance design of the mechanism's dimensional parameters.
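
The paper's mechanism model is not reproduced here. As a rough illustration of the kind of calculation involved, the sketch below assumes a simplified cylinder-driven crank in which the cylinder length s and the crank angle theta are related by the law of cosines, solves for theta with Newton-Raphson, and propagates a small cylinder-length error to an angular position error. All dimensions and the linkage geometry are invented.

```python
# Sketch: Newton-Raphson solution of d^2 + r^2 - 2*d*r*cos(theta) = s^2
# and propagation of a cylinder-length error to the crank angle.
import math

D, R = 0.50, 0.20          # frame distance and crank radius, metres (hypothetical)

def crank_angle(s, theta0=1.0, tol=1e-12, max_iter=50):
    """Solve the law-of-cosines relation for theta by Newton-Raphson."""
    theta = theta0
    for _ in range(max_iter):
        f = D * D + R * R - 2 * D * R * math.cos(theta) - s * s
        df = 2 * D * R * math.sin(theta)
        step = f / df
        theta -= step
        if abs(step) < tol:
            break
    return theta

s_nominal = 0.55           # nominal cylinder length, metres
delta_s = 0.0005           # +0.5 mm cylinder-length error
theta_nom = crank_angle(s_nominal)
theta_err = crank_angle(s_nominal + delta_s) - theta_nom
print(f"nominal crank angle: {math.degrees(theta_nom):.3f} deg")
print(f"angle error for +0.5 mm cylinder error: {math.degrees(theta_err):.4f} deg")
```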


2017, Vol 23 (3), pp. 445-460
Author(s): Marcelo Antonio Nero, Jorge Pimentel Cintra, Gilberlan de Freitas Ferreira, Túllio Áullus Jó Pereira, Thaísa Santos Faria

In many countries, point-based positional accuracy control of cartographic or spatial data consists of comparing the coordinates of a set of well-defined points against the same set of points obtained from a more accurate source. Usually, each country determines a maximum number of points that may present error values above a pre-established threshold. In many cases, the standards fix the sample size at 20 points, with no further consideration, and set this threshold at 10% of the sample. However, the sample size (n) carries a statistical risk, especially when the percentage of outliers is around 10%: a producer risk (rejecting a good map) and a user risk (accepting a bad map). This article analyzes this issue and shows how to define the sample size while accounting for both the producer's and the user's risk. As a tool, a program developed by us allows the sample size to be defined according to the risk that the producer or user can, or wants to, assume. The analysis uses 600 control points, each with a known error. We performed simulations with a sample size of 20 points (n) and calculated the associated risk. We then changed the value of n, using smaller and larger sizes, and calculated for each situation the associated risk both for the user and for the producer. The computer program draws the operating characteristic curves, or risk curves, which consider three parameters: the number of control points; the number of iterations used to create the curves; and the percentage of control points above the threshold, which can follow the Brazilian standard or the parameters of other countries. Several graphs and tables created with different parameters are presented, leading to better decisions both for the user and for the producer, and opening possibilities for other simulations and research in the future.
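
The producer and user risks discussed above follow from a binomial acceptance rule: with n check points and acceptance when at most 10% of them exceed the error threshold, the probability of accepting a map depends on its true proportion p of out-of-tolerance points. The sketch below computes that operating-characteristic probability for a few sample sizes; the 10% rule and the chosen values of n and p are illustrative, not the paper's simulation.

```python
# Sketch: operating-characteristic (risk) curve values for a
# "reject if more than 10% of sampled points exceed the threshold" rule.
from math import comb

def acceptance_probability(n, p, max_defect_fraction=0.10):
    """P(accept) = P(at most floor(0.10*n) of the n sampled points fail)."""
    k = int(max_defect_fraction * n)
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

for n in (20, 40, 100):
    print(f"n = {n}")
    for p in (0.05, 0.10, 0.15, 0.20):
        pa = acceptance_probability(n, p)
        print(f"  true defect rate {p:.0%}: P(accept) = {pa:.3f} "
              f"(producer risk 1 - P(accept) if the map is good; "
              f"user risk P(accept) if the map is bad)")
```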


Author(s): Itai Kloog, Lara Kaufman, Kees de Hoogh

There is increasing awareness of the importance of spatial data in epidemiology and exposure assessment (EA) studies. Most studies use governmental and ordnance surveys, which are often expensive and sparsely updated, while in most developing countries there are often no official geospatial data sources. OpenStreetMap (OSM) is an open-source Volunteered Geographic Information (VGI) mapping project. Yet very few environmental epidemiological and EA studies have used OSM as a source of road data. Since VGI data comes from neither commercial nor governmental providers, the validity of OSM is often questioned. We investigate the robustness and validity of OSM data for use in epidemiological and EA studies. We compared OSM and Governmental Major Road Data (GRD) in three different regions: Massachusetts, USA; Bern, Switzerland; and Beer-Sheva, southern Israel. The comparison was done by calculating data completeness and positional accuracy, and by performing EA using traditional exposure methods. We found that OSM data is fairly complete and accurate in all regions. The results in all regions were robust, with Massachusetts showing the best fit (R² = 0.93). Results in Bern (R² = 0.78) and Beer-Sheva (R² = 0.77) were only slightly lower. We conclude by suggesting that OSM data can be used reliably in environmental assessment studies.
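
One traditional exposure method of the kind mentioned above is proximity to major roads. The sketch below computes the distance from each residence to the nearest major road twice, once with OSM roads and once with governmental roads, and compares the two exposure estimates with a squared correlation as a simple stand-in for the regression fit. Geometries and coordinates are invented for illustration.

```python
# Sketch: proximity-based exposure from two road datasets and their agreement.
import numpy as np
from shapely.geometry import LineString, Point
from shapely.ops import unary_union

osm_roads = unary_union([LineString([(0, 0), (1000, 0)]),
                         LineString([(500, -500), (500, 500)])])
grd_roads = unary_union([LineString([(0, 5), (1000, 8)]),
                         LineString([(503, -500), (505, 500)])])

homes = [Point(x, y) for x, y in [(100, 120), (480, 40), (700, -60), (950, 200)]]

d_osm = np.array([h.distance(osm_roads) for h in homes])  # metres to nearest OSM road
d_grd = np.array([h.distance(grd_roads) for h in homes])  # metres to nearest GRD road

r = np.corrcoef(d_osm, d_grd)[0, 1]
print(f"R2 between OSM-based and GRD-based exposure: {r**2:.3f}")
```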


2019, Vol 8 (12), pp. 552
Author(s): Juan José Ruiz-Lendínez, Francisco Javier Ariza-López, Manuel Antonio Ureña-Cámara

Point-based standard methodologies (PBSM) suggest using 'at least 20' check points in order to assess the positional accuracy of a given spatial dataset. However, the reason for reducing the number of check points to 20 is not elaborated upon in the original documents provided by the mapping agencies that developed these methodologies. By means of theoretical analysis and experimental tests, several authors and studies have demonstrated that this limited number of points is clearly insufficient. Using the point-based methodology for automatic positional accuracy assessment of spatial data developed in our previous study (Ruiz-Lendínez et al., 2017), and specifically a subset of check points obtained from the application of this methodology to two urban spatial datasets, the variability of National Standard for Spatial Data Accuracy (NSSDA) estimations has been analyzed as a function of sample size. The results show that the variability of NSSDA estimations decreases as the number of check points increases, and also that these estimations tend to underestimate accuracy. Finally, the graphical representation of the results can be employed to give some guidance on the recommended sample size when PBSMs are used.
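
The sample-size effect described above can be illustrated numerically: draw repeated samples of n check points from a pool of positional errors, compute the NSSDA horizontal statistic for each sample (1.7308 times the radial RMSE, per the FGDC standard, assuming RMSE_x and RMSE_y are roughly equal), and watch its spread shrink as n grows. The error pool below is simulated; the study used check points from real datasets.

```python
# Sketch: variability of NSSDA estimations as a function of sample size.
import numpy as np

rng = np.random.default_rng(0)
pool_dx = rng.normal(0.0, 0.6, size=5000)   # simulated x errors, metres
pool_dy = rng.normal(0.0, 0.6, size=5000)   # simulated y errors, metres

def nssda(dx, dy):
    """NSSDA horizontal accuracy at 95% confidence (RMSE_x ~ RMSE_y case)."""
    rmse_r = np.sqrt(np.mean(dx**2 + dy**2))
    return 1.7308 * rmse_r

for n in (20, 50, 100, 500):
    estimates = []
    for _ in range(2000):
        idx = rng.integers(0, len(pool_dx), size=n)
        estimates.append(nssda(pool_dx[idx], pool_dy[idx]))
    estimates = np.array(estimates)
    print(f"n = {n:4d}: NSSDA mean = {estimates.mean():.3f} m, "
          f"std = {estimates.std():.3f} m")
```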

