A COMPUTATIONAL TOOL TO EVALUATE THE SAMPLE SIZE IN MAP POSITIONAL ACCURACY

2017 ◽ Vol 23 (3) ◽ pp. 445-460
Author(s): Marcelo Antonio Nero, Jorge Pimentel Cintra, Gilberlan de Freitas Ferreira, Túllio Áullus Jó Pereira, Thaísa Santos Faria

Abstract: In many countries, point-based positional accuracy control of cartographic or spatial data consists of comparing the coordinates of well-defined points against the same points obtained from a more accurate source. Each country typically specifies a maximum number of points whose errors may exceed a pre-established threshold. In many cases, the standards fix the sample size at 20 points, with no further justification, and set this threshold at 10% of the sample. However, the sample size (n), considered in light of the statistical risk, especially when the percentage of outliers is around 10%, entails both a producer risk (rejecting a good map) and a user risk (accepting a bad map). This article analyzes this issue and shows how to define the sample size taking the risks of both the producer and the user into account. As a tool, a program we developed allows the sample size to be chosen according to the risk that the producer or user can or wants to assume. The analysis uses 600 control points, each with a known error. We performed simulations with a sample size of 20 points and calculated the associated risk; we then varied n, using smaller and larger sizes, and calculated for each case the associated risk for both the user and the producer. The program draws the operating characteristic curves (risk curves), which depend on three parameters: the number of control points; the number of iterations used to build the curves; and the percentage of control points above the threshold, which can follow the Brazilian standard or the parameters of other countries. Several graphs and tables created with different parameters are presented, supporting better decisions by both the user and the producer and opening possibilities for further simulations and research.
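The acceptance-sampling logic described here can be sketched in a few lines. The following Monte Carlo simulation is a minimal illustration, assuming a population of 600 control points, samples of n = 20, and an acceptance rule of at most 10% of the sample above the error threshold; the defect fractions, seed, and iteration count are illustrative, not taken from the paper:

```python
import random

def estimate_rejection_rate(pop_size=600, defect_frac=0.10, n=20,
                            max_fail_frac=0.10, iterations=10_000, seed=42):
    """Monte Carlo estimate of the probability that a map is rejected.

    A population of `pop_size` control points contains a fraction
    `defect_frac` whose error exceeds the accuracy threshold. A sample of
    `n` points is drawn without replacement; the map is rejected when more
    than `max_fail_frac` of the sample exceeds the threshold.
    """
    rng = random.Random(seed)
    n_bad = round(pop_size * defect_frac)
    population = [1] * n_bad + [0] * (pop_size - n_bad)
    max_failures = int(max_fail_frac * n)  # e.g. 2 failures allowed for n=20
    rejections = 0
    for _ in range(iterations):
        if sum(rng.sample(population, n)) > max_failures:
            rejections += 1
    return rejections / iterations

# Producer risk: probability of rejecting a map that actually meets the
# 10% criterion (here, 8% of its points exceed the threshold).
print("producer risk:", estimate_rejection_rate(defect_frac=0.08))

# User risk: probability of accepting a map that violates the criterion
# (here, 15% of its points exceed the threshold).
print("user risk:", 1 - estimate_rejection_rate(defect_frac=0.15))
```

Sweeping defect_frac over a range of values and plotting the acceptance probability against it produces the operating characteristic (risk) curves that the program draws.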

2019 ◽ Vol 8 (12) ◽ pp. 552
Author(s): Juan José Ruiz-Lendínez, Francisco Javier Ariza-López, Manuel Antonio Ureña-Cámara

Point-based standard methodologies (PBSM) suggest using 'at least 20' check points in order to assess the positional accuracy of a given spatial dataset. However, the rationale for fixing the number of check points at 20 is not elaborated upon in the original documents provided by the mapping agencies that developed these methodologies. By means of theoretical analysis and experimental tests, several authors and studies have demonstrated that this limited number of points is clearly insufficient. Using the point-based methodology for the automatic positional accuracy assessment of spatial data developed in our previous study (Ruiz-Lendínez et al., 2017), and specifically a subset of check points obtained from the application of this methodology to two urban spatial datasets, the variability of National Standard for Spatial Data Accuracy (NSSDA) estimations has been analyzed according to sample size. The results show that the variability of NSSDA estimations decreases as the number of check points increases, and also that these estimations tend to underestimate accuracy. Finally, the graphical representation of the results can be used to give guidance on the recommended sample size when PBSMs are used.
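For context, the NSSDA horizontal statistic is 1.7308 times the radial RMSE of the check-point errors (under the standard assumption of similar error magnitudes in x and y). Below is a minimal sketch of the kind of resampling experiment described, with synthetic residuals standing in for real check points:

```python
import math
import random

def nssda_horizontal(errors_xy):
    """NSSDA horizontal accuracy at 95% confidence from (dx, dy) residuals.

    Uses the standard factor 1.7308 * RMSE_r, which assumes the x and y
    error components are of similar magnitude.
    """
    rmse_r = math.sqrt(sum(dx * dx + dy * dy for dx, dy in errors_xy)
                       / len(errors_xy))
    return 1.7308 * rmse_r

def nssda_spread(errors_xy, n, iterations=1000, seed=1):
    """Resample n check points repeatedly to see how NSSDA estimates vary."""
    rng = random.Random(seed)
    estimates = [nssda_horizontal(rng.sample(errors_xy, n))
                 for _ in range(iterations)]
    mean = sum(estimates) / iterations
    sd = math.sqrt(sum((e - mean) ** 2 for e in estimates) / iterations)
    return mean, sd

# Illustrative residuals (metres); in practice these come from comparing
# the dataset against a more accurate source.
random.seed(0)
residuals = [(random.gauss(0, 1.0), random.gauss(0, 1.0)) for _ in range(600)]
for n in (20, 50, 100, 200):
    mean, sd = nssda_spread(residuals, n)
    print(f"n={n:3d}  NSSDA mean={mean:.2f} m  spread(sd)={sd:.2f} m")
```

The shrinking spread as n grows is exactly the variability effect the study quantifies.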


2020 ◽ Vol 64 (04) ◽ pp. 489-507
Author(s): Mojca Kosmatin Fras, Urška Drešček, Anka Lisec, Dejan Grigillo

Unmanned aerial vehicles, equipped with various sensors and devices, are increasingly used to acquire geospatial data in geodesy, geoinformatics, and environmental studies. In this context, a new research and professional field has developed – UAV photogrammetry – dealing with photogrammetric data acquisition and processing using unmanned aerial vehicles. In this study, we analyse selected factors that affect the quality of data produced with UAV photogrammetry, with a focus on positional accuracy; they are discussed in three groups: (a) factors related to the camera properties and the quality of the images; (b) factors related to mission planning and execution; and (c) factors related to the indirect georeferencing of images using ground control points. These factors are analysed through a detailed review of the relevant scientific publications. Additionally, the influence of the number of ground control points and their spatial distribution on the positional accuracy of point clouds was investigated in a case study. In conclusion, key findings and recommendations for UAV photogrammetric projects are given; we highlight the importance of suitable lighting and weather conditions when performing UAV missions for spatial data acquisition, quality equipment, appropriate UAV data acquisition parameters, and a sufficient number of ground control points, surveyed with appropriate positional accuracy and correctly distributed across the field.


2020 ◽ Vol 12 (24) ◽ pp. 4132
Author(s): Miguel Sánchez, Aurora Cuartero, Manuel Barrena, Antonio Plaza

This paper introduces a new method to analyze the positional accuracy of georeferenced satellite images without the use of ground control points. Compared to the traditional method used to carry out this kind of analysis, our approach provides a semiautomatic way to obtain a larger number of control points that satisfy the requirements of current standards regarding the size of the set of sample points, the positional accuracy of such points, the distance between points, and the distribution of points in the sample. Our methodology exploits high-quality orthoimages, such as those provided by the Aerial Orthography National Plan (PNOA) developed by the Spanish National Geographic Institute, and has been tested on spatial data from Landsat 8. Our method works under the current international standard (ASPRS 2014) and exhibits performance similar to that of other well-known methods for analyzing the positional accuracy of georeferenced images based on independent ground control points. More specifically, the positional accuracy of a Landsat 8 dataset is 5.22 ± 1.95 m when evaluated by the traditional method and 5.76 ± 0.50 m when evaluated with the proposed method. Our experimental results confirm that the method is equally effective and less expensive than other available methods to analyze the positional accuracy of satellite images.
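At its core, such a comparison reduces to displacement statistics over matched point pairs. A minimal sketch, assuming matched (easting, northing) arrays in a projected CRS; the arrays below are synthetic placeholders, not Landsat 8 data:

```python
import numpy as np

def planimetric_errors(measured, reference):
    """Radial (2D) displacement between matched point pairs.

    `measured` and `reference` are (n, 2) arrays of easting/northing
    coordinates in metres for the same well-defined points.
    """
    d = np.asarray(measured, float) - np.asarray(reference, float)
    return np.hypot(d[:, 0], d[:, 1])

def summarize(errors):
    """Mean +/- sample standard deviation and RMSE of the radial errors."""
    errors = np.asarray(errors, float)
    rmse = float(np.sqrt(np.mean(errors ** 2)))
    return float(errors.mean()), float(errors.std(ddof=1)), rmse

# Illustrative data: 40 matched points displaced by roughly 5 m.
rng = np.random.default_rng(0)
ref = rng.uniform(0, 1e4, size=(40, 2))
meas = ref + rng.normal(5.0 / np.sqrt(2), 0.5, size=(40, 2))
mean, sd, rmse = summarize(planimetric_errors(meas, ref))
print(f"mean={mean:.2f} m  sd={sd:.2f} m  RMSE={rmse:.2f} m")
```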


Drones ◽ 2020 ◽ Vol 4 (2) ◽ pp. 13
Author(s): Margaret Kalacska, Oliver Lucanus, J. Pablo Arroyo-Mora, Étienne Laliberté, Kathryn Elmer, ...

The rapid proliferation of unmanned aerial systems (UASs), from low-cost consumer-grade to enterprise-level platforms, has resulted in the exponential growth of their use in many applications. Structure from motion with multiview stereo (SfM-MVS) photogrammetry is now the baseline for the development of orthoimages and 3D surfaces (e.g., digital elevation models). The horizontal and vertical positional accuracies (x, y, and z) of these products rely heavily, in general, on the use of ground control points (GCPs). However, for many applications the use of GCPs is not possible. Here we tested 14 UASs, ranging from consumer- to enterprise-grade vertical takeoff and landing (VTOL) platforms, to assess the positional and within-model accuracy of SfM-MVS reconstructions of low-relief landscapes without GCPs. We found that high positional accuracy is not necessarily related to platform cost or grade; rather, the most important factor is the use of post-processed kinematic (PPK) or real-time kinematic (RTK) solutions for geotagging the photographs. SfM-MVS products generated from UASs with onboard PPK/RTK geotagging, regardless of grade, show greater positional accuracy and lower within-model errors. We conclude that where repeatability and adherence to a high level of accuracy are needed, only RTK and PPK systems should be used without GCPs.


2016 ◽ Vol 11 (3)
Author(s): Xiaohui Xu, Hui Hu, Sandie Ha, Daikwon Han

It is well known that the conventional, automated geocoding method based on self-reported residential addresses has many issues. We developed a smartphone-assisted, aerial image-based method, which uses the Google Maps application programming interface as a spatial data collection tool during the birth registration process. In this pilot study, we tested whether the smartphone-assisted method provides more accurate geographic information than the automated geocoding method in the scenario where both methods are able to geocode the address. We randomly selected 100 well-geocoded addresses among women who gave birth in Alachua County, Florida in 2012. We compared geocodes generated by three methods: i) the smartphone-assisted, aerial image-based method; ii) the conventional, automated geocoding method; and iii) the global positioning system (GPS). We used the GPS data as the reference. The automated geocoding method yielded positional errors larger than 100 m for 29.3% of addresses, while all addresses geocoded by the smartphone-assisted method had errors of less than 100 m. The positional errors of the automated geocoding method were greater for apartments/condominiums than for other dwellings, and for rural addresses than for urban ones. We conclude that the smartphone-assisted method is a promising method for prospective spatial data collection, improving positional accuracy.
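A minimal sketch of the positional-error computation implied here, assuming each address has a geocoded coordinate and a GPS reference coordinate in WGS84 and using the haversine great-circle distance (the records shown are hypothetical):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical records: (geocoded lat/lon, GPS reference lat/lon).
records = [
    ((29.6516, -82.3248), (29.6519, -82.3245)),
    ((29.6800, -82.3400), (29.6812, -82.3388)),
]
errors = [haversine_m(g[0], g[1], r[0], r[1]) for g, r in records]
share_over_100m = sum(e > 100 for e in errors) / len(errors)
print([round(e, 1) for e in errors], f"share > 100 m: {share_over_100m:.0%}")
```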


Author(s): M. A. Brovelli, M. Minghini, M. E. Molinari, M. Molteni

In recent years there has been a remarkable flourishing of spatial data products released under open licenses. Researchers and professionals are extensively exploiting open geodata for many applications, which, in turn, include decision-making results and other (derived) geospatial datasets among their outputs. Despite the traditional availability of metadata, a question arises about the actual quality of open geodata, as their declared quality is typically taken for granted without any systematic assessment. The present work investigates the case study of Milan Municipality (Northern Italy). A wide set of open geodata is available for this area, released by national, regional, and local authoritative entities. A comprehensive cataloguing operation is first performed, identifying 1061 open geospatial datasets from Italian providers, which differ widely in terms of license, format, scale, content, and release date. Among the many quality parameters for geospatial data, the work focuses on positional accuracy. An example of a positional accuracy assessment is described for an openly licensed orthophoto through comparison with the official, up-to-date, large-scale vector cartography of Milan. The comparison is run according to the guidelines provided by ISO and shows that the positional accuracy declared by the orthophoto provider does not correspond to reality. Similar results were found in analyses of other datasets (not presented here). The implications are twofold: raising awareness of the risks of using open geodata while taking their quality for granted; and highlighting the need for open geodata providers to introduce or refine mechanisms for data quality control.


Author(s): M. Eshghi, A. A. Alesheikh

Recent advances in spatial data collection technologies and online services have dramatically increased the contribution of ordinary people to producing, sharing, and using geographic information. The collection of spatial data by citizens, and its dissemination on the internet, has led to a huge source of spatial data termed Volunteered Geographic Information (VGI) by Mike Goodchild. Although VGI has produced previously unavailable data assets and enriched existing ones, its quality can be highly variable and difficult to guarantee. This presents several challenges to potential end users who are concerned about the validation and quality assurance of the collected data. Almost all existing research on identifying accurate VGI data takes one of two approaches: (a) comparing the VGI data with accurate official data, or (b), in cases where there is no access to reference data, looking for alternative ways to determine the quality of the VGI data. In this paper we attempt to develop a useful method to reach this goal; in the process, the positional accuracy of linear features in the OSM data of Tehran, Iran is analyzed.
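The abstract does not specify the comparison technique; one common approach to the positional accuracy of linear features is the buffer-overlay method in the style of Goodchild and Hunter (1997), sketched below under the assumption of projected coordinates in metres (the geometries are illustrative, and this is not necessarily the authors' exact method):

```python
# Requires the shapely package (pip install shapely).
from shapely.geometry import LineString

def share_within(tested: LineString, reference: LineString,
                 buffer_m: float) -> float:
    """Fraction of the tested line's length lying within `buffer_m`
    metres of the reference line."""
    zone = reference.buffer(buffer_m)
    return tested.intersection(zone).length / tested.length

# Illustrative geometries: an OSM road trace vs. an authoritative centreline.
osm_road = LineString([(0, 2), (50, 3), (100, -1), (150, 1)])
ref_road = LineString([(0, 0), (50, 0), (100, 0), (150, 0)])
for b in (1, 2, 5):
    print(f"buffer {b} m: {share_within(osm_road, ref_road, b):.0%} inside")
```

Plotting the inside-fraction against the buffer width gives a compact positional accuracy profile for a whole road network.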


Author(s): Itai Kloog, Lara Kaufman, Kees de Hoogh

There is increasing awareness of the importance of spatial data in epidemiology and exposure assessment (EA) studies. Most studies use governmental and ordnance survey data, which are often expensive and sparsely updated, while most developing countries often have no official geospatial data sources at all. OpenStreetMap (OSM) is an open-source Volunteered Geographic Information (VGI) mapping project. Yet very few environmental epidemiological and EA studies have used OSM as a source of road data. Since VGI data is neither commercial nor governmental, the validity of OSM is often questioned. We investigate the robustness and validity of OSM data for use in epidemiological and EA studies. We compared OSM and Governmental Major Road Data (GRD) in three different regions: Massachusetts, USA; Bern, Switzerland; and Beer-Sheva, South Israel. The comparison was done by calculating data completeness, positional accuracy, and EA using traditional exposure methods. We found that OSM data is fairly complete and accurate in all regions. The results in all regions were robust, with Massachusetts showing the best fit (R² = 0.93); results in Bern (R² = 0.78) and Beer-Sheva (R² = 0.77) were only slightly lower. We conclude by suggesting that OSM data can be used reliably in epidemiological and EA studies.
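One simple way to operationalize the completeness part of such a comparison is a length ratio per analysis cell. A hypothetical sketch, assuming both road networks are available as LineStrings in a projected CRS; this is not necessarily the authors' protocol:

```python
# Requires the shapely package (pip install shapely).
from shapely.geometry import LineString, box

def completeness_ratio(osm_lines, grd_lines, cell):
    """Ratio of OSM to governmental road length inside one analysis cell."""
    osm_len = sum(line.intersection(cell).length for line in osm_lines)
    grd_len = sum(line.intersection(cell).length for line in grd_lines)
    return osm_len / grd_len if grd_len else float("nan")

# Illustrative networks in a projected CRS (metres).
osm = [LineString([(0, 0), (800, 10)]), LineString([(100, 500), (900, 480)])]
grd = [LineString([(0, 0), (800, 0)]), LineString([(100, 500), (900, 500)]),
       LineString([(400, 0), (400, 500)])]
cell = box(0, 0, 1000, 1000)
print(f"completeness: {completeness_ratio(osm, grd, cell):.2f}")
```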


2019 ◽ Vol 25 (1)
Author(s): Leandro Luiz Silva de França, Alex de Lima Teodoro da Penha, João Alberto Batista de Carvalho

Abstract: This paper presents a comparative study of absolute and relative methods for assessing the altimetric positional accuracy of Digital Elevation Models (DEMs). As the theoretical basis of the research, the definitions of accuracy (exactness) and precision, as well as the concepts related to absolute and relative positional accuracy, are explored. In the case study, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Shuttle Radar Topography Mission (SRTM) DEMs were used. In the analysis of absolute accuracy, 6,568 ground control points from a GNSS survey, collected by the static relative method, were used. For relative accuracy, the reference was a DEM with a spatial resolution of 5 meters, generated by stereophotogrammetry for the Mapping Project of Bahia (Brazil). It was concluded that, provided the accuracy of the reference DEM is better than that of the two evaluated DEMs, the PEC-PCD classification results of the relative evaluation are equal to or better than those of the absolute evaluation, with the advantage of examining the entire pixel population of the evaluated models, which makes it possible to identify outliers, distortions, and displacements, and even to delimit affected regions – something far less likely with a limited set of control points.
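A minimal sketch of the two evaluation modes, assuming the evaluated and reference DEMs are co-registered on a common grid and that control points have been reduced to elevation differences (all arrays below are synthetic placeholders, not ASTER/SRTM data):

```python
import numpy as np

def vertical_rmse(errors):
    """Root mean square of elevation differences, in metres."""
    errors = np.asarray(errors, float)
    return float(np.sqrt(np.mean(errors ** 2)))

rng = np.random.default_rng(0)

# --- Absolute accuracy: evaluated DEM vs. ground control points ---
# dz_abs[i] = DEM elevation at control point i minus GNSS elevation.
dz_abs = rng.normal(0.5, 4.0, size=6568)  # illustrative residuals
print("absolute RMSE:", round(vertical_rmse(dz_abs), 2), "m")

# --- Relative accuracy: evaluated DEM vs. a more accurate reference DEM ---
# Differencing whole rasters uses every pixel, so gross artefacts show up.
evaluated = rng.normal(100.0, 10.0, size=(1000, 1000))
reference = evaluated - rng.normal(0.5, 4.0, size=evaluated.shape)
dz_rel = evaluated - reference
print("relative RMSE:", round(vertical_rmse(dz_rel), 2), "m")
print("pixels beyond 3*sd (possible outliers):",
      int(np.sum(np.abs(dz_rel - dz_rel.mean()) > 3 * dz_rel.std())))
```

The raster difference examines the full pixel population, which is what lets the relative method flag outliers and localized distortions that a small control-point sample would miss.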

