DO OPEN GEODATA ACTUALLY HAVE THE QUALITY THEY DECLARE? THE CASE STUDY OF MILAN, ITALY

Author(s):  
M. A. Brovelli ◽  
M. Minghini ◽  
M. E. Molinari ◽  
M. Molteni

In recent years there has been a remarkable flourishing of spatial data products released under open licenses. Researchers and professionals are extensively exploiting open geodata for many applications, which in turn include decision-making results and other (derived) geospatial datasets among their outputs. Despite the traditional availability of metadata, a question arises about the actual quality of open geodata, as their declared quality is typically taken for granted without any systematic assessment. The present work investigates the case study of the Milan Municipality (Northern Italy). A wide set of open geodata, released by national, regional, and local authoritative entities, is available for this area. A comprehensive cataloguing operation is first performed, identifying 1061 geospatial open datasets from Italian providers which differ widely in terms of license, format, scale, content, and release date. Among the many quality parameters for geospatial data, the work focuses on positional accuracy. An example of positional accuracy assessment is described for an openly-licensed orthophoto through comparison with the official, up-to-date, large-scale vector cartography of Milan. The comparison is run according to the guidelines provided by ISO and shows that the positional accuracy declared by the orthophoto provider does not correspond to reality. Similar results are found from analyses on other datasets (not presented here). The implications are twofold: raising awareness of the risks of using open geodata while taking their quality for granted; and highlighting the need for open geodata providers to introduce or refine mechanisms for data quality control.
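
The ISO-style comparison described above boils down to computing the RMSE of the displacements between check points identified on the orthophoto and their counterparts in the reference cartography. A minimal sketch with hypothetical coordinates and a hypothetical declared accuracy (not the paper's actual data or the full ISO 19157 workflow):

```python
import math

def horizontal_rmse(measured, reference):
    """RMSE of the 2D displacements between measured and reference check points."""
    squared = [(mx - rx) ** 2 + (my - ry) ** 2
               for (mx, my), (rx, ry) in zip(measured, reference)]
    return math.sqrt(sum(squared) / len(squared))

# Hypothetical check-point coordinates in metres: orthophoto vs. reference map
measured  = [(0.4, 0.1), (10.3, 5.2), (20.1, 9.8), (30.5, 15.3)]
reference = [(0.0, 0.0), (10.0, 5.0), (20.0, 10.0), (30.0, 15.0)]

rmse = horizontal_rmse(measured, reference)
declared_accuracy = 0.3  # metres, a hypothetical metadata statement
print(f"RMSE = {rmse:.3f} m; within declared accuracy: {rmse <= declared_accuracy}")
```

In this toy example the computed RMSE (about 0.42 m) exceeds the declared 0.3 m, which is exactly the kind of mismatch the paper reports.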


Author(s):  
Jiri Panek

Crowdsourcing of emotional information can take many forms, from social network data mining to large-scale surveys. The author presents a case study of emotional mapping in the Ostrava-Poruba district of Ostrava, Czech Republic. Together with the local administration, the author crowdsourced emotional perceptions of the location from almost 400 citizens, who created 4,051 spatial features. In addition to the spatial data, there were 1,244 comments and suggestions for improvements in the district. Furthermore, the author looks for patterns and hot-spots within the city and examines whether there are any relevant linkages between certain emotions and spatial locations.


Author(s):  
Itai Kloog ◽  
Lara Kaufman ◽  
Kees de Hoogh

There is increasing awareness of the importance of spatial data in epidemiology and exposure assessment (EA) studies. Most studies use governmental and ordnance surveys, which are often expensive and sparsely updated, while in most developing countries there are often no official geospatial data sources at all. OpenStreetMap (OSM) is an open-source Volunteered Geographic Information (VGI) mapping project. Yet very few environmental epidemiological and EA studies have used OSM as a source for road data. Since VGI data is neither commercial nor governmental, the validity of OSM is often questioned. We investigate the robustness and validity of OSM data for use in epidemiological and EA studies. We compared OSM and Governmental Major Road Data (GRD) in three different regions: Massachusetts, USA; Bern, Switzerland; and Beer-Sheva, South Israel. The comparison was done by calculating data completeness, positional accuracy, and EA using traditional exposure methods. We found that OSM data is fairly complete and accurate in all regions. The results in all regions were robust, with Massachusetts showing the best fit (R2 = 0.93). Results in Bern (R2 = 0.78) and Beer-Sheva (R2 = 0.77) were only slightly lower. We conclude by suggesting that OSM data can be used reliably in environmental assessment studies.
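
The R2 agreement statistic reported above can be sketched over hypothetical distance-to-major-road exposures computed at the same addresses from both road datasets (values and variable names are illustrative, not the study's data):

```python
def r_squared(x, y):
    """Coefficient of determination of a simple linear regression of y on x."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sxx = sum((a - mean_x) ** 2 for a in x)
    syy = sum((b - mean_y) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# Hypothetical distance-to-major-road exposures (metres) at five addresses
osm_dist = [120.0, 85.0, 40.0, 300.0, 15.0]  # derived from OSM roads
grd_dist = [118.0, 90.0, 38.0, 310.0, 12.0]  # derived from governmental roads
r2 = r_squared(osm_dist, grd_dist)
print(f"R2 = {r2:.3f}")
```

A high R2 between the two exposure series is what supports the paper's conclusion that the cheaper, openly available OSM roads can stand in for governmental data.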


2019 ◽  
Vol 8 (12) ◽  
pp. 552 ◽  
Author(s):  
Juan José Ruiz-Lendínez ◽  
Francisco Javier Ariza-López ◽  
Manuel Antonio Ureña-Cámara

Point-based standard methodologies (PBSM) suggest using 'at least 20' check points in order to assess the positional accuracy of a given spatial dataset. However, the reason for limiting the number of check points to 20 is not elaborated upon in the original documents provided by the mapping agencies which developed these methodologies. By means of theoretical analysis and experimental tests, several authors and studies have demonstrated that this limited number of points is clearly insufficient. Using the point-based methodology for the automatic positional accuracy assessment of spatial data developed in our previous study (Ruiz-Lendínez et al., 2017), and specifically a subset of check points obtained from the application of this methodology to two urban spatial datasets, the variability of National Standard for Spatial Data Accuracy (NSSDA) estimations has been analyzed according to sample size. The results show that the variability of NSSDA estimations decreases when the number of check points increases, and also that these estimations tend to underestimate accuracy. Finally, the graphical representation of the results can be employed to give some guidance on the recommended sample size when PBSMs are used.
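
The NSSDA horizontal statistic is 1.7308 times the RMSE of the radial check-point errors (95% confidence level, per FGDC-STD-007.3). The sample-size effect reported above can be reproduced in miniature with simulated errors (illustrative only, not the study's urban datasets):

```python
import math
import random

def nssda_horizontal(errors):
    """NSSDA horizontal statistic: 1.7308 * RMSE of radial errors (95% level)."""
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return 1.7308 * rmse

random.seed(0)
# Simulated radial check-point errors in metres (abs of a Gaussian is a
# simplification; true 2D radial errors would follow a Rayleigh distribution)
errors = [abs(random.gauss(0.0, 0.5)) for _ in range(500)]

spreads = {}
for n in (20, 100, 500):
    estimates = [nssda_horizontal(random.sample(errors, n)) for _ in range(200)]
    spreads[n] = max(estimates) - min(estimates)
    print(f"n={n:3d}: NSSDA spread over 200 resamples = {spreads[n]:.3f} m")
```

The spread of the estimate shrinks as n grows, which is the paper's core observation: 20 check points leave the NSSDA figure highly uncertain.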


2019 ◽  
Vol 25 (1) ◽  
Author(s):  
Leandro Luiz Silva de França ◽  
Alex de Lima Teodoro da Penha ◽  
João Alberto Batista de Carvalho

Abstract
This paper presents a comparative study between absolute and relative methods for assessing the altimetric positional accuracy of Digital Elevation Models (DEM). For the theoretical basis of this research, the definitions of accuracy (exactness) and precision, as well as the concepts related to absolute and relative positional accuracy, were explored. In the case study, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Shuttle Radar Topography Mission (SRTM) DEMs were used. In the analysis of absolute accuracy, 6,568 ground control points from a GNSS orbital survey, collected through the relative-static method, were used. For relative accuracy, the reference was a DEM with a spatial resolution of 5 meters generated by a stereophotogrammetric process for the Mapping Project of Bahia (Brazil). It was concluded that, since the accuracy of the reference DEM is better than that of the two evaluated DEMs, the results of the PEC-PCD classification for the relative evaluation are equal to or better than the absolute evaluation results, with the advantage of being able to examine the entire pixel population of the evaluated models. This makes it possible to identify outliers, distortions and displacements, including delimiting the affected regions, which is much less feasible with a limited set of control points.
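
The contrast drawn here between a handful of control points and the full pixel population can be sketched with invented elevation errors (not the paper's data); note how the pixel-level comparison exposes an outlier that a small control-point sample could miss:

```python
import math

def rmse(errors):
    """Root mean square of a list of elevation errors (metres)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical elevation errors (metres) of an evaluated DEM
control_point_errors = [2.1, -1.4, 0.8, -2.6, 1.9]          # absolute: vs GNSS points
pixel_errors = [1.8, -1.2, 0.7, -2.9, 2.2, -0.5, 1.1, 8.4]  # relative: vs reference DEM

abs_rmse = rmse(control_point_errors)
# Flag pixels whose error against the reference DEM exceeds 3x the absolute RMSE
outliers = [e for e in pixel_errors if abs(e) > 3 * abs_rmse]
print(f"absolute RMSE = {abs_rmse:.2f} m, flagged outliers: {outliers}")
```

Here the 8.4 m error is visible only in the pixel-wise (relative) comparison, mirroring the paper's argument for evaluating the whole model.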


2021 ◽  
Vol 10 (7) ◽  
pp. 430
Author(s):  
Juan J. Ruiz-Lendínez ◽  
Manuel A. Ureña-Cámara ◽  
José L. Mesa-Mingorance ◽  
Francisco J. Quesada-Real

There are many studies related to Imagery Segmentation (IS) in the field of Geographic Information (GI). However, none of them address the assessment of IS results from a positional perspective. In a field in which the positional aspect is critical, it seems reasonable to think that the quality associated with this aspect must be controlled. This paper presents an automatic positional accuracy assessment (PAA) method for assessing this quality component of the regions obtained by applying a textural segmentation algorithm to a Very High Resolution (VHR) aerial image. The method is based on comparing the ideal segmentation with the computed segmentation by counting their differences. It therefore has the same conceptual principles as the automatic procedures used in the evaluation of the positional accuracy of GI. As in any PAA method, two key aspects related to the sample were addressed: (i) its size, specifically its influence on the uncertainty of the estimated accuracy values; and (ii) its categorization. Although the results obtained must be taken with caution, they make it clear that automatic PAA procedures, which are mainly applied to the positional quality assessment of cartography, are valid for assessing the positional accuracy achieved by other types of processes, such as the IS process presented in this study.
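
The core idea of counting differences between the ideal and the computed segmentation reduces to a per-pixel disagreement tally, shown here on a toy pair of label grids (purely illustrative; the paper works with regions extracted from a VHR image):

```python
# Toy label grids: 1 = inside a region, 0 = outside
ideal    = [[1, 1, 0], [1, 0, 0], [0, 0, 0]]
computed = [[1, 1, 1], [1, 0, 0], [0, 1, 0]]

total = sum(len(row) for row in ideal)
# Count pixels whose label differs between the two segmentations
diff = sum(1 for row_i, row_c in zip(ideal, computed)
             for a, b in zip(row_i, row_c) if a != b)
agreement = 1 - diff / total
print(f"{diff} differing pixels out of {total}, agreement = {agreement:.2%}")
```
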


2021 ◽  
Author(s):  
Michael Enzo Testaguzza

This report analyzes the governance of large-scale public transit infrastructure planning in the GTA. To accomplish this goal, a comparative case study was carried out of the two most recent large-scale public transit infrastructure provision plans in Toronto: the Network 2011 plan and its subsequent iterations, and the Transit City aspects of the Big Move plan and its subsequent iterations. Each case study consists of (1) a review of the history of each plan and (2) a review of the efficiency of the many iterations of the original plan within each case study. Through analysis of these data, several characteristics of governance were associated with movement towards better or worse iterations from an efficiency perspective. These characteristics were used to inform recommendations regarding the future of transportation governance in the GTA.


2016 ◽  
Vol 64 (4) ◽  
pp. 799-805
Author(s):  
A. Doskocz ◽  
W. Rejchel

Abstract
Digital map data sets (or geo-databases) are an important part of the spatial data infrastructure (SDI) of the European Community. Different methods of producing large-scale map data are described in the paper, and the aim is to compare the accuracy of these methods. Our analysis is based on statistical tools belonging to multiple comparisons theory. The first method is the well-known analysis of variance (ANOVA), and the second one is a rank-based method. The latter approach, which is rarely used in geodetic research, allows us to determine the order of the considered methods with respect to the positional accuracy of the digital map data that they produce. Using this approach, one can identify the least accurate set of map data, or a fragment of a map, that should be updated by a new direct survey. The rank-based methods can also be applied rather easily to other problems in technical (engineering) disciplines such as geodesy and cartography.
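
A rank-based comparison of this kind can be illustrated with a Kruskal-Wallis-style statistic computed over positional errors from several map-production methods (the paper's exact procedure may differ; the error values below are invented, and no tie correction is applied, so all values are assumed distinct):

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic over groups of positional errors.
    No tie correction, so all values must be distinct."""
    all_vals = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(all_vals)}
    n = len(all_vals)
    return 12.0 / (n * (n + 1)) * sum(
        sum(rank[v] for v in g) ** 2 / len(g) for g in groups) - 3 * (n + 1)

# Hypothetical positional errors (metres) from three map-production methods
methods = {
    "direct survey": [0.1, 0.2, 0.3],
    "digitised":     [0.4, 0.5, 0.6],
    "scanned":       [0.7, 0.8, 0.9],
}
h = kruskal_wallis_h(list(methods.values()))
print(f"H = {h:.2f}")  # a large H suggests the methods differ in accuracy
```

The mean rank of each group then orders the methods from most to least accurate, which is the ordering the abstract refers to.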


2021 ◽  
Vol 10 (5) ◽  
pp. 289
Author(s):  
Juan José Ruiz-Lendínez ◽  
Francisco Javier Ariza-López ◽  
Manuel Antonio Ureña-Cámara

The continuous development of machine learning procedures and of new ways of mapping based on the integration of spatial data from heterogeneous sources have resulted in the automation of many processes associated with cartographic production, such as positional accuracy assessment (PAA). The automation of the PAA of spatial data is based on automated matching procedures between corresponding spatial objects (usually building polygons) from two geospatial databases (GDB), which in turn rely on quantifying the similarity between these objects. Therefore, assessing the capabilities of these automated matching procedures is key to making automation a fully operational solution in PAA processes. The present study was developed in response to the need to explore the scope of these capabilities by means of a comparison with human capabilities. Thus, using a genetic algorithm (GA) and a group of human experts, two experiments were carried out: (i) comparing the similarity values between building polygons assigned by both, and (ii) comparing the matching procedures developed in both cases. The results showed that the agreement between the GA and the experts was very high, with a mean agreement percentage of 93.3% (experiment 1) and 98.8% (experiment 2). These results confirm the capability of machine-based procedures, and specifically of GAs, to carry out matching tasks.
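
The similarity quantification that underpins such automated matching can be illustrated with an area-overlap measure; a deliberately simplified sketch using axis-aligned bounding boxes (real building polygons would need a polygon-intersection routine, and the paper's GA-based similarity is richer than this):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = area(a) + area(b) - inter
    return inter / union

# Hypothetical footprints of the same building in two geospatial databases
building_gdb1 = (0.0, 0.0, 10.0, 10.0)
building_gdb2 = (1.0, 0.0, 11.0, 10.0)
similarity = iou(building_gdb1, building_gdb2)
print(f"similarity = {similarity:.3f}")  # 1.0 would be a perfect match
```

A matcher pairs objects whose similarity exceeds a threshold; comparing those pairings against expert judgements yields agreement percentages like those reported above.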


2014 ◽  
Vol 49 (2) ◽  
pp. 101-106 ◽  
Author(s):  
Ashraf Farah ◽  
Dafer Algarni

ABSTRACT
Google Earth is a virtual globe, map, and geographical information program provided by Google. It maps the Earth by superimposing images obtained from satellite imagery, aerial photography, and GIS data onto a 3D globe. With millions of users all around the globe, Google Earth® has become a leading source of spatial data and information for private and public decision-support systems, besides many types and forms of social interaction. Many users, mostly in developing countries, also use it for surveying applications, which raises questions about the positional accuracy of the Google Earth program. This research presents a small-scale assessment study of the positional accuracy of Google Earth® imagery in Riyadh, the capital of the Kingdom of Saudi Arabia (KSA). The results show that the RMSE of the Google Earth imagery is 2.18 m and 1.51 m for the horizontal and height coordinates respectively.

