WEB SERVICE FOR POSITIONAL QUALITY ASSESSMENT: THE WPS TIER

Author(s):  
E. M. A. Xavier ◽  
F. J. Ariza-López ◽  
M. A. Ureña-Cámara

In the field of spatial data, more and more information becomes available every day, yet we still have little, or very little, information about its quality. We consider the automation of spatial data quality assessment a genuine need for the geomatics sector, and we believe that automation is achievable by means of web processing services (WPS) together with specific assessment procedures. In this paper we propose and develop a WPS tier centered on the automation of positional quality assessment. An experiment using the NSSDA positional accuracy method is presented: the client uploads two datasets (reference and evaluation data), the service determines homologous pairs of points (by distance) and calculates the positional accuracy value under the NSSDA standard, and a short report is generated and sent back to the client. From this experiment we draw some conclusions on the advantages and disadvantages of WPSs when applied to the automation of spatial data accuracy assessments.
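The two processing steps described above are compact enough to sketch. The following is a minimal, illustrative Python sketch, not the paper's implementation: the function names and the nearest-neighbour matching threshold (max_dist) are assumptions, while the 1.7308 factor is the standard NSSDA horizontal accuracy statistic at the 95% confidence level (assuming RMSE_x ≈ RMSE_y).

```python
import math

def match_by_distance(eval_pts, ref_pts, max_dist=5.0):
    """Pair each evaluation point with its nearest reference point within
    max_dist; a simple stand-in for the homologous-point search by distance."""
    pairs = []
    for ex, ey in eval_pts:
        best = min(ref_pts, key=lambda p: math.hypot(ex - p[0], ey - p[1]))
        if math.hypot(ex - best[0], ey - best[1]) <= max_dist:
            pairs.append(((ex, ey), best))
    return pairs

def nssda_horizontal_accuracy(pairs):
    """NSSDA horizontal accuracy at the 95% level: 1.7308 * RMSE_r."""
    sq = [(ex - rx) ** 2 + (ey - ry) ** 2 for (ex, ey), (rx, ry) in pairs]
    rmse_r = math.sqrt(sum(sq) / len(sq))
    return 1.7308 * rmse_r
```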

2019 ◽  
pp. 469-487
Author(s):  
Musfira Jilani ◽  
Michela Bertolotto ◽  
Padraig Corcoran ◽  
Amerah Alghanim

Nowadays an ever-increasing number of applications require complete and up-to-date spatial data, in particular maps. However, mapping is an expensive process, and the vastness and dynamics of our world usually render centralized and authoritative maps outdated and incomplete. In this context, crowd-sourced maps have the potential to provide a complete, up-to-date, and free representation of our world. However, the proliferation of such maps remains largely limited by concerns about their data quality. While most current data quality assessment mechanisms for such maps require referencing against authoritative maps, we argue that such referencing of a crowd-sourced spatial database is ineffective. Instead, we focus on machine learning techniques that we believe have the potential not only to assess but also to recommend improvements to the quality of crowd-sourced maps without referencing external databases. This chapter gives an overview of these approaches.
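As a purely illustrative sketch of the intrinsic, reference-free direction the chapter surveys (the feature set, labels, and use of scikit-learn are assumptions, not the authors' method), a quality classifier trained only on intrinsic attributes of crowd-sourced features might look like this:

```python
# Illustrative only: predict a per-feature quality label from intrinsic
# attributes of a crowd-sourced map feature, with no external reference map.
from sklearn.ensemble import RandomForestClassifier

def train_quality_model(features, labels):
    """features: rows of intrinsic measures, e.g.
    [n_versions, n_contributors, n_tags, days_since_last_edit, n_vertices];
    labels: quality classes from a manually assessed training sample."""
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(features, labels)
    return model

# model = train_quality_model(X_train, y_train)
# predicted = model.predict(X_new)   # flag features likely to need improvement
```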


Author(s):  
Syed Mustafa Ali ◽  
Farah Naureen ◽  
Arif Noor ◽  
Maged Kamel N. Boulos ◽  
Javariya Aamir ◽  
...  

Background: Increasingly, healthcare organizations are using technology for the efficient management of data. The aim of this study was to compare the data quality of digital records with the quality of the corresponding paper-based records by using a data quality assessment framework. Methodology: We conducted a desk review of paper-based and digital records over the study duration from April 2016 to July 2016 at six enrolled TB clinics. We entered all data fields of the patient treatment (TB01) card into a spreadsheet-based template to undertake a field-to-field comparison of the fields shared between the TB01 card and the digital data. Findings: A total of 117 TB01 cards were prepared at the six enrolled sites, whereas only 50% of the records (n=59 of 117 TB01 cards) were digitized. There were 1,239 comparable data fields, of which 65% (n=803) matched correctly between paper-based and digital records, while 35% (n=436) had anomalies in either the paper-based or the digital records. On average, 1.9 data quality issues were found per digital patient record, compared with 2.1 issues per paper-based record. Based on the analysis of valid data quality issues, there were more data quality issues in paper-based records (n=123) than in digital records (n=110). Conclusion: There were fewer data quality issues in digital records than in the corresponding paper-based records. Greater use of mobile data capture and continued use of the data quality assessment framework can deliver more meaningful information for decision making.
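A minimal sketch of the field-to-field comparison and the per-record issue rate reported above; the record structure, field names, and function signature are illustrative assumptions rather than the study's actual tooling:

```python
# Compare shared fields between paper-based (TB01) and digital records,
# then report the match rate and the mean number of issues per record.
def compare_records(paper, digital, shared_fields):
    """paper, digital: dicts keyed by patient id; values are dicts of fields."""
    matched = mismatched = 0
    issues_per_record = {}
    for pid in paper.keys() & digital.keys():
        issues = 0
        for field in shared_fields:
            if paper[pid].get(field) == digital[pid].get(field):
                matched += 1
            else:
                mismatched += 1
                issues += 1
        issues_per_record[pid] = issues
    total = matched + mismatched
    return {
        "match_rate": matched / total if total else 0.0,
        "mean_issues_per_record": (sum(issues_per_record.values()) / len(issues_per_record))
        if issues_per_record else 0.0,
    }
```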


Author(s):  
M. Meijer ◽  
L. A. E. Vullings ◽  
J. D. Bulens ◽  
F. I. Rip ◽  
M. Boss ◽  
...  

Although perceived by many as important, spatial data quality has hardly ever taken centre stage unless something went wrong because of bad quality. However, we think this is going to change soon. We rely more and more on data-driven processes, and with the increased availability of data there is a choice in which data to use. How to make that choice? We think spatial data quality has potential as a selection criterion.

In this paper we focus on how a workflow tool can help the consumer as well as the producer to get a better understanding of which product characteristics are important. For this purpose, we have developed a framework in which we define different roles (consumer, producer and intermediary) and differentiate between product specifications and quality specifications. A number of requirements are stated that can be translated into quality elements. We used case studies to validate our framework, which is designed following the fitness-for-use principle. Part of the framework is also software that in some cases can help ascertain the quality of datasets.
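To make the distinction between product specifications and quality specifications concrete, a hedged data-model sketch is given below; the class and field names are assumptions for illustration and are not taken from the paper.

```python
# Illustrative data model: roles, quality requirements, and a fitness-for-use check.
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    CONSUMER = "consumer"
    PRODUCER = "producer"
    INTERMEDIARY = "intermediary"

@dataclass
class QualityRequirement:
    quality_element: str   # e.g. "positional accuracy", "completeness"
    threshold: float       # acceptable limit for the intended use
    unit: str

@dataclass
class ProductSpecification:
    name: str
    feature_types: list[str]
    quality_requirements: list[QualityRequirement] = field(default_factory=list)

def fit_for_use(measured: dict, spec: ProductSpecification) -> bool:
    """A dataset fits the consumer's use if every stated quality requirement
    is met by the measured quality values (smaller is better here)."""
    return all(measured.get(r.quality_element, float("inf")) <= r.threshold
               for r in spec.quality_requirements)
```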


Author(s):  
M. Eshghi ◽  
A. A. Alesheikh

Recent advances in spatial data collection technologies and online services have dramatically increased the contribution of ordinary people to producing, sharing, and using geographic information. The collection of spatial data by citizens, and its dissemination on the internet, has led to a huge source of spatial data termed Volunteered Geographic Information (VGI) by Mike Goodchild. Although VGI has produced previously unavailable data assets and enriched existing ones, its quality can be highly variable and questionable. This presents several challenges to potential end users who are concerned about the validation and quality assurance of the collected data. Almost all existing research on finding accurate VGI data either a) compares the VGI data with accurate official data, or b) when there is no access to correct reference data, looks for an alternative way to determine the quality of the VGI data. In this paper we attempt to develop a useful method to reach this goal; in this process, the positional accuracy of linear features in the OSM data of Tehran, Iran is analysed.
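One common measure for the positional accuracy of linear features is the buffer-overlay approach (Goodchild and Hunter style); whether the paper uses exactly this measure is an assumption based on the abstract, and the sketch below is illustrative only.

```python
# Hedged sketch: fraction of a tested line lying within a buffer of the
# reference line; values close to 1 indicate good positional agreement.
from shapely.geometry import LineString

def buffer_overlap_ratio(tested: LineString, reference: LineString,
                         buffer_width: float) -> float:
    buffer_zone = reference.buffer(buffer_width)
    inside = tested.intersection(buffer_zone)
    return inside.length / tested.length if tested.length else 0.0

# Example (coordinates and buffer width are invented):
# osm_road = LineString([(0, 0), (10, 0.5)])
# ref_road = LineString([(0, 0), (10, 0)])
# ratio = buffer_overlap_ratio(osm_road, ref_road, buffer_width=1.0)
```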


Author(s):  
I. Maidaneh Abdi ◽  
A. Le Guilcher ◽  
A-M. Olteanu-Raimond

Abstract. Data quality assessment of OpenStreetMap (OSM) data can be carried out by comparing it with reference spatial data (e.g. authoritative data). However, when reference data are lacking, the spatial accuracy is unknown. The aim of this work is therefore to propose a framework for inferring the relative spatial accuracy of OSM data using machine learning methods. Our approach is based on the hypothesis that there is a relationship between extrinsic and intrinsic quality measures. Thus, starting from a multi-criteria data matching, the process seeks to establish a statistical relationship between measures of the extrinsic quality of OSM (i.e. obtained by comparison with reference spatial data) and measures of its intrinsic quality (i.e. derived from the OSM features themselves), in order to estimate extrinsic quality on an unevaluated OSM dataset. The approach was applied to OSM buildings. On our dataset, the resulting regression model predicts the values of the extrinsic quality indicators with 30% less variance than an uninformed predictor.
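A minimal sketch of the idea of predicting an extrinsic quality indicator from intrinsic OSM features and comparing against an uninformed (mean) predictor; the feature names, model choice, and split are assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def relative_variance_reduction(X, y):
    """X: intrinsic measures per OSM building (e.g. vertex count, area,
    number of contributors); y: extrinsic quality indicator measured by
    matching against reference data on a training area."""
    X, y = np.asarray(X), np.asarray(y)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    residual_var = np.var(y_te - model.predict(X_te))
    baseline_var = np.var(y_te - y_tr.mean())   # uninformed predictor
    return 1.0 - residual_var / baseline_var     # fraction of variance removed
```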


2019 ◽  
Vol 1 ◽  
pp. 1-8
Author(s):  
Vaclav Talhofer ◽  
Šárka Hošková-Mayerová

Abstract. Multi-criteria analysis is becoming one of the main methods for evaluating the influence of the geographic environment on human activity, and of human activity on the geographic environment. Analysis results are often used in command and control systems, especially in armed forces and rescue units. The analyses use digital geographic data, whose quality significantly influences the results obtained. In command and control systems, analysis results are usually visualized as thematic layers over raster images of topographic maps, so this visualization must follow the cartographic principles used for the creation of thematic maps. The article presents problems that an analyst encounters when evaluating the quality of the data used, performing the analysis itself, and preparing data files for their transfer and publication in command and control systems.
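As a hedged illustration of the multi-criteria evaluation step described above, a weighted-sum overlay of normalised raster layers is sketched below; the layer names and weights are invented for the example and are not taken from the article.

```python
import numpy as np

def weighted_overlay(layers: dict, weights: dict) -> np.ndarray:
    """Each layer is a normalised suitability raster in [0, 1]; the result is
    a per-cell suitability score (e.g. for cross-country movement maps)."""
    total = sum(weights.values())
    return sum(weights[name] * layers[name] for name in weights) / total

# Example (hypothetical layers and weights):
# score = weighted_overlay(
#     {"slope": slope_suit, "soil": soil_suit, "vegetation": veg_suit},
#     {"slope": 0.5, "soil": 0.3, "vegetation": 0.2},
# )
```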


2007 ◽  
Vol 46 ◽  
pp. 189-194 ◽  
Author(s):  
Addy Pope ◽  
Tavi Murray ◽  
Adrian Luckman

Abstract. Photogrammetric digital elevation models (DEMs) are often used to derive and monitor surfaces in inaccessible areas. They have been used to monitor the spatial and temporal change of glacier surfaces in order to assess glacier response to climate change. However, deriving photogrammetric DEMs of steep mountainous topography, where the surface is often obscured by regions of deep shadow and snow, is particularly difficult. Assessing the quality of the derived surface can also be problematic, as high-accuracy ground-control points may be limited and poorly distributed throughout the modelled area. We present a method of assessing the quality of a derived surface via a detailed sensitivity analysis of the DEM collection parameters, using a multiple input failure warning model (MIFWM). The variance of a DEM cell's elevation is taken as an indicator of surface reliability, allowing potentially unreliable areas to be excluded from further analysis and giving the user greater confidence in the remaining DEM. An example of this method is presented for a small mountain glacier in Svalbard, and the MIFWM is shown to label more DEM cells as unreliable over the entire DEM area, but fewer over the glacier surface, than other methods of data quality assessment. The MIFWM is shown to be an effective and easily used method for assessing DEM surface quality.
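A hedged sketch of the variance-based reliability screening described above: stack DEMs produced under different collection-parameter settings, use the per-cell elevation variance as a reliability indicator, and mask cells that exceed a threshold. The threshold value and stacking details are assumptions, not the paper's exact MIFWM implementation.

```python
import numpy as np

def reliability_mask(dem_runs, max_variance: float) -> np.ndarray:
    """dem_runs: list of DEM arrays of the same area from different
    parameter settings. Returns True where the cell is considered reliable."""
    stack = np.stack(dem_runs)              # shape: (n_runs, rows, cols)
    cell_variance = np.var(stack, axis=0)   # per-cell elevation variance
    return cell_variance <= max_variance

# Example (threshold is illustrative):
# reliable = reliability_mask([dem_a, dem_b, dem_c], max_variance=4.0)
# mean_dem = np.where(reliable, np.mean([dem_a, dem_b, dem_c], axis=0), np.nan)
```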

