The Spatial-Comprehensiveness (S-COM) Index: Identifying Optimal Spatial Extents in Volunteered Geographic Information Point Datasets

2020 ◽  
Vol 9 (9) ◽  
pp. 497
Author(s):  
Haydn Lawrence ◽  
Colin Robertson ◽  
Rob Feick ◽  
Trisalyn Nelson

Social media and other forms of volunteered geographic information (VGI) are used frequently as a source of fine-grained big data for research. While employing geographically referenced social media data for a wide array of purposes has become commonplace, the relevant scales over which these data apply are typically unknown. For researchers to use VGI appropriately (e.g., aggregated to areal units such as neighbourhoods to elicit key trend or demographic information), general methods for assessing its quality are required; in particular, data quality must be linked explicitly to relevant spatial scales, as there are no accepted standards or sampling controls. We present a data quality metric, the Spatial-Comprehensiveness (S-COM) Index, which can delineate feasible study areas or spatial extents based on the quality of uneven and dynamic geographically referenced VGI. This scale-sensitive approach to analyzing VGI is demonstrated over different grains with data from two citizen science initiatives. The S-COM index can be used both to assess feasible study extents based on coverage, user heterogeneity, and density and to find feasible sub-study areas within a larger, indefinite area. The results identified sub-study areas of VGI for focused analysis, allowing for broader adoption of a similar methodology in multi-scale analyses of VGI.
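For readers unfamiliar with grid-based VGI quality measures, the sketch below illustrates the general idea of scoring grid cells by point density, number of contributors, and contributor heterogeneity; it is a minimal illustration with hypothetical column names and a toy composite score, not the authors' S-COM formula.

```python
# Toy grid-cell quality scoring for a VGI point dataset.
# Assumes projected coordinates in columns x, y and a contributor id in user_id.
import numpy as np
import pandas as pd

def grid_quality(points: pd.DataFrame, cell_size: float) -> pd.DataFrame:
    pts = points.copy()
    pts["cx"] = np.floor(pts["x"] / cell_size).astype(int)
    pts["cy"] = np.floor(pts["y"] / cell_size).astype(int)

    def cell_stats(g: pd.DataFrame) -> pd.Series:
        shares = g["user_id"].value_counts(normalize=True)
        # Shannon entropy of contributions as a simple heterogeneity proxy
        entropy = -(shares * np.log(shares)).sum()
        return pd.Series({"n_points": len(g),
                          "n_users": g["user_id"].nunique(),
                          "user_entropy": entropy})

    cells = pts.groupby(["cx", "cy"]).apply(cell_stats).reset_index()
    # Normalise each component to [0, 1] and average them into a toy index
    for col in ["n_points", "n_users", "user_entropy"]:
        rng = cells[col].max() - cells[col].min()
        cells[col + "_norm"] = (cells[col] - cells[col].min()) / rng if rng else 0.0
    norm_cols = [c for c in cells.columns if c.endswith("_norm")]
    cells["toy_index"] = cells[norm_cols].mean(axis=1)
    return cells
```

Cells with a low toy index could then be excluded when delineating a feasible study extent, which is the kind of decision the S-COM approach is designed to support.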

2021 ◽  
Author(s):  
Abdullatif Alyaqout ◽  
T. Edwin Chow ◽  
Alexander Savelyev

Abstract: The primary objectives of this study are to 1) assess the quality of each volunteered geographic information (VGI) data modality (text, pictures, and videos), and 2) evaluate the quality of multiple VGI data sources, especially the multimedia that include pictures and videos, against synthesized water depth (WD) derived from remote sensing (RS) and authoritative data (e.g., stream gauges and depth grids). The availability of VGI, such as social media and crowdsourced data, has empowered researchers to monitor and model floods in near real time by integrating the multi-sourced data available. Nevertheless, the quality of VGI sources and their reliability for flood monitoring (e.g., WD) are not well understood or validated with empirical data. Moreover, the existing literature focuses mostly on text messages rather than the multimedia nature of VGI. Therefore, this study measures the differences in synthesized WD from VGI modalities in terms of (1) spatial and (2) temporal variations, (3) against WD derived from RS, and (4) against authoritative data including (a) stream gauges and (b) depth grids. The results show that there are significant differences in the spatial and temporal distribution of VGI modalities. Regarding the VGI and RS comparison, the results show a significant difference in WD between VGI and RS. In terms of the VGI and authoritative data comparison, the analysis revealed no significant difference in WD between VGI and stream gauges, while there is a significant difference between the depth grids and VGI.
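As a rough illustration of how water-depth samples from two sources can be tested for a significant difference, the snippet below applies a nonparametric test to hypothetical values; the paper's actual statistical procedure and data are not reproduced here.

```python
# Compare hypothetical water-depth samples from two sources with a
# two-sided Mann-Whitney U test (values in metres are made up).
import numpy as np
from scipy.stats import mannwhitneyu

vgi_wd = np.array([0.4, 0.7, 1.1, 0.9, 0.5])    # hypothetical VGI-derived depths
gauge_wd = np.array([0.5, 0.6, 1.0, 0.8, 0.6])  # hypothetical stream-gauge depths

stat, p = mannwhitneyu(vgi_wd, gauge_wd, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")
if p < 0.05:
    print("Significant difference in water depth between the two sources.")
else:
    print("No significant difference detected.")
```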


2019 ◽  
Vol 8 (5) ◽  
pp. 232 ◽  
Author(s):  
Jennings Anderson ◽  
Dipto Sarkar ◽  
Leysia Palen

OpenStreetMap (OSM), the largest Volunteered Geographic Information project in the world, is characterized both by its map and by the active community of the millions of mappers who produce it. The discourse about participation in the OSM community largely focuses on the motivations for why members contribute map data and the resulting data quality. Recently, large corporations including Apple, Microsoft, and Facebook have been hiring editors to contribute to the OSM database. In this article, we explore the influence these corporate editors are having on the map by first considering the history of corporate involvement in the community and then analyzing historical quarterly-snapshot OSM-QA-Tiles to show where and what these corporate editors are mapping. Cumulatively, millions of corporate edits have a global footprint, but corporations vary in geographic reach, edit types, and quantity. While corporations currently have a major impact on road networks, non-corporate mappers edit more buildings and points of interest, representing the majority of all edits on average. Since corporate editing represents the latest stage in the evolution of corporate involvement, we raise questions about how the OSM community—and researchers—might proceed as corporate editing grows and evolves as a mechanism for expanding the map for multiple uses.
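The kind of "where and what" tally described above can be sketched as follows, assuming edits have already been flattened into a per-feature table; the column names and the corporate-editor list are placeholders, not the authors' data or tooling.

```python
# Tally edits by editor group (corporate vs. non-corporate) and feature type
# from a flattened per-feature table. Usernames and columns are hypothetical.
import pandas as pd

CORPORATE_USERS = {"corp_mapper_1", "corp_mapper_2"}  # placeholder usernames

edits = pd.DataFrame({
    "user":    ["corp_mapper_1", "hobbyist_a", "corp_mapper_2", "hobbyist_b"],
    "feature": ["highway", "building", "highway", "poi"],
})
edits["group"] = edits["user"].apply(
    lambda u: "corporate" if u in CORPORATE_USERS else "non-corporate")

summary = edits.groupby(["group", "feature"]).size().unstack(fill_value=0)
print(summary)
```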


2020 ◽  
pp. 089443932092824 ◽  
Author(s):  
Michael J. Stern ◽  
Erin Fordyce ◽  
Rachel Carpenter ◽  
Melissa Heim Viox ◽  
Stuart Michaels ◽  
...  

Social media recruitment is no longer an uncharted avenue for survey research. The results thus far provide evidence of an engaging means of recruiting hard-to-reach populations. Questions remain, however, regarding whether this method of recruitment produces quality data. This article assesses one aspect that may influence the quality of data gathered through nonprobability sampling using social media advertisements for a hard-to-reach sexual and gender minority youth population: recruitment design formats. The data come from the Survey of Today’s Adolescent Relationships and Transitions, which used a variety of forms of advertisements as survey recruitment tools on Facebook, Instagram, and Snapchat. Results demonstrate that design decisions such as the format of the advertisement (e.g., video or static) and the use of eligibility language on the advertisements affect the quality of the data as measured by break-off rates and the use of nonsubstantive responses. Additionally, the type of device used affected the measures of data quality.
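The two quality measures named above can be illustrated with a toy calculation; the figures and column names below are hypothetical and only show how break-off rates and nonsubstantive-response shares might be tabulated by advertisement format.

```python
# Tabulate break-off rate and share of nonsubstantive responses by ad format.
# All values are made up for illustration.
import pandas as pd

responses = pd.DataFrame({
    "ad_format":     ["video", "video", "static", "static"],
    "completed":     [True, False, True, False],
    "n_items":       [40, 12, 40, 25],
    "n_nonsubstant": [2, 1, 5, 6],   # "don't know" / "prefer not to answer"
})

by_format = responses.groupby("ad_format").agg(
    breakoff_rate=("completed", lambda s: 1 - s.mean()),
    nonsubstantive_share=("n_nonsubstant", "sum"),
)
by_format["nonsubstantive_share"] /= responses.groupby("ad_format")["n_items"].sum()
print(by_format)
```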


2021 ◽  
pp. 227797522110118
Author(s):  
Amit K. Srivastava ◽  
Rajhans Mishra

Social media platforms have become very popular among individuals and organizations. On the one hand, organizations use social media as a potential tool to create awareness of their products among consumers; on the other hand, social media data are useful for predicting national crises, election outcomes, stock movements, and so on. However, there is an ongoing debate about the quality of data generated on social media platforms and whether they are suitable for prediction and generalization. The article discusses the relevance and quality of data obtained from social media in the context of research and development. Social media data quality issues may impact the generalizability and reproducibility of the results of a study. The paper explores possible reasons for quality issues in data generated on social media platforms, along with suggested measures to minimize them using the proposed social media data quality framework.


Author(s):  
M. Eshghi ◽  
A. A. Alesheikh

Recent advances in spatial data collection technologies and online services have dramatically increased the contribution of ordinary people to producing, sharing, and using geographic information. The collection of spatial data by citizens, and its dissemination on the internet, has led to a huge source of spatial data termed Volunteered Geographic Information (VGI) by Mike Goodchild. Although VGI has produced previously unavailable data assets and enriched existing ones, its quality can be highly variable and open to challenge. This presents several challenges to potential end users who are concerned about the validation and quality assurance of the collected data. Almost all existing research on identifying accurate VGI either (a) compares the VGI data with accurate official data, or (b), where there is no access to correct data, looks for an alternative way to determine the quality of the VGI data. In this paper, an attempt has been made to develop a useful method to reach this goal. In this process, the positional accuracy of linear features in OSM data for Tehran, Iran, has been analyzed.
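One widely used way to assess the positional accuracy of linear features is the buffer-overlap approach (the share of a tested line falling within a tolerance buffer around a reference line). The sketch below illustrates it with hypothetical coordinates; it is a general illustration rather than the authors' exact method.

```python
# Buffer-overlap check of a tested line against a reference line (shapely).
# Coordinates and tolerance are hypothetical, in projected map units.
from shapely.geometry import LineString

osm_line = LineString([(0, 0), (10, 0.3), (20, 0.1)])   # hypothetical OSM road
ref_line = LineString([(0, 0), (10, 0.0), (20, 0.0)])   # hypothetical reference road
tolerance = 0.5

within_buffer = osm_line.intersection(ref_line.buffer(tolerance))
accuracy = within_buffer.length / osm_line.length
print(f"Share of OSM line within {tolerance} units of the reference: {accuracy:.2%}")
```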


Author(s):  
Kuo-Chih Hung ◽  
Mohsen Kalantari ◽  
Abbas Rajabifard

Volunteered geographic information (VGI) has the potential to provide much-needed information for emergency management stakeholders. However, stakeholders often lack scalable means of identifying useful and high-quality text content in the often-overwhelming amount of information. To solve this problem, most studies have concentrated on using text-related features in supervised learning models to classify text content. This article proposes that the geographic attributes of VGI can be integrated into the model as features to enhance its performance. To evaluate this proposal, the authors developed a case study based on VGI collected from two flooding events in Brisbane. They validated the accuracy of the associated geographic coordinates and defined the geographic features relevant to the flood phenomenon. In their experiments, the model based on this integrated method performed better than the model trained only on text-related features. The results suggest great potential for using the integrated method to harvest useful VGI for the needs of disaster management.
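A minimal sketch of the general idea, combining text features with numeric geographic attributes in a single supervised model, is shown below; the feature names and sample data are hypothetical and do not reproduce the authors' pipeline.

```python
# Combine TF-IDF text features with numeric geographic features in one classifier.
# Data, feature names, and labels are made up for illustration.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

data = pd.DataFrame({
    "text": ["water over the road near the creek", "nice sunny day downtown",
             "street flooded, cars stuck", "coffee with friends"],
    "dist_to_river_m": [120.0, 2500.0, 80.0, 1800.0],  # hypothetical geographic feature
    "elevation_m":     [4.0, 35.0, 3.0, 28.0],
    "is_flood_related": [1, 0, 1, 0],
})

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "text"),
    ("geo", "passthrough", ["dist_to_river_m", "elevation_m"]),
])
model = Pipeline([("features", features),
                  ("clf", LogisticRegression(max_iter=1000))])

X = data[["text", "dist_to_river_m", "elevation_m"]]
model.fit(X, data["is_flood_related"])
print(model.predict(X))
```

Dropping the "geo" branch of the ColumnTransformer gives the text-only baseline, which is the comparison the article describes.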


Crowdsourcing ◽  
2019 ◽  
pp. 1173-1201
Author(s):  
Hongyu Zhang ◽  
Jacek Malczewski

A large amount of crowd-sourced geospatial data has been created in recent years due to the interactivity of Web 2.0 and the availability of the Global Positioning System (GPS). This geo-information is typically referred to as volunteered geographic information (VGI). OpenStreetMap (OSM) is a popular VGI platform that allows users to create or edit maps using GPS-enabled devices or aerial imagery. The quality of geo-information generated by OSM has become a trending research topic because of the large size of the dataset and the inapplicability of Linus' Law in a geospatial context. This chapter systematically reviews the quality evaluation process of OSM and demonstrates a case study of London, Canada for the assessment of completeness, positional accuracy, and attribute accuracy. The findings of the quality evaluation can potentially serve as a guide for cartographic product selection and provide a better understanding of the development of OSM quality over geographic space and time.
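As an illustration of the kind of length-based completeness check commonly used in OSM quality studies, the sketch below compares OSM road length with reference road length per zone; the file names, column names, and choice of projection are assumptions, not details from the chapter.

```python
# Length-based completeness per zone: OSM road length / reference road length.
# File names, the zone_id column, and EPSG:32617 (UTM 17N, covering London,
# Ontario) are assumptions for illustration.
import geopandas as gpd

osm = gpd.read_file("osm_roads.gpkg").to_crs(epsg=32617)
ref = gpd.read_file("reference_roads.gpkg").to_crs(epsg=32617)
zones = gpd.read_file("zones.gpkg").to_crs(epsg=32617)

def length_by_zone(lines, zones):
    clipped = gpd.overlay(lines, zones[["zone_id", "geometry"]], how="intersection")
    clipped["len_m"] = clipped.geometry.length
    return clipped.groupby("zone_id")["len_m"].sum()

completeness = (length_by_zone(osm, zones) / length_by_zone(ref, zones)).fillna(0)
print(completeness.sort_values().head())
```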

