Open Street Map for crime and place

2020 ◽  
Author(s):  
Samuel Langton ◽  
Reka Solymosi

This chapter provides a framework to approach and meaningfully interpret open data. The chapter also offers a practical hands-on guide to demonstrate how to access, wrangle, and analyse different sources of open data in R in order to draw conclusions about crime and place.
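
The chapter's worked examples are in R; purely as a rough, language-agnostic illustration of the same kind of workflow, the sketch below pulls OpenStreetMap features over the public Overpass API and lists them. The endpoint URL, bounding box and `amenity=bar` tag are assumptions chosen for illustration, not material from the chapter.

```python
# Minimal sketch (not from the chapter, which uses R): query OpenStreetMap
# via the public Overpass API and tally amenity features in a bounding box.
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"  # public endpoint (assumed)

# Bounding box (south, west, north, east) roughly covering central Manchester;
# the area and the "amenity=bar" tag are illustrative choices only.
query = """
[out:json][timeout:60];
node["amenity"="bar"](53.45,-2.28,53.52,-2.20);
out body;
"""

response = requests.post(OVERPASS_URL, data={"data": query}, timeout=90)
response.raise_for_status()
elements = response.json().get("elements", [])

# Each element carries lat/lon plus OSM tags; these points could then be
# joined to open crime records for crime-and-place analysis.
print(f"Retrieved {len(elements)} bar locations")
for el in elements[:5]:
    print(el["lat"], el["lon"], el.get("tags", {}).get("name", "unnamed"))
```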

Author(s):  
M. A. Brovelli ◽  
D. Oxoli ◽  
M. A. Zurbarán

In recent years, Web 2.0 technologies have led to the emergence of platforms where users can share data related to their activities, which in some cases are then publicly released under open licenses. Popular examples include community platforms where users upload GPS tracks collected during slow-travel activities (e.g. hiking, biking and horse riding) and platforms where users share their geolocated photos. However, because the information available on the Web is highly heterogeneous, relying on this user-generated content alone makes it challenging to understand slow-mobility flows and to detect the most visited locations in a region. Exploiting the data available on community sharing websites makes it possible to collect near-real-time open data streams and enables rigorous spatio-temporal analysis. This work presents an approach for collecting, unifying and analysing pointwise geolocated open data from different sources, with the aim of identifying the main locations and destinations of slow-mobility activities. For this purpose, we collected pointwise open data from the Wikiloc platform, Twitter, Flickr and Foursquare. The analysis was confined to data uploaded in the Lombardy Region (Northern Italy), amounting to millions of points. The collected data were processed with Free and Open Source Software (FOSS) and organised into a suitable database. This made it possible to run statistical analyses on the data distribution in both time and space, enabling the detection of users' slow-mobility preferences as well as places of interest at a regional scale.
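
As a rough sketch of the kind of spatio-temporal aggregation described above (the column names, toy records and grid resolution are illustrative assumptions, not the authors' schema or database), pointwise records with coordinates and a timestamp can be binned into grid cells and hours:

```python
# Illustrative sketch: bin pointwise geolocated records into a regular grid
# and hourly slots to surface the most visited cells and peak activity times.
import pandas as pd

# Assumed input: one row per point with latitude, longitude and timestamp,
# e.g. unified records harvested from Wikiloc, Twitter, Flickr and Foursquare.
points = pd.DataFrame({
    "lat": [45.464, 45.465, 45.070, 45.466],
    "lon": [9.190, 9.191, 7.686, 9.189],
    "timestamp": pd.to_datetime([
        "2017-06-03 09:12", "2017-06-03 10:40",
        "2017-06-04 17:05", "2017-06-10 09:55",
    ]),
    "source": ["wikiloc", "flickr", "twitter", "foursquare"],
})

CELL_DEG = 0.01  # ~1 km grid cell; the resolution is an arbitrary choice here
points["cell_lat"] = (points["lat"] // CELL_DEG) * CELL_DEG
points["cell_lon"] = (points["lon"] // CELL_DEG) * CELL_DEG
points["hour"] = points["timestamp"].dt.hour

# Counts per grid cell reveal candidate places of interest...
top_cells = (points.groupby(["cell_lat", "cell_lon"])
                   .size().sort_values(ascending=False))
# ...while counts per hour sketch the temporal profile of slow-mobility activity.
hourly = points.groupby("hour").size()

print(top_cells.head())
print(hourly)
```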


Author(s):  
Shenman Zhang ◽  
Pengjie Tao

Recent advances in open data initiatives allow free access to a vast amount of open LiDAR data in many cities. However, most of these open LiDAR data over cities are acquired by airborne scanning, where the points on façades are sparse or even completely missing due to the viewpoint and object occlusions in the urban environment. Integrating other sources of data, such as ground images, to complete the missing parts is an effective and practical solution. This paper presents an approach for improving open LiDAR data coverage on building façades by using a point cloud generated from ground images. A coarse-to-fine strategy is proposed to fuse these two different sources of data. First, the façade point cloud generated from terrestrial images is initially geolocated by matching the structure-from-motion (SfM) camera positions to their GPS meta-information. Next, an improved Coherent Point Drift algorithm with normal consistency is proposed to accurately align building façades to the open LiDAR data. The significance of the work resides in the use of 2D overlapping points on the building outlines instead of the limited 3D overlap between the two point clouds, and in achieving reliable and precise registration under possibly incomplete coverage and ambiguous correspondence. Experiments show that the proposed approach can significantly improve the façade details of buildings in open LiDAR data and reduce the registration error from up to 10 meters to less than half a meter compared with classic registration methods.
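
A minimal numpy sketch of the coarse geolocation step, i.e. estimating a similarity transform that maps SfM camera centres onto their GPS positions. This is the standard Umeyama/Procrustes least-squares fit; the implementation below is an illustrative assumption, not the authors' code.

```python
# Sketch of the coarse geolocation step: fit a similarity transform
# (scale s, rotation R, translation t) mapping SfM camera centres onto
# their GPS positions, in the spirit of the Umeyama/Procrustes solution.
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform so that dst ≈ s * R @ src + t."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    x, y = src - mu_src, dst - mu_dst
    cov = y.T @ x / len(src)                       # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # guard against a reflection
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_src = (x ** 2).sum() / len(src)
    s = (D * np.diag(S)).sum() / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Toy example: "SfM" camera centres and their "GPS" counterparts
rng = np.random.default_rng(0)
sfm_centres = rng.normal(size=(6, 3))
theta = 0.3
true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
gps_positions = 2.5 * sfm_centres @ true_R.T + np.array([100.0, 200.0, 10.0])

s, R, t = fit_similarity(sfm_centres, gps_positions)
aligned = s * sfm_centres @ R.T + t                # coarsely geolocated centres
print("max residual:", np.abs(aligned - gps_positions).max())
```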


F1000Research ◽  
2019 ◽  
Vol 8 ◽  
pp. 1822 ◽  
Author(s):  
Ana Claudia Sima ◽  
Christophe Dessimoz ◽  
Kurt Stockinger ◽  
Monique Zahn-Zabal ◽  
Tarcisio Mendes de Farias

The increasing use of Semantic Web technologies in the life sciences, in particular the Resource Description Framework (RDF) and the RDF query language SPARQL, opens the path for novel integrative analyses that combine information from multiple sources. However, analyzing evolutionary data in RDF is not trivial, due to the steep learning curve required to understand both the data models adopted by different RDF data sources and the SPARQL query language. In this article, we provide a hands-on introduction to querying evolutionary data across multiple sources that publish orthology information in RDF, namely: the Orthologous MAtrix (OMA), the European Bioinformatics Institute (EBI) RDF platform, the Database of Orthologous Groups (OrthoDB) and the Microbial Genome Database (MBGD). We present four protocols in increasing order of complexity. In these protocols, we demonstrate through SPARQL queries how to retrieve pairwise orthologs, homologous groups, and hierarchical orthologous groups. Finally, we show how orthology information in different sources can be compared through the use of federated SPARQL queries.
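
A hedged Python sketch of the first step one typically takes against such an endpoint: an exploratory SPARQL query listing the classes a data source uses, before writing the orthology queries from the protocols. The endpoint URL is assumed to be OMA's public SPARQL endpoint; verify the address and the exact graph patterns against the article's protocols.

```python
# Sketch: explore an orthology RDF endpoint with SPARQLWrapper before writing
# the more specific queries from the protocols. Endpoint URL is an assumption.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = "https://sparql.omabrowser.org/sparql"   # assumed; verify before use
sparql = SPARQLWrapper(endpoint)

# List the classes used by the data source: a cheap way to get a feel for the
# data model (e.g. ORTH-ontology classes such as genes and ortholog clusters)
# before querying pairwise or hierarchical orthologs.
sparql.setQuery("""
    SELECT DISTINCT ?class (COUNT(?s) AS ?instances)
    WHERE { ?s a ?class }
    GROUP BY ?class
    ORDER BY DESC(?instances)
    LIMIT 10
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["class"]["value"], row["instances"]["value"])
```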


Author(s):  
M. Pincheira ◽  
E. Donini ◽  
R. Giaffreda ◽  
M. Vecchio

Abstract. Remote sensing benefits considerably from the fusion of open data from different sources, including far-range sensors mounted on satellites and short-range sensors on drones or Internet of Things devices. Open data is an emerging philosophy attracting an increasing number of data owners willing to share. However, most data owners are unknown and thus untrusted, which makes shared data potentially unreliable and may compromise the associated outcomes. Tools currently exist that distribute open data by acting as intermediaries between data owners and users. However, these tools are managed by central authorities that set the rules for data ownership, access, and integrity, limiting both data owners and users. A need therefore emerges for a decentralized system to share and retrieve data without intermediaries constraining participants. Here, we propose a blockchain-based system to share and retrieve data without the need for a central authority. The proposed architecture (i) allows sharing data, (ii) maintains the data history (origin and updates), and (iii) allows retrieving and evaluating the data, adding trustworthiness. To this end, the blockchain network enables the direct connection of data owners and users. Furthermore, the blockchain automatically interacts with participants and keeps a transparent record of their actions. Hence, the blockchain provides a decentralized database that enables trust among participants without a central authority. We analyzed the potential and critical issues of the architecture in a remote sensing use case in precision farming. The analysis shows that participants benefit from the properties of the blockchain in providing trusted data for remote sensing applications.
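
The essence of what the blockchain contributes here, a tamper-evident, append-only record of who shared which dataset and when, can be illustrated with a minimal hash-chained ledger. This is a conceptual toy only; the paper's architecture relies on an actual blockchain network and smart contracts, not this structure.

```python
# Toy hash-chained ledger illustrating the provenance record the blockchain
# maintains: each entry stores a dataset fingerprint, its owner and a link to
# the previous entry, so any later tampering becomes detectable.
import hashlib
import json
import time

def entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class ProvenanceLedger:
    def __init__(self):
        self.chain = []

    def register(self, owner: str, data: bytes, note: str = "") -> dict:
        """Append a record of a shared dataset (origin or update)."""
        entry = {
            "owner": owner,
            "data_sha256": hashlib.sha256(data).hexdigest(),
            "note": note,
            "timestamp": time.time(),
            "prev_hash": entry_hash(self.chain[-1]) if self.chain else None,
        }
        self.chain.append(entry)
        return entry

    def verify(self) -> bool:
        """Check that no entry has been altered since it was appended."""
        for prev, curr in zip(self.chain, self.chain[1:]):
            if curr["prev_hash"] != entry_hash(prev):
                return False
        return True

ledger = ProvenanceLedger()  # hypothetical precision-farming data owner below
ledger.register("soil-sensor-owner-17", b"...soil moisture CSV bytes...", "origin")
ledger.register("soil-sensor-owner-17", b"...corrected CSV bytes...", "update")
print("ledger intact:", ledger.verify())
```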


2017 ◽  
Vol 11 (1) ◽  
pp. 99-118 ◽  
Author(s):  
Julien Hivon ◽  
Ryad Titah

Purpose: Open data initiatives represent a critical pillar of smart cities' strategies but remain insufficiently and poorly understood. This paper aims to advance a conceptualization of citizen participation and investigates its effect on open data use at the municipal level.

Design/methodology/approach: Based on 14 semi-structured interviews with citizens involved in open data projects within the city of Montréal (Canada), the paper develops a research model linking the multidimensional construct of citizen participation with the initial use of open data in municipalities.

Findings: The study shows that citizen participation is a key contributor to the use of open data through four distinct categories of participation, namely, hands-on activities, greater responsibility, better communication and improved relations between citizens and the open data portal development team. While electronic government research often views open data implementation as a top-down project, the current study demonstrates that citizens are central to the success of open data initiatives and shows how their role can be effectively leveraged across various dimensions of participation.

Originality/value: This paper proposes a conceptualization of the effect of citizen participation on open data use at the municipal level, identifying four distinct categories of participation, and demonstrates the critical role of citizen participation in open government.


2019 ◽  
Vol 11 (4) ◽  
pp. 420 ◽  
Author(s):  
Shenman Zhang ◽  
Pengjie Tao ◽  
Lei Wang ◽  
Yaolin Hou ◽  
Zhihua Hu

Recent open data initiatives allow free access to a vast amount of light detection and ranging (LiDAR) data in many cities. However, most open LiDAR data of cities are acquired by airborne scanning, where points on building façades are sparse or even completely missing due to occlusions in the urban environment, leading to the absence of façade details. This paper presents an approach for improving LiDAR data coverage on building façades by using a point cloud generated from ground images. A coarse-to-fine strategy is proposed to fuse these two point clouds of different sources with very limited overlap. First, the façade point cloud generated from ground images is leveled by adjusting the façade normal to be perpendicular to the upright direction. The leveled façade point cloud is then geolocated by aligning the images' GPS data with their structure-from-motion (SfM) coordinates. Next, a modified coherent point drift algorithm with (surface) normal consistency is proposed to accurately align the façade point cloud to the LiDAR data. The significance of this work resides in the use of 2D overlapping points on the building outlines instead of the limited 3D overlap between the two point clouds; this way, reliable and precise registration can still be achieved under incomplete coverage and ambiguous correspondence. Experiments show that the proposed approach can significantly improve the façade details in open LiDAR data and achieve 2 to 10 times higher registration accuracy than classic registration methods.
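
A small numpy sketch of the levelling step, i.e. rotating the image-derived façade cloud so its estimated façade normal becomes perpendicular to the upright direction. The normal estimate and toy data are assumptions for illustration; the paper's pipeline derives them from the actual point cloud.

```python
# Sketch of levelling: rotate the façade point cloud so that its estimated
# façade normal loses its vertical component (i.e. becomes horizontal).
import numpy as np

def rotation_between(a, b):
    """Rodrigues rotation matrix sending unit vector a onto unit vector b."""
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if np.isclose(c, 1.0):                 # already aligned
        return np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))

def level_facade(points, facade_normal, up=np.array([0.0, 0.0, 1.0])):
    """Rotate points so the façade normal is perpendicular to 'up'."""
    n = facade_normal / np.linalg.norm(facade_normal)
    target = n - np.dot(n, up) * up        # horizontal projection of the normal
    target /= np.linalg.norm(target)
    R = rotation_between(n, target)
    return points @ R.T

# Toy façade cloud whose normal leans 10 degrees away from horizontal.
tilted_normal = np.array([np.cos(np.radians(10)), 0.0, np.sin(np.radians(10))])
cloud = np.random.default_rng(1).normal(size=(1000, 3))
levelled = level_facade(cloud, tilted_normal)
```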


Author(s):  
O. N. Shorin

Implementation of the project on semantic integration of bibliographic records has made it possible to solve several pressing problems: a domain ontology was developed; modules for interaction with a variety of automated library information systems were created; and bibliographic records were converted from different formats into RDF, enriched with information obtained from different sources, and released in accordance with the principles of Linked Open Data. Handling one of the world's largest collections of bibliographic records required highly specialized information-access protocols, high-performance processing algorithms and scalable storage solutions.
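
A brief sketch of the core conversion step described above: expressing a single bibliographic record as RDF triples and serialising it as Turtle for Linked Open Data publication. The identifiers, namespace and choice of Dublin Core terms are illustrative assumptions, not the project's actual ontology.

```python
# Sketch: one bibliographic record as RDF, serialised as Turtle with rdflib.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, RDF

BIB = Namespace("http://example.org/bib/")     # placeholder namespace

g = Graph()
g.bind("dcterms", DCTERMS)

record = BIB["record/000123"]                  # hypothetical record identifier
g.add((record, RDF.type, DCTERMS.BibliographicResource))
g.add((record, DCTERMS.title,
       Literal("Semantic integration of bibliographic records", lang="en")))
g.add((record, DCTERMS.creator, Literal("Shorin, O. N.")))
g.add((record, DCTERMS.issued, Literal("2016")))
# Enrichment from other sources could add owl:sameAs or dcterms:subject links here.

print(g.serialize(format="turtle"))
```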


2020 ◽  
Author(s):  
Pedro Bernardo ◽  
Ismael Silva ◽  
Glívia Barbosa ◽  
Flávio Coutinho ◽  
Evandrinho Barros

Technological advances have made data sharing and knowledge generation possible in several areas. To support information extraction and knowledge generation, several datasets have been made publicly available, giving rise to the concept of open data. However, while such data are available, processing, visualizing, and analyzing them remains difficult for society in general. Data are available in large volumes, spread across different files and formats, making it hard to cross-reference and analyze them to obtain relevant information without the support of appropriate tools. Motivated by this scenario, this paper presents WikiOlapBase, a collaborative tool capable of processing, integrating and making feasible the analysis of open data from different sources, even by people without technical knowledge. WikiOlapBase contributes to the expansion of open data analysis, since it favors greater information sharing and knowledge dissemination.
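
As a rough illustration of the kind of integration such a tool automates for non-technical users (the datasets, columns and codes below are hypothetical, not WikiOlapBase's data or API), two open datasets in different formats can be joined on a shared key and summarised OLAP-style:

```python
# Sketch: cross-reference two open datasets and build a simple pivot summary.
import pandas as pd

# In practice these would come from open-data files in different formats,
# e.g. pd.read_csv("spending_2019.csv") and pd.read_json("municipalities.json").
spending = pd.DataFrame({
    "city_code": [310620, 310620, 355030, 355030],
    "sector": ["health", "education", "health", "transport"],
    "amount": [120.5, 80.0, 300.2, 150.7],
})
cities = pd.DataFrame({
    "city_code": [310620, 355030],
    "city_name": ["Belo Horizonte", "São Paulo"],
    "region": ["Southeast", "Southeast"],
})

# Cross-reference the two sources on the shared key...
merged = spending.merge(cities, on="city_code", how="left")

# ...and build an OLAP-style summary: total amount per region and sector.
summary = merged.pivot_table(index="region", columns="sector",
                             values="amount", aggfunc="sum", fill_value=0)
print(summary)
```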


2019 ◽  
Vol 2 (1) ◽  
pp. 10 ◽  
Author(s):  
Darusalam Darusalam ◽  
Jamaliah Said ◽  
Normah Omar ◽  
Marijn Janssen ◽  
Kazi Sohag

Corruption occurs in many places within government. To tackle the issue, open data can be used as one of the tools for creating more insight into government. The premise of this paper is that opening up data can bring new ways of fighting corruption. The paper aims to investigate how open data can be employed to detect corruption. This is far from trivial due to challenges such as information asymmetry among stakeholders, data that might only be opened partly, different sources of data that need to be combined, and data that might not be easy to use and might be biased or even manipulated. The study was conducted using a literature review approach. The review indicates that corruption can be detected using Open Government Data; by applying open data techniques within government, the public can monitor the activities of governments. The practical contribution of this paper is expected to assist governments in detecting corruption with a data-driven approach. Furthermore, the scientific contribution originates from the development of a framework reference architecture to uncover corruption cases.
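
To make the "data-driven approach" concrete, here is a small, purely illustrative red-flag check on open procurement data (the columns, thresholds and toy records are assumptions, not the paper's framework or reference architecture):

```python
# Illustrative sketch: flag contracts in open procurement data that were
# awarded with a single bid and at an unusually large amount for the agency.
import pandas as pd

contracts = pd.DataFrame({
    "agency": ["A", "A", "A", "B", "B", "B"],
    "supplier": ["S1", "S2", "S1", "S3", "S3", "S4"],
    "num_bidders": [5, 1, 4, 1, 1, 6],
    "amount": [10_000, 250_000, 12_000, 9_000, 300_000, 11_000],
})

# Red flag 1: single-bidder awards.
contracts["flag_single_bid"] = contracts["num_bidders"] == 1

# Red flag 2: amount far above the agency's median contract value
# (the factor of 5 is an arbitrary illustrative threshold).
median_by_agency = contracts.groupby("agency")["amount"].transform("median")
contracts["flag_outlier_amount"] = contracts["amount"] > 5 * median_by_agency

suspicious = contracts[contracts["flag_single_bid"] & contracts["flag_outlier_amount"]]
print(suspicious)
```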

