PRODUCING AND VISUALIZING A COUNTRY-WIDE 3D DATA REPOSITORY IN FINLAND

Author(s):  
H. Visuri ◽  
J. Jokela ◽  
N. Mesterton ◽  
P. Latvala ◽  
T. Aarnio

Abstract. The amount and quality of 3D spatial data are growing constantly, but the data is collected and stored in a distributed fashion by various data-collecting organizations. This can lead to problems with the interoperability, usability, and availability of the data. Traditionally, national spatial data infrastructures have focused on 2D data, but there has recently been great progress towards also introducing 3D spatial data into governmental services. This paper studies the process of creating a country-wide 3D data repository in Finland and visualizing it for the public using an open-source map application. The 3D spatial data is collected and stored in one national topographic database that provides information for the whole of society. The data quality control process is executed by an automated data quality module as part of the import process to the database. The 3D spatial data is served from the database for visualization via a 3D service, and the visualization is piloted in the National Geoportal.
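
The abstract describes the quality control module only at a high level; the sketch below illustrates one plausible shape for an import-time quality gate. All class, field, and rule names are hypothetical, not taken from the Finnish national topographic database.

```python
# A minimal sketch of an import-time quality gate, assuming a rule-based
# module like the one described; all names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Feature3D:
    feature_id: str
    geometry_valid: bool          # e.g. result of an OGC validity check
    height_m: float | None        # z-value; None if missing
    issues: list[str] = field(default_factory=list)

def quality_check(feature: Feature3D) -> Feature3D:
    """Run automated rules and annotate the feature with any findings."""
    if not feature.geometry_valid:
        feature.issues.append("invalid geometry")
    if feature.height_m is None:
        feature.issues.append("missing height value")
    return feature

def import_batch(features: list[Feature3D]) -> list[Feature3D]:
    """Accept only features that pass every rule, mirroring a QC module
    executed as part of the database import process."""
    return [f for f in map(quality_check, features) if not f.issues]
```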

Author(s):  
Felipe Simoes ◽  
Donat Agosti ◽  
Marcus Guidoti

Automatic data mining is not an easy task, and its success in the biodiversity world is deeply tied to the standardization and consistency of scientific journals' layout structure. The various formatting styles found in the over 500 million pages of published biodiversity information (Kalfatovich 2010) pose a remarkable challenge to the goal of automating the liberation of data currently trapped on the printed page. Regular expressions and other pattern-recognition strategies invariably fail to cope with this diverse landscape of academic publishing. Challenges such as incomplete data and taxonomic uncertainty add several additional layers of complexity. However, in the era of big data, the liberation of all the different facts contained in biodiversity literature is of crucial importance. Plazi tackles this daunting task by providing workflows and technology to automatically process biodiversity publications and annotate the information therein, all within the principles of FAIR (findable, accessible, interoperable, and reusable) data usage (Agosti and Egloff 2009). It uses the concept of taxonomic treatments (Catapano 2019) as the most fundamental unit in biodiversity literature to provide a framework that reflects the reality of taxonomic data and links the different pieces of information contained in these treatments. Treatment citations, composed of a taxonomic name and a bibliographic reference, and material citations, carrying all specimen-related information, are additional conceptual cornerstones of this framework. The resulting enhanced data are added to TreatmentBank. Figures and treatments are made FAIR by depositing them, together with specific metadata, in the Biodiversity Literature Repository (BLR) community at the European Organization for Nuclear Research (CERN) repository Zenodo, and are pushed to GBIF. The automation, however, is error-prone due to the constraints explained above. To cope with this remarkable task without compromising data quality, Plazi has established a quality control process based on logical rules that check the components of the extracted document, raising errors at four different levels of severity. These errors are also used in a data transit control mechanism, "the gatekeeper", which blocks certain data transits that create deposits (e.g., BLR) or reuse data (e.g., GBIF) in the presence of specific errors. Finally, a set of automatic notifications was added to the plazi/community GitHub repository to provide a channel that empowers external users to report data issues directly to a dedicated team of data miners, who will in turn fix these issues in a timely manner, improving data quality on demand. In this talk, we aim to explain Plazi's internal quality control process and its phases, the data transits that are potentially affected, as well as statistics on the most common issues raised by this automated endeavor and how we use the generated data to continuously improve this important step in Plazi's workflow.
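
The severity levels and the gatekeeper lend themselves to a compact illustration. The following is a minimal sketch, assuming four severity levels and per-transit blocking thresholds as described in the talk; the specific rules, level names, thresholds, and dictionary keys are hypothetical.

```python
# A minimal sketch of severity levels and the gatekeeper; the rules,
# thresholds, and field names are hypothetical illustrations.
from enum import IntEnum

class Severity(IntEnum):
    INFO = 1
    WARNING = 2
    ERROR = 3
    BLOCKER = 4

def check_treatment(treatment: dict) -> list[tuple[Severity, str]]:
    """Apply logical rules to an extracted treatment, collecting findings."""
    findings = []
    if not treatment.get("taxonomic_name"):
        findings.append((Severity.BLOCKER, "treatment lacks a taxonomic name"))
    if not treatment.get("bibliographic_reference"):
        findings.append((Severity.ERROR, "treatment citation lacks a reference"))
    if not treatment.get("material_citations"):
        findings.append((Severity.WARNING, "no material citations found"))
    return findings

def gatekeeper(findings: list[tuple[Severity, str]], transit: str) -> bool:
    """Allow a data transit (e.g. a BLR deposit or GBIF reuse) only when
    no finding reaches that transit's blocking severity."""
    thresholds = {"BLR": Severity.ERROR, "GBIF": Severity.BLOCKER}
    return all(sev < thresholds[transit] for sev, _ in findings)
```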


Smart Cities ◽  
2019 ◽  
Vol 2 (1) ◽  
pp. 106-117
Author(s):  
Chengxi Siew ◽  
Pankaj Kumar

Spatial Data Infrastructures (SDIs) are frequently used to exchange 2D and 3D data in areas such as city planning, disaster management, urban navigation, and many more. City Geography Markup Language (CityGML), an Open Geospatial Consortium (OGC) standard, has been developed for the storage and exchange of 3D city models. Because it is encoded in an XML-based format, data transfer efficiency is reduced, which leads to data storage issues. The use of CityGML for analysis purposes is limited by its inefficiency in terms of file size and bandwidth consumption. This paper introduces an XML-based compression technique and elaborates how data efficiency can be achieved with a schema-aware encoder. In particular, we present the CityGML Schema Aware Compressor (CitySAC), a compression approach for CityGML data transactions within the SDI framework. Our test results show that the encoding system produces smaller files than existing state-of-the-art compression methods, reducing file size to 7–10% of the original data.
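
As a rough illustration of the schema-aware idea (not CitySAC itself, whose encoding the abstract does not specify), the toy below replaces tag names known from the schema with one-character codes before applying a generic compressor, so the verbose XML vocabulary need not be stored in the output. The tag vocabulary is an assumption.

```python
# A toy illustration of schema-aware encoding, not CitySAC itself: tag
# names known from the schema are mapped to one-byte codes, then a
# generic byte-level compressor handles the rest.
import zlib

SCHEMA_TAGS = ["cityObjectMember", "Building", "boundedBy",
               "WallSurface", "lod2MultiSurface", "posList"]  # assumed vocabulary
TAG_CODE = {tag: chr(i + 1) for i, tag in enumerate(SCHEMA_TAGS)}

def encode(xml_text: str) -> bytes:
    """Substitute schema-known tag names with single-character codes, so
    the verbose vocabulary need not be stored, then compress the result."""
    for tag, code in TAG_CODE.items():
        xml_text = xml_text.replace(tag, code)
    return zlib.compress(xml_text.encode("utf-8"))
```

A production schema-aware encoder would go further, for example storing coordinate lists as packed binary numbers rather than text, which is typically where most of the size reduction comes from.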


2019 ◽  
Vol 11 (17) ◽  
pp. 1957 ◽  
Author(s):  
Jingya Yan ◽  
Siow Jaw ◽  
Kean Soon ◽  
Andreas Wieser ◽  
Gerhard Schrotter

With the pressure of the increasing density of urban areas, some public infrastructure, such as utility lines, rail lines, and roads, is moving underground to free up space above. In the big data era, three-dimensional (3D) data can help in understanding complex urban areas. Compared to spatial data and information about the above-ground environment, we lack precise and detailed information about underground infrastructure, such as its spatial layout, the ownership of underground objects, and the interdependence of infrastructure above and below ground. How can we map reliable 3D underground utility networks and use them in land administration? First, to explain the importance of this work and find a possible solution, this paper reviews the current issues of the existing underground utility database in Singapore. A framework for utility data governance is proposed to manage the work process from underground utility data capture to data usage. This is the backbone that supports the coordination of the different roles in utility data governance and usage. Then, an initial design of a 3D underground utility data model is introduced to describe the 3D geometric and spatial information about underground utilities and connect it to the cadastral parcel for land administration. In the case study, newly collected data from mobile ground-penetrating radar (GPR) are integrated with the existing utility data for 3D modelling. This is expected to pave the way for integrating newly collected 3D data, existing 2D data, and cadastral information for the land administration of underground utilities.
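
The abstract gives only the outline of the data model; the sketch below shows one plausible minimal structure linking a 3D utility segment to a cadastral parcel for land administration queries. All names and fields are hypothetical, not the paper's actual model.

```python
# A hypothetical minimal sketch of a 3D utility data model connected to
# cadastral parcels, in the spirit of the design described.
from dataclasses import dataclass

@dataclass
class CadastralParcel:
    parcel_id: str
    owner: str

@dataclass
class UtilitySegment3D:
    segment_id: str
    kind: str                      # e.g. "water", "power", "telecom"
    polyline_xyz: list[tuple[float, float, float]]  # 3D geometry, metres
    depth_uncertainty_m: float     # e.g. from GPR positioning accuracy
    parcel: CadastralParcel        # link to land administration

def segments_on_parcel(segments: list[UtilitySegment3D],
                       parcel_id: str) -> list[UtilitySegment3D]:
    """A land-administration query: which utilities cross a given parcel."""
    return [s for s in segments if s.parcel.parcel_id == parcel_id]
```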


Author(s):  
Elzbieta Malinowski

The increasing popularity of spatial data opens up the possibility of including it in decision-making processes in order to help discover interrelationships between facts that might otherwise be difficult to describe or explain. To achieve this goal, Spatial Data Infrastructures (SDIs) are seen as a platform for providing and sharing spatial and conventional data, especially among public institutions. However, SDI initiatives face many problems due to the lack of standards for data publication, the heterogeneity of the participants who build and use the system, and participants' different backgrounds, levels of preparation, and perceptions of the objective that SDIs should fulfill. Furthermore, to obtain greater benefit from spatial data, users who are not experts in geo-concepts (i.e., users unfamiliar with complex concepts related to spatial data manipulation) should be able to rely on a variety of tools that hide spatial data complexity and facilitate knowledge generation, with the goal of shifting from traditional spatial data sharing to an intelligent level. In this chapter, the authors address different issues related to knowledge generation from spatial data to support decision-making processes, with an emphasis on public institutions. They seek answers to several questions: what tools are available for non-expert users to handle spatial data, who will provide spatial and related conventional data to the stakeholders interested in analyzing them, and how can data quality be ensured.


2016 ◽  
Vol 2016 ◽  
pp. 1-7 ◽  
Author(s):  
Xinlong Liu ◽  
Weimin Huang ◽  
Eric W. Gill

A shadowing-analysis-based algorithm is modified to estimate significant wave height from shipborne X-band nautical radar images. Shadowed areas are first extracted from the image through edge detection. Smith's function is then fitted to the illumination ratios to derive the root-mean-square (RMS) surface slope. From the RMS surface slope and the mean wave period, the significant wave height is estimated. A data quality control process is implemented to exclude rain-contaminated and low-backscatter images. A smoothing scheme is applied to the grayscale intensity histogram of edge pixels to improve the accuracy of the shadow-threshold determination. Rather than a single full shadow image, a time sequence of shadow-image subareas surrounding the upwind direction is used to calculate the average RMS surface slope. It has been found that the wave height retrieved from the modified algorithm is underestimated under rain and storm conditions and overestimated in cases of low wind speed. The modified method produces promising results when radar-derived wave heights are compared with buoy data, with an RMS difference of 0.59 m.
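
The retrieval chain can be summarized in a short sketch. The histogram smoothing step is concrete enough to write down; the Smith-function fit and the slope-to-height relation are not given in closed form in the abstract, so they appear below as hedged placeholders with an explicit calibration parameter.

```python
# A sketch of the modified retrieval chain; smooth_histogram follows the
# described smoothing of the edge-pixel intensity histogram, while the
# threshold rule and the slope-to-height relation are hedged placeholders
# (the abstract does not state their closed forms).
import numpy as np

def smooth_histogram(hist: np.ndarray, window: int = 5) -> np.ndarray:
    """Moving-average smoothing of the grayscale intensity histogram of
    edge pixels, stabilizing the shadow-threshold determination."""
    kernel = np.ones(window) / window
    return np.convolve(hist, kernel, mode="same")

def shadow_threshold(edge_intensities: np.ndarray) -> float:
    hist, bin_edges = np.histogram(edge_intensities, bins=256, range=(0, 256))
    smoothed = smooth_histogram(hist)
    # One common heuristic: take the least-occupied bin between the two
    # modes as the threshold; the paper's exact rule may differ.
    idx = int(np.argmin(smoothed[32:224])) + 32
    return float(bin_edges[idx])

def significant_wave_height(rms_slope: float, mean_period_s: float,
                            calibration: float) -> float:
    """Hs from the average RMS slope and mean wave period; `calibration`
    stands in for the empirical coefficients the abstract implies but does
    not state (slope times a deep-water wavelength scale ~ g*T^2)."""
    g = 9.81  # gravitational acceleration, m/s^2
    return calibration * rms_slope * g * mean_period_s ** 2
```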


2011 ◽  
Vol 49 (1) ◽  
pp. 79 ◽  
Author(s):  
Jennifer A. Chandler ◽  
Katherine Levitt

This article discusses whether and when a private provider of spatial data may be liable for damages resulting from physical injury caused by reliance on erroneous spatial data. The existing case law supports the view that some courts will approach physical harm arising from errors in spatial datasets using principles applicable to defective products, while others regard such errors as negligent misrepresentation. This article analyzes the duty to warn and spatial data in two parts. First, it provides an overview of the general problem of spatial data quality and its growing importance in light of internet dissemination to the public. Second, it sketches the basic rules in the three main subdivisions of Canadian product liability law (manufacturing defects, design defects, and failure to warn of risks associated with products) and applies them to the context of broadly disseminated spatial data.


2014 ◽  
Vol 42 (2) ◽  
pp. 128-142 ◽  
Author(s):  
Annick Cros ◽  
Ruben Venegas-Li ◽  
Shwu Jiau Teoh ◽  
Nate Peterson ◽  
Wen Wen ◽  
...  

Water ◽  
2021 ◽  
Vol 13 (20) ◽  
pp. 2820
Author(s):  
Gimoon Jeong ◽  
Do-Guen Yoo ◽  
Tae-Woong Kim ◽  
Jin-Young Lee ◽  
Joon-Woo Noh ◽  
...  

In our intelligent society, water resources are managed using vast amounts of hydrological data collected through telemetric devices. Recently, advanced data quality control technologies for data refinement based on hydrological observation history, such as big data and artificial intelligence, have been studied. However, these remain impractical due to insufficient verification and implementation periods. In this study, a process to accurately identify missing and false-reading data was developed to efficiently validate hydrological data by combining various conventional validation methods. Here, false-reading data were reclassified into suspected and confirmed groups by combining the results of the individual validation methods. Furthermore, an integrated quality control process that links data validation and reconstruction was developed. In particular, an iterative quality control feedback process was proposed to achieve highly reliable data quality; it was applied to precipitation and water-level stations in the Daecheong Dam Basin, South Korea. The case study revealed that the proposed approach can improve the quality control procedure of a hydrological database and could be implemented in practice.
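
A minimal sketch of the combination step, assuming simple range and spike validators: samples flagged by every method are treated as confirmed false-readings, while samples flagged by only some methods are kept as suspected and fed to the iterative review/reconstruction loop. The thresholds and the choice of validators are illustrative, not the study's actual configuration.

```python
# A minimal sketch of combining individual validation methods and
# reclassifying false-readings into suspected vs. confirmed groups.
import numpy as np

def range_check(x: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Flag physically implausible values."""
    return (x < lo) | (x > hi)

def spike_check(x: np.ndarray, max_step: float) -> np.ndarray:
    """Flag abrupt jumps between consecutive samples."""
    return np.abs(np.diff(x, prepend=x[0])) > max_step

def classify(x: np.ndarray, checks) -> tuple[np.ndarray, np.ndarray]:
    """Combine validator votes: flagged by every method -> confirmed
    false-reading; flagged by only some -> suspected, kept for the
    iterative review/reconstruction loop."""
    votes = np.sum([c(x) for c in checks], axis=0)
    confirmed = votes == len(checks)
    suspected = (votes > 0) & ~confirmed
    return confirmed, suspected

# Example with hypothetical water-level bounds (metres) and step limit:
levels = np.array([2.1, 2.2, 9.9, 2.3, -1.0, 2.4])
confirmed, suspected = classify(
    levels, [lambda v: range_check(v, 0.0, 8.0),
             lambda v: spike_check(v, 3.0)])
```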


2018 ◽  
Vol 10 (3) ◽  
pp. 1655-1672 ◽  
Author(s):  
Marco A. Hernández-Henríquez ◽  
Aseem R. Sharma ◽  
Mark Taylor ◽  
Hadleigh D. Thompson ◽  
Stephen J. Déry

Abstract. This article presents the development of a sub-hourly database of hydrometeorological conditions collected in British Columbia's (BC's) Cariboo Mountains and surrounding area, extending from 2006 to the present. The Cariboo Alpine Mesonet (CAMnet) forms a network of 11 active hydrometeorological stations positioned at strategic locations across mid- to high elevations of the Cariboo Mountains. This mountain region spans 44 150 km², forming the northern extension of the Columbia Mountains. Deep fjord lakes along with old-growth western redcedar and hemlock forests reside in the lower valleys; montane forests of Engelmann spruce, lodgepole pine, and subalpine fir permeate the mid-elevations; and alpine tundra, glaciers, and several large ice fields cover the higher elevations. The automatic weather stations typically measure air and soil temperature, relative humidity, atmospheric pressure, wind speed and direction, rainfall, and snow depth at 15 min intervals. Additional measurements at some stations include shortwave and longwave radiation, near-surface air, skin, snow, or water temperature, and soil moisture, among others. Details are provided on the deployment sites, the instrumentation used and its precision, and the collection and quality control process. Instructions on how to access the database at Zenodo, an online public data repository, are also furnished (https://doi.org/10.5281/zenodo.1195043). Information is provided on some of the challenges and opportunities encountered in maintaining continuous and homogeneous time series of hydrometeorological variables at remote field sites. The paper also summarizes ongoing plans to expand CAMnet to better monitor atmospheric conditions in BC's mountainous terrain, efforts to push data online in (near-)real time, the availability of ancillary data, and lessons learned thus far in developing this mesoscale network of hydrometeorological stations in the data-sparse Cariboo Mountains.
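
Since the deposit is on Zenodo, the record can be retrieved programmatically. The sketch below uses Zenodo's public REST API with the record id taken from the DOI above; the JSON field names follow Zenodo's documented layout but should be verified against the live record.

```python
# A short sketch of fetching the CAMnet deposit through Zenodo's public
# REST API; record id from the DOI 10.5281/zenodo.1195043 cited above.
import requests

record = requests.get("https://zenodo.org/api/records/1195043",
                      timeout=30).json()
for f in record.get("files", []):
    print(f["key"], f["links"]["self"])   # file name and download link
```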


2020 ◽  
Vol 9 (3) ◽  
pp. 176 ◽  
Author(s):  
Alexander Kotsev ◽  
Marco Minghini ◽  
Robert Tomas ◽  
Vlado Cetl ◽  
Michael Lutz

The availability of timely, accessible, and well-documented data plays a central role in the process of digital transformation in our societies and businesses. Considering this, the European Commission has established an ambitious agenda that aims to leverage the favourable technological and political context and build a society that is empowered by data-driven innovation. Within this context, geospatial data remains critically important for many businesses and public services. The process of establishing Spatial Data Infrastructures (SDIs) in response to the legal provisions of the European Union's INSPIRE Directive has a long history. While INSPIRE focuses mainly on 'unlocking' data from the public sector, there is a need to address emerging technological trends and to consider the role of other actors such as the private sector and citizen science initiatives. Given those bounding conditions, the objective of this paper is twofold. Firstly, we position SDI-related developments in Europe within the broader context of the current political and technological scenery. In doing so, we pay particular attention to relevant technological developments and emerging trends that we see as enablers for the evolution of European SDIs. Secondly, we propose a high-level concept of a pan-European (geo)data space with a 10-year horizon in mind. We do this by considering today's technology while adopting an evolutionary approach, with developments that are incremental to contemporary SDIs.

