CitySAC: A Query-Able CityGML Compression System

Smart Cities ◽  
2019 ◽  
Vol 2 (1) ◽  
pp. 106-117
Author(s):  
Chengxi Siew ◽  
Pankaj Kumar

Spatial Data Infrastructures (SDIs) are frequently used to exchange 2D and 3D data in areas such as city planning, disaster management, urban navigation and many more. City Geography Markup Language (CityGML), an Open Geospatial Consortium (OGC) standard, has been developed for the storage and exchange of 3D city models. Because it is encoded in an XML-based format, its data transfer efficiency is reduced, which leads to data storage issues. The use of CityGML for analysis purposes is limited by its inefficiency in terms of file size and bandwidth consumption. This paper introduces an XML-based compression technique and elaborates how data efficiency can be achieved with a schema-aware encoder. We particularly present the CityGML Schema Aware Compressor (CitySAC), a compression approach for CityGML data transactions within the SDI framework. Our test results show that the encoding system produces smaller files than existing state-of-the-art compression methods, significantly reducing file size to 7–10% of the original data.
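The schema-aware idea can be illustrated with a minimal sketch (not CitySAC's actual codec; the element names and the choice of zlib as the back-end coder are assumptions): because the CityGML schema fixes the set of legal element names, an encoder can substitute a one-byte token for each known tag before applying general-purpose compression.

```python
import zlib

# Hypothetical subset of CityGML element names; the real schema defines many more.
SCHEMA_TAGS = ["bldg:Building", "bldg:WallSurface", "bldg:RoofSurface",
               "gml:LinearRing", "gml:posList"]
# Map each known tag to a single byte (1..5). A production encoder would
# also guarantee these bytes cannot collide with ordinary document text.
TOKENS = {tag: bytes([i + 1]) for i, tag in enumerate(SCHEMA_TAGS)}

def compress(xml_text: str) -> bytes:
    """Tokenize schema-known tags, then entropy-code the result."""
    data = xml_text.encode("utf-8")
    for tag, token in TOKENS.items():
        data = data.replace(tag.encode("utf-8"), token)
    return zlib.compress(data)

def decompress(blob: bytes) -> str:
    """Invert compress(): inflate, then expand tokens back into tag names."""
    data = zlib.decompress(blob)
    for tag, token in TOKENS.items():
        data = data.replace(token, tag.encode("utf-8"))
    return data.decode("utf-8")
```

The gain comes from the tag dictionary being shared knowledge between encoder and decoder (it lives in the schema), so it never needs to be transmitted with the document.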

Author(s):  
H. Visuri ◽  
J. Jokela ◽  
N. Mesterton ◽  
P. Latvala ◽  
T. Aarnio

Abstract. The amount and the quality of 3D spatial data are growing constantly, but the data is collected and stored in a distributed fashion by various data-collecting organizations. This may lead to problems regarding interoperability, usability and availability of the data. Traditionally, national spatial data infrastructures have focused on 2D data, but recently there has been great progress towards also introducing 3D spatial data in governmental services. This paper studies the process of creating a country-wide 3D data repository in Finland and visualizing it for the public using an open-source map application. The 3D spatial data is collected and stored in one national topographic database that provides information for the whole society. The data quality control process is executed with an automated data quality module as part of the import process to the database. The 3D spatial data is served from the database for visualization via a 3D service, and the visualization is piloted in the National Geoportal.
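The paper does not publish its quality rules; as a purely hypothetical illustration, an automated quality module can apply per-geometry checks like the following before import (the rule names, messages and report shape here are assumptions):

```python
def validate_ring(coords):
    """Check one polygon ring of a 3D geometry against two simple
    topology rules: at least 4 vertices, and first vertex == last."""
    errors = []
    if len(coords) < 4:
        errors.append("ring has fewer than 4 vertices")
    if coords and coords[0] != coords[-1]:
        errors.append("ring is not closed")
    return errors

def quality_report(rings):
    """Run the rule set over every ring of a feature; an empty report
    means the geometry passes and may be imported into the database."""
    return {i: errs for i, ring in enumerate(rings)
            if (errs := validate_ring(ring))}
```

Running such rules inside the import pipeline, rather than after the fact, is what keeps a distributed, multi-organization repository internally consistent.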


2019 ◽  
Vol 29 (3) ◽  
pp. 79-90
Author(s):  
S. A. Yamashkin ◽  
A. A. Yamashkin ◽  
S. A. Fedosin

The article addresses the design, development and introduction of project-oriented spatial data infrastructures (SDIs) that build the information space needed to solve pressing challenges in the economy, ecology and social services, in the preparation of pre-investment, urban planning, pre-project and project documentation, and in natural disaster forecasting. It also provides an overview of the historical development of spatial data infrastructures in Russia and in the world. Based on an analysis of the historical landscape within the challenging area, the authors have identified the following system components of SDIs: users and professionals, data, technologies, standards, regulatory frameworks, and institutional procedures. A platform solution architecture for building an SDI is proposed, summarized in the form of a structure-component scheme. It rests upon the hypothesis that, in order to optimize spatial data storage and application-related processes, a project-oriented SDI needs to include loosely coupled and closely coupled subsystems for spatial data storage (cloud or local storage), analysis and synthesis modules, as well as modules for the visualization and distribution of spatial data (such as geoportal systems).


Author(s):  
Ragmi Mustafa ◽  
Basri Ahmedi ◽  
Kujtim Mustafa

Nowadays, a great many images are produced by different types of devices, and when we store them or transfer them to other devices or over the internet, we need to compress them because image files are usually large. Compression reduces the time required to transfer files and the capacity they occupy, which for large collections can reach tens or hundreds of gigabytes. It is well known that the speed of information transmission depends mainly on the quantity, or capacity, of the information package. Image compression is therefore a very important task for data transfer and data storage, especially today given the proliferation of image acquisition devices: without compression, image data may occupy immense amounts of memory or render transmission difficult. Artificial Neural Networks (ANNs) have demonstrated good capabilities for lossy image compression. The ANN algorithm we investigate is BEP-SOFM, which uses a Backward Error Propagation algorithm to quickly obtain initial weights, which are then used to speed up the training time required by the Self-Organizing Feature Map algorithm. To obtain these initial weights with the BEP algorithm, we analyze a hierarchical approach that prepares the image for compression using the quadtree data structure, segmenting the image into blocks of different sizes. Small blocks represent image areas rich in detail, while larger blocks represent areas with few observed details. Tests demonstrate that quadtree segmentation quickly leads to the initial weights using the BEP algorithm.
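The quadtree segmentation step can be sketched as follows (the variance threshold, minimum block size and split criterion are illustrative assumptions, not the paper's parameters): a block is recursively subdivided while its pixel variation exceeds a threshold, so detailed regions end up covered by small blocks and flat regions by large ones.

```python
def block_std(img, x, y, size):
    """Standard deviation of the square block with top-left corner (x, y)."""
    vals = [img[y + j][x + i] for j in range(size) for i in range(size)]
    mean = sum(vals) / len(vals)
    return (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5

def quadtree_blocks(img, x=0, y=0, size=None, threshold=1.0, min_size=2):
    """Segment a square image (list of equal-length rows) into blocks.
    A block is split into four quadrants while its standard deviation
    exceeds `threshold` and it is larger than `min_size`."""
    if size is None:
        size = len(img)
    if size <= min_size or block_std(img, x, y, size) <= threshold:
        return [(x, y, size)]          # leaf block: (x, y, side length)
    half = size // 2
    blocks = []
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        blocks += quadtree_blocks(img, x + dx, y + dy, half, threshold, min_size)
    return blocks
```

Each leaf block can then be fed to the BEP stage as one training unit, which is what lets detailed areas contribute more training patterns than flat ones.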


Author(s):  
Y. S. Huang ◽  
G. Q. Zhou ◽  
T. Yue ◽  
H. B. Yan ◽  
W. X. Zhang ◽  
...  

Abstract. Although contemporary geospatial science has made great progress, the fusion of vector and raster spatial data is still a problem in the geoinformation science environment. To solve this problem, this paper proposes a method that merges vector and raster data. First, the row and column numbers of the raster data and the X, Y values of the vector data are represented by Morton codes in a C++ environment. Second, we establish a raster data table and a vector data table in an Oracle database to store the vector and raster data. Third, we use the minimum selection bounding box method to extract the top data of the building model. Finally, we divide the vector and raster data processing into four steps to obtain the fusion data table, and we retrieve the fused data from the database for 3D visualization. This method compresses the original data and simultaneously divides the data into three levels, which not only solves the problems of duplicated and unorganized storage, but also allows vector data and raster data to be stored in the same database at the same time. Thus, the fused original orthophoto data contains the grey values of building roofs together with the elevation data, which improves the availability of vector and raster data in 3D visualization applications.
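The Morton (Z-order) encoding used above interleaves the bits of two coordinates into a single key, so a raster cell's row/column pair or a vector point's quantized X/Y pair can be stored and indexed as one integer. A minimal sketch (the paper works in C++; Python is shown here for brevity, and the 16-bit width is an assumption):

```python
def morton_encode(x: int, y: int, bits: int = 16) -> int:
    """Interleave the low `bits` bits of x and y into one Z-order key:
    bit i of x lands at position 2*i, bit i of y at position 2*i + 1."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def morton_decode(code: int, bits: int = 16) -> tuple:
    """Invert morton_encode: de-interleave the key back into (x, y)."""
    x = y = 0
    for i in range(bits):
        x |= ((code >> (2 * i)) & 1) << i
        y |= ((code >> (2 * i + 1)) & 1) << i
    return x, y
```

Because nearby cells get numerically close keys, sorting both tables by Morton code keeps spatially adjacent vector and raster records adjacent on disk, which is what makes storing them side by side in one database practical.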


Author(s):  
Hikka Sartika ◽  
Taronisokhi Zebua

The storage space required by an application is one of the problems on smartphones, and it can result in wasted storage space because not all smartphones have a very large storage capacity. One application with a large file size is the RPUL application, which is widely accessed by students and the general public; its large size is what often prevents it from running effectively on smartphones. One solution to this problem is to compress the application's files so that the storage space needed on the smartphone is much smaller. This study describes the application of the Elias gamma code algorithm, one of the compression technique algorithms, to compress the RPUL application's database file so that the RPUL application can run effectively on a smartphone after it is installed. Based on trials conducted on 64 bits of text as a sample, compression based on the Elias gamma code algorithm is able to compress text from a database file with a resulting code length of 2 bits, a compression ratio of 50% and a redundancy of 50%. Keywords: Compression, RPUL, Smartphone, Elias Gamma Code
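The Elias gamma code represents a positive integer n as N zero bits followed by the (N+1)-bit binary form of n, where N = ⌊log₂ n⌋, so small (frequent) values get very short codewords. A minimal sketch, illustrative rather than the paper's implementation:

```python
def elias_gamma_encode(n: int) -> str:
    """Encode a positive integer: N zeros, then its N+1 binary digits."""
    if n < 1:
        raise ValueError("Elias gamma encodes positive integers only")
    binary = format(n, "b")                  # N + 1 significant bits
    return "0" * (len(binary) - 1) + binary  # e.g. 9 -> "0001001"

def elias_gamma_decode(bits: str) -> int:
    """Decode one codeword: count N leading zeros, then read N + 1 bits."""
    n_zeros = 0
    while bits[n_zeros] == "0":
        n_zeros += 1
    return int(bits[n_zeros:2 * n_zeros + 1], 2)
```

The code is prefix-free, so codewords can be concatenated into a bitstream and decoded without separators, which is what allows a database file to be packed tightly.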


2021 ◽  
Vol 14 ◽  
pp. 117862212110092
Author(s):  
Michele M Tobias ◽  
Alex I Mandel

Many studies in air, soil, and water research involve observations and sampling of a specific location. Knowing where studies have previously been undertaken can be a valuable addition to future research, including understanding the geographical context of previously published literature and selecting future study sites. Here, we introduce Literature Mapper, a Python QGIS plugin that provides a method for creating a spatial bibliography manager as well as a specification for storing spatial data in a bibliography manager. Literature Mapper uses QGIS's spatial capabilities to let users digitize locations on basemaps or other geographic data of their choice and add them to a Zotero library, a free and open-source bibliography manager. Literature Mapper enhances the citations in a user's online Zotero database with geo-locations by storing spatial coordinates as part of traditional citation entries, sending data to and receiving data from the user's online database via Zotero's web API. Using Zotero as the backend data storage, Literature Mapper benefits from all of its features, including shared citation Collections, public sharing, and an open web API usable by additional applications such as web mapping libraries. To evaluate Literature Mapper's ability to provide insights into the spatial distribution of published literature, we present a case study using the tool to map the study sites described in academic publications on the biogeomorphology of California's coastal strand vegetation, a line of research in which air movement, soil, and water are all driving factors. The results of this exercise are presented in static and web map form. The source code for Literature Mapper is available in the corresponding author's GitHub repository: https://github.com/MicheleTobias/LiteratureMapper


Biomimetics ◽  
2021 ◽  
Vol 6 (2) ◽  
pp. 32
Author(s):  
Tomasz Blachowicz ◽  
Jacek Grzybowski ◽  
Pawel Steblinski ◽  
Andrea Ehrmann

Computers nowadays have different components for data storage and data processing, making data transfer between these units a bottleneck for computing speed. Therefore, so-called cognitive (or neuromorphic) computing approaches try combining both these tasks, as is done in the human brain, to make computing faster and less energy-consuming. One possible method to prepare new hardware solutions for neuromorphic computing is given by nanofiber networks as they can be prepared by diverse methods, from lithography to electrospinning. Here, we show results of micromagnetic simulations of three coupled semicircle fibers in which domain walls are excited by rotating magnetic fields (inputs), leading to different output signals that can be used for stochastic data processing, mimicking biological synaptic activity and thus being suitable as artificial synapses in artificial neural networks.


2016 ◽  
Vol 7 (3) ◽  
pp. 1-37
Author(s):  
Willington Siabato ◽  
Javier Moya-Honduvilla ◽  
Miguel Ángel Bernabé-Poveda

The way aeronautical information is managed and disseminated must be modernized. Current aeronautical information services (AIS) methods for storing, publishing, disseminating, querying, and updating the volume of data required for the effective management of air traffic control have become obsolete. This does not contribute to preventing airspace congestion, which turns into a limiting factor for economic growth and generates negative effects on the environment. Owing to this, some work plans for improving AIS and air traffic flow focus on data and services interoperability to allow an efficient and coordinated use and exchange of aeronautical information. Geographic information technologies (GIT) and spatial data infrastructures (SDI) are comprehensive technologies upon which any service that integrates geospatial information can rely. The authors are working on the assumption that the foundations and underlying technologies of GIT and SDI can be applied to support aeronautical data and services, considering that aeronautical information contains a large number of geospatial components. This article presents the design, development, and implementation of a Web-based system architecture to evolve and enhance the use and management of aeronautical information in any context, e.g., in aeronautical charts on board, in control towers, and in aeronautical information services. After conducting a study into the use of aeronautical information, it was found that users demand specific requirements regarding reliability, flexibility, customization, integration, standardization, and cost reduction. These issues are not being addressed with existing systems and methods. A system compliant with geographic standards (OGC, ISO) and aeronautical regulations (ICAO, EUROCONTROL) and supported by a scalable and distributed Web architecture is proposed. 
This proposal would solve the shortcomings identified in the study and provide aeronautical information management (AIM) with new methods and strategies. In order to seek aeronautical data and services interoperability, a comprehensive aeronautical metadata profile has been defined. This proposal facilitates the use, retrieval, updating, querying, and editing of aeronautical information, as well as its exchange between different private and public institutions. The tests and validations have shown that the proposal is achievable.

