Prediction of reservoir production based on a proxy model considering spatial data and vector data

Author(s):  
Kai Zhang ◽  
Xiaoya Wang ◽  
Xiaopeng Ma ◽  
Jian Wang ◽  
Yongfei Yang ◽  
...  
2012 ◽  
Vol 263-266 ◽  
pp. 3274-3278
Author(s):  
Hui Ming Yu ◽  
Jian Zhong Guo ◽  
Yi Cheng ◽  
Qian Lou

Spatial data fusion is an important method of spatial data acquisition. The aim of multi-source spatial data integration and fusion is to improve information precision and the efficiency of information utilization. Vector and raster are the two main spatial data structures. This article discusses vector data fusion in terms of data model fusion, semantic information fusion and coordinate unification; reviews the main methods of raster data fusion; discusses the key technologies of vector-raster data fusion; and outlines future developments of spatial data fusion techniques.
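As a concrete illustration of the coordinate-unification step mentioned above, the following minimal Python sketch reprojects point coordinates between reference systems with pyproj; the EPSG codes and coordinates are illustrative assumptions, not taken from the article.

```python
# A minimal sketch of coordinate unification: reproject vector point
# coordinates into one common CRS. EPSG codes and points are illustrative.
from pyproj import Transformer

# Transformer from WGS84 geographic coordinates to UTM zone 50N (assumed target CRS).
transformer = Transformer.from_crs("EPSG:4326", "EPSG:32650", always_xy=True)

def unify_coordinates(points):
    """Reproject a list of (lon, lat) tuples into the common target CRS."""
    return [transformer.transform(lon, lat) for lon, lat in points]

print(unify_coordinates([(120.0, 30.0), (121.5, 31.2)]))
```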


2014 ◽  
Vol 687-691 ◽  
pp. 1153-1156
Author(s):  
Shi Qing Dou ◽  
Xiao Yu Zhang

Data simplification is an important component of spatial data generalization and an effective way to improve rendering speed. This paper first introduces the classification of simplification algorithms for spatial line vector data in a two-dimensional environment, and then summarizes and analyzes the advantages and disadvantages of the algorithms that can be used to simplify spatial line vector data in a three-dimensional environment. The three-dimensional Douglas-Peucker algorithm, which preserves certain overall characteristics of a line, has broad application prospects, and simplification algorithms in the 3D environment represent the direction of future development. At present, however, the existing 3D data simplification algorithms are not mature enough; each has certain advantages and disadvantages, which limits their use to some extent. In both 2D and 3D, these simplification algorithms are mostly applied to multi-resolution representation. The paper also lists the work and open problems involved in moving from 2D algorithms toward 3D algorithms.
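The three-dimensional Douglas-Peucker algorithm highlighted above can be sketched as follows; this is a minimal illustrative implementation, with a made-up tolerance and test line rather than the configurations studied in the paper.

```python
# A minimal sketch of 3D Douglas-Peucker: keep the farthest point from the
# chord if it exceeds the tolerance, otherwise drop all interior points.
import numpy as np

def point_segment_dist(p, a, b):
    """Distance from 3D point p to the segment from a to b."""
    ab, ap = b - a, p - a
    denom = np.dot(ab, ab)
    t = np.dot(ap, ab) / denom if denom > 0 else 0.0
    t = max(0.0, min(1.0, t))
    return np.linalg.norm(p - (a + t * ab))

def douglas_peucker_3d(points, tol):
    """Recursively simplify a 3D polyline with tolerance tol."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return pts.tolist()
    dists = [point_segment_dist(p, pts[0], pts[-1]) for p in pts[1:-1]]
    i = int(np.argmax(dists)) + 1          # index of farthest interior point
    if dists[i - 1] > tol:
        left = douglas_peucker_3d(pts[: i + 1], tol)
        right = douglas_peucker_3d(pts[i:], tol)
        return left[:-1] + right           # drop the duplicated split point
    return [pts[0].tolist(), pts[-1].tolist()]

line = [(0, 0, 0), (1, 0.1, 0.4), (2, -0.1, 0.9), (3, 0, 1)]
print(douglas_peucker_3d(line, tol=0.3))   # interior points within tol are removed
```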


2021 ◽  
Vol 20 ◽  
pp. 89-96
Author(s):  
Piotr Cichociński

For several years GIS software users have been able to use, for any purpose, a dataset that is to some extent an alternative both to products offered by commercial providers and to official databases: OpenStreetMap (OSM for short), a worldwide spatial dataset created and edited by interested individuals and available for use by anyone without limitations. It is built from data recorded with consumer-grade GPS receivers, obtained through vectorization of aerial photographs, and taken from other usable sources, including even sketches made in the field. The collected information is stored in a central database whose content is not only presented on the website as a digital map but also offered for download as vector data. Such data can be used for, among other things, various analyses based on road networks, of which the most frequently used is determining the optimal route connecting selected locations. The results of such analyses can be considered reliable only if the data used are of adequate quality. As the OSM database is built by enthusiasts, no plans for its systematic development are formulated and there are no built-in quality control mechanisms. Therefore, the paper proposes methods and tools to verify the usefulness of the data collected so far, as well as to correct detected errors. It focuses on the following categories of geographic data quality: positional accuracy, topological consistency and temporal validity. In addition, a problem with determining the length of individual road network segments was noticed, related to data acquisition methods and ways of recording line shapes. Therefore, to carry out so-called route calibration, it was suggested to use the kilometer and hectometer posts used in transportation networks, whose locations are successively added to the OSM database.
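The segment-length problem mentioned above can be illustrated with a minimal sketch: the geodesic length computed for an OSM way depends on how densely its shape is recorded. The coordinates below are invented for illustration.

```python
# A minimal sketch of measuring road-segment length from OSM-style
# (lon, lat) vertices; a sparsely digitized way underestimates true length.
from pyproj import Geod

geod = Geod(ellps="WGS84")

def way_length_m(coords):
    """Geodesic length in metres of a polyline given as (lon, lat) pairs."""
    lons, lats = zip(*coords)
    return geod.line_length(lons, lats)

dense = [(19.90, 50.06), (19.905, 50.062), (19.91, 50.064)]
sparse = [dense[0], dense[-1]]             # same endpoints, shape detail dropped
print(way_length_m(dense), way_length_m(sparse))
```

Comparing such computed lengths against field reference marks, such as the kilometer and hectometer posts the paper suggests, is one way to quantify this effect.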


Author(s):  
P. Tymkow ◽  
M. Karpina ◽  
A. Borkowski

The objective of this study is the implementation of a system architecture for collecting and analysing data, as well as visualizing results, for hydrodynamic modelling of flood flows in river valleys using remote sensing methods, three-dimensional geometry of spatial objects and GPU multithread processing. The proposed solution includes: a spatial data acquisition segment, data processing and transformation, mathematical modelling of flow phenomena, and results visualization. The data acquisition segment was based on aerial laser scanning supplemented by images in the visible range. Vector data creation was based on automatic and semi-automatic algorithms for DTM and 3D spatial feature modelling. Algorithms for modelling building and vegetation geometry were proposed or adopted from the literature. The framework was implemented as modular software using open specifications and partially reusing open source projects. The database structure for gathering and sharing vector data, including flood modelling results, was created using PostgreSQL. For the internal structure of the feature classes of spatial objects in the database, the CityGML standard was used. For the hydrodynamic modelling, a solution of the two-dimensional Navier-Stokes equations was implemented. Visualization of geospatial data and flow model results was moved to the client-side application, which made it independent of the server hardware platform. A real-world case in Poland, a part of the Widawa River valley near the city of Wroclaw, was selected to demonstrate the applicability of the proposed system.
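As an illustration of the PostgreSQL-based storage layer described above, a minimal sketch follows; the connection string, table and column names are assumptions, not the authors' actual CityGML-based schema.

```python
# A minimal sketch of storing flood-model vector results in PostgreSQL/PostGIS.
# Database name, table and columns are hypothetical placeholders.
import psycopg2

conn = psycopg2.connect("dbname=flood user=gis")  # connection string is assumed
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS flood_depth (
            id SERIAL PRIMARY KEY,
            cell GEOMETRY(POLYGON, 4326),   -- model grid cell footprint
            depth_m DOUBLE PRECISION        -- computed water depth
        )""")
    cur.execute(
        "INSERT INTO flood_depth (cell, depth_m) "
        "VALUES (ST_GeomFromText(%s, 4326), %s)",
        ("POLYGON((17.1 51.1, 17.2 51.1, 17.2 51.2, 17.1 51.2, 17.1 51.1))", 0.85),
    )
```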


Author(s):  
L. Cohen ◽  
E. Keinan ◽  
M. Yaniv ◽  
Y. Tal ◽  
A. Felus ◽  
...  

Technological improvements in mass data gathering and analysis made in recent years have influenced the traditional methods of updating and forming national topographic databases and have brought a significant increase in the number of use cases and in the demand for detailed geo-information. Processes intended to replace traditional data collection methods have been developed in many National Mapping and Cadastre Agencies, and there has been significant progress in semi-automated methodologies aiming to facilitate the updating of a national topographic geodatabase. Their implementation is expected to allow a considerable reduction of updating costs and operation times. Our previous activity focused on automatic building extraction (Keinan, Zilberstein et al., 2015). Before semi-automatic updating methods, it was common for interpreter identification to be as detailed as possible, so that the database would eventually be as reliable as possible. When using semi-automatic updating methodologies, the ability to insert knowledge based on human insight is limited. Our motivation was therefore to reduce this gap by allowing end-users to add their own data inputs to the basic geometric database. In this article, we present a simple land cover database updating method that combines insights extracted from the analyzed image with given spatial data in the form of vector layers. The main stages of the proposed practice are multispectral image segmentation and supervised classification, together with geometric fusion of the given vector data, while keeping the amount of manual shape editing low. All coding was done using open source software components.
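The supervised-classification stage described above might look roughly like the following scikit-learn sketch on per-pixel multispectral samples; the band count, class labels and data are synthetic stand-ins, not the article's method or imagery.

```python
# A minimal sketch of per-pixel supervised classification of a multispectral
# tile. All data here is synthetic; a real workflow trains on labelled pixels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.random((200, 4))            # 4 spectral bands per pixel (synthetic)
y_train = rng.integers(0, 3, 200)         # 3 land-cover classes (synthetic)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

image = rng.random((64, 64, 4))           # a small multispectral tile
labels = clf.predict(image.reshape(-1, 4)).reshape(64, 64)
print(labels.shape)                       # per-pixel land-cover class map
```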


2020 ◽  
Vol 9 (12) ◽  
pp. 721
Author(s):  
Dongge Liu ◽  
Tao Wang ◽  
Xiaojuan Li ◽  
Yeqing Ni ◽  
Yanping Li ◽  
...  

Vector data compression can significantly improve the efficiency of geospatial data management, visualization and data transmission over the internet. Existing compression methods are either based on information theory, mainly for lossless compression, or on map generalization methods for lossy compression. Coordinate values of vector spatial data are mostly represented using floating-point types, in which data redundancy is small, so the compression ratio of lossy algorithms is generally better than that of lossless algorithms. The purpose of this paper is to implement a new algorithm for efficient compression of vector data. The algorithm, named space division based compression (SDC), employs the basic idea of linear Morton and Geohash encoding to convert floating-point values into binary strings with a flexible accuracy level. Morton encoding performs a multiresolution regular spatial division of geographic space: each level of the regular grid splits space horizontally and vertically, and the row and column numbers in binary form are bit-interleaved to generate one integer representing the location of each grid cell. The integer values of adjacent grid cells are close to each other in one dimension. The algorithm can set the number of divisions according to accuracy requirements; higher accuracy is achieved with more levels of division. In this way, multiresolution vector data compression can be achieved. The compression efficiency is further improved by grid filtering and binary offsets for line and point geometries. The compression takes the visually lossless distance on screen display as the accuracy requirement. Experiments and comparisons with available algorithms show that this algorithm produces higher data rate savings and is more adaptable to different application scenarios.
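The bit-interleaving at the heart of the encoding described above can be sketched as follows; the function names, grid bounds and number of division levels are illustrative assumptions, not the SDC implementation itself.

```python
# A minimal sketch of Morton (Z-order) encoding: quantize a coordinate onto a
# 2**levels grid, then bit-interleave column and row numbers into one integer.
def morton_encode(col, row, levels):
    """Interleave the bits of col and row over `levels` division levels."""
    code = 0
    for i in range(levels):
        code |= ((col >> i) & 1) << (2 * i)       # column bits on even positions
        code |= ((row >> i) & 1) << (2 * i + 1)   # row bits on odd positions
    return code

def quantize(x, lo, hi, levels):
    """Map a floating-point coordinate onto a 2**levels cell grid."""
    cells = 1 << levels
    return min(cells - 1, int((x - lo) / (hi - lo) * cells))

# 16 division levels give a 65536 x 65536 grid; bounds and point are illustrative.
lon, lat = 116.39, 39.91
code = morton_encode(quantize(lon, -180, 180, 16), quantize(lat, -90, 90, 16), 16)
print(bin(code))
```

Adding more levels tightens the grid and hence the accuracy, which is how a multiresolution, accuracy-driven representation falls out of the same scheme.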


2020 ◽  
Vol 9 (2) ◽  
pp. 65
Author(s):  
Hanme Jang ◽  
Kiyun Yu ◽  
JongHyeon Yang

Although interest in indoor space modeling is increasing, the quantity of indoor spatial data available is currently very scarce compared to the demand. Many studies have attempted to acquire indoor spatial information from floorplan images because they are relatively cheap and easy to access. However, existing studies do not take international standards and usability into consideration; they consider only 2D geometry. This study aims to generate basic data that can be converted into indoor spatial information using the IndoorGML (Indoor Geography Markup Language) thick wall model or CityGML (City Geography Markup Language) level of detail 2, by creating vector-format data while preserving wall thickness. To achieve this, recent convolutional neural networks are applied to floorplan images to detect wall and door pixels. Centerline and corner detection algorithms are then applied to convert the wall and door images into vector data. In this manner, we obtained high-quality raster segmentation results and reliable vector data with a node-edge structure and thickness attributes, which enabled the structures of vertical, horizontal and diagonal wall segments to be determined with precision. Some of the vector results were converted into CityGML and IndoorGML form and visualized, demonstrating the validity of our work.
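As an illustration of the centerline step, the following minimal sketch skeletonizes a synthetic binary wall mask with scikit-image; this stands in for, and is not, the paper's actual centerline algorithm.

```python
# A minimal sketch of extracting wall centerlines from a binary segmentation
# mask (as a CNN would produce). The mask here is synthetic.
import numpy as np
from skimage.morphology import skeletonize

mask = np.zeros((40, 40), dtype=bool)
mask[10:14, 5:35] = True          # a horizontal wall, 4 pixels thick
mask[10:30, 5:9] = True           # a vertical wall joining it

centerline = skeletonize(mask)    # 1-pixel-wide skeleton of the walls
print(centerline.sum(), "centerline pixels")
# Wall thickness would be recovered separately (e.g. distance to the mask edge)
# and stored as an attribute on the resulting vector segments.
```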


2020 ◽  
Vol 1 (2) ◽  
pp. 82-87
Author(s):  
Aleksey A. Kolesnikov ◽  
Elena V. Komissarova ◽  
Ivan V. Zhdanov

Currently, data volumes are growing exponentially, and geospatial data is one of the main elements of the Big Data concept. There is a very large number of tools for analyzing Big Data, but not all of them take the specific features of geospatial data into account or are able to process it. The article discusses three popular open analytical tools for working with very large volumes of geospatial data: Hadoop Spatial, GeoSpark and GeoFlink. Their architectures and their advantages and disadvantages, depending on execution time and the amount of data used, are considered. Processing was also evaluated for both streaming and batch data. The experiments were carried out on raster and vector datasets: satellite imagery in the visible range, NDVI and NDWI indices, climate indicators (snow cover, precipitation intensity, surface temperature) and OpenStreetMap data for the Novosibirsk and Irkutsk Regions.
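A typical query against one of these tools might look like the following sketch; GeoSpark is now developed as Apache Sedona, and the module and function names below follow its 1.x Python API (an assumption that may not hold across versions), with illustrative data.

```python
# A minimal sketch of a GeoSpark/Sedona spatial SQL query. Assumes the Sedona
# jars are on the Spark classpath; coordinates and bounding box are illustrative.
from pyspark.sql import SparkSession
from sedona.register import SedonaRegistrator

spark = SparkSession.builder.appName("geo-bigdata").getOrCreate()
SedonaRegistrator.registerAll(spark)  # registers the ST_* SQL functions

df = spark.createDataFrame([(1, 82.95, 55.01), (2, 104.28, 52.29)],
                           ["id", "lon", "lat"])
df.createOrReplaceTempView("pts")

# Count points inside an illustrative bounding box around Novosibirsk.
spark.sql("""
    SELECT COUNT(*) FROM pts
    WHERE ST_Contains(ST_PolygonFromEnvelope(82.0, 54.5, 84.0, 55.5),
                      ST_Point(lon, lat))
""").show()
```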


2021 ◽  
Vol 310 ◽  
pp. 06001
Author(s):  
Alexey A. Kolesnikov ◽  
Pavel M. Kikin

An increasing number of database management systems are expanding their functionality to work with various types of spatial data. This is true for both relational and NoSQL data models. The article describes the main features of those data models for which functions for storing and processing spatial data are implemented. A comparative analysis of the performance of typical spatial queries is carried out for database management systems based on various data models, including multi-model ones. The dataset on which the comparison is performed consists of three blocks of OpenStreetMap vector data for the territory of the Novosibirsk region. Based on the results of the study, recommendations are made on the use of particular data models, depending on the available data and the tasks to be solved.
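A minimal sketch of the kind of spatial-query timing such a comparison relies on is shown below for PostgreSQL/PostGIS; the connection string, table name and bounding box are illustrative assumptions.

```python
# A minimal sketch of timing one typical spatial query against PostGIS.
# A full comparison would run the same query shape on each candidate DBMS.
import time
import psycopg2

conn = psycopg2.connect("dbname=osm user=gis")   # connection string is assumed
query = """
    SELECT COUNT(*) FROM roads
    WHERE ST_Intersects(geom, ST_MakeEnvelope(82.0, 54.5, 84.0, 55.5, 4326))
"""
with conn, conn.cursor() as cur:
    start = time.perf_counter()
    cur.execute(query)
    count, = cur.fetchone()
    print(f"{count} features in {time.perf_counter() - start:.3f}s")
```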

