Evaluation of Selected Sub-Elements of Spatial Data Quality on 3D Flood Event Modeling: Case Study of Prešov City, Slovakia

2020, Vol 10 (3), pp. 820
Author(s): Marcela Bindzárová Gergeľová, Žofia Kuzevičová, Slavomír Labant, Juraj Gašinec, Štefan Kuzevič, …

Weather-related disasters represent a major threat to the sustainable development of society. This study focuses on assessing the quality of spatial information for the needs of hydrodynamic modeling. Based on selected procedures and methods for the collection and processing of spatial information, the aim of this study was to assess their qualitative suitability for 3D flood event modeling in accordance with the Infrastructure for Spatial Information in the European Community (INSPIRE) Directive. The inputs to the evaluation process were geodetic measurements and the digital relief model 3.5 (DMR 3.5) available for the territory of the Slovak Republic. The result of this study is a qualitative analysis on three levels: (i) main channel and surrounding topography data from geodetic measurements; (ii) the digital relief model; and (iii) hydrodynamic/hydraulic modeling. The qualitative aspect of the input data shows the sensitivity of a given model to changes in input data quality. The average spatial error in determining a point's position, computed over all measured points along the watercourse and its slope foot and slope edge, was 0.017 m. Although the declared accuracy of DMR 3.5 is ±2.50 m, some sections of the selected area showed elevation differences of up to 4.79 m. For this reason, a combination of DMR 3.5 and geodetic measurements was needed to refine the input model for the hydrodynamic modeling process. The quality of the hydrological data for the monitored N-annual flow levels was of fourth-class reliability for the selected area.
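The reported 0.017 m average spatial error corresponds to a mean 3D positional error over all measured points. As an illustration only (the function name and data layout below are ours, not the study's), such a figure can be computed from measured and reference coordinates like this:

```python
import math

def mean_spatial_error(measured, reference):
    """Mean 3D positional error (metres) between measured points and
    their reference positions, given as (x, y, z) tuples."""
    errors = [
        math.dist(m, r)  # Euclidean distance between the two positions
        for m, r in zip(measured, reference)
    ]
    return sum(errors) / len(errors)
```

For example, two points with 3D offsets of 0.02 m and 0.00 m average to a 0.01 m spatial error.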

2019, Vol 10 (2), pp. 117-125
Author(s): Dana Kubíčková, Vladimír Nulíček

The aim of the research project conducted at the University of Finance and Administration is to construct a new bankruptcy model. The intention is to use data from firms that had to cease their activities due to bankruptcy. The most common method for bankruptcy model construction is multivariate discriminant analysis (MDA). It allows deriving the indicators most sensitive to future company failure as parts of the bankruptcy model. One of the assumptions for using the MDA method and ensuring reliable results is the normal distribution and independence of the input data. The results of verifying this assumption, the third stage of the project, are presented in this article. We found that the assumption is met only for a few selected indicators. Better results were achieved for indicators in the set of prosperous companies and one year prior to failure. The indicators selected for the bankruptcy model construction thus cannot be considered suitable for the MDA method.
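The normality assumption verified in the article can be screened with a simple moment-based statistic. A minimal stdlib sketch, assuming an indicator's values are available as a plain list (the Jarque–Bera statistic is our illustrative choice; the abstract does not name the test actually used):

```python
import statistics

def jarque_bera(values):
    """Jarque-Bera statistic: near 0 for normally distributed data; large
    values (vs. a chi-square with 2 df) indicate non-normality."""
    n = len(values)
    mean = statistics.fmean(values)
    m2 = sum((x - mean) ** 2 for x in values) / n  # variance (population)
    m3 = sum((x - mean) ** 3 for x in values) / n  # third central moment
    m4 = sum((x - mean) ** 4 for x in values) / n  # fourth central moment
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)
```

A large statistic for an indicator would, as the article concludes for most indicators, argue against feeding it into MDA directly.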


Author(s): G. Vosselman, S. J. Oude Elberink, M. Y. Yang

<p><strong>Abstract.</strong> The ISPRS Geospatial Week 2019 is a combination of 13 workshops organised by 30 ISPRS Working Groups active in areas of interest of ISPRS. The Geospatial Week 2019 is held from 10–14 June 2019 and is convened by the University of Twente acting as local organiser. The Geospatial Week 2019 is the fourth edition, after Antalya, Turkey, in 2013, La Grande Motte, France, in 2015, and Wuhan, China, in 2017.</p><p>The following 13 workshops provide excellent opportunities to discuss the latest developments in the fields of sensors, photogrammetry, remote sensing, and spatial information sciences:</p> <ul> <li>C3M&amp;GBD – Collaborative Crowdsourced Cloud Mapping and Geospatial Big Data</li> <li>CHGCS – Cryosphere and Hydrosphere for Global Change Studies</li> <li>EuroCow-M3DMaN – Joint European Calibration and Orientation Workshop and Workshop on Multi-sensor Systems for 3D Mapping and Navigation</li> <li>HyperMLPA – Hyperspectral Sensing meets Machine Learning and Pattern Analysis</li> <li>Indoor3D</li> <li>ISSDQ – International Symposium on Spatial Data Quality</li> <li>IWIDF – International Workshop on Image and Data Fusion</li> <li>Laser Scanning</li> <li>PRSM – Planetary Remote Sensing and Mapping</li> <li>SarCon – Advances in SAR: Constellations, Signal Processing, and Applications</li> <li>Semantics3D – Semantic Scene Analysis and 3D Reconstruction from Images and Image Sequences</li> <li>SmartGeoApps – Advanced Geospatial Applications for Smart Cities and Regions</li> <li>UAV-g – Unmanned Aerial Vehicles in Geomatics</li> </ul> <p>Many of the workshops are part of well-established workshop series convened in the past. They cover topics such as UAV photogrammetry, laser scanning, spatial data quality, scene understanding, hyperspectral imaging, and crowdsourcing and collaborative mapping, with applications ranging from indoor mapping and smart cities to global cryosphere and hydrosphere studies and planetary mapping.</p><p>In total, 143 full papers and 357 extended abstracts were submitted by authors from 63 countries, and 1250 reviews were delivered by 295 reviewers. A total of 81 full papers have been accepted for volume IV-2/W5 of the International Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences. Another 289 papers are published in volume XLII-2/W13 of the International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences.</p><p>The editors would like to thank all contributing authors, reviewers and workshop organizers for their role in preparing and organizing the Geospatial Week 2019. Thanks to their contributions, we can offer an extensive and varied collection in the Annals and the Archives.</p><p>We hope you enjoy reading the proceedings.</p><p>George Vosselman, Geospatial Week Director 2019, General Chair<br /> Sander Oude Elberink, Programme Chair<br /> Michael Ying Yang, Programme Chair</p>


Author(s): J. Kang, I. Lee

Sophisticated indoor design and growing development in urban architecture make indoor spaces more complex, and these spaces are increasingly connected to public transportation such as subway and train stations. These phenomena shift outdoor activities into indoor spaces. Constant technological development also raises people's awareness of services such as location-aware services in indoor spaces. A low-cost system is therefore required to create 3D models of indoor spaces for services based on indoor models. In this paper, we introduce a rotating stereo frame camera system with two cameras and use it to generate an indoor 3D model. First, we selected a test site and acquired images eight times during one day at different positions and heights of the system. Measurements were complemented by object control points obtained from a total station. Because the data were obtained at different positions and heights, various combinations of data could be formed, and several suitable combinations were chosen as input data. Next, we generated the 3D model of the test site using commercial software with the chosen input data. Finally, we evaluated the accuracy of the indoor models generated from the selected input data. In summary, this paper introduces a low-cost system for acquiring indoor spatial data and generating 3D models from the images it captures. Through these experiments, we show that the introduced system is suitable for generating indoor spatial information. The proposed low-cost system will be applied to indoor services based on indoor spatial information.
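Accuracy evaluation against total-station control points is typically reported as a per-axis RMSE of the modeled coordinates. A minimal sketch under that assumption (the paper does not state its exact accuracy measure, and the function name and data layout are ours):

```python
import math

def axis_rmse(modeled, control):
    """Per-axis RMSE (X, Y, Z) of modeled points against surveyed
    control points, each given as an (x, y, z) tuple."""
    n = len(control)
    return tuple(
        math.sqrt(sum((m[i] - c[i]) ** 2 for m, c in zip(modeled, control)) / n)
        for i in range(3)  # one RMSE per coordinate axis
    )
```

Comparing the three components separately helps reveal whether height (Z) errors dominate, which is common for image-based indoor reconstruction.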


Author(s): Maria Misankova, Jana Kliestikova, Anna Krizanova, Tatiana Corejova

Brand represents one of the most important assets of a company. Brand-managing activities are typically related to brand positioning and integration with marketing campaigns, and can involve complex decisions. The branding of an organization is indeed a dynamic system with many cause-effect relationships as well as intangible and heterogeneous variables. Various models developed worldwide, based on different input data and valuation methodologies, can be used to assess the value of individual brands. We assume that the particular environment in which a company operates, and consumers' perceptions in different countries, influence the applicability and usability of these models in other countries. Therefore, we applied chosen well-known brand value models to a set of Slovak companies and validated their assessment ability in the specific conditions of the Slovak Republic. This was done by critically comparing the calculated values with the official brand values of these companies listed in the Slovak journal. Through this, we pointed out the importance of developing a unique brand value model constructed for the specific conditions of individual countries, and highlighted the weak assessment ability of foreign models.


2021
Author(s): S. H. Al Gharbi, A. A. Al-Majed, A. Abdulraheem, S. Patil, S. M. Elkatatny

Abstract. Due to the high demand for energy, oil and gas companies started to drill wells in remote areas and unconventional environments. This raised the complexity of drilling operations, which were already challenging and complex. To adapt, drilling companies expanded their use of the real-time operation center (RTOC) concept, in which real-time drilling data are transmitted from remote sites to companies' headquarters. In an RTOC, groups of subject matter experts monitor the drilling live and provide real-time advice to improve operations. With the increase in drilling operations, processing the volume of generated data is beyond a human's capability, limiting the RTOC's impact on certain components of drilling operations. To overcome this limitation, artificial intelligence and machine learning (AI/ML) technologies were introduced to monitor and analyze the real-time drilling data, discover hidden patterns, and provide fast decision-support responses. AI/ML technologies are data-driven, and their quality relies on the quality of the input data: if the quality of the input data is good, the generated output will be good; if not, it will be bad. Unfortunately, due to the harsh environments of drilling sites and the transmission setups, not all drilling data are good, which negatively affects the AI/ML results. The objective of this paper is to utilize AI/ML technologies to improve the quality of real-time drilling data. We fed a large real-time drilling dataset, consisting of over 150,000 raw data points, into Artificial Neural Network (ANN), Support Vector Machine (SVM) and Decision Tree (DT) models. The models were trained on valid and not-valid data points, and confusion matrices were used to evaluate the different AI/ML models, including different internal architectures. Despite its slowness, the ANN achieved the best result with an accuracy of 78%, compared to 73% and 41% for the DT and SVM, respectively.
The paper concludes by presenting a process for using AI technology to improve real-time drilling data quality. To the authors' knowledge, based on literature in the public domain, this paper is one of the first to compare the use of multiple AI/ML techniques for quality improvement of real-time drilling data, and it provides a guide for improving the quality of real-time drilling data.
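The confusion-matrix evaluation described above reduces, for a binary valid/not-valid labelling, to counting the four outcome types and deriving accuracy from them. A stdlib sketch (labels encoded as 1 = valid, 0 = not-valid; this encoding is our assumption, not the paper's):

```python
def confusion_counts(y_true, y_pred):
    """Confusion-matrix counts (TP, TN, FP, FN) for binary labels."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def accuracy(y_true, y_pred):
    """Share of correctly classified data points."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    return (tp + tn) / (tp + tn + fp + fn)
```

The reported 78% / 73% / 41% figures for ANN, DT and SVM would each be the `accuracy` of that model's predictions over the test set.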


Algorithms, 2020, Vol 13 (5), pp. 107
Author(s): Otmane Azeroual, Włodzimierz Lewoniewski

The quality assurance of publication data in collaborative knowledge bases and in current research information systems (CRIS) is becoming increasingly relevant with the use of freely available spatial information in different application scenarios. When integrating such data into a CRIS, it is necessary to be able to recognize and assess their quality. Only then is it possible to compile, from the available data, a result that fulfills its purpose for the user, namely to deliver reliable data and information. This paper discusses the quality problems of source metadata in Wikipedia and CRIS. Based on real data from over 40 million Wikipedia articles in various languages, we performed a preliminary quality analysis of the metadata of scientific publications using a data quality tool. To date, no data quality measurements have been implemented in Python to assess the quality of scientific-publication metadata in Wikipedia and CRIS. With this in mind, we implemented the methods and algorithms as code, but present them in this paper as pseudocode, to measure quality along objective data quality dimensions such as completeness, correctness, consistency, and timeliness. This was prepared as a macro service so that users can apply the measurement results, together with the program code, to make statements about the metadata of their scientific publications, allowing management to rely on high-quality data when making decisions.
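Of the dimensions named above, completeness is the simplest to make concrete: the share of required metadata fields that are actually filled. A minimal sketch in Python (the paper presents its algorithms only as pseudocode; the record layout and field names here are our illustrative assumptions):

```python
def completeness(records, required_fields):
    """Share of required metadata fields filled across all records.
    Each record is a dict; None and empty strings count as missing."""
    total = len(records) * len(required_fields)
    filled = sum(
        1
        for rec in records
        for field in required_fields
        if rec.get(field) not in (None, "")
    )
    return filled / total if total else 1.0
```

Correctness, consistency, and timeliness can be scored analogously, each as a ratio of checks passed to checks performed, so the dimensions remain comparable on a [0, 1] scale.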


2013, Vol 2013, pp. 1-9
Author(s): Ferdinando Di Martino, Salvatore Sessa

Today it is very difficult to evaluate the quality of spatial databases, mainly because of the heterogeneity of the input data. We define a fuzzy process for evaluating the reliability of a spatial database: the area of study is partitioned into isoreliable zones, defined as zones that are homogeneous in terms of data quality and environmental characteristics. We model a spatial database as thematic datasets; each thematic dataset concerns a specific spatial domain and includes a set of layers. We estimate the reliability of each thematic dataset and, from these, the overall reliability of the spatial database. We tested this method on the spatial dataset of the town of Cava de' Tirreni (Italy).
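The two-level aggregation described above (layers into a thematic dataset, datasets into an overall database score) can be sketched as follows. The abstract does not specify the fuzzy aggregation operators used, so a conservative minimum at the layer level and a weighted mean at the dataset level are our illustrative stand-ins:

```python
def dataset_reliability(layer_scores):
    """Aggregate per-layer reliability scores in [0, 1] into one score
    for the thematic dataset. A plain minimum is used here as a
    conservative stand-in for the paper's fuzzy operator."""
    return min(layer_scores)

def database_reliability(datasets, weights):
    """Weighted mean of thematic-dataset reliabilities (weights sum to 1),
    giving the overall reliability of the spatial database."""
    return sum(w * dataset_reliability(layers)
               for layers, w in zip(datasets, weights))
```

With per-zone weights, the same scheme extends naturally to isoreliable zones: each zone contributes its own dataset scores, weighted by zone area or importance.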

