Scalable processing of massive geodata in the cloud: generating a level-of-detail structure optimized for web visualization

2020 ◽ Vol 1 ◽ pp. 1-20
Author(s): Michel Krämer ◽ Ralf Gutbell ◽ Hendrik M. Würz ◽ Jannis Weil

Abstract. We present a cloud-based approach to transform arbitrarily large terrain data into a hierarchical level-of-detail structure that is optimized for web visualization. Our approach is based on a divide-and-conquer strategy: the input data is split into tiles that are distributed to individual workers in the cloud. These workers apply a Delaunay triangulation with a maximum number of points and a maximum geometric error. They then merge the results and triangulate them again to generate less detailed tiles. The process repeats until a hierarchical tree of different levels of detail has been created. This tree can be used to stream the data to the web browser. We have implemented this approach using the frameworks Apache Spark and GeoTrellis. Our paper includes an evaluation of our approach and its implementation. We focus on scalability and runtime but also investigate bottlenecks, their possible causes, and options for mitigating them. The results of our evaluation show that our approach and implementation are scalable and that we are able to process massive terrain data.
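The bottom-up structure of this pipeline can be sketched in a few lines. The following is a minimal single-machine sketch, assuming square tiles addressed by non-negative grid coordinates; the paper's actual implementation distributes the tiles to Spark workers via GeoTrellis and uses an error-bounded simplification, whereas the fixed point budget and random subsampling below are stand-in assumptions.

```python
# Minimal single-machine sketch of the bottom-up LOD pipeline described
# above. The real system distributes tiles to Spark workers via GeoTrellis;
# the per-tile point budget and random subsampling here are illustrative.
import numpy as np
from scipy.spatial import Delaunay

MAX_POINTS = 512  # assumed per-tile point budget

def simplify(points, max_points=MAX_POINTS):
    """Reduce a tile's point set to the budget (stand-in for the
    error-bounded simplification used in the paper)."""
    if len(points) <= max_points:
        return points
    idx = np.random.choice(len(points), max_points, replace=False)
    return points[idx]

def triangulate(points):
    """Triangulate a tile's (x, y, z) points in the horizontal plane."""
    return Delaunay(points[:, :2])

def build_lod_tree(tiles):
    """tiles: dict mapping (x, y) grid coords to (N, 3) point arrays at
    the finest level. Returns one dict of triangulated tiles per level,
    from finest to coarsest."""
    levels = [{k: simplify(v) for k, v in tiles.items()}]
    while len(levels[-1]) > 1:
        coarser = {}
        for (x, y), pts in levels[-1].items():
            # Merge each 2x2 block of tiles into one parent tile,
            # then simplify and re-triangulate it.
            coarser.setdefault((x // 2, y // 2), []).append(pts)
        levels.append({k: simplify(np.vstack(v)) for k, v in coarser.items()})
    return [{k: (pts, triangulate(pts)) for k, pts in lvl.items()}
            for lvl in levels]
```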

Author(s): Sven H. Reese ◽ Johannes Seichter ◽ Dietmar Klucke

Consideration of environmentally assisted fatigue (EAF) is under international discussion. In the German KTA rules, the effect is taken into account by means of so-called attention thresholds. While the laboratory phenomena themselves are widely accepted, the numerical calculation procedures are continuously being revised, and the transfer from laboratory conditions to real plant components has not yet been clarified. Since NUREG/CR-6909, the formulas for calculating Fen factors have been modified several times; for example, a new set of formulas was published in ANL-LWRS47-2011 and slightly revised by ANL in 2012. Various calculation procedures, such as the strain-integrated method and the simplified approach, have been published, and each approach yields different results. Beyond this, additional topics such as weld factors and plasticity correction factors have to be taken into account. Depending on the level of detail and on how the loads are described, the calculation procedures yield significantly different results. Given these issues, and the different levels of detail possible in computational simulations, numerical cumulative usage factor (CUF) evaluations are likely to differ depending on the assumptions made. On the basis of a practical example, methods and approaches are discussed and recommendations for avoiding over-conservatism and misinterpretation are presented.
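The arithmetic behind the environmental adjustment is compact: Fen is defined as the ratio of fatigue life in air to fatigue life in the water environment, and the adjusted usage factor multiplies each partial usage factor by the Fen of its load pair, CUF_en = Σ U_i · Fen_i. A minimal sketch follows; the transient data are invented for illustration, and the Fen formulas themselves (which differ between NUREG/CR-6909 and the later ANL revisions) are deliberately left out.

```python
# Hedged sketch of the environmentally adjusted cumulative usage factor.
# Fen = N_air / N_water; the Fen-factor approach multiplies each partial
# usage factor by the Fen of its load pair: CUF_en = sum(U_i * Fen_i).
# All numbers below are made up for illustration.

def cuf_en(partial_usage, fen_factors):
    """Environmentally adjusted CUF from partial usage factors U_i and
    their transient-specific environmental correction factors Fen_i."""
    return sum(u * fen for u, fen in zip(partial_usage, fen_factors))

# Example: three load pairs with partial usage factors and Fen values.
U   = [0.012, 0.034, 0.005]   # illustrative partial usage factors
Fen = [4.2,   2.1,   6.8]     # illustrative Fen factors
print(f"CUF_en = {cuf_en(U, Fen):.3f}")  # vs. unadjusted CUF = 0.051
```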


2021 ◽ Vol 1209 (1) ◽ pp. 012002
Author(s): Y Nechyporchuk ◽ R Baskova

Abstract. 4D modeling has been developing actively over the past decade along with the progress of BIM implementation. A 4D model can support better early decisions about the spatio-temporal criticality of work elements. Such a model is a collection of graphical and scheduling information about an object, and these inputs can have different levels of detail (LOD). The LOD of the datasets is an important aspect of creating and using BIM projects. However, to date there is limited research thoroughly investigating the issue of LOD within 4D models. This article provides an overview of studies related to the level of detail for 4D models and describes the impact of LOD on the final 4D model.
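To make the two inputs concrete, the following is an illustrative sketch, not taken from the article, of how a 4D model links graphical elements carrying a geometric LOD to scheduling data; the class and attribute names are assumptions.

```python
# Illustrative sketch only: one way to represent the two inputs of a 4D
# model, a graphical element and its scheduling data, each carrying a
# level-of-detail attribute. Names are assumptions, not from the paper.
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelElement:
    element_id: str
    geometry_lod: int              # e.g. 100..400 in the LOD convention

@dataclass
class ScheduleTask:
    task_id: str
    start: date
    finish: date
    elements: list[ModelElement]   # the link that makes the model "4D"

wall = ModelElement("W-01", geometry_lod=300)
task = ScheduleTask("T-17", date(2021, 5, 3), date(2021, 5, 14), [wall])
```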


Information ◽ 2019 ◽ Vol 10 (10) ◽ pp. 302
Author(s): Liam McNabb ◽ Robert S. Laramee

Maps are one of the most conventional types of visualization used when conveying information to both inexperienced users and advanced analysts. However, the multivariate representation of data on maps is still considered an unsolved problem. We present a multivariate map that uses geo-space to guide the position of multivariate glyphs and enables users to interact with the map and glyphs, conveying meaningful data at different levels of detail. We develop an algorithm pipeline for this process and demonstrate how the user can adjust the level-of-detail of the resulting imagery. The algorithm features a unique combination of guided glyph placement, level-of-detail, dynamic zooming, and smooth transitions. We present a selection of user options to facilitate the exploration process and provide case studies demonstrating how the application can be used. We also compare our placement algorithm with previous geo-spatial glyph placement algorithms. The result is a novel glyph placement solution to support multivariate maps.
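The level-of-detail idea can be illustrated with a simplified sketch: aggregate records into a grid whose resolution follows the requested LOD and emit one glyph per occupied cell. This is not the authors' exact pipeline (their algorithm guides placement by geo-space and supports smooth transitions); the grid-based aggregation below is an assumption for illustration.

```python
# Hedged sketch, not the authors' exact algorithm: aggregate records into
# a grid whose resolution follows the requested level of detail, then
# place one averaged glyph per occupied cell.
from collections import defaultdict

def place_glyphs(records, lod):
    """records: iterable of (lon, lat, values) tuples. Higher lod ->
    finer grid -> more, smaller glyphs. Returns one glyph per cell."""
    cell = 360.0 / (2 ** lod)          # grid resolution driven by LOD
    cells = defaultdict(list)
    for lon, lat, values in records:
        cells[(int(lon // cell), int(lat // cell))].append(values)
    glyphs = []
    for (cx, cy), members in cells.items():
        center = ((cx + 0.5) * cell, (cy + 0.5) * cell)
        mean = [sum(col) / len(members) for col in zip(*members)]
        glyphs.append({"pos": center, "values": mean, "count": len(members)})
    return glyphs

# Two nearby records collapse into one aggregated glyph at a coarse LOD.
print(place_glyphs([(-3.2, 51.5, (1.0, 2.0)), (-3.1, 51.4, (3.0, 4.0))], lod=5))
```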


2020 ◽ Vol 15 (90) ◽ pp. 78-90
Author(s): Alexey A. Veselov
The design of modern computer equipment and digital electronics relies heavily on simulation models. Initially, monolithic models were widely used for this purpose, but they performed well only while their size remained relatively small. Developers therefore gradually abandoned monolithic models in favor of distributed models, which increase simulation speed and extend the limits on feasible model size. Particular attention has been paid to hierarchical distributed models, which make it possible to investigate the behavior of the designed devices at different levels of detail. Such models noticeably extended the permissible model sizes and increased simulation speed. However, these distributed models have the drawback that their efficiency depends noticeably not only on the number of components they contain but also on the size of those components. This paper presents the results of a study of the effect of introducing an additional upper hierarchical level on the performance of distributed models based on Petri nets. This modification increases model speed over a wide range of model sizes, with the most significant effect achieved in distributed models containing a large number of small components. The maximum speed of models modified in this way can be an order of magnitude higher than that of unmodified ones. In addition to the overall increase in the efficiency of the modified hierarchical distributed models, the modification also substantially equalized the performance of distributed models whose subordinate components differ in size.
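For readers unfamiliar with the formalism, a flat Petri net simulator can be sketched in a few lines; the hierarchical distributed models studied in the paper partition such a net across components and add a coordinating upper level, which this sketch does not reproduce.

```python
# Minimal sketch of a flat Petri net step, to fix terminology only; the
# paper's hierarchical distributed models partition such a net across
# components and add a coordinating upper hierarchical level.

def enabled(marking, pre):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire one transition: consume input tokens, produce output tokens."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Two-place net: transition t moves a token from p1 to p2.
marking = {"p1": 1, "p2": 0}
pre, post = {"p1": 1}, {"p2": 1}
if enabled(marking, pre):
    marking = fire(marking, pre, post)
print(marking)  # {'p1': 0, 'p2': 1}
```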


2011 ◽ Vol 21 (3-4) ◽ pp. 135-140
Author(s): Toni A. Krol ◽ Sebastian Westhäuser ◽ M. F. Zäh ◽ Johannes Schilp ◽ G. Groth

2009 ◽ pp. 648-657
Author(s): Sandra Elizabeth González Císaro ◽ Héctor Oscar Nigro

Information stored in current databases is not always available at the different levels of detail or granularity required by decision-making processes (DMP). Some organizations have implemented a central database, the data warehouse (DW), on which analysis tasks are performed. Whether this is done depends on the maturity of the information systems (IS), the type of informational requirements or necessities, the organizational structure, and the characteristics of the business. A further important point is the intrinsic structure of complex data: it is now very common to work with complex data, due to syntactic or semantic aspects and the type of processing involved (Darmont et al., 2006). Therefore, we must design systems that can handle this data complexity in order to improve the DMP.
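The notion of granularity can be made concrete with a small sketch: the same fact table rolled up at two levels of detail. The columns and the pandas usage are illustrative assumptions, not taken from the chapter.

```python
# Illustrative sketch of "granularity" in a DW context: one fact table
# rolled up at two levels of detail. Column names are assumptions.
import pandas as pd

sales = pd.DataFrame({
    "region": ["N", "N", "S", "S"],
    "store":  ["n1", "n2", "s1", "s1"],
    "amount": [120.0, 80.0, 200.0, 50.0],
})

by_store  = sales.groupby(["region", "store"])["amount"].sum()  # fine grain
by_region = sales.groupby("region")["amount"].sum()             # coarse grain
print(by_store, by_region, sep="\n\n")
```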


1988 ◽ Vol 121
Author(s): Jean-Claude Pouxviel ◽ J. P. Boilot

Abstract. TEOS has been hydrolyzed under acidic conditions with a stoichiometric or excess amount of water. The evolution of the silicon species is followed by 29Si NMR. The data are analyzed at different levels of detail: first, analysis of the by-products of the polymerization reactions; second, determination of the extents and overall rate constants of the hydrolysis and condensation reactions; and finally, kinetic simulations of the evolution taking into account all of the silicon species present. We show that the hydrolysis rate increases with the number of hydroxyl groups and that re-esterification reactions contribute significantly. We also find that condensation reactions preferentially occur with loss of water between the more hydrolyzed monomers; their rates decrease rapidly with the degree of condensation. We compare the two compositions as a function of their water content and pH.
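The kind of kinetic simulation described can be sketched as a small system of rate equations. The following assumes pseudo-first-order sequential hydrolysis of the four ethoxy groups, with rate constants that grow with the number of hydroxyl groups, as the paper reports; the k values are invented for illustration, and condensation and re-esterification are omitted.

```python
# Hedged sketch of the kinetic simulation: sequential hydrolysis of the
# four ethoxy groups of TEOS, with rate constants increasing with the
# number of hydroxyl groups. The k values are invented for illustration;
# condensation and re-esterification steps are omitted.
import numpy as np
from scipy.integrate import solve_ivp

k = [1.0, 1.5, 2.2, 3.0]   # illustrative k_i, increasing with OH count

def rhs(t, c):
    """c[i] = concentration of the species with i hydroxyl groups (0..4)."""
    dc = np.zeros(5)
    for i in range(4):
        r = k[i] * c[i]        # pseudo-first-order in excess water
        dc[i] -= r
        dc[i + 1] += r
    return dc

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0, 0.0, 0.0])
print(sol.y[:, -1])  # species distribution at t = 5
```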


2015 ◽ Vol 42 (3) ◽ pp. 199-212
Author(s): Farnaz Sadeghpour ◽ Mohsen Andayesh

The efficient planning of site space over the course of a construction project is referred to as site layout planning. Due to its impact on safety, productivity, and security on construction sites, several site layout planning models have been developed over the past decades. These models share the aim of generating the best layout given the defined constraints and conditions. However, the underlying assumptions made during the development of these models are disparate and often implicit. This study provides an overview of the existing models and aims to draw a holistic view of the variables that have been considered, at different levels of detail and using different approaches, in the site layout literature. Through close examination and comparative analysis of existing models, this study identifies the components that need to be considered for site layout modeling, referred to as constructs. Possible approaches that can be used to realize each construct are presented, and the advantages and disadvantages of these approaches are discussed. It is hoped that this study contributes to a better understanding of site layout modeling and provides an outline for the development of new site layout planning models.
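A recurring core in such models is an objective that scores a candidate layout by the distance-weighted interaction between facilities, min Σ f_ij · d_ij. The sketch below illustrates this classic construct; the facilities, flows, and grid positions are assumptions, not from the paper.

```python
# Illustrative sketch of a common core in site layout models: score a
# candidate layout by distance-weighted interaction between facilities,
# min sum_ij f_ij * d_ij. Facilities, flows, and positions are invented.

def layout_cost(positions, flow):
    """positions: {facility: (x, y)}; flow: {(a, b): interaction weight}."""
    cost = 0.0
    for (a, b), f in flow.items():
        (x1, y1), (x2, y2) = positions[a], positions[b]
        cost += f * (abs(x1 - x2) + abs(y1 - y2))  # rectilinear distance
    return cost

positions = {"crane": (0, 0), "storage": (3, 1), "office": (5, 4)}
flow = {("crane", "storage"): 10, ("storage", "office"): 2}
print(layout_cost(positions, flow))  # 40.0 + 10.0 = 50.0
```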

