Visualising large-scale geodynamic simulations: How to Dive into Earth's Mantle with Virtual Reality

Author(s):  
Markus Wiedemann ◽  
Bernhard S.A. Schuberth ◽  
Lorenzo Colli ◽  
Hans-Peter Bunge ◽  
Dieter Kranzlmüller

Precise knowledge of the forces acting at the base of tectonic plates is of fundamental importance, but models of mantle dynamics are still often qualitative in nature to date. One particular problem is that we cannot access the deep interior of our planet and therefore cannot make direct in situ measurements of the relevant physical parameters. Fortunately, modern software and powerful high-performance computing infrastructures allow us to generate complex three-dimensional models of the time evolution of mantle flow through large-scale numerical simulations.

In this project, we aim to visualize the resulting convective patterns that occur thousands of kilometres below our feet and to make them "accessible" using high-end virtual reality techniques.

Models with several hundred million grid cells are nowadays possible on modern supercomputing facilities, such as those available at the Leibniz Supercomputing Centre. These models provide quantitative estimates of otherwise inaccessible parameters, such as buoyancy and temperature, as well as predictions of the associated gravity field and seismic wavefield that can be tested against Earth observations.

3-D visualizations of the computed physical parameters allow us to inspect the models as if one were actually travelling down into the Earth. In this way, the convective processes deep in the mantle become virtually accessible by combining the simulations with high-end VR techniques.

The large data set used here poses severe challenges for real-time visualization: it does not fit into graphics memory, yet rendering must meet strict deadlines. This makes it necessary to balance the amount of displayed data against the time needed to render it.

As a solution, we introduce a rendering framework and describe the workflow that allows us to visualize this geoscientific dataset. Our example exceeds 16 TByte in size, which is beyond the capabilities of most visualization tools. To display this dataset in real time, we reduce and declutter it through isosurfacing and mesh optimization techniques.

Our rendering framework relies on multithreading and data decoupling mechanisms that allow us to upload data to graphics memory while maintaining high frame rates. The final visualization application can be executed in a CAVE installation as well as on head-mounted displays such as the HTC Vive or Oculus Rift. The latter devices will allow for viewing our example on-site at the EGU conference.
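The abstract does not spell out the reduction step, but a minimal sketch of the general idea (extract an isosurface so only a triangle mesh, not the full volume, has to reach graphics memory) might look as follows; the array name `temperature` and the isovalue are assumptions, and the mesh-optimization stage mentioned above would follow separately.

```python
# Minimal sketch (not the authors' pipeline): reduce a 3-D simulation snapshot
# to an isosurface mesh before uploading it to the GPU.
import numpy as np
from skimage import measure

def extract_isosurface(temperature: np.ndarray, iso_value: float):
    """Return vertices, faces and normals of the isosurface at `iso_value`."""
    verts, faces, normals, _ = measure.marching_cubes(temperature, level=iso_value)
    return verts, faces, normals

if __name__ == "__main__":
    # Synthetic stand-in for a mantle-convection snapshot.
    temperature = np.random.rand(128, 128, 128).astype(np.float32)
    T_ISO = 0.5  # hypothetical isovalue
    verts, faces, _ = extract_isosurface(temperature, T_ISO)
    print(f"{len(verts)} vertices, {len(faces)} triangles to upload to the GPU")
```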

2019 ◽  
Vol 34 (4) ◽  
pp. 335-348
Author(s):  
Do Quoc Truong ◽  
Pham Ngoc Phuong ◽  
Tran Hoang Tung ◽  
Luong Chi Mai

Automatic Speech Recognition (ASR) systems convert human speech into the corresponding transcription automatically. They have a wide range of applications, such as controlling robots, call-center analytics, and voice chatbots. Recent studies on ASR for English have achieved performance that surpasses human ability; the systems were trained on large amounts of data and perform well in many environments. With regard to Vietnamese, there have been many studies on improving the performance of existing ASR systems; however, many of them were conducted on small-scale data, which does not reflect realistic scenarios. Although the corpora used to train the systems were carefully designed to maintain phonetic-balance properties, efforts to collect them at a large scale are still limited. Specifically, only a certain accent of Vietnamese was evaluated in existing works. In this paper, we first describe our efforts in collecting a large data set that covers all three major accents of Vietnam, located in the Northern, Central, and Southern regions. We then detail our ASR system development procedure, utilizing the collected data set and evaluating different model architectures to find the best structure for Vietnamese. In the VLSP 2018 challenge, our system achieved the best performance with 6.5% WER, and on our internal test set with more than 10 hours of speech collected in real environments, the system also performs well with 11% WER.
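For reference, the metric quoted above (6.5% and 11% WER) is the word-level edit distance between reference and hypothesis transcriptions divided by the reference length; a small illustrative implementation, not the authors' evaluation code, is shown below.

```python
# Word error rate (WER) via dynamic-programming edit distance.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("toi noi tieng viet", "toi noi tien viet"))  # 0.25
```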


Author(s):  
Lior Shamir

Abstract Several recent observations using large data sets of galaxies showed a non-random distribution of the spin directions of spiral galaxies, even when the galaxies are too far from each other to have gravitational interaction. Here, a data set of $\sim8.7\cdot10^3$ spiral galaxies imaged by the Hubble Space Telescope (HST) is used to test and profile a possible asymmetry between galaxy spin directions. The asymmetry between galaxies with opposite spin directions is compared to the asymmetry of galaxies from the Sloan Digital Sky Survey (SDSS). The two data sets contain different galaxies at different redshift ranges, and each data set was annotated using a different annotation method. The results show that both data sets exhibit a similar asymmetry in the COSMOS field, which is covered by both telescopes. Fitting the asymmetry of the galaxies to a cosine dependence shows a dipole axis with probabilities of $\sim2.8\sigma$ and $\sim7.38\sigma$ in HST and SDSS, respectively. The most likely dipole axis identified in the HST galaxies is at $(\alpha=78^{\circ},\delta=47^{\circ})$ and is well within the $1\sigma$ error range of the location of the most likely dipole axis in the SDSS galaxies with $z>0.15$, identified at $(\alpha=71^{\circ},\delta=61^{\circ})$.
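The dipole fit described above can be illustrated with a minimal sketch: for each candidate axis, the spin sign of every galaxy is regressed against the cosine of its angular distance from the axis, and the axis with the strongest fitted amplitude is taken as the most likely dipole axis. This is not the paper's analysis code, and all data below are synthetic.

```python
import numpy as np

def cos_angle_to_axis(ra, dec, ra0, dec0):
    """Cosine of the angular distance between points (ra, dec) and axis (ra0, dec0), in radians."""
    return (np.sin(dec) * np.sin(dec0)
            + np.cos(dec) * np.cos(dec0) * np.cos(ra - ra0))

def dipole_amplitude(ra, dec, spin, ra0, dec0):
    """Least-squares amplitude of spin ~ A * cos(theta) for one candidate axis."""
    x = cos_angle_to_axis(ra, dec, ra0, dec0)
    return np.sum(x * spin) / np.sum(x * x)

# Hypothetical usage: scan a coarse grid of candidate axes over random data.
rng = np.random.default_rng(0)
ra = rng.uniform(0, 2 * np.pi, 1000)
dec = rng.uniform(-np.pi / 2, np.pi / 2, 1000)
spin = rng.choice([-1, 1], size=1000)          # clockwise / counter-clockwise
grid = [(r0, d0) for r0 in np.linspace(0, 2 * np.pi, 36)
                 for d0 in np.linspace(-np.pi / 2, np.pi / 2, 18)]
best = max(grid, key=lambda ax: abs(dipole_amplitude(ra, dec, spin, *ax)))
print("most likely dipole axis (rad):", best)
```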


2018 ◽  
Vol 7 (12) ◽  
pp. 467 ◽  
Author(s):  
Mengyu Ma ◽  
Ye Wu ◽  
Wenze Luo ◽  
Luo Chen ◽  
Jun Li ◽  
...  

Buffer analysis, a fundamental function in a geographic information system (GIS), identifies the areas within a given distance of the surrounding geographic features. Real-time buffer analysis for large-scale spatial data remains a challenging problem, since the computational cost of conventional data-oriented methods expands rapidly with increasing data volume. In this paper, we introduce HiBuffer, a visualization-oriented model for real-time buffer analysis. An efficient buffer generation method is proposed, which introduces spatial indexes and a corresponding query strategy. Buffer results are organized into a tile-pyramid structure to enable stepless zooming. Moreover, a fully optimized hybrid parallel processing architecture is proposed for the real-time buffer analysis of large-scale spatial data. Experiments using real-world datasets show that our approach can reduce computation time by up to several orders of magnitude while preserving high visualization quality. Additional experiments were conducted to analyze the influence of spatial data density, buffer radius, and request rate on HiBuffer performance, and the results demonstrate the adaptability and stability of HiBuffer. The parallel scalability of HiBuffer was also tested, showing that it achieves good parallel acceleration. Experimental results verify that HiBuffer is capable of handling 10-million-scale data.
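A sketch of a visualization-oriented buffer query in the spirit described above (not the HiBuffer implementation): instead of constructing buffer polygons, each pixel of a requested map tile is tested against a spatial index of the input features and shaded if its nearest feature lies within the buffer radius. A k-d tree stands in here for the paper's indexes, and all names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def render_buffer_tile(points, radius, x_range, y_range, size=256):
    """Rasterize a buffer mask for one tile: True where a feature lies within `radius`."""
    tree = cKDTree(points)
    xs = np.linspace(*x_range, size)
    ys = np.linspace(*y_range, size)
    gx, gy = np.meshgrid(xs, ys)
    pixels = np.column_stack([gx.ravel(), gy.ravel()])
    dist, _ = tree.query(pixels)                  # nearest-feature distance per pixel
    return (dist <= radius).reshape(size, size)   # boolean buffer mask

# Hypothetical usage with random point features.
pts = np.random.rand(100_000, 2) * 100.0
mask = render_buffer_tile(pts, radius=0.5, x_range=(0, 10), y_range=(0, 10))
print(mask.mean(), "of the tile falls inside the buffer")
```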


2008 ◽  
Vol 08 (02) ◽  
pp. 189-207
Author(s):  
JINGHUA GE ◽  
DANIEL J. SANDIN ◽  
TOM PETERKA ◽  
ROBERT KOOIMA ◽  
JAVIER I. GIRADO ◽  
...  

High-speed interactive virtual reality (VR) exploration of scientific datasets is a challenge when the visualization is computationally expensive. This paper presents a point-based remote visualization pipeline for real-time VR with asynchronous client-server coupling. Steered by the client-end frustum request, the remote server samples the original dataset into 3D point samples and sends them back to the client for view updating. At every view-update frame, the client incrementally builds up point-based geometry under an octree-based space-partitioning hierarchy. At every view-reconstruction frame, the client continuously splats the available points onto the screen with efficient occlusion culling and view-dependent level-of-detail (LOD) control. An experimental visualization framework with a server-end computer cluster and a client-end head-tracked autostereo VR desktop display is used to visualize large-scale mesh datasets and ray-traced 4D Julia set datasets. The overall performance of the VR view reconstruction is about 15 fps, independent of the original dataset's complexity.
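A minimal sketch, under stated assumptions rather than the paper's actual pipeline, of the client-side octree that accumulates streamed 3-D point samples: points are inserted into nested cubic cells, and a view-dependent LOD pass can later stop descending once a cell projects to less than a pixel on screen. Names and the split capacity are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class OctreeNode:
    center: tuple          # (x, y, z) of the cell center
    half: float            # half the cell edge length
    points: list = field(default_factory=list)
    children: list = None  # eight children, created on demand
    capacity: int = 64     # split threshold

    def insert(self, p):
        if self.children is None:
            self.points.append(p)
            if len(self.points) > self.capacity:
                self._split()
            return
        self._child_for(p).insert(p)

    def _split(self):
        cx, cy, cz = self.center
        h = self.half / 2
        self.children = [OctreeNode((cx + dx * h, cy + dy * h, cz + dz * h), h)
                         for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)]
        for p in self.points:
            self._child_for(p).insert(p)
        self.points = []

    def _child_for(self, p):
        cx, cy, cz = self.center
        # index matches the (dx, dy, dz) ordering used in _split
        idx = (p[0] >= cx) * 4 + (p[1] >= cy) * 2 + (p[2] >= cz)
        return self.children[idx]

root = OctreeNode(center=(0.0, 0.0, 0.0), half=1.0)
root.insert((0.1, -0.2, 0.3))
```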


2021 ◽  
Author(s):  
Ahmed Alghamdi ◽  
Olakunle Ayoola ◽  
Khalid Mulhem ◽  
Mutlaq Otaibi ◽  
Abdulazeez Abdulraheem

Abstract Chokes are an integral part of production systems and are crucial surface equipment that faces harsh conditions such as high pressure drops and erosion due to solids. Predicting choke health is usually achieved by analyzing the relationship between choke size, pressure, and flow rate. In large-scale fields, this process requires extensive time and effort using conventional techniques. This paper presents a real-time proactive approach to detecting choke wear utilizing production data integrated with AI analytics. Flowing-parameter data were collected for more than 30 gas wells. These wells produce gas with slight solids production from a high-pressure, high-temperature field and are equipped with a multi-stage choke system. The approach to determining choke wear relies on training an AI model on a dataset constructed by comparing the choke valve's rate of change against a smoothed slope of the production rate; if the rate of change is not within a tolerated range of divergence, abnormal choke behavior is flagged. The data set was divided into 70% for training and 30% for testing. An Artificial Neural Network (ANN) was trained on data with the following inputs: gas specific gravity, upstream and downstream pressures and temperatures, and choke size. This ANN model achieved a correlation coefficient above 0.9, with excellent prediction on data points exhibiting normal or abnormal choke behavior. Piloting this application on large fields, where manual analysis is often impractical, saves substantial man-hours and generates significant cost avoidance. Areas for improvement include equipping the ANN with long-term production profile prediction abilities, such as water production; the analysis also relies on having accurate readings from the venturi meters, which is often the case in single-phase flow. This AI-driven analytics approach provides a substantial improvement for remote offshore production operations surveillance. The novel approach presented in this paper capitalizes on AI analytics for proactively detecting choke health conditions. The advantage of such a model is that it harnesses AI analytics to help operators improve asset integrity and production monitoring compliance. In addition, this approach can be expanded to estimate sand production, as choke wear is a strong function of sand production.
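An illustrative sketch of this kind of workflow, under stated assumptions rather than the authors' actual implementation: abnormal choke behavior is labeled where the choke-size rate of change diverges from a smoothed production-rate slope beyond a tolerance, and a small neural network is then trained on flowing parameters to predict that label. The column layout, tolerance, and network size are all hypothetical.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def label_abnormal(choke_size, rate, window=24):
    """1 where the choke rate of change diverges most from the smoothed rate slope."""
    smoothed = np.convolve(rate, np.ones(window) / window, mode="same")
    divergence = np.abs(np.gradient(choke_size) - np.gradient(smoothed))
    # flag points whose divergence exceeds a tolerance band (here: 90th percentile)
    return (divergence > np.percentile(divergence, 90)).astype(int)

# Synthetic stand-ins for the flowing parameters named in the abstract.
n = 5000
rng = np.random.default_rng(1)
X = np.column_stack([
    rng.normal(0.65, 0.02, n),   # gas specific gravity
    rng.normal(5000, 300, n),    # upstream pressure
    rng.normal(1200, 100, n),    # downstream pressure
    rng.normal(250, 10, n),      # temperature
    rng.uniform(20, 64, n),      # choke size
])
y = label_abnormal(X[:, 4], rng.normal(30, 5, n))

# 70/30 split as described in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500).fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```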


Author(s):  
Valentin Cristea ◽  
Ciprian Dobre ◽  
Corina Stratan ◽  
Florin Pop

The latest advances in network and distributed-system technologies now allow the integration of a vast variety of services with almost unlimited processing power, using large amounts of data. Sharing of resources is often viewed as the key goal for distributed systems, and in this context the sharing of stored data appears as the most important aspect of distributed resource sharing. Scientific applications are the first to take advantage of such environments, as the requirements of current and future high-performance computing experiments are pressing in terms of ever higher volumes of data to be stored and managed. While these new environments reveal huge opportunities for large-scale distributed data storage and management, they also raise important technical challenges that need to be addressed. The ability to support persistent storage of data on behalf of users, the consistent distribution of up-to-date data, the reliable replication of fast-changing datasets, and the efficient management of large data transfers are just some of these new challenges. In this chapter we discuss how well the existing distributed computing infrastructure supports the required data storage and management functionalities. We highlight the issues raised by storing data over large distributed environments and discuss recent research efforts dealing with the challenges of data retrieval, replication, and fast data transfers. The interaction of data management with other data-sensitive, emerging technologies, such as workflow management, is also addressed.


2018 ◽  
Vol 7 (4.6) ◽  
pp. 13
Author(s):  
Mekala Sandhya ◽  
Ashish Ladda ◽  
Dr. Uma N Dulhare ◽  
...  

In this generation of the Internet, information and data are growing continuously. Through the various Internet services and applications, the amount of information is increasing rapidly; hundreds of billions, even trillions, of web indexes exist. Such large data brings people a mass of information and, at the same time, more difficulty in discovering useful knowledge within it. Cloud computing can provide the infrastructure for large data. Cloud computing has two significant characteristics of distributed computing: scalability and high availability. Scalability means the platform can seamlessly extend to large-scale clusters; availability means that cloud computing can tolerate node errors, so node failures do not prevent programs from running correctly. Cloud computing combined with data mining enables significant data processing through high-performance machines. Mass data storage and distributed computing provide a new method for mass data mining and become an effective solution to distributed storage and efficient computing in data mining.


Author(s):  
Manudul Pahansen de Alwis ◽  
Karl Garme

The stochastic environmental conditions, together with craft design and operational characteristics, make it difficult to predict the vibration environments aboard high-performance marine craft, particularly the risk of impact acceleration events and the shock component of the exposure, which are often associated with structural failure and human injuries. The different timescales and magnitudes involved complicate the real-time analysis of vibration and shock conditions aboard these craft. The article introduces a new measure, the severity index, indicating the risk of severe impact acceleration, and proposes a method for real-time feedback on the severity of impact exposure together with the accumulated vibration exposure. The method analyzes the immediately preceding 60 s of vibration exposure history and computes the severity of impact exposure for the present state based on the severity index. The severity index probes the characteristics of the present acceleration stochastic process, that is, the risk of an upcoming heavy impact, and serves as an alert to the crew. The accumulated vibration exposure, important for mapping and logging the crew exposure, is determined by the ISO 2631:1997 vibration dose value. The severity due to impact and accumulated vibration exposure is communicated to the crew every second as a color-coded indicator: green, yellow and red, representing low, medium and high, based on defined impact and dose limits. The severity index and feedback method are developed and validated on a data set of 27 three-hour simulations of a planing craft in irregular waves and verified for feasibility in real-world applications against full-scale acceleration data recorded aboard high-speed planing craft in operation.
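For reference, the accumulated-exposure metric mentioned above is the ISO 2631-1 vibration dose value, VDV = (∫ a_w(t)^4 dt)^(1/4). The sketch below computes it on a raw synthetic signal for illustration only; the standard's frequency weighting of the acceleration is omitted, and this is not the authors' code.

```python
import numpy as np

def vibration_dose_value(accel: np.ndarray, fs: float) -> float:
    """VDV (m/s^1.75) of a (frequency-weighted) acceleration signal sampled at fs Hz."""
    dt = 1.0 / fs
    return float(np.sum(accel ** 4) * dt) ** 0.25

fs = 500.0                                    # sampling rate, Hz
t = np.arange(0, 60.0, 1.0 / fs)              # one-minute synthetic sample
accel = 0.5 * np.sin(2 * np.pi * 2 * t) + np.random.normal(0, 0.1, t.size)
print("VDV:", vibration_dose_value(accel, fs))
```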


2019 ◽  
Author(s):  
Reto Sterchi ◽  
Pascal Haegeli ◽  
Patrick Mair

Abstract. While guides in mechanized skiing operations use a well-established terrain selection process to limit their exposure to avalanche hazard and keep the residual risk at an acceptable level, the relationship between the open/closed status of runs and environmental factors is complex and has so far received only limited attention in research. Using a large data set of over 25 000 operational run list codes from a mechanized skiing operation, we applied a general linear mixed effects model to explore the relationship between acceptable skiing terrain (i.e., status open) and avalanche hazard conditions. Our results show that the magnitude of the effect of avalanche hazard on run list codes depends on the type of terrain being assessed by the guiding team. Ski runs in severe alpine terrain with steep lines through large avalanche slopes are much more susceptible to increases in avalanche hazard than less severe terrain. However, our results also highlight the strong effect of recent skiing on the run coding and thus the importance of prior first-hand experience. Expressing these relationships numerically provides an important step towards the development of meaningful decision aids, which can assist commercial operations in managing their avalanche risk more effectively and efficiently.
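A deliberately simplified sketch of this kind of analysis, not the authors' model: the paper fits a mixed-effects model of run status against avalanche hazard and terrain, whereas the stand-in below drops the random effects and fits a plain logistic regression on synthetic data with hypothetical column names (`open`, `hazard`, `terrain`).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "hazard": rng.integers(1, 5, n),                   # 1 (low) .. 4 (high)
    "terrain": rng.choice(["moderate", "severe"], n),  # simplified terrain classes
})
# Severe terrain reacts more strongly to hazard, mirroring the reported effect.
logit_p = 3.0 - df["hazard"] * np.where(df["terrain"] == "severe", 1.2, 0.5)
df["open"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("open ~ hazard * C(terrain)", data=df).fit(disp=False)
print(fit.params)  # the interaction term captures the terrain-dependent hazard effect
```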


2014 ◽  
Vol 687-691 ◽  
pp. 1258-1261
Author(s):  
Jing Sun ◽  
Hong Tao Wang

With the development of computer graphics, real-time rendering-based VR technology has been applied in more and more fields. LOD is the key technology in large-scale terrain rendering. In this paper, the basic concept of LOD is introduced briefly and some LOD algorithms in use are described and analyzed. Then, as one of these LOD algorithms, the View-Dependent Progressive Mesh (VDPM) algorithm is studied and improved, and the result of implementing large-scale terrain LOD using VDPM is presented. Key technologies in LOD-based large-scale terrain real-time rendering are researched, including terrain LOD, visibility culling, crack elimination, view-dependent refinement, LOD error metrics, and texturing. Using LOD technology, a VR system can greatly reduce the number of polygons produced in the real-time rendering procedure. Finally, we carry out experimental work based on the methods and techniques presented in this paper.
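A minimal illustration, not the paper's VDPM implementation, of the core decision in view-dependent terrain refinement: a node of the terrain hierarchy is refined only while its geometric error, projected to screen space, exceeds a pixel tolerance. All names and constants are hypothetical.

```python
import math

def screen_space_error(geometric_error, distance, fov_y, viewport_height):
    """Project a node's world-space error (metres) onto the screen (pixels)."""
    if distance <= 0:
        return float("inf")
    return geometric_error * viewport_height / (2.0 * distance * math.tan(fov_y / 2.0))

def select_lod(node, camera_pos, fov_y, viewport_height, tol_px=2.0, out=None):
    """Collect the coarsest set of nodes whose projected error is below tol_px."""
    out = [] if out is None else out
    d = math.dist(node["center"], camera_pos)
    if node["children"] and screen_space_error(node["error"], d, fov_y, viewport_height) > tol_px:
        for child in node["children"]:
            select_lod(child, camera_pos, fov_y, viewport_height, tol_px, out)
    else:
        out.append(node)
    return out

# Hypothetical two-level hierarchy.
leaf = {"center": (10.0, 0.0, 10.0), "error": 0.5, "children": []}
root = {"center": (0.0, 0.0, 0.0), "error": 8.0, "children": [leaf]}
print(len(select_lod(root, camera_pos=(0.0, 2.0, -5.0),
                     fov_y=math.radians(60), viewport_height=1080)))
```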

