Remote visualization of large scale fast dynamic simulations in a HPC context

Author(s): Fabien Vivodtzev, Isabelle Bertron
2008, Vol. 08 (02), pp. 189-207
Author(s): Jinghua Ge, Daniel J. Sandin, Tom Peterka, Robert Kooima, Javier I. Girado, ...

High-speed interactive virtual reality (VR) exploration of scientific datasets is a challenge when the visualization is computationally expensive. This paper presents a point-based remote visualization pipeline for real-time VR with asynchronous client-server coupling. Steered by frustum requests from the client, the remote server samples the original dataset into 3D point samples and sends them back to the client for view updating. At every view-updating frame, the client incrementally builds up a point-based geometry under an octree-based space-partition hierarchy. At every view-reconstruction frame, the client splats the available points onto the screen with efficient occlusion culling and view-dependent level-of-detail (LOD) control. An experimental visualization framework with a server-end computer cluster and a client-end head-tracked autostereo VR desktop display is used to visualize large-scale mesh datasets and ray-traced 4D Julia set datasets. The overall performance of the VR view reconstruction is about 15 fps and is independent of the complexity of the original dataset.
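
As a rough illustration of the client-side accumulation and LOD splatting described in this abstract, the following Python sketch builds an octree from streamed point samples and traverses it with a view-dependent screen-size test. The class names, parameters, and projection model are illustrative assumptions, not the authors' implementation.

import numpy as np

class OctreeNode:
    def __init__(self, center, half_size, depth=0):
        self.center = np.asarray(center, dtype=float)
        self.half_size = float(half_size)   # half edge length of this cubic cell
        self.depth = depth
        self.representative = None          # one coarse sample for LOD splatting
        self.points = []                    # full samples, kept only at leaf depth
        self.children = [None] * 8          # octants, allocated on demand

    def insert(self, point, max_depth=8):
        """Incrementally add one 3D point sample streamed from the server."""
        point = np.asarray(point, dtype=float)
        if self.representative is None:
            self.representative = point
        if self.depth == max_depth:
            self.points.append(point)
            return
        octant = sum(1 << i for i in range(3) if point[i] > self.center[i])
        if self.children[octant] is None:
            offset = np.array([1.0 if (octant >> i) & 1 else -1.0
                               for i in range(3)]) * (self.half_size / 2.0)
            self.children[octant] = OctreeNode(self.center + offset,
                                               self.half_size / 2.0,
                                               self.depth + 1)
        self.children[octant].insert(point, max_depth)

def collect_splats(node, eye, pixel_threshold=2.0, focal_px=1000.0):
    """View-dependent LOD traversal: stop descending once a cell projects to
    fewer than pixel_threshold pixels and splat its coarse representative."""
    if node is None or node.representative is None:
        return []
    dist = max(np.linalg.norm(node.center - eye), 1e-6)
    projected_px = focal_px * (2.0 * node.half_size) / dist
    if projected_px < pixel_threshold:
        return [node.representative]
    if all(child is None for child in node.children):
        return list(node.points)            # leaf cell: splat all of its samples
    splats = []
    for child in node.children:
        splats.extend(collect_splats(child, eye, pixel_threshold, focal_px))
    return splats

# Usage: accumulate streamed samples, then query once per rendered frame.
root = OctreeNode(center=[0.0, 0.0, 0.0], half_size=1.0)
for p in np.random.default_rng(0).uniform(-1.0, 1.0, size=(1000, 3)):
    root.insert(p)
print(len(collect_splats(root, eye=np.array([0.0, 0.0, 4.0]))))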


2011, Vol. 308-310, pp. 2095-2103
Author(s): Fei Feng, Yi Wei Liu, Hong Liu, He Gao Cai

The space manipulator, which is mounted on a space structure or spacecraft to manipulate space payloads, is important for on-orbit servicing. Its manipulation tasks depend on its end-effector. The flexibility of a large space manipulator results in residual vibration at its tip and degrades its end-positioning capability. To overcome these drawbacks, the end-effector needs a large misalignment tolerance and a soft-capture capability. Based on these requirements and analysis, two end-effector schemes are presented and designed in detail. Their essential performance is compared based on the results of dynamic simulations and experiments. Consequently, the conclusion is drawn that the steel-cable-snared end-effector, which captures the interface by winding around the grapple-fixture probe, is the best scheme, combining soft capture with large misalignment tolerance.


Author(s): Martin Schultze, Darryl G. Thelen

Muscle-actuated forward dynamic simulations have provided tremendous insights into the mechanics of locomotion. However, the controllers used for large-scale simulations have often been open-loop, with the muscle excitations prescribed as a function of time [1]. Due to the inherently unstable nature of bipedal movement, this means that perturbation-type analyses are often limited to short time frames after the perturbation is introduced [2]. For many clinical problems, however, it would be desirable to predict how periodic locomotion re-establishes itself following a change to the system or a perturbation from the environment.
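
To make the open-loop versus feedback distinction concrete, the following toy Python sketch runs a forward dynamic simulation of a single-degree-of-freedom inverted pendulum: a feedback controller rejects an impulsive perturbation, while replaying the same nominal torque history open-loop does not. The model, gains, and perturbation are hypothetical and unrelated to the cited musculoskeletal simulations.

import math

G, L, DT = 9.81, 1.0, 0.001     # gravity, pendulum length, time step

def simulate(controller, t_end=5.0, perturb_at=1.0, perturb=0.3, record=None):
    """Forward-integrate an inverted pendulum (angle measured from upright)."""
    theta, omega, t, step = 0.05, 0.0, 0.0, 0
    while t < t_end:
        if perturb and abs(t - perturb_at) < DT / 2:
            omega += perturb                      # impulsive perturbation
        torque = controller(step, theta, omega)
        if record is not None:
            record.append(torque)                 # store the nominal torque history
        omega += ((G / L) * math.sin(theta) + torque) * DT
        theta += omega * DT
        t += DT
        step += 1
    return theta

# Feedback: PD control on the current state keeps the motion near upright.
feedback = lambda step, theta, omega: -40.0 * theta - 8.0 * omega

# 1. Nominal run: feedback control, no perturbation; record the torques.
nominal = []
simulate(feedback, perturb=0.0, record=nominal)

# 2. Open-loop run: replay the recorded torques as a function of time only.
open_loop = lambda step, theta, omega: nominal[min(step, len(nominal) - 1)]

print("feedback,  perturbed:", simulate(feedback))    # stays near upright
print("open-loop, perturbed:", simulate(open_loop))   # falls away from upright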


Author(s): Julian Freytes, Lampros Papangelis, Hani Saad, Pierre Rault, Thierry Van Cutsem, ...

2014, Vol. 14 (02), pp. 1350057
Author(s): R. D. Firouz-Abadi, H. Mohammadkhani, H. Amini

An efficient hybrid modal-molecular dynamics method is developed for the vibration analysis of large-scale nanostructures. Using the reduced-order method presented in this paper, the linear and nonlinear vibrations of a suspended graphene nanoribbon (GNR) carrying an electric current in a harmonic magnetic field are investigated. The resonance frequencies and the nonlinear vibration response obtained by the present technique are in very good agreement with direct molecular dynamics simulations. The results also illustrate the hardening behavior of the nonlinear vibrations, which is diminished by stretching the GNR.
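
The reduced-order idea can be sketched as a small set of modal equations with a cubic hardening term, driven by a harmonic modal force standing in for the Lorentz-type load on a current-carrying ribbon in a magnetic field. The Python sketch below uses nondimensional placeholder values, not the paper's mode shapes or coefficients.

import numpy as np
from scipy.integrate import solve_ivp

# Nondimensional placeholder values, not parameters from the paper.
omega_n = np.array([1.0, 2.7, 5.1])     # modal natural frequencies
zeta    = 0.002                          # modal damping ratio
kappa   = 0.5                            # cubic (hardening) stiffness coefficient
f_amp   = np.array([0.2, 0.0, 0.0])      # projected harmonic modal force
omega_d = 1.05                           # drive frequency, slightly above mode 1

def modal_rhs(t, y):
    """q'' + 2*zeta*omega_n*q' + omega_n^2*q + kappa*q^3 = f*sin(omega_d*t)."""
    n = omega_n.size
    q, qdot = y[:n], y[n:]
    qddot = (f_amp * np.sin(omega_d * t)
             - 2.0 * zeta * omega_n * qdot
             - omega_n**2 * q
             - kappa * q**3)              # hardening nonlinearity
    return np.concatenate([qdot, qddot])

y0 = np.zeros(2 * omega_n.size)                       # start from rest
t_end = 200.0 * 2.0 * np.pi / omega_d                 # ~200 drive cycles
sol = solve_ivp(modal_rhs, (0.0, t_end), y0,
                max_step=2.0 * np.pi / (50.0 * omega_d))
print("peak first-mode amplitude:", np.abs(sol.y[0]).max())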


2021
Author(s): Murtadha Al-Habib, Yasser Al-Ghamdi

Extensive computing resources are required to leverage today's advanced geoscience workflows, which are used to explore and characterize giant petroleum resources. In these cases, high-performance workstations are often unable to adequately handle the scale of computing required. The workflows typically utilize complex and massive datasets, which require advanced computing resources to store, process, manage, and visualize the various forms of the data throughout their lifecycle. This work describes a large-scale, end-to-end geoscience interpretation platform customized to run on a cluster-based remote visualization environment. A team of computing-infrastructure and geoscience-workflow experts was established to collaborate on the deployment, which was broken down into separate phases. Initially, an evaluation and analysis phase was conducted to analyze computing requirements and assess potential solutions. A testing environment was then designed, implemented, and benchmarked. The third phase used the test environment to determine the scale of infrastructure required for the production environment. Finally, the full-scale customized production environment was deployed to end users. During the testing phase, aspects such as connectivity, stability, interactivity, functionality, and performance were investigated using the largest available geoscience datasets. Multiple computing configurations were benchmarked until optimal performance was achieved, under the applicable corporate information-security guidelines. The customized production environment was able to execute workflows that could not run on local user workstations. For example, while benchmarking connectivity, stability, and interactivity, the test environment was operated for extended periods to ensure stability for workflows that require multiple days to run. To estimate the scale of the required production environment, users were grouped into categories based on data type, scale, and workflow. Continuous monitoring of system resources and their utilization enabled continuous improvement of the final solution. The use of a fit-for-purpose, customized remote visualization solution may reduce or ultimately eliminate the need to deploy high-end workstations to all end users. Instead, a shared, scalable, and reliable cluster-based solution can serve a much larger user community in a highly performant manner.
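
The continuous monitoring of system resources mentioned above could, in its simplest form, be a per-node polling loop. The Python sketch below uses the psutil package; the sampling interval, recorded fields, and log destination are illustrative choices, not details of the described deployment.

import csv, socket, time
import psutil

LOG_PATH = "/tmp/node_utilization.csv"   # hypothetical log destination
INTERVAL_S = 30                          # hypothetical sampling interval (seconds)

def sample():
    """Collect one utilization snapshot for the local node."""
    mem = psutil.virtual_memory()
    net = psutil.net_io_counters()
    return {
        "host": socket.gethostname(),
        "timestamp": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=1.0),
        "mem_percent": mem.percent,
        "net_sent_bytes": net.bytes_sent,
        "net_recv_bytes": net.bytes_recv,
    }

def main(n_samples=10):
    with open(LOG_PATH, "w", newline="") as fh:
        writer = None
        for _ in range(n_samples):
            row = sample()
            if writer is None:
                writer = csv.DictWriter(fh, fieldnames=row.keys())
                writer.writeheader()
            writer.writerow(row)
            time.sleep(INTERVAL_S)

if __name__ == "__main__":
    main()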


Author(s): Radhika S. Saksena, Marco D. Mazzeo, Stefan J. Zasada, Peter V. Coveney

We present very large-scale rheological studies of self-assembled cubic gyroid liquid crystalline phases in ternary mixtures of oil, water and amphiphilic species performed on petascale supercomputers using the lattice-Boltzmann method. These nanomaterials have found diverse applications in materials science and biotechnology, for example, in photovoltaic devices and protein crystallization. They are increasingly gaining importance as delivery vehicles for active agents in pharmaceuticals, personal care products and food technology. In many of these applications, the self-assembled structures are subject to flows of varying strengths and we endeavour to understand their rheological response with the objective of eventually predicting it under given flow conditions. Computationally, our lattice-Boltzmann simulations of ternary fluids are inherently memory- and data-intensive. Furthermore, our interest in dynamical processes necessitates remote visualization and analysis as well as the associated transfer and storage of terabytes of time-dependent data. These simulations are distributed on a high-performance grid infrastructure using the application hosting environment; we employ a novel parallel in situ visualization approach which is particularly suited for such computations on petascale resources. We present computational and I/O performance benchmarks of our application on three different petascale systems.
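
The in situ idea can be illustrated with a toy simulation loop in which the in-memory field is reduced to a small image every few time steps instead of writing the full dataset to disk. The 2D diffusion "simulation", the block-averaging renderer, and all names below are illustrative and unrelated to the authors' lattice-Boltzmann code.

import numpy as np

VIS_EVERY = 50          # in situ rendering cadence (illustrative)
NX = NY = 256           # local sub-domain size (illustrative)

def step(field, dt=0.1):
    """One explicit diffusion update standing in for a real simulation step."""
    lap = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
           np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4.0 * field)
    return field + dt * lap

def render_in_situ(field, out_px=64):
    """Reduce the full-resolution field to a small image while it is still in
    memory (block-averaged downsampling as a stand-in for a real renderer)."""
    return field.reshape(out_px, NX // out_px, out_px, NY // out_px).mean(axis=(1, 3))

def run(n_steps=500):
    field = np.random.default_rng(0).random((NX, NY))
    images = []
    for i in range(n_steps):
        field = step(field)
        if i % VIS_EVERY == 0:
            images.append(render_in_situ(field))   # kilobytes kept, not the full field
    return images

frames = run()
print(f"kept {len(frames)} in situ frames of shape {frames[0].shape}")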

