CHRONO: a parallel multi-physics library for rigid-body, flexible-body, and fluid dynamics

2013
Vol 4 (1)
pp. 49-64
Author(s):
H. Mazhar
T. Heyn
A. Pazouki
D. Melanz
A. Seidl
...

Abstract. The last decade witnessed a manifest shift in the microprocessor industry towards chip designs that promote parallel computing. Until recently the privilege of a select group of large research centers, teraflop computing is becoming a commodity owing to inexpensive GPU cards and multi- to many-core x86 processors. This paradigm shift towards large-scale parallel computing has been leveraged in CHRONO, a freely available C++ multi-physics simulation package. CHRONO is made up of a collection of loosely coupled components that facilitate different aspects of multi-physics modeling, simulation, and visualization. This contribution provides an overview of CHRONO::Engine, CHRONO::Flex, CHRONO::Fluid, and CHRONO::Render, modules that can capitalize on the processing power of hundreds of parallel processors. Problems that can be tackled in CHRONO include, but are not limited to, granular material dynamics, tangled large flexible structures with self-contact, particulate flows, and tracked vehicle mobility. The paper presents an overview of each of these modules and illustrates through several examples the potential of this multi-physics library.
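
To give a concrete flavor of the granular-dynamics problems mentioned above, the following minimal C++ sketch shows a generic penalty-based time-stepping loop for grains settling onto a plane. It illustrates the technique only and does not use Chrono's actual API; the Sphere struct, the material constants, and the symplectic Euler integrator are assumptions made for this example.

#include <cstdio>
#include <vector>

// A grain: position and velocity in 3D (illustrative, not Chrono's types).
struct Sphere {
    double x, y, z;    // position (m)
    double vx, vy, vz; // velocity (m/s)
};

int main() {
    const double radius = 0.01, mass = 0.001; // 1 cm grains, 1 g each (assumed)
    const double k = 1e4;                     // contact stiffness (N/m, assumed)
    const double g = -9.81, dt = 1e-5;        // gravity, time step

    std::vector<Sphere> grains = {
        {0.000, 0.05, 0.0, 0.0, 0.0, 0.0},
        {0.005, 0.08, 0.0, 0.0, 0.0, 0.0},
    };

    for (int step = 0; step < 10000; ++step) { // simulate 0.1 s
        for (auto& s : grains) {
            double fy = mass * g;
            // Penalty contact with the ground plane y = 0: push back
            // proportionally to the penetration depth.
            double pen = radius - s.y;
            if (pen > 0.0) fy += k * pen;
            // Symplectic Euler update; a full engine would also resolve
            // sphere-sphere contacts reported by collision detection.
            s.vy += dt * fy / mass;
            s.x += dt * s.vx;
            s.y += dt * s.vy;
            s.z += dt * s.vz;
        }
    }
    std::printf("grain 0 height after 0.1 s: %.4f m\n", grains[0].y);
    return 0;
}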

Impact
2019
Vol 2019 (10)
pp. 44-46
Author(s):
Masato Edahiro
Masaki Gondo

The pace of technological advancement is ever-increasing, and intelligent systems, such as those found in robots and vehicles, have become larger and more complex. These intelligent systems have a heterogeneous structure, comprising a mixture of modules such as artificial intelligence (AI) and powertrain control modules that facilitate large-scale numerical calculation and real-time periodic processing functions. Information technology expert Professor Masato Edahiro, from the Graduate School of Informatics at Nagoya University in Japan, explains that concurrent advances in semiconductor research have led to the miniaturisation of semiconductors, allowing a greater number of processors to be mounted on a single chip and increasing potential processing power. 'In addition to general-purpose processors such as CPUs, a mixture of multiple types of accelerators such as GPGPU and FPGA has evolved, producing a more complex and heterogeneous computer architecture,' he says. Edahiro and his partners have been working on eMBP, a model-based parallelizer (MBP) that offers a mapping system as an efficient way of automatically generating parallel code for multi- and many-core systems. Once the hardware description is written, eMBP bridges the gap between software and hardware, giving hardware vendors an efficient ecosystem and sparing software vendors the need to adapt code to each particular platform.
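
As a rough illustration of what a model-based parallelizer targets, the C++ sketch below maps independent model blocks onto worker threads, one per block. The control_block function and the one-thread-per-block mapping are hypothetical stand-ins for this example; they do not show actual eMBP output or its mapping system.

#include <cstdio>
#include <thread>
#include <vector>

// Stand-in for the periodic work of one model block (hypothetical).
void control_block(int id) {
    std::printf("block %d executed\n", id);
}

int main() {
    // A block-to-core mapping table would normally be derived from the
    // hardware description; here it is simply one thread per block.
    std::vector<std::thread> workers;
    for (int block = 0; block < 4; ++block)
        workers.emplace_back(control_block, block);
    for (auto& w : workers)
        w.join();
    return 0;
}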


2011
Vol 3 (4)
pp. 53-70
Author(s):
Makoto Yoshida
Kazumine Kojima

Large-scale, loosely coupled PCs can be organized into clusters that form desktop computing grids by sharing their processing power; the power of the individual PCs, transaction distributions, network scale, network delays, and the code migration algorithm characterize the performance of such grids. This article describes design methodologies for workload management in distributed desktop computing grids. Based on code migration experiments, a transfer policy for computation was determined and several location policies were examined in simulation; the design methodologies for distributed desktop computing grids are derived from the simulation results. A language for distributed desktop computing is designed to realize these methodologies.
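
A minimal C++ sketch of the transfer-policy idea discussed above follows: a node decides to offload work when its local queue grows past a threshold, after which a location policy would select the destination. The Node struct and the threshold value are illustrative assumptions, not the article's actual design.

#include <cstddef>
#include <cstdio>
#include <queue>

// A grid node with a local job queue (illustrative model).
struct Node {
    std::queue<int> jobs;
    // Transfer policy: offload when more than `threshold` jobs are queued.
    bool should_transfer(std::size_t threshold) const {
        return jobs.size() > threshold;
    }
};

int main() {
    Node local;
    for (int job = 0; job < 8; ++job) {
        local.jobs.push(job);
        if (local.should_transfer(5)) {
            // A location policy would now pick a destination node
            // (e.g., least loaded or a random probe); here we only
            // report the transfer decision.
            std::printf("job %d: transfer to a remote node\n", job);
            local.jobs.pop();
        }
    }
    return 0;
}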


Author(s):
Toby Heyn
Hammad Mazhar
Arman Pazouki
Daniel Melanz
Andrew Seidl
...

This contribution discusses a multi-physics simulation engine, called Chrono, that relies heavily on parallel computing. Chrono aims to simulate the dynamics of systems containing rigid bodies, flexible (compliant) bodies, and fluid-rigid body interaction. To this end, it relies on five modules: equation formulation (modeling), equation solution (simulation), collision detection support, domain decomposition for parallel computing, and post-processing analysis with an emphasis on high-quality rendering/visualization. For each component we point out how parallel CPU and/or GPU computing has been leveraged to allow the simulation of applications with millions of degrees of freedom, such as rover dynamics on granular terrain, fluid-structure interaction problems, or large-scale flexible-body dynamics with friction and contact for applications in polymer analysis.
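
To illustrate the collision detection component, the C++ sketch below implements a uniform-grid broad phase of the kind commonly used in parallel engines: bodies are binned into cells, and only bodies sharing a cell become candidate pairs for the narrow phase. The 2D bodies, cell size, and hashing scheme are assumptions for this example and do not reproduce Chrono's implementation.

#include <cstdio>
#include <unordered_map>
#include <vector>

struct Body { double x, y; }; // 2D centers for brevity

int main() {
    const double cell = 1.0; // cell edge ~ largest body diameter (assumed)
    std::vector<Body> bodies = {{0.2, 0.3}, {0.4, 0.6}, {5.0, 5.0}};

    // Broad phase: bin each body into a grid cell keyed by packed indices.
    std::unordered_map<long long, std::vector<int>> grid;
    for (int i = 0; i < (int)bodies.size(); ++i) {
        long long cx = (long long)(bodies[i].x / cell);
        long long cy = (long long)(bodies[i].y / cell);
        grid[(cx << 32) | (cy & 0xffffffffLL)].push_back(i);
    }

    // Bodies sharing a cell become candidate pairs; a full version also
    // checks neighboring cells and then runs an exact narrow phase.
    for (const auto& entry : grid) {
        const std::vector<int>& ids = entry.second;
        for (std::size_t a = 0; a < ids.size(); ++a)
            for (std::size_t b = a + 1; b < ids.size(); ++b)
                std::printf("candidate pair (%d, %d)\n", ids[a], ids[b]);
    }
    return 0;
}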


2011
Vol 34 (4)
pp. 717-728
Author(s):
Zu-Ying LUO
Yin-He HAN
Guo-Xing ZHAO
Xian-Chuan YU
Ming-Quan ZHOU

2021
Vol 54 (3)
pp. 1-33
Author(s):
Blesson Varghese
Nan Wang
David Bermbach
Cheol-Ho Hong
Eyal De Lara
...

Edge computing is the next Internet frontier that will leverage computing resources located near users, sensors, and data stores to provide more responsive services. Therefore, it is envisioned that a large-scale, geographically dispersed, and resource-rich distributed system will emerge and play a key role in the future Internet. However, given the loosely coupled nature of such complex systems, their operational conditions are expected to change significantly over time. In this context, the performance characteristics of such systems will need to be captured rapidly, which is referred to as performance benchmarking, for application deployment, resource orchestration, and adaptive decision-making. Edge performance benchmarking is a nascent research avenue that has started gaining momentum over the past five years. This article first reviews articles published over the past three decades to trace the history of performance benchmarking from tightly coupled to loosely coupled systems. It then systematically classifies previous research to identify the system under test, techniques analyzed, and benchmark runtime in edge performance benchmarking.
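
As a minimal illustration of the measurement core of such benchmarks, the C++ sketch below times repeated invocations of a stand-in workload and reports median and 99th-percentile latency, the kind of metric edge benchmarking studies collect. The workload function and the chosen percentiles are assumptions for this example.

#include <algorithm>
#include <chrono>
#include <cstdio>
#include <vector>

volatile double sink; // prevents the compiler from eliding the workload

// Stand-in for a request to the system under test (hypothetical).
void workload() {
    double acc = 0.0;
    for (int i = 1; i <= 1000; ++i) acc += 1.0 / i;
    sink = acc;
}

int main() {
    const int runs = 1000;
    std::vector<double> us(runs);
    for (int r = 0; r < runs; ++r) {
        auto t0 = std::chrono::steady_clock::now();
        workload();
        auto t1 = std::chrono::steady_clock::now();
        us[r] = std::chrono::duration<double, std::micro>(t1 - t0).count();
    }
    std::sort(us.begin(), us.end());
    std::printf("median: %.1f us, p99: %.1f us\n",
                us[runs / 2], us[(runs * 99) / 100]);
    return 0;
}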


1993
Vol 04 (01)
pp. 137-141
Author(s):
KLAUS SCHILLING

A short account is presented of the early history, the intentions, and the development of large-scale parallel computing at the University of Wuppertal. It may serve as an illustration of how joint activities between computational science and computer science can be stimulated in a university environment.

