High Performance Computing in Parallel Electromagnetics Simulation Code Suite ACE3P

Author(s):  
Lixin Ge ◽  
Zenghai Li ◽  
Cho-Kuen Ng ◽  
Liling Xiao

2014 ◽
Author(s):  
Fabien Vivodtzev ◽  
Thierry Carrard

To guarantee the performance of complex systems through numerical simulation, CEA performs advanced data analysis and scientific visualization with open source software on High Performance Computing (HPC) platforms. The diversity of the physics under study produces results of growing complexity: large-scale, high-dimensional, multivariate data. Moreover, the HPC approach introduces another layer of complexity by spreading computation across thousands of remote cores accessed from sites located hundreds of kilometers away from the computing facility. This paper presents how CEA deploys and contributes to open source software to provide production-class visualization tools in a high performance computing context. Among the open source projects used at CEA, this presentation focuses on VisIt, VTK and ParaView.

In the first part we address specific issues encountered when deploying VisIt and ParaView for end users in a multi-site supercomputing facility. Several examples show how such tools can be adapted to take advantage of a parallel setting to explore large multi-block datasets or to perform remote visualization on material interface reconstructions of billions of cells. We then discuss the specific challenges of delivering ParaView's Catalyst in situ capabilities to end users.

In the second part we describe how CEA contributes to open source visualization software, and the associated software development strategy, through two recent development projects. The first is an integrated simulation workbench providing plugins for every step of a numerical simulation, whether run on a local or a remote computer. Embedded in an Eclipse RCP environment, VTK views let users perform interactive data input or preview meshes before running the simulation code; contributions have been made to VTK to integrate these technologies smoothly. The second covers recent developments at CEA to visualize and analyze results from ExaStamp, a parallel molecular dynamics simulation code dealing with molecular systems ranging from a few million up to a billion atoms. These developments include a GPU-intensive rendering method specialized for atoms and specific parallel algorithms to process molecular data sets.
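To make the remote parallel workflow concrete, the sketch below is a minimal pvbatch script of the kind that renders a large multi-block dataset entirely on the server side. The file name (simulation_output.vtm), the point array (density) and the isosurface value are illustrative assumptions, not details taken from the CEA deployment.

```python
# Minimal sketch of a server-side ParaView pipeline, assuming a
# hypothetical multi-block file 'simulation_output.vtm' carrying a
# point array 'density'; not the actual CEA pipeline.
from paraview.simple import *

reader = OpenDataFile("simulation_output.vtm")    # hypothetical dataset
merged = MergeBlocks(Input=reader)                # flatten the multi-block tree
contour = Contour(Input=merged,
                  ContourBy=["POINTS", "density"],
                  Isosurfaces=[0.5])              # assumed array and value

view = CreateRenderView()
view.ViewSize = [1920, 1080]
display = Show(contour, view)
ColorBy(display, ("POINTS", "density"))
ResetCamera(view)
Render(view)
SaveScreenshot("density_isosurface.png", view)
```

Launched as mpirun -np 128 pvbatch isosurface.py, the same script runs data-parallel: each rank reads and contours its share of the blocks, and images are composited on the server, which is what makes billion-cell datasets tractable without moving them to the user's desktop.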


Author(s):  
Anshu Dubey ◽  
Katie Antypas ◽  
Alan C Calder ◽  
Chris Daley ◽  
Bruce Fryxell ◽  
...  

2021 ◽  
Vol 35 (11) ◽  
pp. 1332-1333
Author(s):  
Lixin Ge ◽  
Zenghai Li ◽  
Cho-Kuen Ng ◽  
Liling Xiao

ACE3P (Advanced Computational Electromagnetics 3D Parallel) is a comprehensive suite of parallel finite-element codes developed at SLAC for multi-physics modeling of particle accelerators, running on massively parallel computer platforms for high-fidelity, high-accuracy simulation. ACE3P enables rapid virtual prototyping of accelerators and RF components for design, optimization and analysis. Advanced modeling capabilities have been facilitated by implementations of novel algorithms in its numerical solvers. This paper presents code performance on state-of-the-art high performance computing (HPC) platforms for large-scale RF modeling in accelerator applications. All the simulations were performed on the supercomputers at the National Energy Research Scientific Computing Center (NERSC).
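As a rough illustration of what running such a solver at scale on a Slurm-managed system like NERSC's involves, the sketch below prepares and submits a batch job. The solver name (omega3p, after ACE3P's Omega3P eigenmode module), the input deck and the job sizes are assumptions for illustration, not the documented ACE3P invocation.

```python
#!/usr/bin/env python3
# Sketch of submitting a large parallel eigenmode run via Slurm.
# 'omega3p' and 'cavity.omega3p' are placeholders for the solver
# binary and its input deck; node/task counts are arbitrary examples.
import subprocess
import textwrap

batch_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=cavity-eigenmodes
    #SBATCH --nodes=32
    #SBATCH --ntasks-per-node=64
    #SBATCH --time=02:00:00

    # Launch the (assumed) eigenmode solver on 2048 MPI ranks.
    srun omega3p cavity.omega3p
    """)

with open("run_omega3p.sh", "w") as f:
    f.write(batch_script)

# Hand the job to the scheduler.
subprocess.run(["sbatch", "run_omega3p.sh"], check=True)
```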


MRS Bulletin ◽  
1997 ◽  
Vol 22 (10) ◽  
pp. 5-6
Author(s):  
Horst D. Simon

Recent events in the high-performance computing industry have raised concern among scientists and the general public about a crisis, or a lack of leadership, in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s, Thinking Machines and Kendall Square Research, were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later.

During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten to run on the next generation of highly parallel architectures. Scientists not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise.

However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.


2001 ◽  
Author(s):  
Donald J. Fabozzi ◽  
Barney II ◽  
Fugler Blaise ◽  
Koligman Joe ◽  
Jackett Mike ◽  
...  
