Implementation and Validation of a Fully Mechanistic Fatigue Modeling Approach in a High Performance Computing Framework

Author(s):
Bipul Barua
Subhasish Mohanty
Saurindranath Majumdar
Krishnamurti Natesan

Abstract
Current approaches to fatigue evaluation of nuclear reactor components and other safety-critical structural systems rely on empirical S-N curve relations, which may carry large uncertainty. This uncertainty can be reduced with a more mechanistic approach. In the proposed mechanistic approach, material models are developed from the evolution of material behavior observed in uniaxial fatigue experiments, and those models are then implemented in 3D finite element (FE) calculations for fatigue evaluation under multiaxial loading. However, this approach requires simulating structures through thousands of fatigue cycles, which necessitates high performance computing (HPC) to determine the fatigue life of a large component or system within a reasonable time frame. Speeding up the FE simulation of large systems requires a higher number of cores, which is extremely costly when a commercial FE code is used; moreover, commercial software is not necessarily optimized for an HPC environment. In this work, an open source parallel computing solver running on a multi-core cluster is used to scale up the number of cores. The HPC-based mechanistic fatigue modeling framework is validated by evaluating the fatigue life of a pressurized water reactor surge line pipe under idealized loading cycles and comparing the simulation results with observations from uniaxial fatigue experiments on 316 stainless steel specimens.
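To make the abstract's contrast concrete, the sketch below compares the two routes on a toy problem: an empirical S-N (Basquin) life estimate versus a cycle-by-cycle damage sum in which the material's cyclic stress response evolves with accumulated cycles, echoing the evolution-based material models the abstract describes. This is a minimal Python illustration, not the authors' code: the Basquin constants, modulus, and hardening parameters are hypothetical placeholders, and the per-cycle loop stands in for the full 3D FE simulation run on HPC.

    # Minimal illustration only; all constants are hypothetical, not from the paper.

    def basquin_life(stress_amp_mpa):
        """Empirical route: cycles to failure from an S-N (Basquin) relation,
        sigma_a = sigma_f' * (2N)^b, with hypothetical 316 SS-like constants."""
        sigma_f, b = 900.0, -0.09  # MPa; fatigue strength coefficient and exponent
        return 0.5 * (stress_amp_mpa / sigma_f) ** (1.0 / b)

    def mechanistic_life(strain_amp, max_cycles=200_000):
        """Mechanistic route: accumulate damage cycle by cycle while the stress
        response evolves (saturating cyclic hardening), mimicking an
        evolution-based material model; the loop stands in for 3D FE."""
        e_mod = 195_000.0                     # MPa, hypothetical Young's modulus
        hardening, saturation = 0.002, 1.15   # hypothetical evolution parameters
        damage, factor = 0.0, 1.0
        for n in range(1, max_cycles + 1):
            factor = min(saturation, factor * (1.0 + hardening / n))
            stress_amp = factor * e_mod * strain_amp  # evolving cyclic stress
            damage += 1.0 / basquin_life(stress_amp)  # Miner-style increment
            if damage >= 1.0:
                return n                              # predicted failure cycle
        return max_cycles

    if __name__ == "__main__":
        print(f"Empirical life at 300 MPa: {basquin_life(300.0):,.0f} cycles")
        print(f"Evolving-model life at 0.15% strain: {mechanistic_life(0.0015):,} cycles")

In a real framework, either update could be driven per integration point inside a parallel FE solver; the abstract's point is that the evolving, mechanistic update narrows the uncertainty a fixed S-N relation carries.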

MRS Bulletin
1997
Vol. 22 (10)
pp. 5-6
Author(s):
Horst D. Simon

Recent events in the high-performance computing industry have raised concern among scientists and the general public about a crisis, or a lack of leadership, in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s, Thinking Machines and Kendall Square Research, were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later.

During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten to run on the next-generation, highly parallel architectures. Scientists not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise.

However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.


2001
Author(s):
Donald J. Fabozzi
Barney II
Fugler Blaise
Koligman Joe
Jackett Mike
...
