High performance computing through parallel processing

Author(s):  
Martin Barghoorn
2014 ◽  
Author(s):  
Mehdi Gilaki ◽  
Ilya Avdeev

In this study, we have investigated the feasibility of using the commercial explicit finite element code LS-DYNA on a massively parallel supercomputing cluster for accurate modeling of structural impact on battery cells. Physical and numerical lateral impact tests have been conducted on cylindrical cells using a flat rigid drop cart in a custom-built drop test apparatus. The main component of a cylindrical cell, the jellyroll, is a layered spiral structure consisting of thin layers of electrodes and separator. Two numerical approaches were considered: (1) a homogenized model of the cell and (2) a heterogeneous (full) 3-D cell model. In the first approach, the jellyroll was treated as a homogeneous material with an effective stress-strain curve obtained through experiments. In the second, the individual layers of anode, cathode, and separator were accounted for, leading to an extremely complex and computationally expensive finite element model. To overcome the limitations of desktop computers, high-performance computing (HPC) techniques on an HPC cluster were needed to obtain the results of transient simulations in a reasonable solution time. We compared two HPC methods for this model: shared memory parallel processing (SMP) and massively parallel processing (MPP). Both the homogeneous and the heterogeneous models were run in parallel simulations utilizing different numbers of computational nodes and cores, and their performance was compared. This work brings us one step closer to accurate modeling of structural impact on an entire battery pack consisting of thousands of cells.
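The abstract contrasts LS-DYNA's SMP and MPP execution modes without showing code; the sketch below is a generic, hypothetical C illustration of that distinction, not the authors' model or LS-DYNA itself. The same reduction is written once with shared-memory OpenMP threads (SMP style) and once with message-passing MPI ranks (MPP style), selected by the USE_MPI macro, which is assumed here purely for illustration.

/* Generic illustration (not LS-DYNA itself): the same vector reduction
 * expressed with shared-memory threads (SMP, OpenMP) and with
 * message-passing ranks (MPP, MPI). Compile the OpenMP variant with
 * `gcc -fopenmp`, the MPI variant with `mpicc -DUSE_MPI`. */
#include <stdio.h>
#include <stdlib.h>

#ifdef USE_MPI
#include <mpi.h>
#else
#include <omp.h>
#endif

#define N 1000000

int main(int argc, char **argv)
{
    double sum = 0.0;

#ifdef USE_MPI
    /* MPP style: each rank owns a slice of the problem and the partial
     * results are combined with an explicit reduction message. */
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = 0.0;
    for (int i = rank; i < N; i += size)
        local += (double)i * 1e-6;

    MPI_Reduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("MPP sum = %f\n", sum);
    MPI_Finalize();
#else
    /* SMP style: all threads share one address space; the runtime
     * splits the loop and combines partial sums automatically. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += (double)i * 1e-6;
    printf("SMP sum = %f\n", sum);
#endif
    return 0;
}

In SMP mode the threads share one address space and the runtime splits the loop, whereas in MPP mode each rank owns its portion of the data and partial results must be combined with an explicit message, which is what allows MPP runs to scale across many cluster nodes.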


2019 ◽  
pp. 28-31
Author(s):  
E. V. Glivenko ◽  
S. A. Sorokin ◽  
G. N. Petrova

The article is devoted to the design of high-performance computing devices for parallel processing of information. The problem of increasing the productivity of computing facilities by one or several orders of magnitude is considered using the example of the high-performance electronic computer M-10, created in the 1970s at the NIIVK. Whereas in a conventional computer the method of processing numbers is given by commands, in the M-10 the methods for processing a function were specified by operators taken from functional analysis. This opened up the possibility of parallel processing of an entire information line. Such systems came to be called "functional operator type machines". The main ideas presented in the article may be of interest to developers of specialized machines of the new generation, as well as to engineers involved in the creation of high-performance computing devices using computing platform technologies.
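As a loose modern analogy to the operator idea described above (the abstract does not give the M-10 instruction set, so this hypothetical C sketch is purely illustrative), one operator, here a discrete difference, is applied to an entire data line at once rather than issuing a command per number; OpenMP stands in for the parallel hardware.

/* Loose modern analogy to the "functional operator" idea (not the
 * actual M-10 instruction set): instead of a command per number, one
 * operator -- a discrete difference -- is applied to an entire data
 * line, which parallel hardware (here OpenMP threads) processes at once.
 * Compile with `gcc -fopenmp`. */
#include <stdio.h>
#include <omp.h>

#define LINE 8

int main(void)
{
    double x[LINE] = {1, 4, 9, 16, 25, 36, 49, 64};
    double dx[LINE] = {0};

    /* The whole line is handed to the operator in one step; each
     * element of the result can be computed independently. */
    #pragma omp parallel for
    for (int i = 1; i < LINE; i++)
        dx[i] = x[i] - x[i - 1];

    for (int i = 0; i < LINE; i++)
        printf("%g ", dx[i]);
    printf("\n");
    return 0;
}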


Author(s):  
Sendil Rangaswamy ◽  
Chris Ellis ◽  
Sergio Tafur ◽  
Ravi Palaniappan ◽  
Seetha Raghavan ◽  
...  

2013 ◽  
Vol 756-759 ◽  
pp. 2825-2828
Author(s):  
Xue Chun Wang ◽  
Quan Lu Zheng

Parallel computing is the parallel processing of data and information on a parallel computer system, often also known as high-performance computing or supercomputing. The content of parallel computing is introduced, and the realization of parallel computing and MPI parallel programming under the Linux environment is described. A parallel algorithm based on the divide-and-conquer method for solving the rectangle placement problem was designed and implemented with two processors. Finally, through performance testing and comparison, we verified the efficiency of parallel computing.
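The abstract does not reproduce the authors' algorithm, so the following is a minimal, hypothetical two-rank MPI skeleton of the divide-and-conquer pattern it describes: the candidate space is split between two processes, each evaluates its half with a placeholder scoring function (the real rectangle-placement criterion is not given), and the best result is combined with MPI_Reduce.

/* Minimal two-rank MPI divide-and-conquer skeleton, only loosely
 * modelled on the rectangle-placement solver described above; the
 * actual subdivision and scoring are not given in the abstract.
 * Build with `mpicc`, run with `mpirun -np 2 ./a.out`. */
#include <stdio.h>
#include <mpi.h>

#define CANDIDATES 1000

/* Placeholder objective: stands in for evaluating one candidate
 * placement of rectangles; the real criterion is problem-specific. */
static double evaluate(int candidate)
{
    return (double)((candidate * 37) % 101);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Divide: each rank takes a contiguous share of the candidates. */
    int chunk = CANDIDATES / size;
    int lo = rank * chunk;
    int hi = (rank == size - 1) ? CANDIDATES : lo + chunk;

    /* Conquer: each rank finds its local best placement. */
    double best = -1.0;
    for (int c = lo; c < hi; c++) {
        double s = evaluate(c);
        if (s > best)
            best = s;
    }

    /* Combine: take the global maximum on rank 0. */
    double global_best;
    MPI_Reduce(&best, &global_best, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("best placement score = %f\n", global_best);

    MPI_Finalize();
    return 0;
}

Launching with two ranks mirrors the two-processor setup mentioned in the abstract; the same skeleton scales to more ranks because each process derives its share of the work from its rank.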


2005 ◽  
Vol 47 (3) ◽  
Author(s):  
Rainer Hagenau ◽  
Carsten Albrecht ◽  
Erik Maehle ◽  
Andreas C. Döring

Summary: Parallel processing is well established in high-performance computing. Network processors, a new and emerging class of special-purpose processors, are targeted at exploiting parallelism to meet the requirements of data-plane processing at wire speed. The achievable level of parallelism is determined by decisions in the architecture design and by the characteristics of the data-plane applications executed. We discuss two basic approaches to parallel processing, namely pipelining and concurrency, which establish basic models for parallel network processor organization. The features and constraints of these models are studied. Against this background, some existing network processor architectures are reviewed and characterized with regard to their potential for parallel data-plane processing.
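To make the two organizational models concrete, the sketch below is a toy POSIX-threads illustration in C rather than anything from the reviewed architectures: in the concurrency model each thread executes the complete per-packet program on its share of packets, while in the pipeline model one thread parses and a second forwards, with packets handed between the stages through a one-slot buffer.

/* Toy contrast between the two organizations discussed above, using
 * POSIX threads rather than real network-processor hardware.
 * Compile with `gcc -pthread`. */
#include <stdio.h>
#include <pthread.h>

#define PACKETS 8

/* ---- Concurrency: every thread runs the full per-packet program. ---- */
static void *full_worker(void *arg)
{
    int id = *(int *)arg;
    for (int p = id; p < PACKETS; p += 2)   /* two workers share packets */
        printf("worker %d: parsed and forwarded packet %d\n", id, p);
    return NULL;
}

/* ---- Pipelining: stage 1 parses, stage 2 forwards, via a handoff. ---- */
static int slot = -1;                 /* -1 means the slot is empty */
static int done = 0;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;

static void *parse_stage(void *arg)
{
    (void)arg;
    for (int p = 0; p < PACKETS; p++) {
        pthread_mutex_lock(&m);
        while (slot != -1)
            pthread_cond_wait(&cv, &m);
        slot = p;                     /* hand the parsed packet on */
        pthread_cond_broadcast(&cv);
        pthread_mutex_unlock(&m);
    }
    pthread_mutex_lock(&m);
    done = 1;
    pthread_cond_broadcast(&cv);
    pthread_mutex_unlock(&m);
    return NULL;
}

static void *forward_stage(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&m);
        while (slot == -1 && !done)
            pthread_cond_wait(&cv, &m);
        if (slot == -1 && done) {
            pthread_mutex_unlock(&m);
            return NULL;
        }
        int p = slot;
        slot = -1;
        pthread_cond_broadcast(&cv);
        pthread_mutex_unlock(&m);
        printf("pipeline: forwarded packet %d\n", p);
    }
}

int main(void)
{
    pthread_t t[2];
    int id[2] = {0, 1};

    /* Concurrency model. */
    pthread_create(&t[0], NULL, full_worker, &id[0]);
    pthread_create(&t[1], NULL, full_worker, &id[1]);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);

    /* Pipeline model. */
    pthread_create(&t[0], NULL, parse_stage, NULL);
    pthread_create(&t[1], NULL, forward_stage, NULL);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);
    return 0;
}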


MRS Bulletin ◽  
1997 ◽  
Vol 22 (10) ◽  
pp. 5-6
Author(s):  
Horst D. Simon

Recent events in the high-performance computing industry have led scientists and the general public to worry about a crisis or a lack of leadership in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s, Thinking Machines and Kendall Square Research, were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later.

During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten if they were to run on the next-generation, highly parallel architecture. Scientists who are not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise.

However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.

