Simulation of clinical electrophysiology in 3D human atria: a high‐performance computing and high‐performance visualization application

2008 ◽  
Vol 20 (11) ◽  
pp. 1317-1328 ◽  
Author(s):  
Sanjay Kharche ◽  
Gunnar Seemann ◽  
Lee Margetts ◽  
Joanna Leng ◽  
Arun V. Holden ◽  
...  
2019 ◽  
Author(s):  
Vladimir Averbukh ◽  
Alexander Bersenev ◽  
Majid Forghani ◽  
...  

In this paper we describe a situation that required visualizing a very large number of non-trivial objects, namely the tasks running on a supercomputer. A direct method for visualizing these objects was hard to find, so we turned to additional information about an extra structure defined on them. This knowledge led us to group the objects into new, generalized ones. Because these artificial objects are few in number, they are easy to visualize, and they proved sufficient for understanding the original problem. This change of viewpoint was the key to the solution. As a whole, our work belongs to the area of performance visualization for high-performance computing, which attracts considerable attention from researchers worldwide; see, for example, [1-2].
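To make the grouping idea concrete, the sketch below aggregates a large set of supercomputer tasks into a handful of generalized objects keyed by a shared attribute. This is only an illustrative Python sketch under assumed names, not the authors' implementation: the Task fields (job_array_id, node_count, state) and the text-based rendering are assumptions standing in for whatever "extra structure" the real system exposes.

# Minimal sketch of the grouping idea: instead of drawing every task
# individually, tasks are collapsed by a shared attribute (here a
# hypothetical "job_array_id") into a small number of aggregate objects
# that are easy to visualize. All field names are assumptions.

from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Task:
    task_id: int
    job_array_id: int   # the "extra structure" used for grouping (assumed)
    node_count: int
    state: str          # e.g. "RUNNING" or "QUEUED"


def group_tasks(tasks):
    """Collapse many tasks into one aggregate record per job array."""
    groups = defaultdict(lambda: {"tasks": 0, "nodes": 0, "running": 0})
    for t in tasks:
        g = groups[t.job_array_id]
        g["tasks"] += 1
        g["nodes"] += t.node_count
        g["running"] += (t.state == "RUNNING")
    return groups


if __name__ == "__main__":
    # A toy workload: 10,000 tasks that belong to only five job arrays.
    tasks = [Task(i, i % 5, 1 + i % 8, "RUNNING" if i % 3 else "QUEUED")
             for i in range(10_000)]
    for array_id, g in sorted(group_tasks(tasks).items()):
        # Render each aggregate as a single text "glyph" instead of
        # thousands of individual marks.
        bar = "#" * (g["nodes"] // 500)
        print(f"array {array_id}: {g['tasks']:5d} tasks, "
              f"{g['nodes']:6d} nodes {bar}")

Running the script prints one summary line per job array, which is the kind of small, generalized picture the paragraph above argues is sufficient for reasoning about the original workload.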


MRS Bulletin ◽  
1997 ◽  
Vol 22 (10) ◽  
pp. 5-6
Author(s):  
Horst D. Simon

Recent events in the high-performance computing industry have concerned scientists and the general public regarding a crisis or a lack of leadership in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s, Thinking Machines and Kendall Square Research, were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later.

During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten if they were to run on the next-generation, highly parallel architecture. Scientists who are not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise.

However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.


2009 ◽  
Vol 29 (8) ◽  
pp. 2132-2135
Author(s):  
Wei PAN ◽  
Liao-yuan CHEN ◽  
Yong-ge LI ◽  
Jin-hua ZHANG ◽  
Li PAN ◽  
...  

2001 ◽  
Author(s):  
Donald J. Fabozzi ◽  
Barney II ◽  
Blaise Fugler ◽  
Joe Koligman ◽  
Mike Jackett ◽  
...  
