Tuning the ANSYS kernel LSOLVE for a parallel computer

Author(s):  
R.E. Hessel ◽  
M. Myszewski ◽  
G. Brussino ◽  
J.A. Swanson ◽  
L. Wagner
Author(s):  
Jose-Maria Carazo ◽  
I. Benavides ◽  
S. Marco ◽  
J.L. Carrascosa ◽  
E.L. Zapata

Obtaining the three-dimensional (3D) structure of negatively stained biological specimens at a resolution of, typically, 2-4 nm is becoming relatively common practice in an increasing number of laboratories. A combination of new conceptual approaches, new software tools, and faster computers has made this possible. However, these 3D reconstruction processes are quite computationally intensive, and the medium-term future is full of proposals entailing an even greater need for computing power. Up to now, all published 3D reconstructions in this field have been performed on conventional (sequential) computers, but new parallel computer architectures offer the potential of order-of-magnitude increases in computing power and should therefore be considered for the most computationally intensive tasks.

We have studied both shared-memory computer architectures, such as the BBN Butterfly, and local-memory architectures, mainly hypercubes implemented on transputers, where we have used the algorithmic mapping method proposed by Zapata et al. In this work we have developed the basic software tools needed to obtain a 3D reconstruction of non-crystalline specimens ("single particles") using the so-called Random Conical Tilt Series Method. We start from a pair of images of the same field, recorded first tilted (by ≃55°) and then untilted. It is then assumed that we can supply the system with the image of the particle we are looking for (ideally, a 2D average from a previous study) and with a matrix describing the geometrical relationship between the tilted and untilted fields (this step is currently accomplished by interactively marking a few pairs of corresponding features in the two fields). From there on, the 3D reconstruction process may run automatically.
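The geometric matrix relating the tilted and untilted fields can be recovered from the interactively marked point pairs by least squares. The sketch below is illustrative only (it is not the authors' code), and it assumes a 2D affine model; the function name `fit_affine` and the example numbers are hypothetical:

```python
import numpy as np

def fit_affine(untilted_pts, tilted_pts):
    """Fit a 2D affine transform mapping untilted -> tilted coordinates.

    untilted_pts, tilted_pts: (N, 2) arrays of corresponding features,
    N >= 3. Returns a 2x3 matrix M such that tilted ~= M @ [x, y, 1]^T.
    """
    src = np.asarray(untilted_pts, dtype=float)
    dst = np.asarray(tilted_pts, dtype=float)
    ones = np.ones((src.shape[0], 1))
    A = np.hstack([src, ones])                # (N, 3) design matrix
    # Solve A @ M.T = dst in the least-squares sense.
    M_T, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M_T.T                              # (2, 3)

# Hypothetical example: recover a known transform from 4 marked pairs.
# A tilt foreshortens one axis by roughly cos(55 deg) ~= 0.57.
M_true = np.array([[0.57, 0.0, 10.0],
                   [0.0,  1.0, -5.0]])
pts = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
tilted = np.hstack([pts, np.ones((4, 1))]) @ M_true.T
M_est = fit_affine(pts, tilted)
```

With three or more non-collinear pairs the system is determined, and extra pairs simply average out marking error, which matches the abstract's note that only "a few pairs" need be marked.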



Systems ◽  
2019 ◽  
Vol 7 (1) ◽  
p. 6
Author(s):  
Allen D. Parks ◽  
David J. Marchette

The Müller-Wichards model (MW) is an algebraic method that quantitatively estimates the performance of sequential and/or parallel computer applications. Because of category theory's expressive power and mathematical precision, a category-theoretic reformulation of MW, denoted CMW, is presented in this paper. CMW is numerically equivalent to MW and can be used to estimate the performance of any system that can be represented as numerical sequences of arithmetic, data movement, and delay processes. The fundamental symmetry group of CMW is introduced, and CMW's category-theoretic formalism is used to facilitate the identification of associated model invariants. The formalism also yields a natural approach to dividing systems into subsystems in a manner that preserves performance. Closed-form models are developed and studied statistically, and special-case closed-form models are used to abstractly quantify the effect of parallelization upon processing time vs. loading, as well as to establish a system performance stationary action principle.
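As a loose illustration only (this is not the MW or CMW formalism itself), a system built from arithmetic, data-movement, and delay processes can be modeled as a sequence of (work, rate) pairs; composing them in sequence sums their times, and splitting a parallelizable process across processors makes the effect of parallelization on processing time concrete. All names and numbers below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Process:
    work: float   # operations (or bytes) to perform
    rate: float   # operations (or bytes) per second

    @property
    def time(self) -> float:
        return self.work / self.rate

def sequential_time(processes) -> float:
    """Total time of a system composed as a sequence of processes."""
    return sum(p.time for p in processes)

def parallelize(proc: Process, n: int) -> Process:
    """Split one process evenly across n processors (ideal case)."""
    return Process(work=proc.work / n, rate=proc.rate)

# A toy system: arithmetic, data movement, and a fixed delay.
arith = Process(work=1e9, rate=1e8)   # 10 s of computation
move  = Process(work=4e8, rate=2e8)   #  2 s of data movement
delay = Process(work=1.0, rate=1.0)   #  1 s latency, not parallelizable

t_serial = sequential_time([arith, move, delay])
t_par4   = sequential_time([parallelize(arith, 4), move, delay])
```

Here `t_serial` is 13 s and `t_par4` is 5.5 s: only the arithmetic process shrinks under parallelization, while data movement and delay persist, which is the qualitative time-vs-loading behavior the closed-form models quantify.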

