A study of equation solvers for linear and non-linear finite element analysis on parallel processing computers

Author(s):  
BRIAN WATSON ◽  
MANOHAR KAMAT


Author(s):  
Sambhu Prasad Panda ◽  
Madhusmita Sahu ◽  
Umesh Prasad Rout ◽  
Surendra Kumar Nanda

In this paper we present the equivalence between the operations involved in the DES and AES algorithms and the operations of cellular automata. We identify all the permutation and substitution operations in DES and AES and compare them with cellular automata rules. We find that the permutation operations in DES and AES are equivalent to linear cellular automata rules, which provide the diffusion property of cryptography, whereas the substitution operations are equivalent to non-linear cellular automata rules, which provide the confusion property. Hence, instead of the operations used in DES and AES, linear and non-linear cellular automata rules can be applied in cryptography for better security and parallel processing.
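As a rough illustration of the linear/non-linear distinction drawn above (not the specific mapping developed in the paper), the sketch below steps an elementary cellular automaton under rule 90, a linear XOR-based rule of the kind associated with diffusion, and rule 30, a non-linear rule of the kind associated with confusion. The rule numbers and the toy bit state are illustrative assumptions.

```python
# Minimal sketch: one step of an elementary cellular automaton over a bit array.
# Rule 90 (new bit = left XOR right) is linear, so each input bit spreads across
# the state (diffusion); rule 30 (new bit = left XOR (centre OR right)) is
# non-linear, obscuring the relation between input and output bits (confusion).
def ca_step(bits, rule):
    n = len(bits)
    out = []
    for i in range(n):
        left, centre, right = bits[(i - 1) % n], bits[i], bits[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right  # value 0..7
        out.append((rule >> neighbourhood) & 1)              # look up bit in rule table
    return out

state = [0, 1, 1, 0, 1, 0, 0, 1]
print(ca_step(state, 90))   # linear rule: XOR of neighbours -> diffusion
print(ca_step(state, 30))   # non-linear rule -> confusion
```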


2020 ◽  
Vol 221 (2) ◽  
pp. 905-927
Author(s):  
Ioannis E Venetis ◽  
Vasso Saltogianni ◽  
Stathis Stiros ◽  
Efstratios Gallopoulos

SUMMARY Exhaustive search in regular grids is a traditional and effective method for inversion, that is, the numerical solution of systems of non-linear equations which cannot be solved using formal algebraic techniques. However, this technique is effective only for very few (3–4) variables and is slow. Recently, the first limitation was largely overcome by the new TOPological INVersion (TOPINV) algorithm, which has been used to invert systems with up to 18, or even more, unknown variables. The novelty of this algorithm is that it is not based on the principle of the minimum of the mean misfit (cost function) between observations and model predictions used by most inversion techniques. Instead, it tests for each gridpoint whether the misfit of each observation lies within a specified uncertainty interval, and stores clusters of 'successful' gridpoints in matrix form. These clusters (ensembles, sets) of gridpoints are then tested against certain criteria and used to compute one or more optimal statistical solutions. The algorithm is efficient for highly non-linear problems with high measurement uncertainties (low signal-to-noise ratio, SNR) and poor distribution of observations, that is, problems leading to complicated 3-D mean misfit surfaces without dominant peaks, but it is slow on common computers. To overcome this limitation, we used GPUs, which permit parallel processing on common computers, but faced another computational problem: GPU parallel processing supports only up to three grid dimensions. To solve this problem, we used CUDA programming and optimized the distribution of the computational load across all GPU cores. This yields speedups of up to 100x relative to common CPU processing, as derived from comparative tests with synthetic data for two typical geophysical inversion problems with up to 18 unknown variables: Mogi magma source modelling and elastic dislocation modelling of seismic faults. This impressive speedup makes the GPU/CUDA implementation of TOPINV practical even for low-latency solution of certain geophysical problems. The speedup also permitted us to investigate the performance of the algorithm in relation to the density of the adopted grids. We focused on a typical elastic dislocation problem under unfavourable conditions (poor observation geometry, data with low SNR) and on synthetic observations with noise, so that the difference of each solution from the 'true'/reference value was known (accuracy-based approach). Application of the algorithm revealed stable, accurate and precise solutions, with quality increasing with grid density. Solution defects (bias), mainly produced by very coarse grids, can be identified through specific diagnostic criteria, which dictate finer search grids.
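The gridpoint test described above can be sketched in a few lines. The snippet below is a minimal, assumed illustration of the principle (keep every gridpoint whose misfits all fall within their uncertainty intervals, then take statistics over the resulting cluster); the toy two-parameter forward model, the observation values and the grid bounds are invented for demonstration and are not the authors' implementation, which targets much higher-dimensional grids on GPUs.

```python
import numpy as np

# Illustrative forward model with two unknowns (a, b) and three 'observations'.
def forward_model(params):
    a, b = params
    return np.array([a * np.exp(-b), a * b, a + b**2])

observations = np.array([1.2, 2.0, 3.1])
sigma = np.array([0.3, 0.3, 0.3])           # one uncertainty interval per observation

# Regular search grid over the two unknowns.
a_values = np.linspace(0.0, 4.0, 81)
b_values = np.linspace(0.0, 2.0, 81)

successful = []                              # cluster of gridpoints passing the test
for a in a_values:
    for b in b_values:
        misfit = np.abs(forward_model((a, b)) - observations)
        if np.all(misfit <= sigma):          # every misfit within its uncertainty interval
            successful.append((a, b))

# One simple 'optimal statistical solution': mean and spread of the cluster.
cluster = np.array(successful)
if cluster.size:
    print("cluster size:", len(cluster))
    print("mean solution:", cluster.mean(axis=0))
    print("spread (std):", cluster.std(axis=0))
```

In a GPU implementation, the nested loops over grid axes would be flattened into a single linear index and distributed across threads, which is one way around the three-dimension limit of the thread grid mentioned above.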


1967 ◽  
Vol 28 ◽  
pp. 105-176
Author(s):  
Robert F. Christy

(Ed. note: The custom in these Symposia has been to have a summary-introductory presentation which lasts about 1 to 1.5 hours, during which discussion from the floor is minor and usually directed at technical clarification. The remainder of the session is then devoted to discussion of the whole subject, oriented around the summary-introduction. The preceding session, I-A, at Nice, followed this pattern. Christy suggested that we might experiment in his presentation with a much more informal approach, allowing considerable discussion of the points raised in the summary-introduction during its presentation, with perhaps the entire morning spent in this way, reserving the afternoon session for discussion only. At Varenna, in the Fourth Symposium, several of the summary-introductory papers presented from the astronomical viewpoint had been so full of concepts unfamiliar to a number of the aerodynamicists-physicists present that a major part of the following discussion session had been devoted to simply clarifying concepts and then repeating a considerable amount of what had been summarized. So, always looking for alternatives which help to increase the understanding between the different disciplines by introducing clarification of concepts as expeditiously as possible, we tried Christy's suggestion. Thus you will find the pattern of the following different from that in session I-A. I am much indebted to Christy for extensive collaboration in editing the resulting combined presentation and discussion. As always, however, I have taken upon myself the responsibility for the final editing, and so all shortcomings are on my head.)

