parallel formulation
Recently Published Documents


TOTAL DOCUMENTS

20
(FIVE YEARS 1)

H-INDEX

7
(FIVE YEARS 0)

2021 ◽  
Vol 8 (4) ◽  
pp. 1-19
Author(s):  
Xuejiao Kang ◽  
David F. Gleich ◽  
Ahmed Sameh ◽  
Ananth Grama

As parallel and distributed systems scale, fault tolerance is an increasingly important problem—particularly on systems with limited I/O capacity and bandwidth. Erasure coded computations address this problem by augmenting a given problem instance with redundant data and then solving the augmented problem in a fault-oblivious manner in a faulty parallel environment. In the event of faults, a computationally inexpensive procedure is used to compute the true solution from a potentially fault-prone solution. These techniques are significantly more efficient than conventional solutions to the fault tolerance problem. In this article, we show how we can minimize, to optimality, the overhead associated with our problem augmentation techniques for linear system solvers. Specifically, we present a technique that adaptively augments the problem only when faults are detected. At any point in execution, we only solve a system whose size is identical to the original input system. This has several advantages in terms of maintaining the size and conditioning of the system, as well as in only adding the minimal amount of computation needed to tolerate observed faults. We present, in detail, the augmentation process, the parallel formulation, and a performance evaluation of our technique. Specifically, we show that the proposed adaptive fault tolerance mechanism has minimal overhead in terms of FLOP counts with respect to the original solver executing in a non-faulty environment, has good convergence properties, and yields excellent parallel performance. We also demonstrate that our approach significantly outperforms an optimized application-level checkpointing scheme that only checkpoints needed data structures.
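A minimal sketch of the erasure-coding idea behind this line of work: augment A x = b with redundant coded equations so the system remains solvable after some equations are lost to faults. The random coding matrix `E` and the least-squares recovery are illustrative assumptions, not the authors' adaptive scheme (which augments only when faults are actually detected):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test system
x_true = rng.standard_normal(n)
b = A @ x_true

E = rng.standard_normal((k, n))        # assumed coding matrix: k redundant equations
A_aug = np.vstack([A, E @ A])          # augmented (n+k) x n system
b_aug = np.concatenate([b, E @ b])

lost = [1, 4]                          # simulate two faulty (erased) equations
keep = [i for i in range(n + k) if i not in lost]

# The surviving equations still determine x; recover it by least squares.
x_rec, *_ = np.linalg.lstsq(A_aug[keep], b_aug[keep], rcond=None)
```

Because the coded rows are generic linear combinations of the original equations, any n of the n+k surviving rows are (with probability 1) full rank, so `x_rec` matches `x_true`.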


2017 ◽  
Author(s):  
Molly Lewis

The claim of linguistic relativity, broadly, contains two components: (i) languages differ in the way they divide the world into meaningful units, and (ii) the way a language divides the world into meaningful units influences habitual thought. The conjunction of these two components yields the central claim of linguistic relativity: Speakers of different languages live in different habitual thought worlds. Theorists have addressed linguistic relativity in two different domains of linguistic meaning: symbolic and indexical. Roughly, these two domains can be understood as context-independent and context-dependent meaning, respectively. Benjamin Lee Whorf is the central figure in the symbolic domain of linguistic relativity (“Whorfian linguistic relativity”). Michael Silverstein is responsible for the extension of linguistic relativity into the indexical domain (“Neo-Whorfian linguistic relativity”). While theoretically parallel, the two theories are framed in terms that mask their underlying commonality. The present project is thus to formulate both theories within a single coherent framework. I present a framework that rests on a semiotic principle proposed by Silverstein in the Neo-Whorfian domain: the principle of unavoidable referentiality. This principle states that speakers will be less susceptible to the ‘thought grooves’ of their language when the form of the meaning at issue coincides with a form with denotational value. By framing linguistic relativity in the symbolic realm in terms of metaphor, I argue that the principle of unavoidable referentiality can be applied to Whorfian theory. A formulation of the two theories within a common framework highlights their inter-relationship and, ultimately, allows both to be situated within a single theory of culture.


2015 ◽  
Vol 35 (8) ◽  
pp. 928-942 ◽  
Author(s):  
Taekhee Lee ◽  
Young J. Kim

We present new parallel algorithms that solve continuous-state partially observable Markov decision process (POMDP) problems using the GPU (gPOMDP) and a hybrid of the GPU and CPU (hPOMDP). We choose the Monte Carlo value iteration (MCVI) method as our base algorithm and parallelize this algorithm using the multi-level parallel formulation of MCVI. For each parallel level, we propose efficient algorithms to utilize the massive data parallelism available on modern GPUs. Our GPU-based method uses two workload distribution techniques, compute/data interleaving and workload balancing, in order to obtain the maximum parallel performance at the highest level. We also present a CPU–GPU hybrid method that takes advantage of both CPU and GPU parallelism in order to solve highly complex POMDP planning problems. The CPU is responsible for data preparation, while the GPU performs Monte Carlo simulations; these operations are performed concurrently using the compute/data overlap technique between the CPU and GPU. To the best of the authors’ knowledge, our algorithms are the first parallel algorithms that efficiently execute POMDP planning in a massively parallel fashion utilizing the GPU or a hybrid of the GPU and CPU. Our algorithms outperform the existing CPU-based algorithm by a factor of 75–99 based on the chosen benchmark.
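The innermost parallel level of MCVI — evaluating many independent Monte Carlo rollouts per candidate action — can be sketched as follows. This is an illustrative stand-in using a thread pool rather than the authors' GPU kernels, and the toy `rollout` reward model is assumed:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def rollout(seed, action):
    """One toy rollout: noisy reward whose mean depends on the action (assumed model)."""
    rng = random.Random(seed)
    return action + rng.gauss(0.0, 0.1)

def mc_value(action, n_rollouts=1000):
    """Estimate an action's value by averaging independent rollouts.
    On a GPU each rollout would map to one thread; a thread pool stands in here."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        rewards = list(pool.map(lambda s: rollout(s, action), range(n_rollouts)))
    return sum(rewards) / len(rewards)

best = max([0, 1], key=mc_value)       # value iteration keeps the best-valued action
```

Because every rollout is independent, this level exposes exactly the massive data parallelism the abstract describes.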


2011 ◽  
Vol 35 (2) ◽  
pp. 207-214 ◽  
Author(s):  
Stefano Longo ◽  
Eric C. Kerrigan ◽  
Keck Voon Ling ◽  
George A. Constantinides

Author(s):  
Janusz Kogut ◽  
Henryk Ciurej

A vehicle-track-soil dynamic interaction problem in sequential and parallel formulation

Some problems regarding numerical modeling of predicted vibrations excited by railway traffic are discussed. The model formulation, in the field of structural mechanics, comprises a vehicle, a track (often in a tunnel) and the soil. Time-consuming computations are needed to update large matrices at every discrete time step. First, a sequential Matlab code is generated. The formulation is then modified to use grid computing, from which a significant reduction in computational time is expected.
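The kind of per-step parallelism described above can be sketched as follows: the time loop stays sequential, but the update of independent row blocks of the large system matrix is distributed across workers (a thread pool standing in for grid nodes; the diagonal update formula is a placeholder, not the paper's model):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

n, n_blocks = 400, 4
K = np.eye(n)                              # placeholder global track-soil matrix
blocks = np.array_split(np.arange(n), n_blocks)

def update_block(rows, t):
    """Refresh one row block for time step t (toy update, assumed)."""
    K[rows, rows] = 1.0 + 0.01 * t         # e.g. vehicle position changes coupling terms

for t in range(10):                        # time stepping is inherently sequential
    with ThreadPoolExecutor(max_workers=n_blocks) as pool:
        list(pool.map(lambda rows: update_block(rows, t), blocks))
```

The speedup comes from the matrix-update work inside each step, not from parallelizing across time steps.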


Author(s):  
Sukhpreet S. Sandhu ◽  
Ramdev Kanapady ◽  
Kumar K. Tamma

In this paper a highly scalable parallel formulation of the primal-dual technique is presented for index-3 constrained flexible multi-body dynamics systems. The key features of the primal-dual approach are constraint preservation, preservation of the original order of accuracy of the time integration operators employed, and faster convergence rates of nonlinear iterations for the solution of flexible multi-body dynamical systems. In addition, this technique not only preserves the underlying properties of time integration operators for ordinary differential equations, but also eliminates the need for index reduction, constraint stabilization and regularization approaches. The key features of the parallel formulation of the rigid and flexible modeling and simulation technology are capabilities such as adaptive high/low-fidelity modeling, useful from the initial design concept stage through the intermediate and final design stages in a single seamless simulation environment. The examples considered illustrate the capabilities and scalability of the proposed high performance computing (HPC) approach for large-scale simulations.
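For reference, the index-3 constrained equations of motion that such primal-dual techniques target take the standard differential-algebraic form (standard notation, a sketch rather than the paper's exact formulation):

```latex
M(q)\,\ddot{q} = f(q,\dot{q},t) - G(q)^{\mathsf{T}}\lambda, \qquad
g(q) = 0, \qquad G(q) = \frac{\partial g}{\partial q},
```

where $q$ are generalized coordinates, $\lambda$ the Lagrange multipliers enforcing the position-level constraints $g(q)=0$. Differentiating $g$ twice would lower the index but invites constraint drift; the approach described above avoids such index reduction and stabilization entirely.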

