On parallel programming methodology in GHC

Author(s):  
Kazuko Takahashi ◽  
Tadashi Kanamori


2005 ◽  
Vol 15 (3) ◽  
pp. 431-475 ◽  
Author(s):  
RITA LOOGEN ◽  
YOLANDA ORTEGA-MALLÉN ◽  
RICARDO PEÑA-MARÍ

Eden extends the non-strict functional language Haskell with constructs to control parallel evaluation of processes. Although processes are defined explicitly, communication and synchronisation issues are handled in a way transparent to the programmer. In order to offer effective support for parallel evaluation, Eden's coordination constructs override the inherently sequential demand-driven (lazy) evaluation strategy of its computation language Haskell. Eden is a general-purpose parallel functional language suitable for developing sophisticated skeletons – which simplify parallel programming immensely – as well as for exploiting more irregular parallelism that cannot easily be captured by a predefined skeleton. The paper gives a comprehensive description of Eden, its semantics, its skeleton-based programming methodology – which is applied in three case studies – its implementation, and its performance. Furthermore, it points to many additional results that have been achieved in the context of the Eden project.
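A minimal sketch of the coordination constructs the abstract describes, assuming the Eden-enabled GHC runtime and the module name Control.Parallel.Eden used by the Eden distribution; the skeleton name parMapSketch is illustrative and not part of the library.

```haskell
-- Sketch of Eden's process abstraction and instantiation operator.
-- Assumptions: the Eden-enabled GHC and the module Control.Parallel.Eden
-- of the Eden distribution; parMapSketch is an illustrative name only.
module ParMapSketch where

import Control.Parallel.Eden (Trans, process, ( # ))

-- 'process f' turns an ordinary function into a process abstraction;
-- '( # )' instantiates it on another processing element, with all
-- communication and synchronisation handled by the runtime system.
parMapSketch :: (Trans a, Trans b) => (a -> b) -> [a] -> [b]
parMapSketch f xs = [ process f # x | x <- xs ]

-- A production skeleton would additionally force the spine of the result
-- list so that all processes are created eagerly rather than on demand,
-- overriding Haskell's lazy evaluation as the abstract explains.
```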


2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Claudia Misale ◽  
Giulio Ferrero ◽  
Massimo Torquati ◽  
Marco Aldinucci

In this paper, we advocate a high-level programming methodology for next-generation sequencing (NGS) alignment tools, targeting both productivity and absolute performance. We analyse the problem of parallel alignment and review the parallelisation strategies of the most popular alignment tools, which can all be abstracted to a single parallel paradigm. We compare these tools with their ports onto the FastFlow pattern-based programming framework, which provides programmers with high-level parallel patterns. With a high-level approach, programmers are freed from the complex aspects of parallel programming, such as synchronisation protocols and task scheduling, and gain greater scope for seamless performance tuning. We show several use cases in which a high-level approach to parallelising NGS tools yields comparable or even better absolute performance on all of the datasets used.
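FastFlow itself is a C++ framework; the Haskell sketch below is not FastFlow code but only illustrates the common paradigm the abstract refers to, a map/farm over independent chunks of reads in which a high-level pattern hides synchronisation and task scheduling. It uses Control.Parallel.Strategies from the parallel package; ShortRead, Alignment and alignChunk are placeholder names, not part of any alignment tool.

```haskell
-- Illustration of the pattern-based idea only (not FastFlow, which is C++):
-- each chunk of reads is an independent task, and the high-level pattern
-- takes care of scheduling and synchronisation.
module AlignFarmSketch where

import Control.Parallel.Strategies (parMap, rdeepseq)

type ShortRead = String   -- placeholder for a sequencing read
type Alignment = String   -- placeholder for an alignment record

-- Hypothetical sequential kernel: align one chunk of reads.
alignChunk :: [ShortRead] -> [Alignment]
alignChunk = map (\r -> "aligned:" ++ r)

-- The high-level pattern: a parallel map over chunks; no explicit locks,
-- threads, or task queues appear in the user code.
alignAll :: [[ShortRead]] -> [[Alignment]]
alignAll = parMap rdeepseq alignChunk
```

Compiled with GHC's -threaded option and run with +RTS -N, the chunks are evaluated in parallel without any explicit locking or scheduling code, which is the kind of separation of concerns the abstract attributes to pattern-based frameworks.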


2001 ◽  
Vol 9 (2-3) ◽  
pp. 143-161 ◽  
Author(s):  
Insung Park ◽  
Michael J. Voss ◽  
Seon Wook Kim ◽  
Rudolf Eigenmann

We present our effort to provide a comprehensive parallel programming environment for the OpenMP parallel directive language. This environment includes a parallel programming methodology for the OpenMP programming model and a set of tools (Ursa Minor and InterPol) that support this methodology. Our toolset provides automated and interactive assistance to parallel programmers in the time-consuming tasks of the proposed methodology. The features provided by our tools include performance and program structure visualization, interactive optimization, support for performance modeling, and performance advising for finding and correcting performance problems. The presented evaluation demonstrates that our environment offers significant support in general parallel tuning efforts and that the toolset facilitates many common tasks in OpenMP parallel programming in an efficient manner.


2011 ◽  
Author(s):  
Hahn Kim ◽  
Julia Mullen ◽  
Jeremy Kepner