MATLAB and Parallel Computing

2012 ◽  
Vol 17 (4) ◽  
pp. 207-216 ◽  
Author(s):  
Magdalena Szymczyk ◽  
Piotr Szymczyk

Abstract: MATLAB is a technical computing language used in a variety of fields, such as control systems, image and signal processing, visualization, and financial process simulation, in an easy-to-use environment. MATLAB offers "toolboxes", specialized libraries for a variety of scientific domains, and a simplified interface to high-performance libraries (LAPACK, BLAS, and FFTW, among others). MATLAB has now been enriched with parallel computing capabilities through the Parallel Computing Toolbox™ and MATLAB Distributed Computing Server™. In this article we present some of the key features of MATLAB parallel applications, focusing on the use of GPU processors for image processing.
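
The GPU-oriented image processing mentioned above follows a simple pattern: move the data to the GPU, run the operation there, and copy the result back to the host. The sketch below illustrates that pattern in Python with CuPy; it is not the authors' MATLAB code, and the function name and parameters are assumptions made only for illustration.

```python
# Illustrative sketch only: a Python/CuPy analogue of GPU-accelerated image
# filtering (not the authors' MATLAB/Parallel Computing Toolbox code).
import numpy as np
import cupy as cp
from cupyx.scipy import ndimage as gpu_ndimage

def gpu_gaussian_blur(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Blur a 2-D image on the GPU and return the result on the host."""
    d_image = cp.asarray(image)                                     # host -> device transfer
    d_blurred = gpu_ndimage.gaussian_filter(d_image, sigma=sigma)   # filter runs on the GPU
    return cp.asnumpy(d_blurred)                                    # device -> host transfer

if __name__ == "__main__":
    img = np.random.rand(1024, 1024).astype(np.float32)
    print(gpu_gaussian_blur(img).shape)
```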

2001 ◽  
Vol 9 (4) ◽  
pp. 211-222 ◽  
Author(s):  
Marian Bubak ◽  
Dariusz Żbik ◽  
Dick van Albada ◽  
Kamil Iskra ◽  
Peter Sloot

Efficient load balancing is essential for parallel distributed computing. Many parallel computing environments use TCP or UDP through the socket interface as a communication mechanism. This paper presents the design and development of a prototype implementation of a network interface that can preserve communication between processes during process migration. This new communication library is a substitute for the well-known socket interface. It is implemented in user space, it is portable, and no modifications of user applications are required. TCP/IP is used for internal communication, which guarantees relatively high performance and portability.
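
The core idea of such a migration-aware communication layer can be pictured with a small sketch. The Python class below is not the authors' library (which replaces the socket interface transparently, without application changes); it only illustrates, under assumed names such as MigratableSocket and update_peer, how a wrapper can re-establish a TCP connection when the peer process moves to a new location.

```python
# A minimal sketch (not the authors' library) of a migration-tolerant socket
# wrapper: application code keeps using a socket-like object, while the
# wrapper reconnects the underlying TCP stream if the peer has migrated.
import socket

class MigratableSocket:
    """Socket-like wrapper that reconnects when the peer's location changes."""

    def __init__(self, host: str, port: int):
        self._addr = (host, port)
        self._sock = socket.create_connection(self._addr)

    def update_peer(self, host: str, port: int) -> None:
        """Called when the peer migrates; reconnect to its new location."""
        self._addr = (host, port)
        self._sock.close()
        self._sock = socket.create_connection(self._addr)

    def send(self, data: bytes) -> None:
        try:
            self._sock.sendall(data)
        except OSError:
            # Connection lost (e.g. during migration): reconnect and retry once.
            self._sock = socket.create_connection(self._addr)
            self._sock.sendall(data)

    def recv(self, size: int = 4096) -> bytes:
        return self._sock.recv(size)

    def close(self) -> None:
        self._sock.close()
```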


Author(s):  
Gennady Shvachych ◽  
Nina Rizun ◽  
Olena Kholod ◽  
Olena Ivaschenko ◽  
Volodymyr Busygin

The chapter analyzes the ways in which high-performance computing systems are developing. It is shown that a real breakthrough in mastering parallel computing technologies can be achieved by developing an additional (in fact, foundational) level in the hierarchy of hardware capacities of multiprocessor computing systems with MPP architecture: personal computing clusters. Thus, it is proposed to create the foundation of the hardware pyramid of parallel computing technology in the form of personal computing clusters. It is shown that multiprocessor information processing and control systems underpin control systems in many industries: the space industry, aviation, air defense and anti-missile defense, and many others. However, the production of multiprocessor information processing and control systems is hampered by high costs at all stages. As a result, the total cost often makes such a system an inaccessible tool. The use of modern multiprocessor cluster systems would reduce its production costs.


Author(s):  
Sergio Nesmachnow ◽  
Gabriel Usera ◽  
Francisco Brasileiro

This article describes the Digi-Clima Grid project, whose main goals are to design and implement semi-automatic techniques for digitalizing and recovering historical climate records by applying parallel computing techniques over distributed computing infrastructures. The specific tool developed for image processing is described, and the implementation over grid and cloud infrastructures is reported. An experimental analysis over institutional and volunteer-based grid/cloud distributed systems demonstrates that the proposed approach is an efficient tool for recovering historical climate data. The parallel implementations allow the processing load to be distributed, achieving accurate speedup values.
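
The distributed execution described above follows a bag-of-tasks pattern: each scanned record is processed independently, so the load can be spread across whatever workers are available. The sketch below is a local multiprocessing analogue of that pattern, not the project's grid/cloud middleware; process_record and the input directory are hypothetical placeholders.

```python
# Illustrative bag-of-tasks sketch: process independent record images in
# parallel on one multi-core machine (a stand-in for grid/cloud workers).
from multiprocessing import Pool
from pathlib import Path

def process_record(image_path: Path) -> tuple[str, int]:
    """Placeholder for the per-image digitalization step (trace extraction, etc.)."""
    data = image_path.read_bytes()
    return image_path.name, len(data)   # stand-in "result" for the real extracted values

if __name__ == "__main__":
    images = sorted(Path("scanned_records").glob("*.png"))   # hypothetical input directory
    with Pool() as pool:                                      # one worker per CPU core
        for name, size in pool.imap_unordered(process_record, images):
            print(f"{name}: processed {size} bytes")
```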


Author(s):  
Hiroshi Yamamoto ◽  
Yasufumi Nagai ◽  
Shinichi Kimura ◽  
Hiroshi Takahashi ◽  
Satoko Mizumoto ◽  
...  

Author(s):  
Mark Endrei ◽  
Chao Jin ◽  
Minh Ngoc Dinh ◽  
David Abramson ◽  
Heidi Poxon ◽  
...  

Rising power costs and constraints are driving a growing focus on the energy efficiency of high performance computing systems. The unique characteristics of a particular system and workload and their effect on performance and energy efficiency are typically difficult for application users to assess and to control. Settings for optimum performance and energy efficiency can also diverge, so we need to identify trade-off options that guide a suitable balance between energy use and performance. We present statistical and machine learning models that only require a small number of runs to make accurate Pareto-optimal trade-off predictions using parameters that users can control. We study model training and validation using several parallel kernels and more complex workloads, including Algebraic Multigrid (AMG), the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), and Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics (LULESH). We demonstrate that we can train the models using as few as 12 runs, with prediction error of less than 10%. Our AMG results identify trade-off options that provide up to 45% improvement in energy efficiency for around 10% performance loss. We reduce the sample measurement time required for AMG by 90%, from 13 h to 74 min.
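
The workflow behind these results can be pictured as: measure a small number of runs, fit models from user-controllable parameters (such as thread count and CPU frequency) to runtime and energy, predict both over the full configuration space, and report the Pareto-optimal trade-off options. The sketch below is a minimal stand-in with made-up numbers and a plain linear regressor; it is not the authors' statistical and machine learning models.

```python
# Minimal sketch of Pareto trade-off prediction from a few measured runs.
# Training data and the linear model are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training runs: [threads, frequency_GHz] -> runtime (s), energy (J).
X = np.array([[4, 2.0], [8, 2.0], [16, 2.0], [4, 3.0], [8, 3.0], [16, 3.0]], dtype=float)
runtime = np.array([120.0, 70.0, 45.0, 90.0, 55.0, 40.0])
energy = np.array([9000.0, 7500.0, 7800.0, 10500.0, 8200.0, 9100.0])

time_model = LinearRegression().fit(X, runtime)
energy_model = LinearRegression().fit(X, energy)

# Predict runtime and energy over the full grid of candidate configurations.
grid = np.array([[t, f] for t in (2, 4, 8, 16, 32) for f in (2.0, 2.5, 3.0)])
pred = np.column_stack([time_model.predict(grid), energy_model.predict(grid)])

def pareto_front(points: np.ndarray) -> np.ndarray:
    """Return indices of points not dominated in (runtime, energy), both minimized."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            keep.append(i)
    return np.array(keep)

for idx in pareto_front(pred):
    threads, freq = grid[idx]
    t, e = pred[idx]
    print(f"threads={int(threads):2d} freq={freq:.1f}GHz -> time~{t:.0f}s energy~{e:.0f}J")
```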

