A New Vectorization Technique for Expression Templates in C++

2012
Vol 10 (4)
Author(s):  
J Progsch ◽  
Y Ineichen ◽  
A Adelmann

Vector operations play an important role in high performance computing and are typically provided by highly optimized libraries that implement the Basic Linear Algebra Subprograms (BLAS) interface. In C++, templates and operator overloading allow these vector operations to be implemented as expression templates, which construct custom loops at compile time while providing a more abstract interface. Unfortunately, existing expression template libraries lack the performance of fast BLAS implementations. This paper presents a new approach, Statically Accelerated Loop Templates (SALT), to close this performance gap by combining expression templates with an aggressive loop unrolling technique. Benchmarks were conducted using the Intel C++ compiler and the GNU Compiler Collection to assess the performance of our library relative to Intel's Math Kernel Library as well as the Eigen template library. The results show that the approach delivers performance comparable to the fastest available BLAS implementations, while retaining the convenience and flexibility of a template library.
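The sketch below illustrates the general expression-template idea the abstract refers to; it is not the SALT library itself. The `Vec` and `Add` types are invented for illustration, and the fixed unroll factor of 4 merely stands in for the aggressive compile-time unrolling the paper describes.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Expression node representing the element-wise sum of two operands.
template <typename L, typename R>
struct Add {
    using is_expr = void;          // tag so operator+ only matches expression types
    const L& lhs;
    const R& rhs;
    double operator[](std::size_t i) const { return lhs[i] + rhs[i]; }
    std::size_t size() const { return lhs.size(); }
};

// Simple vector type that can be assigned from any expression.
struct Vec {
    using is_expr = void;
    std::vector<double> data;
    explicit Vec(std::size_t n, double v = 0.0) : data(n, v) {}
    double  operator[](std::size_t i) const { return data[i]; }
    double& operator[](std::size_t i)       { return data[i]; }
    std::size_t size() const { return data.size(); }

    // Assignment evaluates the whole expression in one fused loop.
    // The fixed unroll factor of 4 is only a stand-in for SALT's
    // aggressive compile-time unrolling.
    template <typename Expr, typename = typename Expr::is_expr>
    Vec& operator=(const Expr& e) {
        std::size_t n = size(), i = 0;
        for (; i + 4 <= n; i += 4) {
            data[i]     = e[i];
            data[i + 1] = e[i + 1];
            data[i + 2] = e[i + 2];
            data[i + 3] = e[i + 3];
        }
        for (; i < n; ++i) data[i] = e[i];   // remainder loop
        return *this;
    }
};

// operator+ builds an expression object; no temporary vectors are created.
template <typename L, typename R,
          typename = typename L::is_expr, typename = typename R::is_expr>
Add<L, R> operator+(const L& a, const R& b) { return {a, b}; }

int main() {
    Vec a(8, 1.0), b(8, 2.0), c(8, 3.0), r(8);
    r = a + b + c;                 // one fused, partially unrolled loop
    std::cout << r[0] << '\n';     // prints 6
}
```

Because `operator+` only builds lightweight expression objects, the sum `a + b + c` allocates no temporaries; all the work happens in the single loop inside `Vec::operator=`, which is the point where a library like SALT can apply its unrolling.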

2016
Vol 78 (6-3)
Author(s):  
Tatiana Zudilova ◽  
Svetlana Odinochkina ◽  
Victor Prygun

This paper presents a new approach to organizing ICT training courses on the basis of a designed and developed private training cloud prototype. The prototype was built on Microsoft System Center 2012 resources, which made it possible to consolidate high-performance computing tools, combine different classes of storage devices, and provide these resources on demand. We describe the implementation of the SaaS private cloud model for ICT user courses and the PaaS model for ICT programming courses.


Author(s):  
N. M. Zalutskaya ◽  
A. Eran ◽  
Sh. Freilikhman ◽  
R. Balicer ◽  
N. A. Gomzyakova ◽  
...  

This work outlines the goals and objectives of a planned joint Russian-Israeli research project aimed at a comprehensive assessment of the data obtained during the examination of patients with mild cognitive decline and autism spectrum disorders. The analysis of these data will be based on complex methods whose effective use requires readily available means of operating on clinical and biological data, which in turn can be provided by modern cloud and high-performance computing technologies. It is planned to use a new approach based on a NewSQL database accessed through an API, and then to use distributed computing tools for working with heterogeneous data, which will shape how correlations in multidimensional data arrays are analyzed. For this purpose, methods of multidimensional statistical analysis and modern machine learning methods will be applied.


Author(s):  
Mark H. Ellisman

The increased availability of High Performance Computing and Communications (HPCC) offers scientists and students the potential for effective remote interactive use of centralized, specialized, and expensive instrumentation and computers. The number of instruments that can be usefully controlled from a distance is increasing. Some in current use include telescopes, networks of remote geophysical sensing devices, and, more recently, the intermediate high voltage electron microscope developed at the San Diego Microscopy and Imaging Resource (SDMIR) in La Jolla. In this presentation the imaging capabilities of a specially designed JEOL 4000EX IVEM will be described. This instrument was developed mainly to facilitate the extraction of 3-dimensional information from thick sections. In addition, progress will be described on a project now underway to develop a more advanced version of the Telemicroscopy software we previously demonstrated as a tool for providing remote access to this IVEM (Mercurio et al., 1992; Fan et al., 1992).


MRS Bulletin
1997
Vol 22 (10)
pp. 5-6
Author(s):  
Horst D. Simon

Recent events in the high-performance computing industry have concerned scientists and the general public regarding a crisis or a lack of leadership in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s, Thinking Machines and Kendall Square Research, were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later.

During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten if they were to run on the next-generation, highly parallel architecture. Scientists who are not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise.

However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.

