LEARNING SOLUTIONS WITH CLOUD TECHNOLOGIES

2016 ◽  
Vol 78 (6-3) ◽  
Author(s):  
Tatiana Zudilova ◽  
Svetlana Odinochkina ◽  
Victor Prygun

This paper presents a new approach to the organization of ICT training courses on the basis of a designed and developed private training cloud prototype. The prototype was built on Microsoft System Center 2012 resources, which allowed consolidating high-performance computing tools, combining different classes of storage devices, and providing these resources on demand. We describe the implementation of the SaaS private cloud model for ICT user courses and the PaaS model for ICT programming courses.

2012 ◽  
Vol 10 (4) ◽  
Author(s):  
J Progsch ◽  
Y Ineichen ◽  
A Adelmann

Vector operations play an important role in high-performance computing and are typically provided by highly optimized libraries that implement the Basic Linear Algebra Subprograms (BLAS) interface. In C++, templates and operator overloading allow these vector operations to be implemented as expression templates, which construct custom loops at compile time and provide a more abstract interface. Unfortunately, existing expression template libraries lack the performance of fast BLAS implementations. This paper presents a new approach, Statically Accelerated Loop Templates (SALT), to close this performance gap by combining expression templates with an aggressive loop unrolling technique. Benchmarks were conducted using the Intel C++ compiler and the GNU Compiler Collection to assess the performance of our library relative to Intel's Math Kernel Library as well as the Eigen template library. The results show that the approach provides optimization comparable to the fastest available BLAS implementations, while retaining the convenience and flexibility of a template library.


2021 ◽  
Vol 17 (1) ◽  
pp. 1-22
Author(s):  
Wen Cheng ◽  
Chunyan Li ◽  
Lingfang Zeng ◽  
Yingjin Qian ◽  
Xi Li ◽  
...  

In high-performance computing (HPC), data and metadata are stored on special server nodes, and client applications access the servers' data and metadata through a network, which induces network latencies and resource contention. These server nodes are typically equipped with (slow) magnetic disks, while the client nodes store temporary data on fast SSDs or even on non-volatile main memory (NVMM). Therefore, the full potential of parallel file systems can only be reached if fast client-side storage devices are included in the overall storage architecture. In this article, we propose an NVMM-based hierarchical persistent client cache for the Lustre file system (NVMM-LPCC for short). NVMM-LPCC implements two caching modes: a read-write mode (RW-NVMM-LPCC for short) and a read-only mode (RO-NVMM-LPCC for short). NVMM-LPCC integrates with the Lustre Hierarchical Storage Management (HSM) solution and the Lustre layout lock mechanism to provide consistent persistent caching services for I/O applications running on client nodes, while maintaining a global unified namespace over the entire Lustre file system. The evaluation results presented in this article show that NVMM-LPCC can increase the average read throughput by up to 35.80 times and the average write throughput by up to 9.83 times compared with the native Lustre system, while providing excellent scalability.


Author(s):  
N. M. Zalutskaya ◽  
A. Eran ◽  
Sh. Freilikhman ◽  
R. Balicer ◽  
N. A. Gomzyakova ◽  
...  

This work outlines the goals and objectives of a planned joint Russian-Israeli research project aimed at a comprehensive assessment of the data obtained during the examination of patients with mild cognitive decline and autism spectrum disorders. The analysis will rely on complex methods whose effective use requires readily available means of operating with clinical and biological data, which, in turn, can be provided by modern cloud and high-performance computing technologies. It is planned to use a new approach based on a NewSQL database exposed as an API, followed by distributed computing tools for working with heterogeneous data, which introduces particular features into the analysis of correlations in multidimensional data arrays. For this purpose, methods of multidimensional statistical analysis and modern machine learning methods will be used.


2017 ◽  
Vol 10 (13) ◽  
pp. 445
Author(s):  
Purvi Pathak ◽  
Kumar R

High-performance computing (HPC) applications require high-end computing systems, but not all scientists have access to such powerful systems. Cloud computing provides an opportunity to run these applications without investing in high-end parallel computing systems. We can analyze the performance of HPC applications on private as well as public clouds. The performance of a workload on the cloud can be measured using benchmarking tools such as the NAS Parallel Benchmarks and Rally. Running HPC workloads on a physical setup requires many parallel computing systems, but a cloud environment provides this facility without the need to invest in physical machines. We aim to analyze how well the cloud performs when running HPC workloads, obtain detailed performance measurements of a private cloud running these applications, and identify the pros and cons of running HPC workloads in a cloud environment.


Author(s):  
J. Charles Victor ◽  
P. Alison Paprica ◽  
Michael Brudno ◽  
Carl Virtanen ◽  
Walter Wodchis ◽  
...  

Introduction
Canadian provincial health systems have a data advantage: longitudinal, population-wide data for publicly funded health services, in many cases going back 20 years or more. With the addition of high-performance computing (HPC), these data can serve as the foundation for leading-edge research using machine learning and artificial intelligence.
Objectives and Approach
The Institute for Clinical Evaluative Sciences (ICES) and HPC4Health are creating the Ontario Data Safe Haven (ODSH): a secure HPC cloud located within the HPC4Health physical environment at the Hospital for Sick Children in Toronto. The ODSH will allow research teams to post, access, and analyze individual datasets over which they have authority, and enable linkage to Ontario administrative and other data. To start, the ODSH is focused on creating a private cloud that meets ICES' legislated privacy and security requirements to support HPC-intensive analyses of ICES data. The first ODSH projects are partnerships between ICES scientists and machine learning researchers.
Results
As of March 2018, the technological build of the ODSH was tested and completed, and the privacy and security policy framework and documentation were completed. We will present the structure of the ODSH, including the architectural choices made when designing the environment, and the functionality planned for the future. We will describe the experience to date with the very first analysis done using the ODSH: the automatic mining of clinical terminology in primary care electronic medical records using deep neural networks. We will also present plans for a high-cost-user Risk Dashboard program of research, co-designed by ICES scientists and health faculty from the Vector Institute for artificial intelligence, that will make use of the ODSH beginning May 2018.
Conclusion/Implications
Through a partnership of ICES, HPC4Health, and the Vector Institute, a secure private cloud, the ODSH, has been created and is starting to be used in leading-edge machine learning research studies that make use of Ontario's population-wide data assets.


2020 ◽  
Vol 27 (4) ◽  
pp. 45-62
Author(s):  
Maicon Ança dos Santos ◽  
Gerson Geraldo H. Cavalheiro

With the consolidation of cloud computing technology, there is growing interest in exploring it to support High Performance Computing (HPC). However, migrating such applications to public or private cloud environments brings challenges, in particular the cost of the migration process. In this paper, a literature review is presented of selected papers that analyze cloud infrastructure investments, in particular how those investments impact applications. To frame the discussion of related work, the conditions for running HPC applications in the cloud are characterized.

