Applications of Machine Learning and High-Performance Computing in the Era of COVID-19

2021 ◽  
Vol 4 (3) ◽  
pp. 40
Author(s):  
Abdul Majeed

During the ongoing pandemic of the novel coronavirus disease 2019 (COVID-19), the latest technologies such as artificial intelligence (AI), blockchain, learning paradigms (machine, deep, smart, few-shot, extreme learning, etc.), high-performance computing (HPC), the Internet of Medical Things (IoMT), and Industry 4.0 have played a vital role. These technologies have helped to contain the disease’s spread by predicting contaminated people/places and by forecasting future trends. In this article, we provide insights into the applications of machine learning (ML) and high-performance computing (HPC) in the era of COVID-19. We discuss the person-specific data being collected to curb the spread of COVID-19 and highlight the remarkable opportunities these data provide for knowledge extraction using low-cost ML and HPC techniques. We demonstrate the role of ML and HPC in the COVID-19 era through successful implementations or propositions in three contexts: (i) ML and HPC use in the data life cycle, (ii) ML and HPC use in analytics on COVID-19 data, and (iii) general-purpose applications of both techniques in COVID-19’s arena. In addition, we discuss privacy and security issues and the architecture of a prototype system that demonstrates the proposed research. Finally, we discuss the challenges of the available data and highlight the issues that hinder the applicability of ML and HPC solutions to it.
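As an illustration of the trend-forecasting use case mentioned above (not a method prescribed by the article), the following Python sketch fits a linear model to the log of synthetic daily case counts and extrapolates a 14-day forecast; all data and parameters are invented for illustration.

```python
# Illustrative only: forecasting a case-count trend on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

days = np.arange(60).reshape(-1, 1)                   # 60 days of invented history
cases = 50 * np.exp(0.05 * days.ravel())              # synthetic exponential trend
cases += np.random.default_rng(0).normal(0, 10, 60)   # add observation noise

# Fit the log of the counts so an exponential trend becomes a linear one.
model = LinearRegression().fit(days, np.log(np.clip(cases, 1, None)))

future = np.arange(60, 74).reshape(-1, 1)              # next 14 days
forecast = np.exp(model.predict(future))               # back-transform to counts
print(forecast.round(1))
```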

Author(s):  
J. Charles Victor ◽  
P. Alison Paprica ◽  
Michael Brudno ◽  
Carl Virtanen ◽  
Walter Wodchis ◽  
...  

Introduction: Canadian provincial health systems have a data advantage: longitudinal population-wide data for publicly funded health services, in many cases going back 20 years or more. With the addition of high-performance computing (HPC), these data can serve as the foundation for leading-edge research using machine learning and artificial intelligence.

Objectives and Approach: The Institute for Clinical Evaluative Sciences (ICES) and HPC4Health are creating the Ontario Data Safe Haven (ODSH), a secure HPC cloud located within the HPC4Health physical environment at the Hospital for Sick Children in Toronto. The ODSH will allow research teams to post, access and analyze individual datasets over which they have authority, and will enable linkage to Ontario administrative and other data. To start, the ODSH is focused on creating a private cloud that meets ICES’ legislated privacy and security requirements to support HPC-intensive analyses of ICES data. The first ODSH projects are partnerships between ICES scientists and machine learning researchers.

Results: As of March 2018, the technological build of the ODSH was tested and completed, and the privacy and security policy framework and documentation were completed. We will present the structure of the ODSH, including the architectural choices made when designing the environment, and the functionality planned for the future. We will describe the experience to date with the very first analysis done using the ODSH: the automatic mining of clinical terminology in primary care electronic medical records using deep neural networks. We will also present the plans for a high-cost-user Risk Dashboard program of research, co-designed by ICES scientists and health faculty from the Vector Institute for artificial intelligence, that will make use of the ODSH beginning May 2018.

Conclusion/Implications: Through a partnership of ICES, HPC4Health and the Vector Institute, a secure private cloud ODSH has been created and is starting to be used in leading-edge machine learning research studies that make use of Ontario’s population-wide data assets.
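As a toy illustration of the terminology-mining task described in the Results (not the ICES/Vector model itself), the following Python sketch tags tokens of a made-up clinical sentence as clinical terms or not with a tiny bidirectional LSTM; the vocabulary, labels, and training example are all invented.

```python
# Toy sketch: token tagging of clinical terminology with a small neural net.
import torch
import torch.nn as nn

vocab = {"<unk>": 0, "patient": 1, "reports": 2, "hypertension": 3,
         "and": 4, "type": 5, "2": 6, "diabetes": 7}        # invented vocabulary

class TokenTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim=32, hidden=64, n_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_labels)

    def forward(self, token_ids):
        x = self.embed(token_ids)          # (batch, seq, embed)
        h, _ = self.lstm(x)                # (batch, seq, 2*hidden)
        return self.out(h)                 # per-token label scores

# One invented example: "patient reports hypertension and type 2 diabetes",
# with label 1 marking tokens that belong to a clinical term.
tokens = torch.tensor([[1, 2, 3, 4, 5, 6, 7]])
gold   = torch.tensor([[0, 0, 1, 0, 1, 1, 1]])

model = TokenTagger(len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(50):                        # tiny demo training loop
    logits = model(tokens)
    loss = loss_fn(logits.view(-1, 2), gold.view(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(logits.argmax(-1))                   # predicted tags after training
```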


2021 ◽  
Vol 18 (3) ◽  
pp. 1-26
Author(s):  
Daniel Thuerck ◽  
Nicolas Weber ◽  
Roberto Bifulco

A large portion of the recent performance increase in the High Performance Computing (HPC) and Machine Learning (ML) domains is fueled by accelerator cards. Many popular ML frameworks support accelerators by organizing computations as a computational graph over a set of highly optimized, batched general-purpose kernels. While this approach simplifies the kernels’ implementation for each individual accelerator, the increasing heterogeneity among accelerator architectures for HPC complicates the creation of portable and extensible libraries of such kernels. We therefore propose a new programming idiom (CoRe), generalizing the CUDA community’s warp register cache programming idiom, and a virtual architecture model (PIRCH) that abstracts over the SIMD and SIMT paradigms. We define and automate the mapping process from a single source to PIRCH’s intermediate representation and develop backends that emit code for three different architectures: Intel AVX512, NVIDIA GPUs, and NEC SX-Aurora. Code generated by our source-to-source compiler for batched kernels, borG, competes favorably with vendor-tuned libraries and is up to 2× faster than hand-tuned kernels across architectures.
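To make the "batched general-purpose kernel" idea concrete (an illustration only, not borG or PIRCH), the following NumPy sketch contrasts a per-problem Python loop with a single batched call, the shape of operation an ML framework would hand to a vendor library or a generated kernel.

```python
# Minimal illustration of batched kernels: many small, independent matrix
# multiplies issued as one batched operation instead of a per-problem loop.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((1024, 8, 8))   # batch of 1024 small 8x8 matrices
B = rng.standard_normal((1024, 8, 8))

# Loop version: one tiny GEMM per problem (poor accelerator utilisation).
C_loop = np.stack([a @ b for a, b in zip(A, B)])

# Batched version: a single call over the whole batch.
C_batched = np.einsum("bij,bjk->bik", A, B)

assert np.allclose(C_loop, C_batched)
```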


Graphics accelerators are increasingly used for general-purpose high-performance computing applications, as they provide a low-cost solution to high-performance computing requirements. Intel has also released a performance accelerator that offers a similar solution. However, existing application software needs to be restructured to suit the accelerator paradigm, using a suitable software architecture pattern. In the present work, a master-slave architecture is employed to port grid-free CFD Euler solvers to CUDA for GPGPU computing. The performance obtained using the master-slave architecture for GPGPU computing is compared with that of sequential computing.
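A minimal sketch of the master-slave pattern described above, with Python processes standing in for GPU threads; the per-point update function is a hypothetical placeholder, not the authors' Euler flux computation.

```python
# Master-slave sketch: the master partitions flow-field points and the
# slaves update their chunks in parallel, then the master gathers results.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def slave_update(chunk):
    # Stand-in for the per-point update a GPU kernel would perform.
    return chunk * 0.5 + 1.0

def master(field, n_slaves=4):
    chunks = np.array_split(field, n_slaves)          # master distributes work
    with ProcessPoolExecutor(max_workers=n_slaves) as pool:
        results = list(pool.map(slave_update, chunks))
    return np.concatenate(results)                    # master gathers results

if __name__ == "__main__":
    field = np.linspace(0.0, 1.0, 1000)
    print(master(field)[:5])
```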


Author(s):  
Chun-Yuan Lin ◽  
Jin Ye ◽  
Che-Lun Hung ◽  
Chung-Hung Wang ◽  
Min Su ◽  
...  

Current high-end graphics processing units (GPUs), such as the NVIDIA Tesla, Fermi, and Kepler series cards, contain up to a thousand cores per chip and are widely used in high-performance computing. These GPU cards (desktop GPUs) must be installed in personal computers or servers with desktop CPUs; moreover, the cost and power consumption of constructing a high-performance computing platform with such desktop CPUs and GPUs are high. NVIDIA released the Tegra K1 board, called Jetson TK1, which contains four ARM Cortex-A15 CPU cores and 192 CUDA cores (Kepler GPU); it is an embedded board with the advantages of low cost, low power consumption, and high applicability for embedded applications. The NVIDIA Jetson TK1 has therefore become a new research direction. Hence, in this paper, a bioinformatics platform was constructed based on the NVIDIA Jetson TK1. The ClustalWtk and MCCtk tools, for sequence alignment and compound comparison respectively, were designed on this platform. Moreover, web and mobile services with user-friendly interfaces were also provided for these two tools. The experimental results showed that the cost-performance ratio of the NVIDIA Jetson TK1 is higher than that of an Intel XEON E5-2650 CPU and an NVIDIA Tesla K20m GPU card.
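To illustrate the kind of pairwise scoring that sequence-alignment tools such as ClustalWtk parallelise (plain Python here, not the CUDA kernels run on the Jetson TK1), a small Needleman-Wunsch global alignment score with assumed match/mismatch/gap costs:

```python
# Needleman-Wunsch dynamic-programming score for two short sequences.
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap               # gaps along sequence a
    for j in range(1, m + 1):
        score[0][j] = j * gap               # gaps along sequence b
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    return score[n][m]

print(needleman_wunsch("GATTACA", "GCATGCU"))
```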


2019 ◽  
Vol 16 (2) ◽  
pp. 541-564
Author(s):  
Mathias Longo ◽  
Ana Rodriguez ◽  
Cristian Mateos ◽  
Alejandro Zunino

In-silico research has grown considerably. Today's scientific code involves long-running computer simulations, and hence powerful computing infrastructures are needed. Traditionally, research in high-performance computing has focused on executing code as fast as possible, while energy has only recently been recognized as another goal to consider. Yet, energy-driven research has mostly focused on the hardware and middleware layers; few efforts target the application level, where many energy-aware optimizations are possible. We revisit a catalog of Java primitives commonly used in object-oriented scientific programming, or micro-benchmarks, to identify energy-friendly versions of the same primitive. We then apply the micro-benchmarks to classical scientific application kernels and machine learning algorithms, for both single-thread and multi-thread implementations, on a server. Energy usage reductions at the micro-benchmark level are substantial, while the reductions obtained for applications range from 3.90% to 99.18%.
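The paper's micro-benchmarks are written in Java; as a language-neutral sketch of the same idea (comparing two versions of one primitive, with runtime as a rough proxy for energy), the following Python snippet times naive string concatenation against str.join.

```python
# Micro-benchmark sketch: two implementations of the same primitive,
# timed to pick the more efficient (and typically more energy-friendly) one.
import timeit

def concat_loop(n=10_000):
    s = ""
    for i in range(n):
        s += str(i)                        # repeated reallocation
    return s

def concat_join(n=10_000):
    return "".join(str(i) for i in range(n))   # single buffer build

for fn in (concat_loop, concat_join):
    t = timeit.timeit(fn, number=50)
    print(f"{fn.__name__}: {t:.3f} s")
```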


10.29007/8d25 ◽  
2019 ◽  
Author(s):  
J J Hernández-Gómez ◽  
G A Yañez-Casas ◽  
Alejandro M Torres-Lara ◽  
C Couder-Castañeda ◽  
M G Orozco-del-Castillo ◽  
...  

Nowadays, remote sensing data taken from artificial satellites require high space-communications bandwidths as well as heavy computational processing, due to the vertiginous development and specialisation of on-board payloads specifically designed for remote sensing purposes. These factors become a severe problem for nanosatellites, particularly those based on the CubeSat standard, due to the strong limitations that the standard imposes on volume, power and mass. Thus, the applications of remote sensing in this class of satellites, widely sought due to their affordable cost and ease of construction and deployment, are very restricted by their very limited on-board computing power, notwithstanding their Low Earth Orbits (LEO), which make them ideal for Earth remote sensing. In this work we present the feasibility of integrating a low-mass, low-power NVIDIA GPU as the on-board computer for 1-3U CubeSats. From the remote sensing point of view, we present nine processing-intensive algorithms commonly used for processing remote sensing data which can be executed on-board on this platform. We report the performance of these algorithms on the proposed on-board computer with respect to a typical on-board computer for CubeSats (an ARM Cortex-A57 MPCore processor), showing average acceleration factors of 14.04×~14.72×. This study sets a precedent for satellite on-board high-performance computing, so as to widen the remote sensing capabilities of CubeSats.
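The abstract does not list its nine algorithms, so as a hypothetical stand-in the following NumPy sketch computes NDVI, a common remote-sensing product, over a synthetic scene; on the proposed board the same element-wise arithmetic would be offloaded to the GPU.

```python
# NDVI over a synthetic red/near-infrared scene: element-wise and
# embarrassingly parallel, hence well suited to on-board GPU processing.
import numpy as np

rng = np.random.default_rng(1)
red = rng.random((1024, 1024), dtype=np.float32)   # invented red band
nir = rng.random((1024, 1024), dtype=np.float32)   # invented NIR band

ndvi = (nir - red) / (nir + red + 1e-6)            # per-pixel vegetation index
print(float(ndvi.mean()))
```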

