Advancing HIV Vaccine Research With Low-Cost High-Performance Computing Infrastructure: An Alternative Approach for Resource-Limited Settings

2019, Vol. 13, pp. 117793221988234
Author(s):  
Batsirai M Mabvakure ◽  
Raymond Rott ◽  
Leslie Dobrowsky ◽  
Peter Van Heusden ◽  
Lynn Morris ◽  
...  

Next-generation sequencing (NGS) technologies have revolutionized biological research by generating genomic data at a scale that was unattainable with traditional first-generation sequencing technologies. Because they can sequence millions of templates at a time, these methodologies provide an opportunity for in-depth analyses of host and pathogen genomes. However, the resulting large datasets can only be explored efficiently with bioinformatics analyses that require substantial data storage and computational resources adapted for high-performance processing. High-performance computing allows efficient handling of large data and of tasks that require multi-threading and prolonged computation, which is not feasible on ordinary computers. However, high-performance computing resources are costly and therefore not always readily available in low-income settings. We describe the establishment of an affordable high-performance computing bioinformatics cluster consisting of 3 nodes, constructed from ordinary desktop computers and open-source software including Linux Fedora, the SLURM Workload Manager, and the Conda package manager. For the analysis of large antibody sequence datasets and for complex viral phylodynamic analyses, the cluster outperformed desktop computers. This demonstrates that high-performance computing capacity capable of analyzing large NGS datasets can be constructed from relatively low-cost hardware and entirely free (open-source) software, even in resource-limited settings. Such a cluster design has broad utility beyond bioinformatics, extending to other studies that require high-performance computing.
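The kind of workload such a cluster accelerates is typically embarrassingly parallel: a large batch of sequence reads is split into chunks that independent workers (or SLURM jobs spread over the 3 nodes) process concurrently. The following is a minimal, hypothetical sketch of that pattern, not the authors' actual pipeline; the per-read GC-content statistic is chosen only as a simple stand-in for a real analysis step.

```python
# Illustrative sketch (not the paper's pipeline): split a batch of NGS reads
# across worker processes, the same divide-and-conquer shape that SLURM
# applies across cluster nodes.
from multiprocessing import Pool

def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a single read; 0.0 for an empty read."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq) if seq else 0.0

def mean_gc(reads):
    """Compute per-read GC content in parallel, then average."""
    with Pool(processes=4) as pool:
        values = pool.map(gc_content, reads)
    return sum(values) / len(values)

if __name__ == "__main__":
    reads = ["ACGT", "GGCC", "ATAT"]  # stand-ins for millions of real reads
    print(round(mean_gc(reads), 3))   # → 0.5
```

On a real cluster the chunking happens one level up: each SLURM array task runs a script like this on its own slice of the input file, so the same per-read function scales from one desktop to all nodes without change.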

2014
Author(s):  
Fabien Vivodtzev ◽  
Thierry Carrard

To guarantee the performance of complex systems through numerical simulation, CEA performs advanced data analysis and scientific visualization with open-source software on High Performance Computing (HPC) infrastructure. The diversity of the physics under study produces results of growing complexity: large-scale, high-dimensional, multivariate data. Moreover, the HPC approach introduces another layer of complexity by distributing computation across thousands of remote cores accessed from sites located hundreds of kilometers from the computing facility. This paper presents how CEA deploys and contributes to open-source software to enable production-class visualization tools in a high-performance computing context. Among the open-source projects used at CEA, this presentation focuses on VisIt, VTK, and ParaView. In the first part we address specific issues encountered when deploying VisIt and ParaView for end users in a multi-site supercomputing facility. Several examples show how such tools can be adapted to take advantage of a parallel setting to explore large multi-block datasets or to perform remote visualization of material-interface reconstructions of billions of cells. We then discuss the specific challenges faced in delivering ParaView's Catalyst capabilities to end users. In the second part, we describe how CEA contributes to open-source visualization software and the associated software development strategy, emphasizing two recent development projects. The first is an integrated simulation workbench providing plugins for every step required to run a numerical simulation on either a local or a remote computer. Embedded in an Eclipse RCP environment, VTK views let users perform data input interactively or preview the mesh before running the simulation code. Contributions to VTK were made in order to integrate these technologies smoothly.
The second details how recent developments at CEA have helped visualize and analyze results from ExaStamp, a parallel molecular dynamics simulation code handling molecular systems ranging from a few million up to a billion atoms. These developments include a GPU-intensive rendering method specialized for atoms and specific parallel algorithms for processing molecular datasets.


2014, Vol. 20 (S3), pp. 774-775
Author(s):  
Terry S. Yoo ◽  
Bradley C. Lowekamp ◽  
Oleg Kuybeda ◽  
Kedar Narayan ◽  
Gabriel A. Frank ◽  
...  

Author(s):  
Chun-Yuan Lin ◽  
Jin Ye ◽  
Che-Lun Hung ◽  
Chung-Hung Wang ◽  
Min Su ◽  
...  

Current high-end graphics processing units (GPUs), such as NVIDIA's Tesla, Fermi, and Kepler series cards with up to thousands of cores per chip, are widely used in the high-performance computing field. These desktop GPU cards must be installed in personal computers or servers alongside desktop CPUs, and the cost and power consumption of constructing a high-performance computing platform from such desktop CPUs and GPUs are high. NVIDIA's Tegra K1-based board, the Jetson TK1, combines 4 ARM Cortex-A15 CPU cores with a 192-core Kepler GPU (CUDA cores) in an embedded board offering low cost, low power consumption, and high applicability for embedded applications; it has become a new research direction. Hence, in this paper, a bioinformatics platform was constructed on the NVIDIA Jetson TK1. The ClustalWtk and MCCtk tools, for sequence alignment and compound comparison respectively, were designed on this platform, and web and mobile services with user-friendly interfaces were also provided for both tools. The experimental results showed that the cost-performance ratio of the NVIDIA Jetson TK1 is higher than that of an Intel Xeon E5-2650 CPU and an NVIDIA Tesla K20m GPU card.
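The comparison above hinges on a cost-performance ratio rather than raw throughput. A minimal sketch of that idea follows; the metric definition and every number in it are illustrative placeholders, not figures from the paper, which does not spell out its formula here.

```python
# Hypothetical cost-performance metric: useful work per dollar spent.
# A low-cost embedded board can win on this ratio even when a server-class
# CPU+GPU pair delivers far higher absolute throughput.
def cost_performance(throughput: float, price_usd: float) -> float:
    """Throughput (e.g. alignments/second) per dollar; higher is better."""
    return throughput / price_usd

# Placeholder values only (hypothetical, not the paper's measurements):
embedded = cost_performance(throughput=1.0, price_usd=192.0)    # cheap board
server   = cost_performance(throughput=20.0, price_usd=5000.0)  # CPU + GPU card
print(embedded > server)  # → True: the embedded board has the better ratio
```

A fuller version of the metric would also fold power consumption into the denominator, since low wattage is the Jetson TK1's other headline advantage.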


10.29007/8d25
2019
Author(s):  
J J Hernández-Gómez ◽  
G A Yañez-Casas ◽  
Alejandro M Torres-Lara ◽  
C Couder-Castañeda ◽  
M G Orozco-del-Castillo ◽  
...  

Nowadays, remote-sensing data taken from artificial satellites require high space-communications bandwidth as well as heavy computational processing, owing to the rapid development and specialisation of on-board payloads designed for remote-sensing purposes. These factors become a severe problem for nanosatellites, particularly those based on the CubeSat standard, because of the strong limits the standard imposes on volume, power, and mass. Remote-sensing applications on this class of satellite, widely sought for their affordable cost and ease of construction and deployment, are therefore very restricted by the very limited on-board computing power, even though their Low Earth Orbits (LEO) make them ideal for Earth remote sensing. In this work we present the feasibility of integrating a low-mass, low-power NVIDIA GPU as the on-board computer for 1-3U CubeSats. From the remote-sensing point of view, we present nine processing-intensive algorithms commonly used for processing remote-sensing data that can be executed on-board on this platform. We report the performance of these algorithms on the proposed on-board computer relative to a typical CubeSat on-board computer (the ARM Cortex-A57 MPCore processor), showing average acceleration factors of 14.04×-14.72×. This study sets a precedent for satellite on-board high-performance computing, widening the remote-sensing capabilities of CubeSats.
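The abstract does not name its nine algorithms, so as an assumed illustration only, the sketch below uses NDVI (normalized difference vegetation index), one of the most common per-pixel remote-sensing computations. Its per-pixel independence is exactly the property that makes such kernels good candidates for on-board GPU acceleration.

```python
# Illustrative per-pixel remote-sensing kernel (assumption: NDVI is used as a
# generic example; it is not claimed to be one of the paper's nine algorithms).
# Each pixel is independent, so a GPU can process all pixels concurrently.
def ndvi(nir, red):
    """Per-pixel (NIR - RED) / (NIR + RED) over two equally sized bands."""
    out = []
    for n, r in zip(nir, red):
        s = n + r
        out.append((n - r) / s if s else 0.0)  # guard against a zero sum
    return out

nir_band = [0.8, 0.6, 0.1]  # placeholder reflectance values
red_band = [0.2, 0.3, 0.1]
print(ndvi(nir_band, red_band))  # dense vegetation gives values near 1.0
```

On the proposed GPU on-board computer, the same arithmetic would run as one thread per pixel, which is where the reported order-of-magnitude speedups over a CPU-only on-board computer come from.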

