High-performance computing and Grid applications in high-throughput protein crystallography

2004, Vol 60 (a1), pp. s243-s243
Author(s): R. M. Keegan, D. Meredith, G. Winter, M. D. Winn
Author(s): Kamer Kaya, Ayat Hatem, Hatice Gülçin Özer, Kun Huang, Ümit V. Çatalyürek

Smart Grids
2017, pp. 511-532
Author(s): Yousu Chen, Zhenyu (Henry) Huang

2020, Vol 245, pp. 09011
Author(s): Michael Hildreth, Kenyi Paolo Hurtado Anampa, Cody Kankel, Scott Hampton, Paul Brenner, ...

The NSF-funded Scalable CyberInfrastructure for Artificial Intelligence and Likelihood Free Inference (SCAILFIN) project aims to develop and deploy artificial intelligence (AI) and likelihood-free inference (LFI) techniques and software using scalable cyberinfrastructure (CI) built on top of existing CI elements. Specifically, the project has extended the CERN-based REANA framework, a cloud-based data analysis platform deployed on top of Kubernetes clusters that was originally designed to enable analysis reusability and reproducibility. REANA can orchestrate extremely complicated multi-step workflows, and it uses Kubernetes clusters both to schedule and distribute container-based workloads across the available machines and to instantiate and monitor the concrete workloads themselves. This work describes the challenges and development effort involved in extending REANA, and the components that were developed, to enable large-scale deployment on High Performance Computing (HPC) resources. Using the Virtual Clusters for Community Computation (VC3) infrastructure as a starting point, we adapted REANA to work with a number of different workload managers, covering both high-performance and high-throughput systems, while removing REANA's dependence on Kubernetes support at the worker level.
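The abstract does not spell out how a single containerized workflow step can be handed to different workload managers, so the sketch below is a minimal illustration of that idea only. It assumes a hypothetical job description and backend functions (the names are not REANA's actual API) that render the same step either as a Kubernetes Job manifest or as a Slurm batch script running the container through Singularity.

```python
# Illustrative sketch only: a hypothetical abstraction over workload managers,
# not REANA's real backend API. It shows how one containerized workflow step
# could be rendered for Kubernetes (cloud/HTC) or for Slurm (HPC).

from dataclasses import dataclass


@dataclass
class Step:
    name: str
    image: str          # container image for the step
    command: list[str]  # command to run inside the container


def to_kubernetes_job(step: Step) -> dict:
    """Render the step as a minimal Kubernetes Job manifest (as a dict)."""
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": step.name},
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {"name": step.name, "image": step.image, "command": step.command}
                    ],
                    "restartPolicy": "Never",
                }
            }
        },
    }


def to_slurm_script(step: Step) -> str:
    """Render the step as a Slurm batch script that runs the container through
    Singularity, since HPC worker nodes do not run Kubernetes."""
    cmd = " ".join(step.command)
    return (
        "#!/bin/bash\n"
        f"#SBATCH --job-name={step.name}\n"
        f"singularity exec docker://{step.image} {cmd}\n"
    )


if __name__ == "__main__":
    step = Step(name="fit-step", image="python:3.11",
                command=["python", "-c", "print('hello')"])
    print(to_kubernetes_job(step))
    print(to_slurm_script(step))
```

The point of the sketch is only the design choice implied by the abstract: keeping the workflow description backend-agnostic so that the worker side no longer needs Kubernetes at all.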


2019, Vol 214, pp. 03024
Author(s): Vladimir Brik, David Schultz, Gonzalo Merino

Here we report IceCube’s first experiences of running GPU simulations on the Titan supercomputer. This undertaking was non-trivial because Titan is designed for High Performance Computing (HPC) workloads, whereas IceCube’s workloads fall under the High Throughput Computing (HTC) category. In particular: (i) Titan’s design, policies, and tools are geared heavily toward large MPI applications, while IceCube’s workloads consist of large numbers of relatively small independent jobs, (ii) Titan compute nodes run Cray Linux, which is not directly compatible with IceCube software, and (iii) Titan compute nodes cannot access outside networks, making it impossible to access IceCube’s CVMFS repositories and workload management systems. This report examines our experience of packaging our application in Singularity containers and using HTCondor as the second-level scheduler on the Titan supercomputer.
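The abstract summarizes the approach without showing the mechanics, so the following is a minimal, hypothetical sketch of submitting many independent Singularity-wrapped GPU jobs through HTCondor acting as a second-level scheduler inside an HPC allocation. The file paths, container image name, payload script, and job count are assumptions for illustration, not details taken from the IceCube setup.

```python
# Illustrative sketch only: hypothetical paths, image, and payload. It shows
# the general shape of the approach: wrap the payload in a Singularity
# container and hand large numbers of small independent jobs to HTCondor
# running as the second-level scheduler inside the HPC allocation.

import subprocess

SUBMIT_FILE = "icecube_gpu_job.sub"

# HTCondor submit description: each job runs the payload inside a
# Singularity container with GPU support (--nv).
SUBMIT_DESCRIPTION = """\
universe     = vanilla
executable   = /usr/bin/singularity
arguments    = exec --nv /containers/icecube.sif python /opt/icecube/simulate.py
request_gpus = 1
output       = job.$(Cluster).$(Process).out
error        = job.$(Cluster).$(Process).err
log          = job.log
queue 100
"""


def submit_jobs() -> None:
    """Write the submit description and submit it with condor_submit."""
    with open(SUBMIT_FILE, "w") as f:
        f.write(SUBMIT_DESCRIPTION)
    # Assumes condor_submit is on PATH, i.e. the HTCondor pool started inside
    # the allocation is up and this script runs on its submit node.
    subprocess.run(["condor_submit", SUBMIT_FILE], check=True)


if __name__ == "__main__":
    submit_jobs()
```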

