The B Reader Program, Silicosis, and Physician Workload Management
2021 · Vol. Publish Ahead of Print
Author(s): Vrushab Gowda, Glen Cheng, Kenji Saito

1990
Author(s): DEPARTMENT OF THE ARMY, WASHINGTON DC

2021 · Vol 11 (3) · pp. 923
Author(s): Guohua Li, Joon Woo, Sang Boem Lim

The complexity of high-performance computing (HPC) workflows is an important issue in the provision of HPC cloud services at most national supercomputing centers. This complexity is especially critical because it affects HPC resource scalability, management efficiency, and ease of use. To address it while retaining bare-metal-level performance, container-based cloud solutions have been developed. However, several problems remain, such as the isolation between HPC and cloud environments, security issues, and workload management issues. We propose an architecture that reduces this complexity by using Docker and Singularity, the container platforms most often used in the HPC cloud field. This HPC cloud architecture integrates image management and job management, the two main elements of HPC cloud workflows. To evaluate the serviceability and performance of the proposed architecture, we developed and deployed a prototype platform on an HPC cluster and ran experiments. The results indicate that the proposed architecture reduces workflow complexity while providing supercomputing resource scalability, high performance, user convenience, support for various HPC applications, and management efficiency.
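The abstract does not give implementation details, so the following Python sketch only illustrates the kind of integrated image-and-job workflow it describes: a Docker image is converted into a Singularity image file (SIF) and the containerized command is handed to a batch scheduler. The choice of Slurm (`sbatch`), the image name, and the function names are assumptions for illustration, not the paper's actual platform.

```python
import subprocess

def build_singularity_image(docker_ref: str, sif_path: str) -> None:
    """Convert a Docker image into a Singularity image file (SIF).

    `singularity pull docker://<ref>` fetches the image layers from a
    Docker registry and assembles them into a single SIF file that can
    be run unprivileged on HPC compute nodes.
    """
    subprocess.run(
        ["singularity", "pull", "--force", sif_path, f"docker://{docker_ref}"],
        check=True,
    )

def submit_container_job(sif_path: str, command: list[str]) -> str:
    """Submit a containerized job through a batch scheduler (Slurm assumed).

    The job script wraps the user command in `singularity exec`, so the
    application runs inside the container image while the scheduler
    handles resource allocation as usual.
    """
    script = "#!/bin/bash\n" + " ".join(["singularity", "exec", sif_path, *command]) + "\n"
    result = subprocess.run(
        ["sbatch", "--parsable"],
        input=script, text=True, capture_output=True, check=True,
    )
    return result.stdout.strip()  # scheduler job ID

if __name__ == "__main__":
    # Hypothetical image and command, for illustration only.
    build_singularity_image("library/python:3.10", "python_3.10.sif")
    job_id = submit_container_job(
        "python_3.10.sif", ["python3", "-c", "print('hello from HPC')"]
    )
    print(f"Submitted batch job {job_id}")
```

In this sketch the image-management step (pull and conversion) and the job-management step (batch submission) share a single entry point, which is the integration the abstract argues reduces workflow complexity for users.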


2021 · Vol 5 (1)
Author(s): Christian Ariza-Porras, Valentin Kuznetsov, Federica Legger

Abstract: The globally distributed computing infrastructure required to cope with the multi-petabyte datasets produced by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN comprises several subsystems, such as workload management, data management, data transfers, and the submission of user and centrally managed production requests. To guarantee the efficient operation of the whole infrastructure, CMS monitors the performance and status of all subsystems. Moreover, we track key metrics to evaluate and study system performance over time. The CMS monitoring architecture allows both real-time and historical monitoring of a variety of data sources. It relies on scalable, open-source solutions tailored to the experiment's monitoring needs. We present the monitoring data flow and software architecture for the CMS distributed computing applications, and we discuss the challenges, components, current achievements, and future developments of the CMS monitoring infrastructure.
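The abstract names neither the transport nor the storage backends, so the sketch below only illustrates one common pattern for feeding many data sources into a single monitoring pipeline: each subsystem emits a small self-describing JSON document (producer, type, timestamp, payload) to an ingestion endpoint. The endpoint URL, field names, and producer names here are hypothetical and not taken from the paper.

```python
import json
import time
import urllib.request

# Hypothetical ingestion endpoint of a monitoring gateway; the actual
# CMS infrastructure and its API are not specified in the abstract.
MONITORING_ENDPOINT = "https://monitoring.example.org/api/v1/metrics"

def make_metric(producer: str, metric_type: str, data: dict) -> dict:
    """Wrap a measurement in a minimal self-describing document.

    Tagging each record with its producer, type, and timestamp lets a
    single pipeline route data from many subsystems (workload management,
    data transfers, ...) to both real-time and historical storage.
    """
    return {
        "producer": producer,
        "type": metric_type,
        "timestamp": int(time.time()),
        "data": data,
    }

def send_metric(doc: dict) -> int:
    """POST one metric document as JSON and return the HTTP status code."""
    request = urllib.request.Request(
        MONITORING_ENDPOINT,
        data=json.dumps(doc).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

if __name__ == "__main__":
    doc = make_metric(
        producer="workload-management",
        metric_type="job-summary",
        data={"site": "T2_EXAMPLE", "running_jobs": 1250, "pending_jobs": 310},
    )
    print(send_metric(doc))
```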


2008
Author(s): Xiaoyun Zhu, Don Young, Brian J. Watson, Zhikui Wang, Jerry Rolia, ...
