High Throughput Computing
Recently Published Documents


TOTAL DOCUMENTS: 122 (FIVE YEARS: 26)

H-INDEX: 17 (FIVE YEARS: 3)

2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Oliver Freyermuth ◽  
Peter Wienemann ◽  
Philip Bechtle ◽  
Klaus Desch

Abstract: High performance and high throughput computing (HPC/HTC) is challenged by ever-increasing demands on the software stacks and increasingly diverging requirements from different research communities. This led to a reassessment of the operational concept of HPC/HTC clusters at the Physikalisches Institut at the University of Bonn. As a result, the present HPC/HTC cluster (named BAF2) introduced various conceptual changes compared to conventional clusters. All jobs now run in containers, and a container-aware resource management system is used, which allowed us to switch to a model without login/head nodes. Furthermore, a modern, feature-rich storage system with powerful interfaces has been deployed. We describe the design considerations, the implemented functionality and the operational experience gained with this new-generation setup, which has turned out to be very successful and well accepted by its users.
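The paper itself contains no code; as a purely illustrative sketch, the container-per-job model without interactive login/head nodes could look like the following, assuming an HTCondor-style container-aware batch system and its Python bindings (the htcondor module). The executable, image and resource values are placeholders, not taken from the BAF2 setup.

```python
# Illustrative sketch: submit a fully containerized job to a container-aware
# batch system, assuming HTCondor and its Python bindings. All values below
# (payload, image, resources) are placeholders.
import htcondor

submit = htcondor.Submit({
    "executable": "analysis.sh",               # hypothetical user payload
    "arguments": "input.root",
    "container_image": "docker://debian:12",   # every job runs inside a container
    "request_cpus": "1",
    "request_memory": "2GB",
    "request_disk": "4GB",
    "output": "job.out",
    "error": "job.err",
    "log": "job.log",
})

schedd = htcondor.Schedd()        # contact the scheduler directly,
result = schedd.submit(submit)    # no interactive login/head node required
print("submitted cluster", result.cluster())
```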


2021 ◽  
Vol 251 ◽  
pp. 02036
Author(s):  
Dave Dykstra ◽  
Mine Altunay ◽  
Jeny Teheran

The WLCG is modernizing its security infrastructure, replacing X.509 client authentication with the newer industry standard of JSON Web Tokens (JWTs) obtained through the OpenID Connect (OIDC) protocol. There is a wide variety of software available that uses these standards, but most of it targets Web browser-based applications and does not adapt well to the command line-based software used heavily in High Throughput Computing (HTC). OIDC command line client software did exist, but it did not meet our requirements for security and convenience. This paper discusses a command line solution we have built on top of Vault, the popular existing secrets management software from HashiCorp. We made a package called htvault-config to easily configure a Vault service and another called htgettoken to act as the Vault client. In addition, we have integrated use of the tools into the HTCondor workload management system, although they also work well independently of HTCondor. All of the software is open source, under active development, and ready for use.
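None of the following is prescribed by the paper, but as an illustration of the command line workflow that such tokens enable, this minimal Python sketch locates a bearer token (for example one obtained with htgettoken), following the common WLCG token discovery convention (BEARER_TOKEN_FILE, otherwise /tmp/bt_u<uid>), and attaches it to an HTTPS request; the endpoint URL is a placeholder and the discovery logic is simplified.

```python
# Minimal sketch: discover a bearer token on disk and use it as an
# Authorization header. Simplified WLCG-style token discovery; the
# target URL is a placeholder.
import os
import urllib.request

def find_token_file() -> str:
    # 1. explicit override via environment variable
    path = os.environ.get("BEARER_TOKEN_FILE")
    if path:
        return path
    # 2. conventional per-user default location
    return f"/tmp/bt_u{os.getuid()}"

with open(find_token_file()) as f:
    token = f.read().strip()

req = urllib.request.Request(
    "https://storage.example.org/path/file",        # placeholder endpoint
    headers={"Authorization": f"Bearer {token}"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)
```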


2020 ◽  
Vol 05 (02) ◽  
pp. 2040001
Author(s):  
Zhuo Wang ◽  
Jianglin Wei ◽  
Jing Feng ◽  
Yingwu Wang ◽  
Yumin Lai ◽  
...  

In this paper, we construct a master control platform for a Yunnan rare and precious metal materials (RAPMMs) genome engineering framework, based on the idea of materials genome engineering. It comprises a master material data management sub-platform, a high-throughput computing sub-platform and a high-throughput material preparation and characterization sub-platform, and connects the data held in these sub-platforms. The database covers standard performance data for RAPMMs from all over the world, with an initial volume of 2.2 million items, providing a data resource for future deep learning on materials big data. In addition, we have developed fusion and convergence technology for multi-source heterogeneous RAPMM data, which supports the management, processing and storage of all kinds of professional materials data. To meet researchers' need for precise retrieval within large amounts of data, a deep search technology for rare metal data has been investigated that achieves precise search in a big-data environment and provides professional visualization and statistical analysis. Through work on the interface of the open high-throughput computing platform, jobs for the first-principles computing software VASP can be generated and submitted online, and the calculation result files are automatically identified and imported. Finally, the platform coordinates and controls the high-throughput computing platform, the high-throughput preparation and characterization platform and the data platform, providing resource coordination and scheduling, information sharing, public database management, and retrieval and query of relevant information; it also establishes a common database structure for rare and precious metals, laying a solid foundation for the future development of materials genome engineering.
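The abstract gives no implementation details, but the automatic identification and import of calculation result files that it mentions can be illustrated with the sketch below, which extracts the final total energy from a VASP OSZICAR file and packages it as a JSON record; the directory layout and record fields are assumptions for illustration, not part of the described platform.

```python
# Illustrative sketch only: extract the final electronic energy (E0) from a
# VASP OSZICAR file and package it as a JSON record suitable for import into
# a materials database. Paths and record fields are hypothetical.
import json
import re
from pathlib import Path

def final_e0(oszicar: Path) -> float:
    # OSZICAR ionic-step lines look like: "  1 F= -.278E+02 E0= -.278E+02 ..."
    energies = re.findall(r"E0=\s*([-+.\dEe]+)", oszicar.read_text())
    if not energies:
        raise ValueError(f"no E0 entries found in {oszicar}")
    return float(energies[-1])

def make_record(calc_dir: Path) -> dict:
    return {
        "calculation": calc_dir.name,
        "code": "VASP",
        "final_energy_eV": final_e0(calc_dir / "OSZICAR"),
    }

if __name__ == "__main__":
    record = make_record(Path("calcs/example_run"))   # placeholder directory
    print(json.dumps(record, indent=2))
```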


Author(s):  
Mikhail Gasanov ◽  
Anna Petrovskaia ◽  
Artyom Nikitin ◽  
Sergey Matveev ◽  
Polina Tregubova ◽  
...  

2020 ◽  
Vol 245 ◽  
pp. 05038
Author(s):  
Lirim Osmani ◽  
Tomas Lindén

The ARM platform extends from the mobile phone area to development board computers and servers. The importance of the ARM platform for High Performance Computing/High Throughput Computing (HPC/HTC) may increase in the future if new, more powerful (server) boards are released. For this reason the Compact Muon Solenoid software (CMSSW) has been ported to ARM in earlier work. CMSSW is deployed using the CERN Virtual Machine File System (CVMFS) and jobs are run inside Singularity containers. Some ARM AArch64 CMSSW releases are available in CVMFS for testing and development. In this work, CVMFS and Singularity have been compiled and installed on an ARM cluster and the AArch64 CMSSW releases in CVMFS have been used. We report on our experiences with this ARM cluster for CMSSW jobs. Commodity hardware designed around the 64-bit architecture has been the basis of current virtualization trends, with the advantage of emulating diverse environments for a wide range of computational scenarios. In parallel, however, the mobile revolution has given rise to ARM SoCs with a primary focus on power efficiency. While still in an experimental phase, their power efficiency and 64-bit heterogeneous computing already point to an alternative to traditional x86_64 CPU servers for data centers.
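As an illustration (not code from this work), the following sketch enumerates the AArch64 CMSSW releases that CVMFS provides on such a cluster; it assumes the conventional /cvmfs/cms.cern.ch/<scram_arch>/cms/cmssw layout, which may differ from the actual repository structure used.

```python
# Illustrative sketch: list AArch64 CMSSW releases available through CVMFS.
# Assumes the conventional /cvmfs/cms.cern.ch/<scram_arch>/cms/cmssw layout;
# adjust the path if the repository is mounted or organized differently.
from pathlib import Path

CMS_REPO = Path("/cvmfs/cms.cern.ch")

def aarch64_releases():
    # SCRAM architecture directories for ARM contain "aarch64",
    # e.g. "slc7_aarch64_gcc820" (exact names vary by release cycle).
    for arch_dir in sorted(CMS_REPO.glob("*aarch64*")):
        for release in sorted((arch_dir / "cms" / "cmssw").glob("CMSSW_*")):
            yield arch_dir.name, release.name

if __name__ == "__main__":
    for arch, release in aarch64_releases():
        print(f"{arch:24s} {release}")
```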

