Linux cluster: Recently Published Documents

Total documents: 90 (five years: 2)
H-index: 9 (five years: 0)

2021 · Author(s): Jiecheng Zhang, George Moridis, Thomas Blasingame

Abstract The Reservoir GeoMechanics Simulator (RGMS), a geomechanics simulator based on the finite element method and parallelized using the Message Passing Interface (MPI), is developed in this work to model the stresses and deformations in subsurface systems. RGMS can be used stand-alone, or coupled with flow and transport models. pT+H V1.5, a parallel MPI-based version of the serial T+H V1.5 code that describes mass and heat flow in hydrate-bearing porous media, is also developed. Using the fixed-stress split iterative scheme, RGMS is coupled with pT+H V1.5 to investigate the geomechanical responses associated with gas production from hydrate accumulations. The code development and testing process involves evaluation of the parallelization and of the coupling method, as well as verification and validation of the results. The parallel performance of the codes is tested on the Ada Linux cluster of Texas A&M High Performance Research Computing using up to 512 processors, and on a Mac Pro computer with 12 processors. The investigated problems are: Group 1: Geomechanical problems solved by RGMS in 2D Cartesian and cylindrical domains and a 3D problem, involving 4×10^6 and 3.375×10^6 elements, respectively; Group 2: Realistic problems of gas production from hydrates using pT+H V1.5 in 2D and 3D systems with 2.45×10^5 and 3.6×10^6 elements, respectively; Group 3: The 3D problem in Group 2 solved with the coupled RGMS-pT+H V1.5 simulator, fully accounting for geomechanics. Two domain partitioning options are investigated on the Ada Linux cluster and the Mac Pro, and the parallel performance of the codes is monitored. On the Ada Linux cluster using 512 processors, the simulation speedups (a) of RGMS are 218.89, 188.13, and 284.70 in the Group 1 problems, (b) of pT+H V1.5 are 174.25 and 341.67 in the Group 2 cases, and (c) of the coupled simulator is 331.80 in Group 3. The results produced in this work show (a) the necessity of using full geomechanics simulators in marine hydrate-related studies, because of the associated pronounced geomechanical effects on production and displacements, and (b) the effectiveness of the parallel simulators developed in this study, which can be the only realistic option in these complex simulations of large multi-dimensional domains.
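For context, the speedups quoted for the 512-processor runs translate directly into parallel efficiencies. The short Python sketch below (not part of RGMS or pT+H V1.5; the grouping of the numbers simply follows the order given in the abstract) performs that arithmetic.

```python
# Minimal sketch, not part of RGMS or pT+H V1.5: turn the 512-processor
# speedups reported in the abstract into parallel efficiencies
# (efficiency = speedup / processor count; 1.0 would be ideal linear scaling).
NPROCS = 512
SPEEDUPS_512 = {
    "RGMS (Group 1 problems)": [218.89, 188.13, 284.70],
    "pT+H V1.5 (Group 2 cases)": [174.25, 341.67],
    "Coupled RGMS-pT+H V1.5 (Group 3)": [331.80],
}

for case, speedups in SPEEDUPS_512.items():
    for s in speedups:
        print(f"{case}: speedup {s:.2f}, efficiency {s / NPROCS:.1%}")
```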



2021 · Vol 17 (8) · pp. e1009207 · Author(s): Jamie J. Alnasir


2020 · Vol 245 · pp. 09016 · Author(s): Maria Alandes Pradillo, Nils Høimyr, Pablo Llopis Sanmillan, Markus Tapani Jylhänkangas

The CERN IT department has been maintaining different High Performance Computing (HPC) services over the past five years. While the bulk of computing facilities at CERN run under Linux, a Windows cluster was dedicated to engineering simulations and analysis related to accelerator technology development. The Windows cluster consisted of machines with powerful CPUs, large memory, and a low-latency interconnect. The Linux cluster resources are accessible through HTCondor and are used for general-purpose, single-node parallel jobs, providing computing power to the CERN experiments and departments for tasks such as physics event reconstruction, data analysis, and simulation. For HPC workloads that require multi-node parallel environments for Message Passing Interface (MPI) based programs, there is another Linux-based HPC service comprising several clusters that run under the Slurm batch system and consist of powerful hardware with low-latency interconnects. In 2018, it was decided to consolidate compute-intensive jobs on Linux to make better use of the existing resources. Moreover, this was also in line with the CERN IT strategy to reduce its dependencies on Microsoft products. This paper focuses on the migration of Ansys [1], COMSOL [2] and CST [3] users from Windows HPC to Linux clusters. Ansys, COMSOL and CST are three engineering applications used at CERN in different domains, such as multiphysics simulations and electromagnetic field problems. Users of these applications are in different departments, with different needs and levels of expertise. In most cases, the users have no prior knowledge of Linux. The paper presents the technical strategy that allows the engineering users to submit their simulations to the appropriate Linux cluster, depending on their simulation requirements. We also describe the technical solution for integrating their Windows workstations so that they are able to submit to the Linux clusters. Finally, we discuss the challenges and lessons learnt during the migration.
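As a rough illustration of what submission to a Slurm-managed Linux cluster involves, the sketch below builds a minimal batch script and hands it to sbatch. It is not CERN's integration tooling; the partition name and the COMSOL command line are hypothetical placeholders.

```python
# Illustrative sketch only: one generic way to submit an MPI job to a Slurm
# cluster from Python. The partition name and the COMSOL command line below
# are hypothetical placeholders, not CERN's actual configuration.
import subprocess
import tempfile

def submit_mpi_job(command: str, ntasks: int = 64, walltime: str = "02:00:00",
                   partition: str = "hpc") -> str:
    """Write a minimal Slurm batch script and submit it with sbatch."""
    script = (
        "#!/bin/bash\n"
        "#SBATCH --job-name=engineering-sim\n"
        f"#SBATCH --partition={partition}\n"
        f"#SBATCH --ntasks={ntasks}\n"
        f"#SBATCH --time={walltime}\n"
        "\n"
        f"srun {command}\n"
    )
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(script)
        path = f.name
    result = subprocess.run(["sbatch", path], capture_output=True, text=True, check=True)
    return result.stdout.strip()  # e.g. "Submitted batch job 123456"

# Hypothetical usage for a multi-node COMSOL run:
# print(submit_mpi_job("comsol batch -inputfile model.mph", ntasks=128))
```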



Water · 2018 · Vol 10 (12) · pp. 1841 · Author(s): Suli Pan, Li Liu, Zhixu Bai, Yue-Ping Xu

This study presents an approach that integrates remote sensing evapotranspiration into the multi-objective calibration (i.e., runoff and evapotranspiration) of a fully distributed hydrological model, namely the distributed hydrology–soil–vegetation model (DHSVM). Because the DHSVM lacks a calibration module, a multi-objective calibration module for the DHSVM (εP-DHSVM) is developed, using the ε-dominance non-dominated sorting genetic algorithm II (ε-NSGAII) and parallel computing on a Linux cluster. The module, together with the DHSVM, is applied to a humid river basin located in mid-western Zhejiang Province, east China. The results show that runoff is simulated well in single-objective calibration, whereas evapotranspiration is not. By considering more variables in the multi-objective calibration, the DHSVM provides more reasonable simulations of both runoff (NS: 0.74, PBIAS: 10.5%) and evapotranspiration (NS: 0.76, PBIAS: 8.6%) and a substantial reduction in equifinality, which illustrates the benefit of integrating remote sensing evapotranspiration into the calibration of hydrological models.
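The two goodness-of-fit measures quoted above are standard; the sketch below (not the paper's εP-DHSVM code) shows how they are typically computed, assuming paired observed and simulated series.

```python
# Minimal sketch (not the paper's εP-DHSVM code) of the two goodness-of-fit
# measures quoted above, computed with NumPy. Sign conventions for PBIAS vary
# between references; this uses the common (observed - simulated) form.
import numpy as np

def nse(obs: np.ndarray, sim: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 or less is no better than the mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs: np.ndarray, sim: np.ndarray) -> float:
    """Percent bias of the simulated series relative to the observations."""
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

# In the multi-objective setting, runoff and evapotranspiration each contribute
# their own objective values, and the genetic algorithm searches for
# Pareto-optimal parameter sets rather than a single best compromise.
```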



2017 · Vol 29 (3) · Author(s): Mabule Samuel Mabakane, Daniel Mojalefa Moeketsi, Anton Lopis

This paper presents a case study on the scalability of several versions of the molecular dynamics code DL_POLY, performed on three systems at South Africa's Centre for High Performance Computing: the e1350 IBM Linux cluster, the Sun system, and the Lengau supercomputer. Within this study, different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY under weak and strong scaling. It was found that the speed-up results for the small systems were better than those for the large systems on both the Ethernet and InfiniBand networks. However, simulations of large systems in DL_POLY performed well over the InfiniBand network on the Lengau cluster compared to the e1350 and Sun supercomputers.
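For readers unfamiliar with the distinction, the sketch below (not taken from the study) shows the usual definitions of strong and weak scaling efficiency in terms of measured runtimes.

```python
# Sketch (not from the paper): the two scaling measures the study refers to.
# Strong scaling keeps the total problem size fixed as processors are added;
# weak scaling grows the problem with the processor count, so the ideal
# outcome is a constant runtime.
def strong_scaling_efficiency(t_ref: float, t_p: float, p_ref: int, p: int) -> float:
    """Measured speedup (t_ref / t_p) divided by the ideal speedup (p / p_ref)."""
    return (t_ref / t_p) / (p / p_ref)

def weak_scaling_efficiency(t_ref: float, t_p: float) -> float:
    """Ratio of the reference runtime to the runtime at larger scale (1.0 is ideal)."""
    return t_ref / t_p
```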



2017 · Vol 5 (2) · pp. 55-69 · Author(s): Daisuke Fujishima, Tomio Kamada

The field of parallel computing has experienced an increase in the number of computing nodes, enabling broader applications, including computations with irregular features. Some parallel programming languages handle object data structures and offer marshaling/unmarshaling mechanisms to transfer them between nodes. To manage data elements across computing nodes, some research on distributed collections has been conducted. This study proposes a distributed collection library that can handle multiple collections of object elements and change their distributions while maintaining the associativity between their elements. The library is implemented in the object-oriented parallel programming language X10. The authors assume pairs of associative collections, such as vehicles and streets in a traffic simulation. When many vehicles are concentrated on streets assigned to certain computing nodes, some of these streets should be moved to other nodes. The authors' library assists the programmer in easily distributing the associative collections over the computing nodes and collectively relocating elements while maintaining the data-sharing relationship among associative elements. The programmer can describe the associativity between objects using both declarative and procedural methods. The authors show a preliminary performance evaluation of their library on a Linux cluster and the K computer.
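To make the relocation idea concrete, the fragment below is a purely conceptual Python sketch, not the library's X10 API: it shows associated elements (vehicles) following their owning element (a street) when that element is moved to another node.

```python
# Conceptual sketch only: the library described above is written in X10 and its
# real API differs. This Python fragment merely illustrates "relocation while
# preserving associativity": when a street is reassigned to another computing
# node, the vehicles associated with it are reassigned together with it.
from collections import defaultdict

streets_on_node = defaultdict(set)     # node id -> set of street ids
vehicles_on_street = defaultdict(set)  # street id -> set of vehicle ids
vehicle_node = {}                      # vehicle id -> node id

def relocate_street(street: str, src_node: int, dst_node: int) -> None:
    """Move a street and every vehicle associated with it from src_node to dst_node."""
    streets_on_node[src_node].discard(street)
    streets_on_node[dst_node].add(street)
    for vehicle in vehicles_on_street[street]:
        vehicle_node[vehicle] = dst_node  # associated elements follow their street
```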



2014 · Vol 70 (a1) · pp. C781-C781 · Author(s): Ville Uski, Charles Ballard, Ronan Keegan, Eugene Krissinel, Andrey Lebedev, ...

The CCP4 software suite [1] provides a comprehensive set of tools for use in the macromolecular structure solution process by X-ray crystallography. Traditionally, these tools have been run through the graphical interface or the command line on each user's personal workstation. Recently, some of the tools, including the molecular replacement pipelines Balbes [2] and MrBUMP [3], have been provided as web services at the Research Complex at Harwell. These pipelines can be especially useful in cases where there is low sequence identity between the target-structure sequence and those of its possible homologues. The services can be accessed through a web client, allowing one to submit molecular replacement jobs to our Linux cluster and to view the results of those jobs. The molecular replacement pipelines are ideal candidates for web services, as they require the installation and maintenance of large databases and benefit from the parallel computing resources provided by the cluster. Further plans for web services will be discussed. With the ever-increasing mobility of scientific setups and the ubiquity of ultra-portable devices, there is a demand for a consistent framework for remote crystallographic computations and data maintenance. This framework is planned to include an interface for synchronising data with the facilities of the Diamond Light Source, as well as with local CCP4 GUI-2 setups.



2012 · Vol 87 (12) · pp. 1912-1916 · Author(s): Q.P. Yuan, B.J. Xiao, R.R. Zhang, M.L. Walker, B.G. Penaflor, ...

