computer cluster
Recently Published Documents


TOTAL DOCUMENTS

126
(FIVE YEARS 21)

H-INDEX

7
(FIVE YEARS 2)

2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Rhys A. Farrer

Abstract

Background: Identifying haplotypes is central to sequence analysis in diploid or polyploid genomes. Despite this, there remains a lack of research and tools designed for physical phasing and its downstream analysis.

Results: HaplotypeTools is a new toolset to phase variant sites using VCF and BAM files and to analyse phased VCFs. Phasing is achieved by identifying reads that overlap ≥ 2 heterozygous positions; phase groups are then extended by additional reads, a process that can be parallelized across a computer cluster. HaplotypeTools includes various utility scripts for downstream analysis, including crossover detection and phylogenetic placement of haplotypes relative to other lineages or species. HaplotypeTools was assessed for accuracy against WhatsHap using simulated short and long reads, demonstrating higher accuracy, albeit with reduced haplotype length. HaplotypeTools was also tested on real Illumina data to determine the ancestry of the hybrid fungal isolate Batrachochytrium dendrobatidis (Bd) SA-EC3, finding that 80% of haplotypes across the genome cluster phylogenetically with the parental lineages BdGPL (39%) and BdCAPE (41%), indicating that those are the parental lineages. Finally, ~99% of phasing was conserved between overlapping phase groups between SA-EC3 and either parental lineage, indicating mitotic gene conversion/parasexuality as the mechanism of recombination for this hybrid isolate. HaplotypeTools is open source and freely available from https://github.com/rhysf/HaplotypeTools under the MIT License.

Conclusions: HaplotypeTools is a powerful resource for analyzing hybrid or recombinant diploid or polyploid genomes and identifying parental ancestry for sub-genomic regions.
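The core idea behind read-based phasing as the abstract describes it (a read spanning ≥ 2 heterozygous sites links those sites into one phase group, and overlapping reads extend the group) can be sketched as follows. This is a minimal illustration, not HaplotypeTools' actual algorithm; the data layout and function name are assumptions, and the sketch omits refinements such as merging two groups bridged by a later read:

```python
def phase_reads(reads):
    """Sketch of read-based phasing.

    reads: list of dicts mapping heterozygous site position -> allele (0/1).
    Returns phase groups, each a dict position -> allele on one haplotype.
    """
    groups = []  # each group records a consistent phasing of linked sites
    for read in reads:
        if len(read) < 2:
            continue  # a read must span >= 2 het sites to be informative
        target, flip = None, 0
        for g in groups:
            shared = set(g) & set(read)
            if shared:
                # orient the read relative to the group at one shared site
                anchor = next(iter(shared))
                flip = int(read[anchor] != g[anchor])
                target = g
                break
        if target is None:
            groups.append(dict(read))  # start a new phase group
        else:
            for pos, allele in read.items():
                target.setdefault(pos, allele ^ flip)  # extend the group
    return groups
```

Because each read is matched against groups independently, the per-read work can be distributed, which is consistent with the abstract's note that the extension step parallelizes across a cluster.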


2021 ◽  
Author(s):  
Depeng Zuo ◽  
Guangyuan Kan ◽  
Hongquan Sun ◽  
Hongbin Zhang ◽  
Ke Liang

Abstract. The Generalized Likelihood Uncertainty Estimation (GLUE) method has thrived for decades, and a huge number of applications in the field of hydrological modelling have proved its effectiveness in uncertainty and parameter estimation. However, for many years the poor computational efficiency of GLUE has hampered its further application. A feasible way to solve this problem is to integrate modern CPU-GPU hybrid high-performance computer cluster technology to accelerate the traditional GLUE method. In this study, we developed a CPU-GPU hybrid computer cluster-based, highly parallel, large-scale GLUE method to improve its computational efficiency. Intel Xeon multi-core CPUs and NVIDIA Tesla many-core GPUs were adopted in this study. The source code was developed using MPICH2, C++ with OpenMP 2.0, and CUDA 6.5. The parallel GLUE method was tested with a widely used hydrological model (the Xinanjiang model) to investigate its performance and scalability. Comparison results indicated that the parallel GLUE method outperformed the traditional serial method and has good application prospects on supercomputer clusters such as Summit and Sierra of the TOP500 list.
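The loop that the paper parallelizes is embarrassingly parallel: each sampled parameter set is run through the model and scored independently. A minimal serial sketch of GLUE in Python (the Nash-Sutcliffe likelihood measure, uniform sampling bounds, and threshold value here are common choices but are illustrative assumptions, not the paper's code):

```python
import random

def glue(model, observed, n_samples=10000, threshold=0.7, bounds=(0.0, 1.0)):
    """Serial GLUE sketch: sample parameters uniformly, score each set
    with Nash-Sutcliffe efficiency (NSE) against observations, and keep
    the 'behavioural' sets whose NSE exceeds the threshold."""
    mean_obs = sum(observed) / len(observed)
    denom = sum((o - mean_obs) ** 2 for o in observed)
    behavioural = []
    for _ in range(n_samples):
        theta = random.uniform(*bounds)          # one candidate parameter
        sim = model(theta)                       # run the hydrological model
        nse = 1.0 - sum((s - o) ** 2 for s, o in zip(sim, observed)) / denom
        if nse >= threshold:
            behavioural.append((theta, nse))     # retain behavioural set
    return behavioural
```

Because every iteration is independent, the samples can be sharded across MPI ranks, OpenMP threads, and GPU kernels exactly as the abstract describes, with only the final behavioural sets gathered at the end.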


2021 ◽  
Vol 2090 (1) ◽  
pp. 012100
Author(s):  
P Weisenpacher ◽  
J Glasa ◽  
L Valasek ◽  
T Kubisova

Abstract This paper investigates smoke movement and its stratification in a lay-by of a 900 m long road tunnel by computer simulation using Fire Dynamics Simulator. The lay-by is located upstream of and close to the fire. The influence of the lay-by geometry on smoke spread is evaluated by comparison with a fictional tunnel without a lay-by. Several fire scenarios with various tunnel slopes and heat release rates are considered for the tunnels with and without the lay-by. The most significant breakdown of smoke stratification and decrease in visibility in the area of the lay-by is observed in the zero-slope tunnel for more intensive fires with a significant backlayering length. Several other features of smoke spread in the lay-by are analysed as well. The parallel calculations were performed on a high-performance computer cluster.


Author(s):  
Stefano Colafranceschi ◽  
Emanuele De Biase

The computational capabilities of commercial CPUs and GPUs have reached a plateau, but software applications are often memory-intensive tasks that commonly require recent hardware developments. Computer clusters are an expensive solution, although reliable and versatile, with a limited market share among small colleges. Small schools typically rely on cloud-based systems because they are more affordable, more manageable (no need to worry about maintenance), and easier to implement (the burden is shifted to the datacenter). Here we provide arguments in favor of an on-campus hardware solution which, while providing benefits for students, does not carry the financial burden associated with larger and more powerful computer clusters. We think that instructors in engineering/computer science faculties might find this a viable and workable solution to improve the computing environment of their school without incurring the high cost of a ready-made solution. At the basis of this proposal is the acquisition of inexpensive refurbished hardware and of a type-1 VMware hypervisor with free licensing, as well as a custom-made web platform to control the deployed hypervisors. VMware is a global leader in cloud infrastructure and software-based solutions. In particular, the adoption of a customized "Elastic Sky X integrated" (ESXi) hypervisor together with virtual operating systems installed in the very same datastore would constitute an interesting and working proof of concept, achieving a computer cluster at a fraction of the market cost.


2020 ◽  
Vol 2020 ◽  
pp. 1-10 ◽  
Author(s):  
Ruiying Li ◽  
Xufeng Zhang

Preventive maintenance (PM), which is performed periodically on a system to lessen its probability of failure, can effectively decrease the loss caused by system breakdown or performance degradation. The optimal PM interval has been well studied for both binary-state systems (BSSs) and discrete multistate systems (MSSs). In reality, however, the performance of many systems can change continuously, ranging from complete failure to perfect functioning. Considering such characteristics, two types of performance-based measures, performance availability and probabilistic resilience, are used to quantify the system's behaviour for continuous MSSs. A Monte Carlo-based method is given to analyse the performance change process of the system, and an optimization framework is proposed to find the optimal PM interval considering per-unit-time cost, system breakdown rate, performance availability, and probabilistic resilience. A computer cluster is used as an example to illustrate the effectiveness of the proposed method.
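The per-unit-time cost part of such an optimization can be sketched with a simple Monte Carlo renewal simulation: each cycle ends either in a failure before the PM interval (incurring the corrective cost) or in a planned PM at the interval. The Weibull lifetime distribution and the cost values below are illustrative assumptions, not the paper's cluster model:

```python
import random

def expected_cost_rate(pm_interval, c_pm=1.0, c_fail=10.0,
                       scale=5.0, shape=2.0, n_runs=20000, seed=42):
    """Estimate long-run cost per unit time for a given PM interval by
    simulating many renewal cycles (assumed Weibull time-to-failure)."""
    rng = random.Random(seed)
    total_cost = 0.0
    total_time = 0.0
    for _ in range(n_runs):
        life = rng.weibullvariate(scale, shape)  # random time to failure
        if life < pm_interval:
            total_cost += c_fail      # corrective repair after breakdown
            total_time += life
        else:
            total_cost += c_pm        # planned preventive maintenance
            total_time += pm_interval
    return total_cost / total_time

# scan candidate intervals and pick the per-unit-time cost minimiser
best = min([0.5, 1, 2, 3, 4, 6], key=expected_cost_rate)
```

With an increasing hazard rate (shape > 1), the cost rate is high for very short intervals (frequent PM) and for very long ones (frequent failures), so the scan finds an interior optimum; the paper's framework additionally constrains breakdown rate, performance availability, and probabilistic resilience.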

