The “Chimera”: An Off-The-Shelf CPU/GPGPU/FPGA Hybrid Computing Platform

2012 · Vol 2012 · pp. 1-10
Author(s): Ra Inta, David J. Bowman, Susan M. Scott

The nature of modern astronomy means that a number of interesting problems exhibit a substantial computational bound, and this situation is gradually worsening. Scientists fighting for valuable resources on conventional high-performance computing (HPC) facilities, which often offer a limited customizable user environment, are increasingly looking to hardware acceleration solutions. We describe here a heterogeneous CPU/GPGPU/FPGA desktop computing system (the “Chimera”), built with commercial off-the-shelf components. We show that this platform may be a viable alternative for many common computationally bound problems in astronomy, although not without significant challenges. The most significant bottleneck in pipelines involving real data is most likely to be the interconnect (in this case the PCI Express bus on the CPU motherboard). Finally, we speculate on the merits of the Chimera system across the wider landscape of parallel computing through the analysis of representative problems from UC Berkeley’s “Thirteen Dwarves.”
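
The interconnect bottleneck identified above can be quantified with a simple transfer benchmark. Below is a minimal sketch, assuming a CUDA-capable GPU and the CuPy library (an assumption for illustration, not part of the original Chimera pipeline), that estimates host-to-device bandwidth over the PCI Express bus.

```python
# Minimal sketch: estimate host-to-device transfer bandwidth over PCIe.
# Assumes a CUDA-capable GPU and the CuPy library (an assumption, not
# the original Chimera pipeline).
import time
import numpy as np
import cupy as cp

def h2d_bandwidth_gbs(n_bytes=256 * 1024 * 1024, repeats=10):
    host = np.ones(n_bytes, dtype=np.uint8)   # 256 MiB payload on the host
    cp.asarray(host)                          # warm-up copy (allocation, init)
    cp.cuda.Device(0).synchronize()
    t0 = time.perf_counter()
    for _ in range(repeats):
        cp.asarray(host)                      # host -> device copy
    cp.cuda.Device(0).synchronize()
    elapsed = time.perf_counter() - t0
    return n_bytes * repeats / elapsed / 1e9  # GB/s

if __name__ == "__main__":
    print(f"approx. host-to-device bandwidth: {h2d_bandwidth_gbs():.1f} GB/s")
```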

2014 · Vol 556-562 · pp. 4746-4749
Author(s): Bin Chu, Da Lin Jiang, Bo Cheng

This paper concerns large-scale mosaicking of remotely sensed images. Based on a high-performance computing (HPC) system, we offer a method to decompose the problem into subtasks and integrate them according to their logical and physical relationships. The mosaicking of large-scale remotely sensed images is improved in both performance and effectiveness.
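
The decomposition idea can be illustrated with a tile-based split of the mosaic extent processed by a worker pool. The following is a minimal sketch using Python's process pool with a placeholder `process_tile` function; the tile size, worker count, and processing body are illustrative assumptions, not the scheme from the paper.

```python
# Minimal sketch of tile-based decomposition for a large mosaic job.
# Tile size, worker count, and process_tile are illustrative assumptions.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def make_tiles(width, height, tile=4096):
    """Split a (width x height) mosaic extent into tile windows."""
    for x, y in product(range(0, width, tile), range(0, height, tile)):
        yield (x, y, min(tile, width - x), min(tile, height - y))

def process_tile(window):
    x, y, w, h = window
    # Placeholder: read the source scenes overlapping this window,
    # resample, blend seams, and write the tile to shared storage.
    return (x, y, w, h)

if __name__ == "__main__":
    windows = list(make_tiles(200_000, 150_000))
    with ProcessPoolExecutor(max_workers=8) as pool:
        for done in pool.map(process_tile, windows, chunksize=16):
            pass  # collect per-tile status / relationships here
```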


2014 · Vol 989-994 · pp. 1810-1813
Author(s): Yu Sun, Jun Liu

Ensuring computational correctness for parallel applications and improving the utilization of dynamic computing resources in distributed computing systems are important research issues. Building on a previous high-performance distributed computing system, a fault-tolerant task scheduler was developed that combines a heartbeat mechanism, a fault-discovery mechanism, and a subtask-rescheduling mechanism. Experiments show that the fault-tolerant task scheduler performs well and ensures computational correctness even when some computing resources fail.
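
A minimal sketch of the heartbeat/fault-discovery/rescheduling combination is shown below; the class, timeout, and method names are illustrative assumptions rather than the scheduler described in the paper.

```python
# Minimal sketch of heartbeat-based fault detection with subtask
# rescheduling; names and timeout are illustrative assumptions.
import time

HEARTBEAT_TIMEOUT = 10.0  # seconds without a heartbeat => worker treated as failed

class Scheduler:
    def __init__(self):
        self.last_beat = {}   # worker id -> last heartbeat time
        self.assigned = {}    # worker id -> set of subtask ids
        self.pending = []     # subtasks waiting to be (re)dispatched

    def heartbeat(self, worker):
        self.last_beat[worker] = time.monotonic()

    def assign(self, worker, subtask):
        self.assigned.setdefault(worker, set()).add(subtask)

    def check_failures(self):
        """Requeue the subtasks of any worker whose heartbeat has lapsed."""
        now = time.monotonic()
        for worker, beat in list(self.last_beat.items()):
            if now - beat > HEARTBEAT_TIMEOUT:
                self.pending.extend(self.assigned.pop(worker, set()))
                del self.last_beat[worker]

# Usage: workers call heartbeat() periodically; the master calls
# check_failures() in its scheduling loop and re-dispatches self.pending.
```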


2019
Author(s): Weiming Hu, Guido Cervone, Vivek Balasubramanian, Matteo Turilli, Shantenu Jha

Author(s): Indar Sugiarto, Doddy Prayogo, Henry Palit, Felix Pasila, Resmana Lim, ...

This paper describes a prototype of a computing platform dedicated to artificial intelligence explorations. The platform, dubbed PakCarik, is essentially a high-throughput computing platform with GPU (graphics processing unit) acceleration. PakCarik is an Indonesian acronym for Platform Komputasi Cerdas Ramah Industri Kreatif, which can be translated as “Creative Industry friendly Intelligence Computing Platform”. The platform aims to provide a complete development and production environment for AI-based projects, especially those that rely on machine learning and multiobjective optimization paradigms. PakCarik was assembled from commercial off-the-shelf hardware and was tested on several AI-related application scenarios. The testing methods in this experiment include High-Performance Linpack (HPL) benchmarking, message passing interface (MPI) benchmarking, and TensorFlow (TF) benchmarking. From the experiments, the authors observe that PakCarik's performance is quite similar to that of commonly used cloud computing services such as Google Compute Engine and Amazon EC2, even though it falls a bit behind dedicated AI platforms such as the Nvidia DGX-1 used in the benchmarking experiment. Its maximum computing performance was measured at 326 Gflops. The authors conclude that PakCarik is ready to be deployed in real-world applications and can be made even more powerful by adding more GPU cards.
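
The TensorFlow benchmarking mentioned above can be approximated with a dense matrix-multiply throughput check. The sketch below, with an assumed matrix size and repeat count, reports GFLOPS in the same spirit; it is not the benchmark suite used for PakCarik.

```python
# Minimal sketch of a matrix-multiply throughput check in TensorFlow;
# matrix size and repeat count are illustrative assumptions.
import time
import tensorflow as tf

def matmul_gflops(n=4096, repeats=10):
    a = tf.random.uniform((n, n), dtype=tf.float32)
    b = tf.random.uniform((n, n), dtype=tf.float32)
    tf.matmul(a, b)                  # warm-up (kernel setup, data placement)
    t0 = time.perf_counter()
    for _ in range(repeats):
        c = tf.matmul(a, b)
    _ = c.numpy()                    # force pending GPU work to finish
    elapsed = time.perf_counter() - t0
    return 2.0 * n ** 3 * repeats / elapsed / 1e9  # flops of n^3 matmul

if __name__ == "__main__":
    print(f"approx. {matmul_gflops():.0f} GFLOPS (float32 matmul)")
```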


2017 · Vol 33 (2) · pp. 119-130
Author(s): Vinh Van Le, Hoai Van Tran, Hieu Ngoc Duong, Giang Xuan Bui, Lang Van Tran

Metagenomics is a powerful approach for studying environmental samples that does not require the isolation and cultivation of individual organisms. One of the essential tasks in a metagenomic project is to identify the origin of reads, referred to as taxonomic assignment. Because each metagenomic project has to analyze large-scale datasets, taxonomic assignment is highly computation-intensive. This study proposes a parallel algorithm for the taxonomic assignment problem, called SeMetaPL, which aims to deal with this computational challenge. The proposed algorithm is evaluated with both simulated and real datasets on a high-performance computing system. Experimental results demonstrate that the algorithm achieves good performance and utilizes the system's resources efficiently. The software implementing the algorithm and all test datasets can be downloaded at http://it.hcmute.edu.vn/bioinfo/metapro/SeMetaPL.html.
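
Although the abstract does not detail SeMetaPL's parallelization, a common pattern for this kind of workload is to scatter reads across MPI ranks and gather the per-read assignments. The sketch below assumes mpi4py and a placeholder `assign_taxon` function; it is not the SeMetaPL algorithm itself.

```python
# Minimal sketch: distribute reads over MPI ranks and gather assignments.
# assign_taxon is a placeholder, not the actual classifier.
# Run with e.g.: mpirun -np 4 python this_file.py
from mpi4py import MPI

def assign_taxon(read):
    # Placeholder for the real taxonomic assignment of one read.
    return (read[0], "unassigned")

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    reads = [(f"read_{i}", "ACGT" * 25) for i in range(100_000)]
    chunks = [reads[i::size] for i in range(size)]   # round-robin split
else:
    chunks = None

local_reads = comm.scatter(chunks, root=0)
local_results = [assign_taxon(r) for r in local_reads]
all_results = comm.gather(local_results, root=0)

if rank == 0:
    merged = [item for part in all_results for item in part]
    print(f"assigned {len(merged)} reads across {size} ranks")
```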


Author(s): Vadim Kondrashev, Sergey Denisov

The paper discusses methods and algorithms for providing the high-performance computing resources of multicomputer systems in a shared mode for fundamental and applied research in the field of materials science. Approaches are proposed for applying integrated software environments (frameworks) designed to solve materials science problems using virtualization and parallel computing technologies.
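
One way to combine the virtualization and parallel computing layers mentioned above is to launch a framework inside a container under an MPI launcher. The sketch below uses hypothetical image and executable names and omits batch-system integration; it illustrates the idea rather than the paper's approach.

```python
# Minimal sketch: run a materials-science framework inside a container
# under an MPI launcher on a shared node. Image name, executable, and
# rank count are hypothetical.
import subprocess

def run_framework(image="matsci-framework.sif", exe="solve_structure", ranks=4):
    cmd = ["mpirun", "-np", str(ranks),        # parallel computing layer
           "singularity", "exec", image, exe]  # virtualization layer
    return subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_framework()
```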

