Workflow optimization of performance and quality of service for bioinformatics application in high performance computing

2016 ◽  
Vol 15 ◽  
pp. 3-10 ◽  
Author(s):  
Rashid Al-Ali ◽  
Nagarajan Kathiresan ◽  
Mohammed El Anbari ◽  
Eric R. Schendel ◽  
Tariq Abu Zaid

2017 ◽  
Vol 2 (1) ◽  
pp. 7
Author(s):  
Izzatul Ummah

In this research, we built a grid computing infrastructure by utilizing an existing cluster at Telkom University as the back-end resource. We used the Globus Toolkit 6.0 and Condor 8.4.2 middleware to develop the grid system, and we tested its performance using parallel matrix multiplication. The results showed that our grid system achieved good performance. With this grid system in place, we believe that access to high-performance computing resources will become easier and the Quality of Service will also improve.
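Below is a minimal, hypothetical sketch of the kind of parallel matrix multiplication benchmark the abstract describes, written with a local process pool for illustration; the matrix size N, the worker count, and the use of Python multiprocessing are assumptions, not details from the paper.

import time
import numpy as np
from multiprocessing import Pool

N = 1024       # assumed matrix size; not reported in the abstract
WORKERS = 4    # assumed number of worker slots

np.random.seed(0)            # fixed seed so every worker process sees the same A and B
A = np.random.rand(N, N)
B = np.random.rand(N, N)

def multiply_block(bounds):
    # Each worker computes one horizontal block of the product A @ B.
    start, stop = bounds
    return A[start:stop, :] @ B

if __name__ == "__main__":
    step = N // WORKERS
    blocks = [(i * step, (i + 1) * step) for i in range(WORKERS)]
    t0 = time.perf_counter()
    with Pool(WORKERS) as pool:
        parts = pool.map(multiply_block, blocks)
    C = np.vstack(parts)
    print(f"{WORKERS} workers finished {N}x{N} multiplication in "
          f"{time.perf_counter() - t0:.3f} s")

On the grid described in the paper, the same row-block decomposition would typically be expressed as independent jobs submitted through Condor rather than as local processes.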


Land ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 301
Author(s):  
Kimberly R. Hall ◽  
Ranjan Anantharaman ◽  
Vincent A. Landau ◽  
Melissa Clark ◽  
Brett G. Dickson ◽  
...  

The conservation field is experiencing a rapid increase in the amount, variety, and quality of spatial data that can help us understand species movement and landscape connectivity patterns. As interest grows in more dynamic representations of movement potential, modelers are often limited by the capacity of their analytic tools to handle these datasets. Technology developments in software and high-performance computing are rapidly emerging in many fields, but uptake within conservation may lag, as our tools or our choice of computing language can constrain our ability to keep pace. We recently updated Circuitscape, a widely used connectivity analysis tool developed by Brad McRae and Viral Shah, by implementing it in Julia, a high-performance computing language. In this initial re-code (Circuitscape 5.0) and later updates, we improved computational efficiency and parallelism, achieving major speed improvements and enabling assessments across larger extents or with higher resolution data. Here, we reflect on the benefits to conservation of strengthening collaborations with computer scientists, and extract examples from a collection of 572 Circuitscape applications to illustrate how, through a decade of repeated investment in the software, applications have been many, varied, and increasingly dynamic. Beyond empowering continued innovations in dynamic connectivity, we expect that faster run times will play an important role in facilitating co-production of connectivity assessments with stakeholders, increasing the likelihood that connectivity science will be incorporated in land use decisions.


Author(s):  
Konstantin Volovich

The article is devoted to methods for calculating and evaluating the effectiveness of hybrid computing systems. It proposes a method for calculating the workload value from peak values of cluster performance, and analyzes the results and the quality of operation of cloud scientific services for high-performance computing using the roofline model.
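For context, the roofline model bounds a kernel's attainable performance by the lesser of the machine's peak compute rate and the product of the kernel's arithmetic intensity and the peak memory bandwidth. The sketch below illustrates that bound; the peak figures are illustrative assumptions, not values from the article.

def roofline_bound(arithmetic_intensity, peak_flops, peak_bandwidth):
    # Attainable FLOP/s for a kernel with the given arithmetic
    # intensity (FLOPs per byte moved to or from memory).
    return min(peak_flops, arithmetic_intensity * peak_bandwidth)

PEAK_FLOPS = 10e12        # assumed accelerator peak: 10 TFLOP/s
PEAK_BANDWIDTH = 900e9    # assumed memory bandwidth: 900 GB/s

for ai in (0.5, 2.0, 8.0, 32.0):
    bound = roofline_bound(ai, PEAK_FLOPS, PEAK_BANDWIDTH)
    print(f"arithmetic intensity {ai:5.1f} FLOP/B -> "
          f"bound {bound / 1e12:5.2f} TFLOP/s")

With these assumed peaks the ridge point sits near 11 FLOP per byte: kernels below it are bandwidth-bound and kernels above it are compute-bound, which is the distinction a roofline analysis of a cloud HPC service makes visible.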


MRS Bulletin ◽  
1997 ◽  
Vol 22 (10) ◽  
pp. 5-6
Author(s):  
Horst D. Simon

Recent events in the high-performance computing industry have concerned scientists and the general public regarding a crisis or a lack of leadership in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s—Thinking Machines and Kendall Square Research—were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later. During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten if they were to run on the next-generation, highly parallel architecture. Scientists who are not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise. However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.

