Performance Analysis of Parallelized Bioinformatics Applications

2018 ◽  
Vol 7 (2) ◽  
pp. 70-74
Author(s):  
Dhruv Chander Pant ◽  
O. P. Gupta

The main challenges facing bioinformatics applications today are managing, analyzing, and processing huge volumes of genome data. This kind of analysis and processing is very difficult on general-purpose computer systems, which gives rise to the need for distributed computing, cloud computing, and high-performance computing in bioinformatics applications. Distributed computers, cloud computers, and multi-core processors are now available at very low cost to deal with bulk amounts of genome data. Alongside these technological developments in distributed computing, scientists and bioinformaticians are making many efforts to parallelize and implement algorithms so as to take maximum advantage of the additional computational power. In this paper a few bioinformatics algorithms are discussed, their parallelized implementations are explained, and the performance of these parallelized algorithms is analyzed. It is also observed that, in parallel implementations of the various bioinformatics algorithms, the impact of the communication subsystem with respect to job size should be analyzed.
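As a purely illustrative sketch (none of the paper's algorithms or measurements are reproduced here), the job-size versus communication trade-off the abstract points to can be seen in a minimal parallel k-mer counting example using Python's multiprocessing; the sequences, k, and worker count are all hypothetical:

```python
from collections import Counter
from multiprocessing import Pool


def count_kmers(seq, k=3):
    """Count k-mers in one sequence chunk (the per-worker job)."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))


def parallel_kmer_counts(sequences, k=3, workers=2):
    """Fan sequences out to a worker pool and merge the partial counts.

    The size of each job (sequence length) relative to the cost of shipping
    inputs and merging results is exactly the communication-versus-computation
    balance the paper says should be analyzed.
    """
    with Pool(workers) as pool:
        partials = pool.starmap(count_kmers, [(s, k) for s in sequences])
    total = Counter()
    for part in partials:
        total.update(part)
    return total
```

For very short reads the pickling and merge overhead dominates, while long reads amortize it — the same effect, at a much larger scale, that the paper studies on real communication subsystems.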

2021 ◽  
Vol 4 (3) ◽  
pp. 40
Author(s):  
Abdul Majeed

During the ongoing pandemic of the novel coronavirus disease 2019 (COVID-19), the latest technologies such as artificial intelligence (AI), blockchain, learning paradigms (machine, deep, smart, few-shot, extreme learning, etc.), high-performance computing (HPC), the Internet of Medical Things (IoMT), and Industry 4.0 have played a vital role. These technologies helped to contain the disease’s spread by predicting contaminated people/places, as well as by forecasting future trends. In this article, we provide insights into the applications of machine learning (ML) and high-performance computing (HPC) in the era of COVID-19. We discuss the person-specific data that are being collected to lower the spread of COVID-19 and highlight the remarkable opportunities they provide for knowledge extraction leveraging low-cost ML and HPC techniques. We demonstrate the role of ML and HPC in the COVID-19 era through successful implementations or propositions in three contexts: (i) ML and HPC use in the data life cycle, (ii) ML and HPC use in analytics on COVID-19 data, and (iii) general-purpose applications of both techniques in COVID-19’s arena. In addition, we discuss privacy and security issues and the architecture of a prototype system to demonstrate the proposed research. Finally, we discuss the challenges of the available data and highlight the issues that hinder the applicability of ML and HPC solutions to it.


1985 ◽  
Vol 63 ◽  
Author(s):  
A. F. Bakker

ABSTRACT

The need for computational power in the modeling of physical systems is rapidly increasing. Realistic simulations of materials often require complex interactions and large numbers of particles. For most scientists, full-time access to supercomputers is not possible, and even this might not be sufficient to solve their problems. As most of the calculations involved are straightforward and repetitive in nature, a possible solution is to design and build a processor for a specific application with a low cost/performance ratio. This approach is to be contrasted with the use of a general-purpose computer, which is designed to treat a large class of problems and includes many expensive features (e.g. software) that are not utilized in the simulations. The architecture of a special-purpose computer can be tailored to the problem; e.g., parallel and pipelined operations can be incorporated to obtain efficient computational throughput, and memory organization and instruction sets can be optimized for this purpose. A few such algorithm-oriented processors have been built in the last decade and utilized for certain specific jobs; for example, molecular dynamics simulations of systems of Lennard-Jones particles and Monte Carlo calculations of Ising models. An overview of some existing algorithm-oriented processors and expectations for the future will be presented.
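The Ising Monte Carlo workload the abstract cites is indeed straightforward and repetitive. A minimal sketch of its Metropolis inner loop (in plain Python, purely to illustrate the kind of kernel such special-purpose processors hard-wire; the lattice size and temperature are hypothetical) looks like:

```python
import math
import random


def metropolis_sweep(lattice, beta, rng=random):
    """One Metropolis sweep over a square 2D Ising lattice of +1/-1 spins.

    This tiny, repetitive kernel (neighbor sum, energy delta, accept/reject)
    is what an algorithm-oriented processor can pipeline and parallelize.
    """
    n = len(lattice)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        # Energy change from flipping spin (i, j), periodic boundaries.
        nbr = (lattice[(i + 1) % n][j] + lattice[(i - 1) % n][j]
               + lattice[i][(j + 1) % n] + lattice[i][(j - 1) % n])
        dE = 2 * lattice[i][j] * nbr
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            lattice[i][j] *= -1
    return lattice
```

At low temperature (large beta) an aligned lattice stays aligned, since every flip costs energy and is almost never accepted.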


Graphics accelerators are increasingly used for general-purpose high-performance computing applications, as they provide a low-cost solution to high-performance computing requirements. Intel also came out with a performance accelerator that offers a similar solution. However, existing application software needs to be restructured to suit the accelerator paradigm, using a suitable software architecture pattern. In the present work, a master-slave architecture is employed to port grid-free CFD Euler solvers to CUDA for GPGPU computing. The performance obtained using the master-slave architecture for GPGPU computing is compared with that of sequential computing.
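The master-slave split described above can be sketched in plain Python (an illustration of the partition/dispatch/gather structure only, not the paper's CUDA implementation; the per-point update here is a hypothetical stand-in for the Euler flux computation):

```python
from multiprocessing import Pool


def slave_update(args):
    """Slave: update one block of grid-point values.

    A toy exponential-decay step stands in for the per-point flux/residual
    work that, on a GPU, a thread block would perform.
    """
    block, dt = args
    return [u - dt * u for u in block]


def master_step(grid, dt=0.1, workers=2):
    """Master: partition the grid, dispatch blocks to slaves, gather results.

    This mirrors the master-slave pattern: the master owns the global grid
    and sequencing, the slaves do the data-parallel arithmetic.
    """
    size = max(1, len(grid) // workers)
    blocks = [grid[i:i + size] for i in range(0, len(grid), size)]
    with Pool(workers) as pool:
        results = pool.map(slave_update, [(b, dt) for b in blocks])
    return [u for block in results for u in block]
```

In the CUDA setting the same roles appear as host code (master) launching kernels over partitions of the grid (slaves), with the gather replaced by device-to-host copies.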


Author(s):  
D. Z. Wang ◽  
D. L. Taylor

Abstract

This paper describes an analytical approach for calculating the damped critical speeds of multi-degree-of-freedom rotor-bearing systems. It is shown that calculating the critical speeds is equivalent to finding the roots of a proposed matrix algebraic equation. The technique employs a Newton-Raphson scheme and the derivatives of eigenvalues; the system's left eigenvectors are used to simplify the calculations. Based on this approach, a general-purpose computer program was developed using a finite element model of rotor-bearing systems. The program automatically generates the system equations and finds the critical speeds. The program is applied to analyze a turbomachine supported on two cylindrical oil-film journal bearings. The results are compared with reported data and the agreement is very good.
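The idea of Newton-Raphson iteration on an eigenvalue condition can be sketched on a hypothetical toy model (a damped 2-DOF rotor with speed-dependent gyroscopic coupling, not the paper's finite element model); here a finite-difference derivative stands in for the analytical eigenvalue derivatives the paper obtains via left eigenvectors:

```python
import numpy as np


def rotor_matrix(omega, m=1.0, k=100.0, c=0.5, g=0.1):
    """State matrix of a toy 2-DOF rotor: mass m, stiffness k, damping c,
    and gyroscopic coupling proportional to spin speed omega. Illustrative
    only; the paper assembles such matrices from finite elements."""
    return np.array([
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
        [-k / m, 0.0, -c / m, -g * omega / m],
        [0.0, -k / m, g * omega / m, -c / m],
    ])


def forward_whirl(omega):
    """Largest positive imaginary part of the eigenvalues: the damped
    forward-whirl frequency at spin speed omega."""
    return max(np.linalg.eigvals(rotor_matrix(omega)).imag)


def critical_speed(omega0=5.0, tol=1e-8, h=1e-6, max_iter=50):
    """Newton-Raphson on f(omega) = whirl(omega) - omega = 0, i.e. the
    speed where the whirl frequency crosses the spin speed (a 1x whirl
    critical speed). Finite differences approximate df/domega."""
    omega = omega0
    for _ in range(max_iter):
        f = forward_whirl(omega) - omega
        df = (forward_whirl(omega + h) - (omega + h) - f) / h
        step = f / df
        omega -= step
        if abs(step) < tol:
            break
    return omega
```

Because f is nearly linear in omega for this toy model, the iteration converges in a few steps; the paper's analytical eigenvalue derivatives serve the same purpose more efficiently and robustly than finite differences.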

