graphical processing
Recently Published Documents


TOTAL DOCUMENTS

338
(FIVE YEARS 86)

H-INDEX

29
(FIVE YEARS 5)

2022 ◽  
Vol 2161 (1) ◽  
pp. 012028
Author(s):  
Karamjeet Kaur ◽  
Sudeshna Chakraborty ◽  
Manoj Kumar Gupta

Abstract In bioinformatics, sequence alignment is an essential task for comparing biological sequences and finding similarities between them. The Smith-Waterman algorithm is the most widely used algorithm for alignment, but it has quadratic time complexity. Because the algorithm is sequential, alignment becomes prohibitively slow as the number of biological sequences grows. In this paper, a parallel version of the Smith-Waterman algorithm is proposed and implemented for the architecture of the graphics processing unit (GPU) using CUDA, combining the GPU with the CPU in such a way that the alignment process is three times faster than the sequential implementation of the Smith-Waterman algorithm, thereby accelerating sequence alignment. This paper describes the parallel implementation of sequence alignment on the GPU; the intra-task parallelization strategy reduces execution time, and the results show significant runtime savings on the GPU.
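The intra-task parallelization described above typically scores cells along anti-diagonals of the dynamic-programming matrix, since every cell on one anti-diagonal depends only on the two previous anti-diagonals and can therefore be computed in parallel. A minimal CPU sketch of that wavefront order (scoring parameters are illustrative, not taken from the paper):

```python
# Smith-Waterman scored in anti-diagonal (wavefront) order: all cells on one
# anti-diagonal are mutually independent, which is what a GPU exploits by
# scoring them in parallel. This sketch walks the same order sequentially.
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for d in range(2, n + m + 1):              # one wavefront per anti-diagonal
        for i in range(max(1, d - m), min(n, d - 1) + 1):
            j = d - i
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,  # diagonal: match/mismatch
                          H[i - 1][j] + gap,    # up: gap in b
                          H[i][j - 1] + gap)    # left: gap in a
            best = max(best, H[i][j])           # local alignment: track maximum
    return best

print(smith_waterman("GGTTGACTA", "TGTTACGG"))  # best local alignment score
```

On a GPU each inner-loop iteration would map to one thread, with a synchronization barrier between consecutive anti-diagonals.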


IUCrJ ◽  
2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Oleg Mikhailovskii ◽  
Yi Xue ◽  
Nikolai R. Skrynnikov

A procedure has been developed for the refinement of crystallographic protein structures based on the biomolecular simulation program Amber. The procedure constructs a model representing a crystal unit cell, which generally contains multiple protein molecules and is fully hydrated with TIP3P water. Periodic boundary conditions are applied to the cell in order to emulate the crystal lattice. The refinement is conducted in the form of a specially designed short molecular-dynamics run controlled by the Amber ff14SB force field and the maximum-likelihood potential that encodes the structure-factor-based restraints. The new Amber-based refinement procedure has been tested on a set of 84 protein structures. In most cases, the new procedure led to appreciably lower Rfree values compared with those reported in the original PDB depositions or obtained by means of the industry-standard phenix.refine program. In particular, the new method has the edge in refining low-accuracy scrambled models. It has also been successful in refining a number of molecular-replacement models, including one with an r.m.s.d. of 2.15 Å. In addition, Amber-refined structures consistently show superior MolProbity scores. The new approach offers a highly realistic representation of protein–protein interactions in the crystal, as well as of protein–water interactions. It also offers a realistic representation of protein crystal dynamics (akin to ensemble-refinement schemes). Importantly, the method fully utilizes the information from the available diffraction data, while relying on state-of-the-art molecular-dynamics modeling to assist with those elements of the structure that do not diffract well (for example, mobile loops or side chains). Finally, it should be noted that the protocol employs no tunable parameters, and the calculations can be conducted in a matter of several hours on desktop computers equipped with graphics processing units or using a designated web service.
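The structure-factor-based restraints mentioned above compare amplitudes calculated from the model against the observed diffraction data. For context, a minimal sketch of the standard crystallographic structure-factor sum over fractional atomic coordinates (the atoms and scattering factors below are made up for illustration; this is not the paper's Amber potential):

```python
import cmath

def structure_factor(hkl, atoms):
    """Standard structure-factor sum
    F(hkl) = sum_j f_j * exp(2*pi*i * (h*x_j + k*y_j + l*z_j)),
    over atoms given as (scattering factor, fractional (x, y, z))."""
    h, k, l = hkl
    return sum(f * cmath.exp(2j * cmath.pi * (h * x + k * y + l * z))
               for f, (x, y, z) in atoms)

# Illustrative two-atom cell (values are made up):
atoms = [(6.0, (0.0, 0.0, 0.0)), (8.0, (0.5, 0.5, 0.5))]
F = structure_factor((1, 1, 0), atoms)
print(abs(F))  # calculated amplitude |F(110)|, restrained toward observed data
```

In refinement, a maximum-likelihood target penalizes the mismatch between such calculated amplitudes and the measured ones, which is what steers the molecular-dynamics run toward the crystallographic data.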


Author(s):  
João Victor Daher Daibes ◽  
Milton Brown Do Coutto Filho ◽  
Julio Cesar Stacchini de Souza ◽  
Esteban Walter Gonzalez Clua ◽  
Rainer Zanghi

Author(s):  
I. Yu. Sesin ◽  
R. G. Bolbakov

General-Purpose computing on Graphics Processing Units (GPGPU) is a powerful technology for offloading parallel data-processing tasks to graphics processing units (GPUs). It finds use in a variety of domains, from science and commerce to hobbyist projects. GPU-run general-purpose programs inevitably run into performance issues stemming from branch predication. Predication is a GPU feature that executes both sides of a conditional branch and masks out the results of the branch not taken. This leads to considerable performance losses for GPU programs that hide large amounts of code behind conditional operators. This paper analyzes existing approaches to improving software performance in the context of relieving this loss. Each approach is described along with its upsides, downsides, the extent of its applicability, and whether it addresses the outlined problem. The covered approaches include optimizing compilers, JIT compilation, branch predictors, speculative execution, adaptive optimization, run-time algorithm specialization, and profile-guided optimization. It is shown that these methods mostly cater to CPU-specific issues and are generally not applicable as far as predication-related performance loss is concerned. Lastly, we outline the need for a separate performance-improvement approach that addresses the specifics of branch predication and the GPGPU workflow.
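The cost model behind the predication problem can be made concrete with a small simulation of one warp of lockstep lanes: under predication every lane pays for both branch bodies, so the cost of a divergent conditional is the sum of both sides, not the side actually taken. A sketch (the warp size and per-statement cost of 1 are simplifying assumptions):

```python
# Simulate predicated execution of an if/else across one warp of lanes.
# Every lane executes BOTH branch bodies; a predicate mask selects which
# result each lane keeps. The cost counter shows why long branch bodies
# hidden behind conditionals are expensive on a GPU.
def predicated_warp(values, then_fn, else_fn):
    cost = 0
    results = []
    for v in values:                      # lanes run in lockstep
        then_r = then_fn(v); cost += 1    # all lanes pay for the then-side
        else_r = else_fn(v); cost += 1    # ...and also for the else-side
        results.append(then_r if v % 2 == 0 else else_r)  # predicate mask
    return results, cost

vals = list(range(8))                     # an 8-lane warp, for illustration
res, cost = predicated_warp(vals, lambda v: v * 2, lambda v: v + 100)
print(res)   # even lanes kept the then-result, odd lanes the else-result
print(cost)  # 16: 8 lanes each paid for both branch bodies
```

A CPU with a branch predictor would instead pay only for the taken side (plus occasional misprediction penalties), which is why the CPU-oriented techniques surveyed in the paper do not transfer directly.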


PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0260060
Author(s):  
Esteban Egea-Lopez ◽  
Jose Maria Molina-Garcia-Pardo ◽  
Martine Lienard ◽  
Pierre Degauque

Accurate characterization and simulation of electromagnetic propagation can be obtained with ray-tracing methods, which are based on a high-frequency approximation to the Maxwell equations and describe the propagating field as a set of rays reflecting, diffracting and scattering off environment elements. This approach has usually been too computationally costly for large and dynamic scenarios, but the situation is changing thanks to the increasing availability of efficient ray-tracing libraries for graphics processing units. In this paper we present Opal, an electromagnetic propagation simulation tool implemented with ray tracing on graphics processing units, which is part of the Veneris framework. Opal can be used as a stand-alone ray-tracing simulator, but its main strength lies in its integration with the game engine, which allows customized 3D environments to be generated quickly and intuitively. We describe its most relevant features and provide implementation details, highlighting the different simulation types it supports and its extension possibilities. We provide application examples and validate the simulation in demanding scenarios, such as tunnels, where we compare the results with theoretical solutions and discuss the trade-offs between the simulation types and their performance.
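Ray-based propagation models of this kind sum the complex field contributions of the individual ray paths. The simplest non-trivial case is the classic two-ray model (direct path plus one ground reflection found by the image method), sketched below; the geometry, frequency and reflection coefficient are illustrative and this is not Opal's API:

```python
import cmath, math

def two_ray_field(d, ht, hr, freq_hz, refl_coeff=-1.0):
    """Sum the direct and ground-reflected ray contributions.
    d: horizontal distance, ht/hr: antenna heights (m).
    Returns the magnitude of the normalized received field."""
    c = 299_792_458.0
    lam = c / freq_hz
    k = 2 * math.pi / lam                   # wavenumber
    r_direct = math.hypot(d, ht - hr)       # line-of-sight path length
    r_refl = math.hypot(d, ht + hr)         # reflected path, via image source
    e_direct = cmath.exp(-1j * k * r_direct) / r_direct
    e_refl = refl_coeff * cmath.exp(-1j * k * r_refl) / r_refl
    return abs(e_direct + e_refl)           # phasor sum of the two rays

# Sampling the field over distance exposes the two-ray interference fades:
for d in (50.0, 100.0, 200.0):
    print(d, two_ray_field(d, ht=10.0, hr=2.0, freq_hz=2.4e9))
```

A full ray tracer generalizes this phasor sum to many reflected, diffracted and scattered paths found against the 3D scene geometry, which is the part Opal offloads to the GPU.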


Author(s):  
M. Udawalpola ◽  
A. Hasan ◽  
A. K. Liljedahl ◽  
A. Soliman ◽  
C. Witharana

Abstract. The regional extent and spatiotemporal dynamics of Arctic permafrost disturbances remain poorly quantified. High-spatial-resolution commercial satellite imagery enables transformational opportunities to observe, map, and document the micro-topographic transitions occurring in Arctic polygonal tundra at multiple spatial and temporal frequencies. The entire Arctic has been imaged at 0.5 m or finer resolution by commercial satellite sensors, yet the imagery is still largely underutilized, and value-added Arctic science products are rare. Knowledge discovery through artificial intelligence (AI), big imagery, and high-performance computing (HPC) resources is just starting to be realized in Arctic science. Large-scale deployment of petabyte-scale imagery resources requires sophisticated computational approaches to automated image interpretation coupled with efficient use of HPC resources. In addition to semantic complexities, a multitude of factors inherent to sub-meter-resolution satellite imagery, such as file size, dimensions, spectral channels, overlaps, spatial references, and imaging conditions, challenge the direct translation of AI-based approaches from computer-vision applications. The memory limitations of graphics processing units necessitate partitioning an input satellite image into manageable sub-arrays, followed by parallel predictions and post-processing to reconstruct results matching the input image dimensions and spatial reference. We have developed a novel high-performance image analysis framework, the Mapping Application for Arctic Permafrost Land Environment (MAPLE), that enables the integration of operational-scale GeoAI capabilities into Arctic science applications. We have designed the MAPLE workflow to be interoperable across HPC architectures while making optimal use of computing resources.
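The partition-predict-reconstruct pattern described above can be sketched in a few lines: split the image into fixed-size tiles, run the model on each tile independently (in parallel on the real system), then stitch the outputs back to the input dimensions. The tile size and the stand-in "prediction" below are illustrative; MAPLE's actual partitioning scheme may differ:

```python
# Partition a 2D image into tile-sized sub-arrays (ragged edge tiles are
# kept), apply a per-tile prediction, and stitch results back so the output
# matches the input dimensions.
def partition(image, rows, cols, tile):
    tiles = []
    for r in range(0, rows, tile):
        for c in range(0, cols, tile):
            tiles.append((r, c, [row[c:c + tile]
                                 for row in image[r:r + tile]]))
    return tiles

def stitch(tiles, rows, cols):
    out = [[None] * cols for _ in range(rows)]
    for r, c, t in tiles:
        for i, row in enumerate(t):
            out[r + i][c:c + len(row)] = row
    return out

rows, cols, tile = 5, 7, 3
image = [[r * cols + c for c in range(cols)] for r in range(rows)]
# Stand-in "prediction": multiply every pixel by 10, tile by tile.
preds = [(r, c, [[v * 10 for v in row] for row in t])
         for r, c, t in partition(image, rows, cols, tile)]
print(stitch(preds, rows, cols) ==
      [[v * 10 for v in row] for row in image])  # True: reconstruction matches
```

In production the per-tile step would be a GPU model inference, and geospatial metadata (spatial reference, overlaps between tiles) would be carried along with each tile's offsets.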

