Computations of Protein Hydrophobicity Profile as Virtual Experiment in Gridspace Virtual Laboratory

2012 ◽  
Vol 8 (4) ◽  
pp. 361-372
Author(s):  
Eryk Ciepiela ◽  
Tomasz Jadczyk ◽  
Daniel Harężlak ◽  
Marek Kasztelnik ◽  
Piotr Nowakowski ◽  
...  

ABSTRACT Terms such as e-science, e-poster, and e-health are now in common use, and the disciplines enabling rapid development in these fields are widely available. This paper presents an e-paper [1] powered by the Collage Authoring Environment [2] e-publication system, which is backed by the GridSpace2 [3] distributed computing platform. This e-publication, delivered as a web page, embeds, alongside the traditional textual and graphical content, an online software tool for analyzing the 3-D structure of a protein based on the hydrophobicity distribution in the protein body. The tool uses the GridSpace2 platform to carry out computations on the PL-Grid [4] high-performance computing infrastructure. This work shows how this specific e-publication was built from the above-mentioned existing information technologies and e-infrastructure. The tool employs the "fuzzy oil drop" model, which assumes that the hydrophobicity distribution in a protein follows a 3-D Gauss function. A protein whose hydrophobic core fully agrees with the model, with all hydrophobic residues buried in the central part of the protein body and hydrophilic residues exposed toward the water environment, would be very well soluble yet exhibit no activity. This is why the discrepancies between the idealized and observed hydrophobicity distributions are presented as a ΔH̃<sub>i</sub> profile that localizes residues with a local hydrophobicity excess or deficiency. The distribution of these discrepancies turns out to be specific and function-related. The e-publication makes the tool available for calculating the ΔH̃<sub>i</sub> profile of any protein of interest; interpreting the final result remains specific to the particular protein.
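The GridSpace2 implementation is not reproduced here; purely as an illustration of what such a profile computation involves, the following NumPy sketch contrasts a theoretical Gaussian hydrophobicity profile with an observed one aggregated through a Levitt-style distance weighting. The function names, the σ value, and the 9 Å cutoff are assumptions, not the published tool.

```python
import numpy as np

def theoretical_profile(coords, sigma=15.0):
    # Idealized profile: a 3-D Gauss function centred on the protein's
    # geometric centre, normalised to unit sum.
    centre = coords.mean(axis=0)
    t = np.exp(-np.sum(((coords - centre) / sigma) ** 2, axis=1) / 2.0)
    return t / t.sum()

def observed_profile(coords, h_intrinsic, cutoff=9.0):
    # Observed profile: each residue collects the intrinsic hydrophobicity
    # of its neighbours through a Levitt-style polynomial weight within the
    # cutoff radius, then the profile is normalised to unit sum.
    o = np.zeros(len(coords))
    for i, ci in enumerate(coords):
        r = np.linalg.norm(coords - ci, axis=1) / cutoff
        w = np.where(r <= 1.0,
                     1 - 0.5 * (7 * r**2 - 9 * r**4 + 5 * r**6 - r**8),
                     0.0)
        o[i] = np.dot(w, h_intrinsic)
    return o / o.sum()

def delta_h_profile(coords, h_intrinsic, sigma=15.0):
    # Per-residue discrepancy between idealized and observed distributions.
    return theoretical_profile(coords, sigma) - observed_profile(coords, h_intrinsic)
```

Positive entries of the resulting profile mark residues with a local hydrophobicity deficit relative to the Gaussian ideal; negative entries mark a local excess.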

2014 ◽  
Vol 687-691 ◽  
pp. 3733-3737
Author(s):  
Dan Wu ◽  
Ming Quan Zhou ◽  
Rong Fang Bie

Massive image processing places high demands on the processor and memory, calling for high-performance processors and large-capacity memory; single-core processing and conventional memory cannot keep up. This paper introduces a cloud computing function into a massive image processing system. The cloud computing function expands the system's virtual space, saves computing resources, and improves image processing efficiency. The system processor is a multi-core DSP parallel processor, and a visualization window for parameter setting and result output was developed in VC. Simulation yields the image processing speed curve and the system's image-adaptation curve, providing a technical reference for the design of large-scale image processing systems.
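The DSP pipeline itself is not published in the abstract; purely as an illustration of the data-parallel decomposition such systems rely on, here is a minimal Python sketch that splits an image into strips and processes them on a worker pool. The strip count, the kernel, and the image size are all assumptions.

```python
import numpy as np
from multiprocessing import Pool

def process_strip(strip):
    # Placeholder per-strip kernel: pixel-wise gamma correction, which
    # needs no halo exchange between neighbouring strips.
    return strip ** 0.5

def process_image(image, n_workers=4):
    # Data-parallel decomposition: one horizontal strip per worker,
    # processed concurrently and reassembled in order.
    strips = np.array_split(image, n_workers, axis=0)
    with Pool(n_workers) as pool:
        return np.vstack(pool.map(process_strip, strips))

if __name__ == "__main__":
    out = process_image(np.random.rand(4096, 4096).astype(np.float32))
```

Kernels with spatial support (e.g. convolutions) would additionally need overlapping halo rows between strips, a detail omitted here.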


2021 ◽  
Vol 14 (4) ◽  
pp. 1-28
Author(s):  
Tao Yang ◽  
Zhezhi He ◽  
Tengchuan Kou ◽  
Qingzheng Li ◽  
Qi Han ◽  
...  

Field-programmable Gate Arrays (FPGAs) are a high-performance computing platform for Convolutional Neural Network (CNN) inference. The Winograd algorithm, weight pruning, and quantization are widely adopted to reduce the storage and arithmetic overhead of CNNs on FPGAs. Recent studies strive to prune the weights in the Winograd domain; however, this results in irregular sparse patterns, low parallelism, and reduced utilization of resources. Moreover, few works discuss a suitable quantization scheme for Winograd. In this article, we propose a regular sparse pruning pattern for Winograd-based CNNs, namely Sub-row-balanced Sparsity (SRBS), to overcome the challenge of irregular sparse patterns. We then develop a two-step hardware co-optimization approach to improve the model accuracy using the SRBS pattern. Based on the pruned model, we implement mixed-precision quantization to further reduce the computational complexity of bit operations. Finally, we design an FPGA accelerator that takes advantage both of the SRBS pattern, to eliminate low-parallelism computation and irregular memory accesses, and of the mixed-precision quantization, to obtain a layer-wise bit width. Experimental results on VGG16/VGG-nagadomi with CIFAR-10 and ResNet-18/34/50 with ImageNet show up to 11.8×/8.67× and 8.17×/8.31×/10.6× speedup and 12.74×/9.19× and 8.75×/8.81×/11.1× energy-efficiency improvement, respectively, compared with the state-of-the-art dense Winograd accelerator [20], with negligible loss of model accuracy. We also show that our design has 4.11× speedup over the state-of-the-art sparse Winograd accelerator [19] on VGG16.
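The accelerator RTL is not reproduced here; the NumPy sketch below only illustrates the idea behind a sub-row-balanced pattern, with the sub-row size and the number of surviving weights per sub-row chosen arbitrarily rather than taken from the paper.

```python
import numpy as np

def srbs_prune(w, block=4, keep=2):
    # Sub-row-balanced sparsity (illustrative): split every row of a
    # Winograd-domain weight matrix into fixed-size sub-rows and retain
    # only the `keep` largest-magnitude weights in each sub-row, so every
    # hardware lane processes the same number of nonzeros.
    out = np.zeros_like(w)
    for r in range(w.shape[0]):
        for c in range(0, w.shape[1], block):
            sub = w[r, c:c + block]
            idx = np.argsort(np.abs(sub))[-keep:]   # indices of survivors
            out[r, c + idx] = sub[idx]
    return out

pruned = srbs_prune(np.random.randn(16, 16))        # 50% regular sparsity
```

Because every sub-row carries exactly the same number of nonzeros, the compressed format has a fixed stride and processing elements never idle waiting on an unbalanced row, which is what the irregular patterns of earlier Winograd pruning lacked.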


2021 ◽  
Vol 16 ◽  
Author(s):  
Jinghao Peng ◽  
Jiajie Peng ◽  
Haiyin Piao ◽  
Zhang Luo ◽  
Kelin Xia ◽  
...  

Background: The open and accessible regions of the chromosome are more likely to be bound by transcription factors, which are important for nuclear processes and biological functions. Studying changes in chromosome flexibility can help discover and analyze disease markers and improve the efficiency of clinical diagnosis. Current methods for characterizing chromosome flexibility from Hi-C data include the flexibility-rigidity index (FRI) and the Gaussian network model (GNM). However, these methods require chromosome structure data from 3D biological experiments, which are time-consuming and expensive. Objective: The folding and coiling of the DNA double helix strongly affect chromosome flexibility and function. Motivated by the success of genomic sequence analysis in biomolecular function analysis, we aim to predict chromosome flexibility from genomic sequence data alone. Method: We propose a new method, named "DeepCFP", that uses deep learning models to predict chromosome flexibility based only on genomic sequence features. The model has been tested in the GM12878 cell line. Results: The maximum accuracy of our model reaches 91%, and the performance of DeepCFP is close to that of FRI and GNM. Conclusion: DeepCFP achieves high performance based on genomic sequence alone.
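The DeepCFP architecture is not given in the abstract; as a hedged illustration of the general approach (a 1-D CNN over one-hot-encoded DNA windows), consider the PyTorch sketch below, where the window length, layer sizes, and binary flexible/rigid labelling are all assumptions.

```python
import torch
import torch.nn as nn

class SeqFlexibilityNet(nn.Module):
    # Minimal 1-D CNN mapping a one-hot DNA window (A/C/G/T channels)
    # to a two-class flexible/rigid prediction.
    def __init__(self, seq_len=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(4, 64, kernel_size=8), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=8), nn.ReLU(), nn.MaxPool1d(4),
        )
        with torch.no_grad():  # probe the flattened feature size
            n = self.features(torch.zeros(1, 4, seq_len)).numel()
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(n, 2))

    def forward(self, x):      # x: (batch, 4, seq_len) one-hot encoding
        return self.classifier(self.features(x))

model = SeqFlexibilityNet()
logits = model(torch.zeros(8, 4, 1000))   # 8 dummy windows -> (8, 2) logits
```

Training such a model would use FRI/GNM-derived flexibility labels as supervision, which is consistent with the abstract's claim that sequence alone can approach the Hi-C-based methods.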


Author(s):  
Indar Sugiarto ◽  
Doddy Prayogo ◽  
Henry Palit ◽  
Felix Pasila ◽  
Resmana Lim ◽  
...  

This paper describes a prototype of a computing platform dedicated to artificial intelligence explorations. The platform, dubbed PakCarik, is essentially a high-throughput computing platform with GPU (graphics processing unit) acceleration. PakCarik is an Indonesian acronym for Platform Komputasi Cerdas Ramah Industri Kreatif, which can be translated as "Creative Industry friendly Intelligence Computing Platform". The platform aims to provide a complete development and production environment for AI-based projects, especially those that rely on machine learning and multiobjective optimization paradigms. PakCarik was assembled from commercial off-the-shelf hardware and tested on several AI-related application scenarios. The testing methods in this experiment include High-Performance Linpack (HPL) benchmarking, message passing interface (MPI) benchmarking, and TensorFlow (TF) benchmarking. From the experiment, the authors observe that PakCarik's performance is quite similar to commonly used cloud computing services such as Google Compute Engine and Amazon EC2, even though it falls a bit behind dedicated AI platforms such as the Nvidia DGX-1 used in the benchmarking experiment. Its maximum computing performance was measured at 326 Gflops. The authors conclude that PakCarik is ready to be deployed in real-world applications and can be made even more powerful by adding more GPU cards.
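For readers unfamiliar with how Gflops figures of this kind are obtained, a back-of-envelope throughput probe in the spirit of (but far simpler than) the HPL benchmark can be written in a few lines; this is not the benchmarking suite used on PakCarik.

```python
import time
import numpy as np

def matmul_gflops(n=4096, repeats=10):
    # Time a dense n*n single-precision matrix multiply and report
    # sustained Gflop/s; one matmul costs roughly 2*n^3 floating-point ops.
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    a @ b                                   # warm-up run
    t0 = time.perf_counter()
    for _ in range(repeats):
        a @ b
    dt = (time.perf_counter() - t0) / repeats
    return 2 * n**3 / dt / 1e9

print(f"{matmul_gflops():.1f} Gflop/s")
```

HPL proper solves a dense linear system under strict rules and stresses memory and interconnect as well as the floating-point units, so its numbers are lower than a bare matmul peak.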


2018 ◽  
Vol 55 (2) ◽  
pp. 147-168 ◽  
Author(s):  
Andrew Adams ◽  
Emma Kavanagh

High performance athletes participate and function in sports systems where exploitative behaviours may become manifest and potentially violate an individual athlete's human rights. Using the Capability Approach first outlined by Amartya Sen, the paper details how a more precise analysis of human rights in the context of high performance sport may be achieved. Drawing on in-depth narrative accounts from high performance athletes, the data illustrate how athlete maltreatment relates to individual capabilities and functionings: the loss of individual freedoms infringes accepted notions of human rights. The implications for practice concern how human rights may be protected within, and for, systems of high performance production.


2012 ◽  
Vol 51 (05) ◽  
pp. 441-448 ◽  
Author(s):  
P. F. Neher ◽  
I. Reicht ◽  
T. van Bruggen ◽  
C. Goch ◽  
M. Reisert ◽  
...  

Summary. Background: Diffusion MRI provides a unique window on brain anatomy and insights into aspects of tissue structure in living humans that could not be studied previously. A major effort in this rapidly evolving field of research is to develop the algorithmic tools necessary to cope with the complexity of the datasets. Objectives: This work illustrates our strategy, which encompasses the development of a modularized and open software tool for data processing, visualization, and interactive exploration in diffusion imaging research, and aims at reinforcing sustainable evaluation and progress in the field. Methods: In this paper, the usability and capabilities of a new application and toolkit component of the Medical Imaging Interaction Toolkit (MITK, www.mitk.org), MITK-DI, are demonstrated using in-vivo datasets. Results: MITK-DI provides a comprehensive software framework for high-performance data processing, analysis, and interactive data exploration, designed in a modular, extensible fashion (using CTK) and in adherence to widely accepted coding standards (e.g. ITK, VTK). MITK-DI is available both as an open-source software development toolkit and as a ready-to-use installable application. Conclusions: The open-source release of the modular MITK-DI tools will increase verifiability and comparability within the research community and is an important step towards bringing many of the current techniques to clinical application.
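MITK-DI's C++ API is not reproduced here; as a small illustration of the kind of quantity such diffusion-imaging toolkits compute per voxel, here is a plain-NumPy sketch of fractional anisotropy (FA) from a single diffusion tensor.

```python
import numpy as np

def fractional_anisotropy(tensor):
    # Fractional anisotropy of one 3x3 diffusion tensor: a standard scalar
    # map in diffusion-MRI pipelines, 0 for isotropic diffusion and close
    # to 1 for strongly directional diffusion (e.g. in white-matter tracts).
    ev = np.linalg.eigvalsh(tensor)              # eigenvalues, ascending
    md = ev.mean()                               # mean diffusivity
    den = np.sqrt((ev ** 2).sum())
    return np.sqrt(1.5) * np.sqrt(((ev - md) ** 2).sum()) / den if den else 0.0

# Strongly anisotropic tensor -> FA around 0.87
print(fractional_anisotropy(np.diag([1.7e-3, 0.2e-3, 0.2e-3])))
```

A full pipeline would fit such tensors from diffusion-weighted volumes and map FA voxel-wise, which is one of the analyses MITK-DI exposes interactively.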


Author(s):  
Luiz Angelo Steffenel ◽  
Manuele Kirsch Pinheiro ◽  
Lucas Vaz Peres ◽  
Damaris Kirsch Pinheiro

The exponential dissemination of proximity computing devices (smartphones, tablets, nanocomputers, etc.) raises important questions on how to transmit, store, and analyze data in networks integrating those devices. New approaches like edge computing aim at delegating part of the work to devices at the "edge" of the network. This article focuses on the use of pervasive grids to implement edge computing and address these challenges, especially the strategies for ensuring data proximity and context awareness, two factors that impact the performance of big data analyses in distributed systems. The article discusses the limitations of traditional big data computing platforms and introduces the principles of, and challenges in, implementing edge computing over pervasive grids. Finally, using CloudFIT, a distributed computing platform, the authors illustrate the deployment of a real geophysical application on a pervasive network.
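CloudFIT's actual scheduling API is not shown in the article; the sketch below only illustrates the data-proximity principle the authors emphasize, with hypothetical node and task structures.

```python
# Illustrative data-proximity scheduler: prefer the edge node that already
# holds the data block a task needs; otherwise fall back to the least-loaded
# node. Node/task structures here are assumptions, not CloudFIT's API.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    load: float = 0.0
    cached_blocks: set = field(default_factory=set)

def place(task_block, nodes):
    local = [n for n in nodes if task_block in n.cached_blocks]
    pool = local or nodes                 # data proximity first
    chosen = min(pool, key=lambda n: n.load)
    chosen.load += 1.0                    # account for the assigned task
    return chosen

nodes = [Node("phone-1", cached_blocks={"b0"}), Node("nano-1"), Node("tablet-1")]
print(place("b0", nodes).name)            # -> phone-1 (data-local placement)
```

Context awareness would extend the selection key beyond load, e.g. penalizing nodes on battery power or behind slow links, which is the kind of policy pervasive grids must support.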

