Computer Clusters
Recently Published Documents


TOTAL DOCUMENTS

142
(FIVE YEARS 25)

H-INDEX

15
(FIVE YEARS 1)

2021 ◽  
Vol 26 (jai2021.26(2)) ◽  
pp. 111-119
Author(s):  
Ashursky E

To date, the recognition of a universal, a priori inherent connection between the objects of the world around us is quite rightly considered an almost accomplished fact. But by what laws do these sometimes rather variegated systems function in living and inert nature (including modern computer clusters)? Where do the origins of their self-organizing activity lie: at the level of still-hypothetical quantum-molecular models, of finite bio-automata, or of the now hugely fashionable artificial neural networks? Answers to all these questions, if they ever appear, will certainly not come soon. That is why the bold, innovative developments presented in the following article may even refresh the body of informatics so familiar to many of us. The pivotal idea developed here is, frankly speaking, quite simple in itself: if the laws of the universe are one, then all the characteristic differences between any evolving objects should be determined by their outwardly hidden informative (or, in the author's terminology, "mental") rationale. These are not empty words, as it might seem at first glance, because wherever possible they are supported by the generally accepted physical and mathematical foundations. As a result, the reader sooner or later comes to the inevitable conclusion that only the smallest electron-neutrino ensembles contain everything that is most valuable and meaningful for any natural system, regardless of which global outlook paradigm we hold.




2021 ◽  
Author(s):  
Vasileios Rantos ◽  
Kai Karius ◽  
Jan Kosinski

Integrative modelling enables structure determination of macromolecular complexes by combining data from multiple experimental sources such as X-ray crystallography, electron microscopy (EM), or crosslinking mass spectrometry (XL-MS). It is particularly useful for complexes not amenable to high-resolution EM, such as complexes that are flexible, heterogeneous, or imaged in cells with cryo-electron tomography. We have recently developed an integrative modelling protocol that allowed us to model multi-megadalton complexes as large as the nuclear pore complex. Here, we describe the Assembline software package, which combines multiple programs and libraries with our own algorithms in a streamlined modelling pipeline. Assembline builds ensembles of models satisfying data from atomic structures or homology models, EM maps, and other experimental data, and provides tools for their analysis. Compared to other methods, Assembline enables efficient sampling of conformational space through a multi-step procedure, provides new modelling restraints, and includes a unique configuration system for setting up the modelling project. Our protocol achieves exhaustive sampling in 100 to 1,000 CPU-hours or less, even for complexes in the megadalton range. For larger complexes, the resources available in institutional or public computer clusters are needed and sufficient to run the protocol. We also provide step-by-step instructions for preparing the input, running the core modelling steps, and assessing modelling performance at any stage.
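The hundreds of CPU-hours mentioned above become practical when the independent sampling runs are farmed out to a computer cluster. The sketch below is not Assembline's own interface (its actual commands and configuration system are documented with the package); it only illustrates, as an assumption, how independent sampling runs might be dispatched as SLURM batch jobs from Python, with `run_sampling.py` standing in for whatever per-run command a given pipeline provides.

```python
# Hypothetical illustration: dispatch N independent sampling runs to a SLURM
# cluster, one output directory per run. "run_sampling.py" is a placeholder,
# not an actual Assembline command.
import subprocess
from pathlib import Path

N_RUNS = 200                      # number of independent sampling runs
OUT_ROOT = Path("sampling_runs")  # parent directory for per-run output

def submit_run(run_id: int) -> str:
    out_dir = OUT_ROOT / f"run_{run_id:04d}"
    out_dir.mkdir(parents=True, exist_ok=True)
    # sbatch --wrap submits a single shell command as a batch job
    cmd = [
        "sbatch",
        "--job-name", f"sampling_{run_id}",
        "--cpus-per-task", "1",
        "--time", "02:00:00",
        "--output", str(out_dir / "slurm.log"),
        "--wrap", f"python run_sampling.py --seed {run_id} --out {out_dir}",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()  # "Submitted batch job <id>"

if __name__ == "__main__":
    for i in range(N_RUNS):
        print(submit_run(i))
```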


Author(s):  
Stefano Colafranceschi ◽  
Emanuele De Biase

The computational capabilities of commercial CPUs and GPUs have reached a plateau, but software applications are usually memory-intensive tasks and commonly need the most recent hardware developments. Computer clusters are a reliable and versatile but expensive solution, with a limited market share among small colleges. Small schools typically rely on cloud-based systems because they are more affordable, easier to manage (no maintenance to worry about), and easier to implement (the burden is shifted to the datacenter). Here we provide arguments in favor of an on-campus hardware solution which, while providing benefits for students, does not carry the financial burden associated with larger and more powerful computer clusters. We think that instructors in engineering and computer science faculties might find this a viable and workable solution to improve the computing environment of their school without incurring the high cost of a ready-made solution. At the basis of this proposal is the acquisition of inexpensive refurbished hardware and of a type-1 VMware hypervisor with free licensing, as well as a custom-made web platform to control the deployed hypervisors. VMware is a global leader in cloud infrastructure and software-based solutions. In particular, the adoption of a customized "Elastic Sky X integrated" (ESXi) hypervisor, together with virtual operating systems installed in the very same datastore, constitutes an interesting and working proof of concept: a computer cluster at a fraction of the market cost.
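A web platform that controls deployed ESXi hypervisors has to query each host for its inventory and state. The authors' platform code is not given in the abstract; as a minimal sketch, assuming ESXi hosts with the standard vSphere API reachable (read-only access is enough for listing), the snippet below uses the pyVmomi library to connect to one host and list its virtual machines. The hostname and credentials are placeholders.

```python
# Minimal read-only query of a single ESXi host with pyVmomi (placeholder
# hostname/credentials; certificate verification disabled for a lab setup).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def list_vms(host: str, user: str, pwd: str):
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        # Container view over all VirtualMachine objects on the host
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            print(vm.name, vm.runtime.powerState)
        view.Destroy()
    finally:
        Disconnect(si)

if __name__ == "__main__":
    list_vms("esxi-lab-01.example.edu", "readonly", "change-me")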


2020 ◽  
pp. 59-63 ◽  
Author(s):  
O.K. Vynnyk ◽  
I.O. Anisimov

An original code optimized for the simulation of interactions between plasma and beams and bunches of charged particles, based on the particle-in-cell method, is described. The code is electromagnetic and fully relativistic, with 2.5D axially symmetric geometry. Binary Coulomb particle collisions are taken into account. The code is fully parallelized and designed for computer systems with shared memory. The ability to extend the supported platforms to systems with distributed memory (such as computer clusters or grids) is embedded in the code architecture.
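The authors' code is not reproduced in the abstract. As an illustrative sketch only, the snippet below shows the core loop of a generic one-dimensional electrostatic particle-in-cell step (charge deposition, field solve, particle push) in NumPy; it is a textbook toy, not the 2.5D relativistic electromagnetic scheme with collisions described above.

```python
# Toy 1D electrostatic particle-in-cell step in a periodic box (illustration only).
import numpy as np

NG, NP, L, DT = 64, 10000, 1.0, 0.05   # grid cells, macro-particles, box length, time step
dx = L / NG
rng = np.random.default_rng(0)
x = rng.uniform(0.0, L, NP)            # particle positions
v = rng.normal(0.0, 0.1, NP)           # particle velocities
weight = L / NP                        # macro-particle weight (mean density = 1)
q = -1.0                               # electron-like charge, mass = 1

def pic_step(x, v):
    # 1) deposit charge density on the grid (nearest-grid-point weighting)
    cells = (x / dx).astype(int) % NG
    rho = q * np.bincount(cells, minlength=NG) * weight / dx + 1.0  # +1: ion background
    # 2) solve Poisson's equation d^2(phi)/dx^2 = -rho in Fourier space
    k = 2.0 * np.pi * np.fft.fftfreq(NG, d=dx)
    k[0] = 1.0                          # avoid division by zero for the mean mode
    phi_k = np.fft.fft(rho) / k**2
    phi_k[0] = 0.0                      # zero-mean potential
    E = -np.gradient(np.fft.ifft(phi_k).real, dx)
    # 3) leapfrog push and periodic boundaries
    v = v + q * E[cells] * DT
    x = (x + v * DT) % L
    return x, v

for _ in range(100):
    x, v = pic_step(x, v)
```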


2020 ◽  
Vol 38 (2) ◽  
Author(s):  
Razec Cezar Sampaio Pinto da Silva Torres ◽  
Leandro Di Bartolo

ABSTRACT. Reverse time migration (RTM) is one of the most powerful methods used to generate images of the subsurface. RTM was proposed in the early 1980s, but only recently has it been routinely used in exploratory projects involving complex geology (the Brazilian pre-salt, for example). Because the method uses the two-way wave equation, RTM is able to correctly image any kind of geological environment, simple or complex, including those with anisotropy. On the other hand, RTM is computationally expensive and requires the use of computer clusters. This paper investigates the influence of anisotropy on seismic imaging through the application of RTM for tilted transversely isotropic (TTI) media to pre-stack synthetic data. This work presents in detail how to implement RTM for TTI media, addressing the main issues and specific details, e.g., the computational resources required. Results for a couple of simple models are presented, including an application to the BP TTI 2007 benchmark model.

Keywords: finite differences, wave numerical modeling, seismic anisotropy.

Migração reversa no tempo em meios transversalmente isotrópicos inclinados (Reverse time migration in tilted transversely isotropic media)

RESUMO (translated). Reverse time migration (RTM) is one of the most powerful methods used to generate images of the subsurface. RTM was proposed in the early 1980s, but only recently has it been routinely used in exploratory projects involving complex geology, in particular the Brazilian pre-salt. Because it is a method that uses the full wave equation, any configuration of the geological medium can be treated correctly, especially in the presence of anisotropy. On the other hand, RTM is computationally expensive and requires the industry to use computer clusters. This article presents in detail an implementation of RTM for tilted transversely isotropic (TTI) media, addressing the main difficulties of the implementation as well as the computational resources required. The developed algorithm is applied to simple cases and to a standard benchmark known as BP TTI 2007.

Palavras-chave (keywords): finite differences, numerical wave modeling, seismic anisotropy.
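RTM's cost comes from propagating a source wavefield forward and a receiver wavefield backward in time with a two-way finite-difference wave equation and cross-correlating them at each step. As a minimal sketch, assuming a simple 2D constant-density isotropic acoustic case rather than the TTI equations used in the paper (and omitting absorbing boundaries), the snippet below shows a second-order finite-difference time step and the zero-lag cross-correlation imaging condition.

```python
# Minimal 2D acoustic finite-difference kernel and RTM imaging condition.
# Illustration only: isotropic acoustic, no absorbing boundaries, no TTI terms.
import numpy as np

def fd_step(p_prev, p_curr, vel, dt, dx):
    """Advance the pressure field one time step (2nd order in time and space)."""
    lap = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0) +
           np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1) - 4.0 * p_curr) / dx**2
    return 2.0 * p_curr - p_prev + (vel * dt) ** 2 * lap

def rtm_image(src_wavefields, rec_wavefields):
    """Zero-lag cross-correlation imaging condition summed over time steps."""
    image = np.zeros_like(src_wavefields[0])
    for s, r in zip(src_wavefields, rec_wavefields):
        image += s * r
    return image

if __name__ == "__main__":
    nz, nx, dx, dt, nt, f0 = 100, 100, 10.0, 0.001, 300, 25.0
    vel = np.full((nz, nx), 2000.0)            # constant velocity model (m/s)
    p_prev = np.zeros((nz, nx)); p_curr = np.zeros((nz, nx))
    snapshots = []
    for it in range(nt):
        t = it * dt
        # inject a 25 Hz Ricker wavelet at a shallow point source
        arg = (np.pi * f0 * (t - 0.04)) ** 2
        p_curr[5, 50] += (1.0 - 2.0 * arg) * np.exp(-arg)
        p_prev, p_curr = p_curr, fd_step(p_prev, p_curr, vel, dt, dx)
        snapshots.append(p_curr.copy())
    # In a real RTM run the recorded data would be back-propagated and
    # correlated with these forward snapshots; here we only show the call.
    image = rtm_image(snapshots, snapshots)    # placeholder receiver wavefields
    print(image.shape)
```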


2020 ◽  
Author(s):  
João Pedro Saraiva ◽  
Marta Gomes ◽  
René Kallies ◽  
Carsten Vogt ◽  
Antonis Chatzinotas ◽  
...  

Abstract
Background: The exponential increase in high-throughput sequencing data and the development of computational sciences and bioinformatics pipelines have advanced our understanding of microbial community composition and distribution in complex ecosystems. Despite these advances, the identification of microbial interactions from genomic data remains a major bottleneck. To address this challenge, we present OrtSuite, a flexible workflow to predict putative microbial interactions based on genomic content.
Results: OrtSuite combines ortholog clustering strategies with genome annotation based on a user-defined set of functions, allowing for hypothesis-driven data analysis. OrtSuite allows users to install and run all workflow components and analyze the generated outputs using a simple pipeline consisting of 23 bash commands and one R command. Annotation is based on a two-stage process. First, only a subset of sequences from each ortholog cluster is aligned to all sequences in the Ortholog-Reaction Association database (ORAdb). Next, all sequences from clusters that meet a user-defined identity threshold are aligned to all sequence sets in ORAdb to which they had a hit. This approach decreases the time needed for functional annotation. Further, OrtSuite identifies putative interspecies interactions from the species' individual genomic content, subject to constraints given by the user. Additional control is afforded to the user at several stages of the workflow: 1) the construction of ORAdb only needs to be performed once for each specific process and also allows manual curation; 2) the identity and sequence similarity thresholds used during the annotation stage can be adjusted; and 3) constraints related to pathway reaction composition and known species contributions to ecosystem processes can be defined.
Conclusions: OrtSuite is an easy-to-use workflow that allows for rapid functional annotation based on a user-curated database. Further, this novel workflow allows the identification of interspecies interactions through user-defined constraints. Due to its low computational demands, OrtSuite can run on a personal computer for small datasets (e.g., a maximum of 100 genomes). For larger datasets (> 100 genomes), we suggest the use of computer clusters. OrtSuite is open-source software available at https://github.com/mdsufz/OrtSuit .
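The time saving described in the Results comes from the two-stage annotation: only a small representative subset of each ortholog cluster is aligned against the whole ORAdb, and the full cluster is then aligned only against the reaction sets it already hit. As a rough sketch only, with a toy identity measure standing in for a real aligner (OrtSuite itself wraps external tools; none of the function names below are OrtSuite's), the logic looks like this:

```python
# Hypothetical sketch of a two-stage ortholog-cluster annotation strategy.
# percent_identity() is a toy stand-in for a real alignment identity score.
from typing import Dict, List

def percent_identity(a: str, b: str) -> float:
    """Toy position-wise match fraction between two sequences (0-100)."""
    if not a or not b:
        return 0.0
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / max(len(a), len(b))

def best_identity(queries: List[str], refs: List[str]) -> float:
    return max((percent_identity(q, r) for q in queries for r in refs), default=0.0)

def two_stage_annotation(clusters: Dict[str, List[str]],
                         oradb: Dict[str, List[str]],
                         threshold: float = 40.0,
                         n_reps: int = 5) -> Dict[str, List[str]]:
    """clusters: cluster_id -> member sequences; oradb: reaction -> reference sequences."""
    annotations = {}
    for cluster_id, members in clusters.items():
        reps = members[:n_reps]
        # Stage 1: representatives against the whole ORAdb; keep reactions that were hit
        candidates = [rxn for rxn, refs in oradb.items()
                      if best_identity(reps, refs) >= threshold]
        # Stage 2: all members, but only against the candidate reaction sets
        annotations[cluster_id] = [rxn for rxn in candidates
                                   if best_identity(members, oradb[rxn]) >= threshold]
    return annotations

if __name__ == "__main__":
    clusters = {"OG0001": ["MKTAYIAKQR", "MKTAYIAKQK"]}
    oradb = {"benzoate_degradation": ["MKTAYIAKQR"], "nitrate_reduction": ["GGGGGGGGGG"]}
    print(two_stage_annotation(clusters, oradb))
```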

