Commodity Computing
Recently Published Documents


TOTAL DOCUMENTS: 17 (FIVE YEARS: 4)
H-INDEX: 4 (FIVE YEARS: 1)

2021 ◽  
Vol 251 ◽  
pp. 04006
Author(s):  
Alexander Paramonov

The Front-End Link eXchange (FELIX) system is an interface between the trigger and detector electronics and commodity switched networks for the ATLAS experiment at CERN. In preparation for LHC Run 3, due to start in 2022, the system is being installed to read out the new electromagnetic calorimeter, calorimeter trigger, and muon components added as part of the ongoing ATLAS upgrade programme. The detector and trigger electronics are largely custom and fully synchronous with the 40.08 MHz clock of the Large Hadron Collider (LHC). The FELIX system uses FPGAs on server-hosted PCIe boards to pass data between the custom data links connected to the detector and trigger electronics and host system memory; a dedicated software platform running on these machines then routes the data to network clients, such as the Software Readout Drivers (SW RODs). The SW RODs build event fragments, buffer data, perform detector-specific processing, and provide data to the ATLAS High Level Trigger. The FELIX approach exploits modern FPGAs and commodity computing to reduce system complexity and the effort needed to support data acquisition systems compared to previous designs. Future upgrades of the experiment will extend FELIX to read out all remaining detector components.
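
To make the routing step concrete, here is a minimal Python sketch of the idea, not the actual FELIX software: blocks of detector data that have landed in host memory over PCIe are forwarded to the network clients subscribed to each link. All names here (read_dma_block, SUBSCRIPTIONS, the hosts and ports) are invented for illustration.

```python
# Hypothetical sketch of FPGA -> host memory -> network-client routing.
import socket
import struct

# Hypothetical subscription table: link ID -> (host, port) clients,
# e.g. Software Readout Drivers (SW RODs).
SUBSCRIPTIONS = {
    0x01: [("sw-rod-1.example", 5000)],
    0x02: [("sw-rod-2.example", 5000)],
}

def read_dma_block():
    """Placeholder for fetching one DMA'd block from the PCIe driver.
    Returns (link_id, payload) or None when nothing is pending."""
    return None  # a real system would poll the device driver here

def route_forever():
    socks = {}  # cache one TCP connection per client
    while True:
        block = read_dma_block()
        if block is None:
            continue  # a real loop would block or back off here
        link_id, payload = block
        header = struct.pack("!IB", len(payload), link_id)  # length + link ID
        for addr in SUBSCRIPTIONS.get(link_id, []):
            if addr not in socks:
                socks[addr] = socket.create_connection(addr)
            socks[addr].sendall(header + payload)
```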


Atmosphere ◽  
2019 ◽  
Vol 10 (9) ◽  
pp. 488 ◽  
Author(s):  
Shaomeng Li ◽  
Stanislaw Jaroszynski ◽  
Scott Pearse ◽  
Leigh Orf ◽  
John Clyne

Visualization is an essential tool for analyzing data and communicating findings in the sciences, and the Earth System Sciences (ESS) are no exception. However, within ESS, specialized visualization requirements and data models, particularly for data arising from numerical models, often make general-purpose visualization packages difficult, if not impossible, to use effectively. This paper presents VAPOR: a domain-specific visualization package that targets the specialized needs of ESS modelers, particularly those working in research settings where highly interactive, exploratory visualization is beneficial. We specifically describe VAPOR's ability to handle ESS simulation data from a wide variety of numerical models, as well as a multi-resolution representation that enables interactive visualization of very large data sets using only commodity computing resources. We also describe VAPOR's visualization capabilities, paying particular attention to features for geo-referenced data and advanced rendering algorithms suitable for time-varying, 3D data. Finally, we illustrate VAPOR's utility in the study of a numerically simulated tornado. Our results demonstrate both the ease of use and the rich capabilities of VAPOR in such a use case.
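
A toy Python sketch of the multi-resolution idea follows; it is a stand-in, not VAPOR's actual API or format (which is wavelet-based), and simply averages 2x2x2 blocks. A pyramid of coarsened copies lets a renderer pick the finest level that fits a memory budget, keeping exploration interactive on commodity hardware.

```python
import numpy as np

def coarsen(field: np.ndarray) -> np.ndarray:
    """Halve each dimension of a 3D field by averaging 2x2x2 blocks."""
    x, y, z = (s // 2 * 2 for s in field.shape)  # trim odd edges
    f = field[:x, :y, :z]
    return f.reshape(x // 2, 2, y // 2, 2, z // 2, 2).mean(axis=(1, 3, 5))

def build_pyramid(field: np.ndarray, levels: int) -> list[np.ndarray]:
    """Full-resolution field plus progressively coarser copies."""
    pyramid = [field]
    for _ in range(levels):
        pyramid.append(coarsen(pyramid[-1]))
    return pyramid

def pick_level(pyramid: list[np.ndarray], budget_bytes: int) -> np.ndarray:
    """Return the finest level that fits the memory budget."""
    for level in pyramid:
        if level.nbytes <= budget_bytes:
            return level
    return pyramid[-1]

# Example: a 256^3 float32 field (64 MiB) explored under a 2 MiB budget.
field = np.random.rand(256, 256, 256).astype(np.float32)
preview = pick_level(build_pyramid(field, levels=4), budget_bytes=2 << 20)
```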


2018 ◽  
Vol 17 (05) ◽  
pp. 1499-1535 ◽  
Author(s):  
Yağız Onat Yazır ◽  
Adel Guitouni ◽  
Stephen W. Neville ◽  
Roozbeh Farahbod

In this paper, we present IMPROMPTU, a distributed resource consolidation manager for large-scale commodity computing clouds. The main contribution of this work is twofold. First, IMPROMPTU fully distributes the responsibility for resource consolidation management among autonomous node agents that have a one-to-one mapping to the physical machines in the cloud. Second, the autonomous node agents manage virtual-to-physical machine resource consolidation using multiple criteria decision analysis (MCDA) through the PROMETHEE II method. MCDA has previously been used in computing systems, particularly in fields such as multi-agent systems, data mining, and wireless communications. However, to the best of our knowledge, IMPROMPTU represents the first fully distributed MCDA approach applied to the problem of autonomous resource consolidation management in commodity computing clouds. Moreover, IMPROMPTU improves on our previous studies by introducing key extensions that enhance the granularity of the MCDA model. Simulation results show that the proposed solution provides a strong alternative to prior resource consolidation management approaches for the key industry problem of mitigating SLA violations, establishing solid groundwork for further applications and extensions of MCDA in this important problem domain.
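
PROMETHEE II itself is a standard, well-defined ranking method, so a compact sketch may help. The implementation below follows the textbook method with the simple "usual" preference function; the criteria, weights, and candidate machines are invented for illustration and are not IMPROMPTU's actual model.

```python
import numpy as np

def promethee_ii(scores: np.ndarray, weights: np.ndarray,
                 maximize: np.ndarray) -> np.ndarray:
    """Rank alternatives (rows) over criteria (columns) by net outranking flow.

    Uses the 'usual' preference function: any positive difference counts
    as full preference. Returns the net flow phi per alternative."""
    n = scores.shape[0]
    s = np.where(maximize, scores, -scores)   # orient so larger is better
    d = s[:, None, :] - s[None, :, :]         # d[a, b, j] = s[a, j] - s[b, j]
    pref = (d > 0).astype(float)              # usual preference function
    pi = (pref * weights).sum(axis=2)         # aggregated preference pi(a, b)
    phi_plus = pi.sum(axis=1) / (n - 1)       # positive outranking flow
    phi_minus = pi.sum(axis=0) / (n - 1)      # negative outranking flow
    return phi_plus - phi_minus               # net flow: higher is better

# Illustrative consolidation decision: three candidate physical machines
# scored on CPU headroom, memory headroom (maximize), and migration cost
# (minimize), with weights summing to 1.
scores = np.array([[0.6, 0.5, 0.2],
                   [0.3, 0.8, 0.1],
                   [0.9, 0.2, 0.7]])
weights = np.array([0.5, 0.3, 0.2])
maximize = np.array([True, True, False])
phi = promethee_ii(scores, weights, maximize)
print(np.argsort(-phi))  # machines ordered from most to least preferred
```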


2016 ◽  
Vol 15 (04) ◽  
pp. 1650035 ◽  
Author(s):  
Tsung-Lung Li ◽  
Wen-Cai Lu

The structural and electronic characteristics of intercalated monopotassium–rubrene (K1Rub) are studied. In intercalated K1Rub, one of the two pairs of phenyl groups of rubrene is intercalated by potassium, whereas the other pair remains pristine. This structural feature facilitates comparison of the electronic structures of the intercalated and pristine pairs of phenyl groups. It is found that, in contrast to potassium adsorption on rubrene, potassium intercalation promotes the carbon [Formula: see text] orbitals of the intercalated pair of phenyls to participate in the electronic structure of the HOMO. Additionally, the intercalated K1Rub is used as a test vehicle to study the performance of a commodity computing cluster built to run the General Atomic and Molecular Electronic Structure System (GAMESS) simulation package. It is shown that, for many frequently encountered simulation tasks, the performance of the commodity cluster is comparable to that of a massive computing cluster. The high performance-to-cost ratio of clusters built from commodity hardware suggests a feasible alternative for research institutes establishing their own computing facilities.


Author(s):  
Noa Zilberman ◽  
Andrew W. Moore ◽  
Jon A. Crowcroft

Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers.
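
A back-of-envelope calculation (ours, not the paper's) makes the terabit benchmark concrete: at 1 Tb/s of minimum-size Ethernet frames, the per-packet time budget falls below a nanosecond, roughly two cycles of a typical 3 GHz core, while a single DRAM access costs on the order of 100 ns.

```python
# Why terabit-scale processing outstrips a single end-system.
LINK_BPS = 1e12        # 1 Tb/s link
WIRE_BYTES = 64 + 20   # minimum frame plus preamble/inter-packet gap
CPU_HZ = 3e9           # a typical 3 GHz core

pps = LINK_BPS / (WIRE_BYTES * 8)   # ~1.5 billion packets per second
budget_ns = 1e9 / pps               # ~0.67 ns per packet
cycles = CPU_HZ / pps               # ~2 CPU cycles per packet

print(f"{pps / 1e9:.2f} Gpps, {budget_ns:.2f} ns/packet, {cycles:.1f} cycles")
```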


2015 ◽  
Author(s):  
Judith Risse ◽  
Marian Thomson ◽  
Garry Blakely ◽  
Georgios Koutsovoulos ◽  
Mark Blaxter ◽  
...  

Background: Second- and third-generation sequencing technologies have revolutionised bacterial genomics. Short Illumina reads result in cheap but fragmented assemblies, whereas longer reads are more expensive but result in more complete genomes. The Oxford Nanopore MinION device is a revolutionary mobile sequencer that can produce thousands of long, single-molecule reads.

Results: We sequenced Bacteroides fragilis strain BE1 using both the Illumina MiSeq and Oxford Nanopore MinION platforms. We were able to assemble a single chromosome of 5.18 Mb, with no gaps, using publicly available software and commodity computing hardware. We identified gene rearrangements and the state of invertible promoters in the strain.

Conclusions: The single-chromosome assembly of Bacteroides fragilis strain BE1 was achieved using only modest amounts of data, publicly available software, and commodity computing hardware. This combination of technologies offers the possibility of ultra-cheap, high-quality, finished bacterial genomes.
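
As a sketch of what "publicly available software and commodity computing hardware" can look like in practice, the snippet below drives a hybrid assembly through SPAdes' --nanopore mode: accurate Illumina paired-end reads plus repeat-spanning MinION reads. The file names and resource limits are placeholders, and this is not necessarily the authors' exact pipeline.

```python
import subprocess

# Hybrid assembly: Illumina paired-end reads plus MinION long reads.
subprocess.run(
    [
        "spades.py",
        "-1", "illumina_R1.fastq.gz",        # Illumina forward reads
        "-2", "illumina_R2.fastq.gz",        # Illumina reverse reads
        "--nanopore", "minion_reads.fastq",  # MinION long reads
        "-t", "8",                           # threads: desktop scale
        "-m", "32",                          # memory cap in GB
        "-o", "bfragilis_assembly",          # output directory
    ],
    check=True,
)
```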


2012 ◽  
Vol 396 (5) ◽  
pp. 052058 ◽  
Author(s):  
Sverre Jarp ◽  
Alfio Lazzaro ◽  
Andrzej Nowak

Author(s):  
Vikram Jandhyala ◽  
Dipanjan Gope ◽  
Swagato Chakraborty ◽  
Xiren Wang

Large-scale public cloud commodity computing is a potential paradigm-shifter for EDA tools. However, going beyond merely web-hosted software to exploit the true power of on-demand scalable computing remains an unmet challenge on many fronts. In this paper, we examine one computationally expensive and rapidly growing area within EDA as a candidate for the cloud: parasitic extraction and electromagnetic field simulation. With the growing emphasis on multifunctional systems in consumer electronics built around commodity chips, the need for scale and speed in such tools is paramount. We examine three aspects of the suitability of, and the modifications needed to, accelerated multilevel algorithms in boundary element methods to enable cloud deployment: scalability without prematurely hitting Amdahl's law, fault tolerance with low time penalties in realistic computing systems, and encryption-free approaches to ensuring IP security.
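
Amdahl's law, the scalability ceiling mentioned above, is easy to make concrete: with serial fraction s, speedup on N workers is 1 / (s + (1 - s) / N). The 5% serial fraction below is illustrative, not a figure from the paper.

```python
def amdahl_speedup(serial_fraction: float, n_workers: int) -> float:
    """Speedup predicted by Amdahl's law for a given serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

for n in (8, 64, 512, 4096):
    print(n, round(amdahl_speedup(0.05, n), 1))
# 8 -> 5.9x, 64 -> 15.4x, 512 -> 19.3x, 4096 -> 19.9x: returns diminish long
# before thousands of cloud nodes, so the serial fraction itself must shrink.
```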

