Optimizing OpenStack Nova for Scientific Workloads

2019 ◽  
Vol 214 ◽  
pp. 07031
Author(s):  
Belmiro Moreira ◽  
Spyridon Trigazis ◽  
Theodoros Tsioutsias

The CERN OpenStack cloud provides over 300,000 CPU cores to run data processing analyses for the Large Hadron Collider (LHC) experiments. Delivering these services with high performance and reliable service levels, while at the same time ensuring continuously high resource utilization, has been one of the major challenges for the CERN cloud engineering team. Several optimizations, such as NUMA-aware scheduling and huge pages, have been deployed to improve the performance of scientific workloads, but the CERN cloud team continues to explore new possibilities such as preemptible instances and containers on bare metal. In this paper we dive into the concept and implementation challenges of preemptible instances and containers on bare metal for scientific workloads. We also explore how they can improve scientific workload throughput and infrastructure resource utilization. We present the ongoing collaboration with the Square Kilometre Array (SKA) community to develop the upstream enhancements needed to further improve OpenStack Nova's support for large-scale scientific workloads.
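
As a hedged illustration of the preemptible-instance idea (not CERN's or Nova's implementation), the plain-Python sketch below shows the core scheduling behavior: opportunistic VMs backfill idle cores and are evicted when a regular request needs the capacity. All names and sizes are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Instance:
    name: str
    cores: int
    preemptible: bool = False

@dataclass
class Host:
    total_cores: int
    instances: list = field(default_factory=list)

    @property
    def free(self):
        return self.total_cores - sum(i.cores for i in self.instances)

    def schedule(self, inst: Instance) -> bool:
        """Place the instance, evicting preemptible VMs if a regular VM needs room."""
        if inst.cores <= self.free:
            self.instances.append(inst)
            return True
        if not inst.preemptible:
            # Evict preemptible instances (smallest first) until the request fits.
            for victim in sorted((i for i in self.instances if i.preemptible),
                                 key=lambda i: i.cores):
                self.instances.remove(victim)
                if inst.cores <= self.free:
                    self.instances.append(inst)
                    return True
        return False

host = Host(total_cores=8)
host.schedule(Instance("batch-1", 4, preemptible=True))   # backfill idle cores
host.schedule(Instance("batch-2", 4, preemptible=True))
assert host.schedule(Instance("analysis", 6))             # evicts both batch VMs
print(host.free)                                          # -> 2
```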

2012 ◽  
Vol 4 (3) ◽  
pp. 373-378 ◽  
Author(s):  
Yongwei Zhang ◽  
Anthony K. Brown

This paper describes the design of high-performance compact aperture array antennas for radio astronomy and other applications. Three recent antenna developments for the Square Kilometre Array Design Study (SKADS) have been investigated and their performances compared. In addition to radio frequency (RF) performance, an essential requirement for the Square Kilometre Array application is the cost per unit area. Based on initial large-scale finite array studies, prototypes with different geometries have been fabricated and measured. Guidelines are derived for large-scale wide-band dual-polarized array designs in applications where low cross-polarization and a wide range of scan angles are required.
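
As a hedged aside on the scan-angle analysis such designs involve, the sketch below computes the normalized array factor of a uniform linear array in Python (NumPy); the element count, spacing, and steering angle are arbitrary example values, not SKADS parameters.

```python
import numpy as np

def array_factor(n_elements, spacing_wl, theta_deg, steer_deg=0.0):
    """Normalized array factor of a uniform linear array.

    spacing_wl: element spacing in wavelengths; theta_deg: observation angles.
    """
    theta = np.radians(theta_deg)
    steer = np.radians(steer_deg)
    k = 2 * np.pi                        # phase per wavelength of path difference
    n = np.arange(n_elements)[:, None]
    phase = k * spacing_wl * n * (np.sin(theta) - np.sin(steer))
    return np.abs(np.exp(1j * phase).sum(axis=0)) / n_elements

theta = np.linspace(-90, 90, 361)
# Steering to 45 degrees: half-wavelength spacing avoids grating lobes.
af = array_factor(n_elements=16, spacing_wl=0.5, theta_deg=theta, steer_deg=45)
print(f"peak at {theta[af.argmax()]:.1f} deg")   # ~45.0
```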


Author(s):  
P. KARTHIKEYAN ◽  
M. GAUTHAM ◽  
R. RAMAKRISHNAN ◽  
A. MOOKKAIYA

This paper presents a Field Programmable Gate Array (FPGA) implementation of the bilateral filter designed for high performance and low power consumption. Bilateral filtering is a technique that smooths images while preserving edges by means of a nonlinear combination of nearby image values; the method is nonlinear, local, and simple. We show that bilateral filtering can be accelerated by the bilateral grid, a scheme that enables fast edge-aware image processing. Many of today's applications require real-time hardware systems with large computing capability, for which fast, dedicated Very Large Scale Integration (VLSI) architectures are often the best solution. Such an architecture ensures high resource utilization on cost-effective platforms like FPGAs, and its design offers flexibility, such as speeding up computation through deeper pipelining and parallel processing while reducing memory consumption. We have developed an effective implementation of the bilateral filter on a Spartan-3 FPGA.
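
A minimal Python (NumPy) reference model of the bilateral filter may help fix ideas before the hardware mapping: each output pixel is a weighted average of its neighbours, with weights that fall off with both spatial distance and intensity difference. The sigma values are arbitrary example settings, not the paper's.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Bilateral filter: weights combine spatial closeness and intensity
    similarity, so large intensity steps (edges) are preserved."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    # Precompute the spatial Gaussian kernel once.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    for y in range(h):
        for x in range(w):
            window = pad[y:y + 2*radius + 1, x:x + 2*radius + 1]
            rng = np.exp(-((window - img[y, x])**2) / (2 * sigma_r**2))
            weights = spatial * rng
            out[y, x] = (weights * window).sum() / weights.sum()
    return out

noisy = np.clip(np.eye(8) + 0.05 * np.random.randn(8, 8), 0, 1)
smoothed = bilateral_filter(noisy)   # diagonal edge survives the smoothing
```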


Author(s):  
George Baciu ◽  
Yungzhe Wang ◽  
Chenhui Li

Hardware virtualization has enabled large-scale computational service delivery models with high cost leverage and improved resource utilization on cloud computing platforms. This has completely changed the landscape of computing in the last decade. It has also enabled large-scale data analytics through distributed high-performance computing. Due to the infrastructure complexity, end-users and administrators of cloud platforms can rarely obtain a full picture of the state of cloud computing systems and data centers. Recent monitoring tools let users collect large amounts of data on many utilization parameters of cloud platforms, yet they fail to provide full insight into the resource utilization dynamics of those platforms. Furthermore, existing tools make it difficult to observe large-scale patterns, and thus to learn from the past behavior of cloud system dynamics. In this work, the authors describe a perceptual-based interactive visualization platform that gives users and administrators a cognitive view of cloud computing system dynamics.
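
As a toy illustration of why a perceptual view helps (this is not the authors' platform), the sketch below renders synthetic per-host utilization samples as a heatmap, where a large-scale pattern such as a diurnal load cycle becomes visible at a glance.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
hosts, hours = 40, 168                      # one simulated week
t = np.arange(hours)
# Synthetic utilization: shared diurnal cycle plus per-host noise.
diurnal = 0.5 + 0.3 * np.sin(2 * np.pi * t / 24)
util = np.clip(diurnal + 0.15 * rng.standard_normal((hosts, hours)), 0, 1)

plt.imshow(util, aspect="auto", cmap="viridis")
plt.xlabel("hour"); plt.ylabel("host")
plt.colorbar(label="CPU utilization")
plt.title("Cluster utilization heatmap")
plt.show()
```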




Author(s):  
C.K. Wu ◽  
P. Chang ◽  
N. Godinho

Recently, the use of refractory metal silicides as low-resistivity, high-temperature, high-oxidation-resistance gate materials in large-scale integrated circuits (LSI) has become an important approach in advanced MOS process development (1). This research is a systematic study of the structure and properties of molybdenum silicide thin films and their applicability to high-performance LSI fabrication.


Author(s):  
В.В. ГОРДЕЕВ ◽  
В.Е. ХАЗАНОВ

The choice of the type and size of a milking installation must take into account the maximum planned number of dairy cows, the size of a technological group, the number of milkings per day, and the duration of one milking and of the operator's working shift. An analysis of the technical and economic indicators of the currently most common types of milking installations of the same technical level shows that the Carousel installation has the best specific indicators, while the Herringbone installation requires higher labour and cash costs; the Parallel installation falls in between. In terms of throughput and the required number of operators, Herringbone is recommended for farms with up to 600 dairy cows, Parallel for up to 1,200, and Carousel for more than 1,200. Carousel was found to be the most practical, high-performance, easily automated and, therefore, promising milking system for milking parlours, especially on large-scale dairy farms.


2020 ◽  
Vol 26 (Supplement_1) ◽  
pp. S67-S68
Author(s):  
Jeffrey Berinstein ◽  
Shirley Cohen-Mekelburg ◽  
Calen Steiner ◽  
Megan Mcleod ◽  
Mohamed Noureldin ◽  
...  

Abstract Background: High-deductible health plan (HDHP) enrollment has increased rapidly over the last decade. Patients with HDHPs are incentivized to delay or avoid necessary medical care. We aimed to quantify the out-of-pocket costs of Inflammatory Bowel Disease (IBD) patients at risk for high healthcare resource utilization and to evaluate differences in medical service utilization according to time in insurance period between HDHP and traditional health plan (THP) enrollees. Variations in healthcare utilization over time may suggest that these patients delay or forego necessary medical care because of healthcare costs. Methods: IBD patients at risk for high resource utilization (defined as recent corticosteroid and narcotic use) continuously enrolled in an HDHP or THP from 2009–2016 were identified using the Truven Health MarketScan database. Median annual financial information was calculated. Time trends in office visits, colonoscopies, emergency department (ED) visits, and hospitalizations were evaluated using additive decomposition time series analysis. Financial information and time trends were compared between the two insurance plan groups. Results: Of 605,862 patients with a diagnosis of IBD, we identified 13,052 at risk for high resource utilization with continuous insurance plan enrollment. The median annual out-of-pocket costs were higher in the HDHP group (n=524) than in the THP group (n=12,458) ($1,920 vs. $1,205, p<0.001), as was the median deductible amount ($1,015 vs. $289, p<0.001), without any difference in the median annual total healthcare expenses (Figure 1). Time in insurance period had a greater influence on utilization of colonoscopies, ED visits, and hospitalizations in IBD patients enrolled in HDHPs compared to THPs (Figure 2). Colonoscopies peaked in the 4th quarter, ED visits peaked in the 1st quarter, and hospitalizations peaked in the 3rd and 4th quarters. Conclusion: Among IBD patients at high risk for IBD-related utilization, HDHP enrollment does not change the cost of care but shifts healthcare costs onto patients. This may be a result of HDHPs incentivizing delays in care, with the potential for both worse disease outcomes and financial toxicity, and needs to be further examined in prospective studies.
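
For readers unfamiliar with additive decomposition, here is a minimal Python sketch of the technique on synthetic quarterly counts (illustrative values only), using statsmodels' seasonal_decompose; the study's actual data come from MarketScan.

```python
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic quarterly ED-visit counts (illustrative values, not study data).
visits = pd.Series(
    [30, 22, 25, 27, 34, 24, 28, 31, 38, 27, 31, 35],
    index=pd.period_range("2009Q1", periods=12, freq="Q").to_timestamp(),
)
# Additive decomposition: visits = trend + seasonal + residual.
result = seasonal_decompose(visits, model="additive", period=4)
print(result.seasonal.head(4))   # per-quarter effect, e.g. a Q1 peak
```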


Author(s):  
Mark Endrei ◽  
Chao Jin ◽  
Minh Ngoc Dinh ◽  
David Abramson ◽  
Heidi Poxon ◽  
...  

Rising power costs and constraints are driving a growing focus on the energy efficiency of high-performance computing systems. The unique characteristics of a particular system and workload, and their effect on performance and energy efficiency, are typically difficult for application users to assess and to control. Settings for optimum performance and energy efficiency can also diverge, so we need to identify trade-off options that guide a suitable balance between energy use and performance. We present statistical and machine learning models that require only a small number of runs to make accurate Pareto-optimal trade-off predictions using parameters that users can control. We study model training and validation using several parallel kernels and more complex workloads, including Algebraic Multigrid (AMG), the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), and Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics (LULESH). We demonstrate that we can train the models using as few as 12 runs, with prediction error of less than 10%. Our AMG results identify trade-off options that provide up to 45% improvement in energy efficiency for around 10% performance loss. We reduce the sample measurement time required for AMG by 90%, from 13 h to 74 min.
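
A hedged sketch of the overall approach: fit simple regressors on a handful of runs, predict over the configuration space, and keep the Pareto-optimal (runtime, energy) points. The model, features, and toy power law below are illustrative stand-ins, not the paper's.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
# Training runs: (threads, cpu_freq_GHz) -> measured (runtime_s, energy_J).
X = rng.uniform([1, 1.2], [16, 3.0], size=(12, 2))          # 12 sampled runs
runtime = 100 / X[:, 0] * (2.5 / X[:, 1]) + rng.normal(0, 1, 12)
energy = runtime * (20 + 15 * X[:, 1] ** 2)                  # toy power model

model_t = RandomForestRegressor(random_state=0).fit(X, runtime)
model_e = RandomForestRegressor(random_state=0).fit(X, energy)

# Predict over the whole configuration space, then keep Pareto points.
grid = np.array([(th, f) for th in range(1, 17)
                 for f in np.linspace(1.2, 3.0, 10)])
preds = np.column_stack([model_t.predict(grid), model_e.predict(grid)])

pareto = []
for i, p in enumerate(preds):
    # Keep p unless some other point is at least as good on both objectives
    # and strictly better on one.
    if not any((q <= p).all() and (q < p).any()
               for j, q in enumerate(preds) if j != i):
        pareto.append(grid[i])
print(f"{len(pareto)} Pareto-optimal configurations")
```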


Radiation ◽  
2021 ◽  
Vol 1 (2) ◽  
pp. 79-94
Author(s):  
Peter K. Rogan ◽  
Eliseos J. Mucaki ◽  
Ben C. Shirley ◽  
Yanxin Li ◽  
Ruth C. Wilkins ◽  
...  

The dicentric chromosome (DC) assay accurately quantifies exposure to radiation; however, manual and semi-automated assignment of DCs has limited its use for a potential large-scale radiation incident. The Automated Dicentric Chromosome Identifier and Dose Estimator (ADCI) software automates unattended DC detection and determines radiation exposures, fulfilling IAEA criteria for triage biodosimetry. This study evaluates the throughput of high-performance ADCI (ADCI-HT) in stratifying exposures in 15 simulated population-scale radiation incidents. ADCI-HT streamlines dose estimation on a supercomputer by optimal hierarchical scheduling of DC detection for varying numbers of samples and metaphase cell images in parallel on multiple processors. We evaluated processing times and the accuracy of estimated exposures across census-defined populations. Image processing of 1,744 samples on 16,384 CPUs required 1 h 11 min 23 s, and radiation dose estimation based on DC frequencies required 32 s. Processing of 40,000 samples at 10 exposures from five laboratories required 25 h and met IAEA criteria (dose estimates were within 0.5 Gy; median = 0.07). Geostatistically interpolated radiation exposure contours of simulated nuclear incidents were defined by samples exposed to clinically relevant exposure levels (1 and 2 Gy). Analysis of all exposed individuals with ADCI-HT required 0.6–7.4 days, depending on the population density of the simulation.
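
A schematic Python sketch of the two-stage pipeline: per-sample DC counting distributed across processors, then dose estimation by inverting the standard linear-quadratic dose response Y = c + alpha*D + beta*D^2. The detection stub and calibration coefficients are placeholders, not ADCI's.

```python
import numpy as np
from multiprocessing import Pool

def count_dicentrics(sample_id):
    """Stand-in for per-sample image analysis: returns (cells, DCs)."""
    rng = np.random.default_rng(sample_id)
    cells = 500
    return cells, rng.poisson(0.05 * cells)   # placeholder DC yield

def dose_from_yield(dc_per_cell, c=0.001, alpha=0.02, beta=0.06):
    """Invert the linear-quadratic response Y = c + alpha*D + beta*D^2.

    Coefficients are illustrative placeholders, not a real calibration.
    """
    roots = np.roots([beta, alpha, c - dc_per_cell])
    return max(r.real for r in roots if r.imag == 0 and r.real >= 0)

if __name__ == "__main__":
    with Pool() as pool:                       # DC detection in parallel
        results = pool.map(count_dicentrics, range(100))
    for cells, dcs in results[:3]:
        print(f"{dcs}/{cells} DCs -> {dose_from_yield(dcs / cells):.2f} Gy")
```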


Antioxidants ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 843
Author(s):  
Tamara Ortiz ◽  
Federico Argüelles-Arias ◽  
Belén Begines ◽  
Josefa-María García-Montes ◽  
Alejandra Pereira ◽  
...  

The best conservation method for native Chilean berries has been investigated, in combination with a large-scale extract of maqui berry, rich in total polyphenols and anthocyanins, to be tested in intestinal epithelial and immune cells. The methanolic extract was obtained from lyophilized maqui berries and analyzed using the Folin–Ciocalteu method to quantify the total polyphenol content, as well as 2,2-diphenyl-1-picrylhydrazyl (DPPH), ferric reducing antioxidant power (FRAP), and oxygen radical absorbance capacity (ORAC) assays to measure the antioxidant capacity. The anthocyanin profile of maqui was determined by ultra-high-performance liquid chromatography coupled with tandem mass spectrometry (UHPLC-MS/MS). Viability, cytotoxicity, and percent oxidation in colon epithelial cells (HT-29) and macrophage cells (RAW 264.7) were evaluated. Preservation studies confirmed that maqui's properties and composition are preserved under fresh or frozen conditions, and a more efficient and convenient extraction methodology was achieved. In vitro studies in epithelial cells showed that the extract has potent antioxidant activity with dose-dependent behavior. When macrophages were activated with lipopolysaccharide (LPS), no cytotoxic effects were observed, and a relationship between oxidative stress and inflammation response was demonstrated. The maqui extract together with 5-aminosalicylic acid (5-ASA) has a synergistic effect. All of the compiled data point to the use of this extract as a potential nutraceutical agent with physiological benefits for the treatment of inflammatory bowel disease (IBD).

