Processor Scheduling in High-Performance Computing (HPC) Environment

Author(s):  
Annu Priya ◽  
Sudip Kumar Sahana

Processor scheduling is one of the thrust areas in computer science. Emerging technologies demand enormous amounts of processing for their tasks, whether in large-scale games, software development, or quantum computing, and many complex real-time problems are now solved through GPU programming. The primary concern of scheduling is to reduce time complexity and manual effort. Several traditional techniques exist for processor scheduling, but their performance degrades when the volume of tasks grows very large, and most scheduling problems are NP-hard in nature. GPU scheduling is a further complex issue, since thousands of threads run in parallel and must be scheduled efficiently. For such large-scale scheduling problems, the performance of state-of-the-art algorithms is poor, and it is observed that evolutionary and genetic-based algorithms exhibit better performance for large-scale combinatorial and internet of things (IoT) problems.
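As a concrete illustration of the evolutionary approach the abstract favors, the sketch below shows a minimal genetic-algorithm scheduler that assigns tasks to processors to minimize makespan. The task costs, population parameters, and operators are illustrative assumptions, not the authors' algorithm.

    # Minimal sketch of a genetic-algorithm task scheduler (illustrative only).
    # A chromosome maps each task to a processor; fitness is the makespan,
    # i.e. the load of the busiest processor, which we want to minimize.
    import random

    TASK_COSTS = [4, 7, 3, 9, 2, 6, 5, 8]   # hypothetical task run times
    NUM_PROCS = 3
    POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 100, 0.1

    def makespan(chrom):
        loads = [0] * NUM_PROCS
        for task, proc in enumerate(chrom):
            loads[proc] += TASK_COSTS[task]
        return max(loads)

    def crossover(a, b):
        cut = random.randrange(1, len(a))       # single-point crossover
        return a[:cut] + b[cut:]

    def mutate(chrom):
        return [random.randrange(NUM_PROCS) if random.random() < MUTATION_RATE else g
                for g in chrom]

    pop = [[random.randrange(NUM_PROCS) for _ in TASK_COSTS] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=makespan)                  # lower makespan = fitter
        elite = pop[:POP_SIZE // 2]             # truncation selection
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(POP_SIZE - len(elite))]

    best = min(pop, key=makespan)
    print("best assignment:", best, "makespan:", makespan(best))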



Author(s):  
Vincent Breton ◽  
Eddy Caron ◽  
Frederic Desprez ◽  
Gael Le Mahec

As grids become more and more attractive for solving complex problems with high computational and storage requirements, bioinformatics applications are starting to be ported to large-scale platforms. The BLAST kernel, one of the main cornerstones of high-performance genomics, was among the first applications ported to such platforms. However, while a simple parallelization was enough for a first proof of concept, its use on production platforms required more optimized algorithms. In this chapter, we review existing parallelization and “gridification” approaches, as well as related issues such as data management and replication, and present a case study using the DIET middleware over the Grid’5000 experimental platform.
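For illustration, the sketch below shows the simple query-splitting parallelization that the chapter treats as a starting point: independent BLAST runs over slices of the query set. It assumes the NCBI BLAST+ blastn binary is installed, the query file is pre-split, and a local database exists under the hypothetical name used here; it is not the DIET-based approach of the case study.

    # Illustrative query-splitting parallelization of BLAST (not DIET):
    # each worker process runs blastn on one slice of the query set.
    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    QUERY_CHUNKS = ["chunk_0.fasta", "chunk_1.fasta", "chunk_2.fasta"]  # pre-split queries
    DATABASE = "nt_local"                                               # hypothetical DB name

    def run_blast(chunk):
        out = chunk.replace(".fasta", ".out")
        subprocess.run(["blastn", "-query", chunk, "-db", DATABASE, "-out", out],
                       check=True)
        return out

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(run_blast, QUERY_CHUNKS))
        print("partial results:", results)  # per-chunk outputs, merged afterwards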


2005 ◽  
Vol 16 (02) ◽  
pp. 145-162 ◽  
Author(s):  
HENRI CASANOVA

The dominant trend in scientific computing today is the establishment of platforms that span multiple institutions to support applications at unprecedented scales. On most distributed computing platforms, a requirement for achieving high performance is the careful scheduling of distributed application components onto the available resources. While scheduling has been an active area of research for many decades, most of the platform models traditionally used in scheduling research, and in particular network models, break down for platforms spanning wide-area networks. In this paper we examine network modeling issues for large-scale platforms from the perspective of scheduling. The main challenge we address is the development of models that are sophisticated enough to be more realistic than those traditionally used in the field, yet simple enough to remain amenable to analysis. In particular, we discuss issues of bandwidth sharing and topology modeling. While these models can be used to define and reason about realistic scheduling problems, we show that they also provide a good basis for fast simulation, the typical method for evaluating scheduling algorithms, as demonstrated in our implementation of the SIMGRID simulation framework.
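As a hedged illustration of one bandwidth-sharing model of the kind discussed here, the sketch below computes a max-min fair allocation of a link's capacity among concurrent flows; the paper's actual models may differ.

    # Max-min fair bandwidth sharing (water-filling): repeatedly give every
    # unsatisfied flow an equal share of the remaining capacity; flows whose
    # demand is met drop out, and the leftover is redistributed.
    def max_min_fair(capacity, demands):
        alloc = {f: 0.0 for f in demands}
        active = set(demands)
        remaining = capacity
        while active and remaining > 1e-12:
            share = remaining / len(active)       # equal split of what is left
            satisfied = {f for f in active if demands[f] - alloc[f] <= share}
            if not satisfied:
                for f in active:
                    alloc[f] += share             # nobody saturates: equal shares
                remaining = 0.0
            else:
                for f in satisfied:               # saturated flows leave the game
                    remaining -= demands[f] - alloc[f]
                    alloc[f] = demands[f]
                active -= satisfied
        return alloc

    # Three flows compete for a 10-unit link; flow A is satisfied at 2.0,
    # so B and C split the remaining 8.0 as 4.0 each.
    print(max_min_fair(10.0, {"A": 2.0, "B": 5.0, "C": 8.0}))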


2016 ◽  
Vol 139 (1) ◽  
Author(s):  
Wataru Nakayama

Thermal management of very large-scale computers will have to leave the traditional well-beaten path. Up to the present time, the primary concern has been the rising heat flux on the integrated circuit chip, while space has been available for implementing high-performance cooling designs. In future systems, the spatial constraint will become the primary determinant of thermal management methodology. To corroborate this perspective, the evolution of computer hardware morphology is simulated. The simulation tool is a geometric model whose structure is composed of circuit cells and platforms for circuit blocks. The cell is the minimum circuit element, whose size is pegged to the technology node, while the total number of cells represents the system size. The platforms model microprocessor chips, multichip modules (MCMs), and printed wiring boards (PWBs). The major points of discussion are as follows: (1) the system morphology is dictated by the competition between the progress of the technology node and the demand for increased system size; (2) only where cell miniaturization advances enough to deploy a system on a few PWBs is ample space created for thermal management; (3) in the future, cell miniaturization will hit its physical limit, while the demand for larger systems will be unabated. Liquid cooling, where the coolant is driven through very long microchannels, may then provide a viable thermal solution.
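A back-of-envelope sketch of the geometric argument follows, with all constants invented: as the technology node shrinks, a fixed system size (total cell count) occupies fewer PWBs, which is what frees space for cooling.

    # Toy version of the geometric model's core trade-off (numbers hypothetical):
    # smaller cells => fewer boards needed for the same total cell count.
    BOARD_AREA_MM2 = 200 * 300           # assumed usable area of one PWB
    SYSTEM_CELLS = 1e12                  # assumed total system size in cells

    for node_nm in (90, 45, 22, 11, 7):
        cell_area_mm2 = (node_nm * 1e-6) ** 2 * 50   # assume 50 node-squares per cell
        boards = SYSTEM_CELLS * cell_area_mm2 / BOARD_AREA_MM2
        print(f"{node_nm:3d} nm node -> about {boards:8.1f} PWBs")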


Author(s):  
C.K. Wu ◽  
P. Chang ◽  
N. Godinho

Recently, the use of refractory metal silicides as low-resistivity, high-temperature, high-oxidation-resistance gate materials in large-scale integrated circuits (LSI) has become an important approach in advanced MOS process development (1). This research is a systematic study of the structure and properties of molybdenum silicide thin films and their applicability to high-performance LSI fabrication.


Author(s):  
В.В. ГОРДЕЕВ ◽  
В.Е. ХАЗАНОВ

The choice of the type and size of a milking installation needs to take into account the maximum planned number of dairy cows, the size of a technological group, the number of milkings per day, and the duration of one milking and of the operator's working shift. An analysis of the technical and economic indicators of the currently most common types of milking installations of the same technical level shows that the Carousel installation has the best specific indicators, while the Herringbone installation requires higher labour and cash inputs; the Parallel installation falls in between. In terms of throughput and the required number of operators, Herringbone is recommended for farms with up to 600 dairy cows, Parallel for up to 1200 dairy cows, and Carousel for more than 1200 dairy cows. Carousel was found to be the most practical, high-performance, easily automated and therefore promising milking system for milking parlours, especially on large-scale dairy farms.


Author(s):  
Mark Endrei ◽  
Chao Jin ◽  
Minh Ngoc Dinh ◽  
David Abramson ◽  
Heidi Poxon ◽  
...  

Rising power costs and constraints are driving a growing focus on the energy efficiency of high-performance computing systems. The unique characteristics of a particular system and workload, and their effect on performance and energy efficiency, are typically difficult for application users to assess and to control. Settings for optimum performance and energy efficiency can also diverge, so we need to identify trade-off options that guide a suitable balance between energy use and performance. We present statistical and machine learning models that require only a small number of runs to make accurate Pareto-optimal trade-off predictions using parameters that users can control. We study model training and validation using several parallel kernels and more complex workloads, including Algebraic Multigrid (AMG), the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), and Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics (LULESH). We demonstrate that we can train the models using as few as 12 runs, with prediction error of less than 10%. Our AMG results identify trade-off options that provide up to 45% improvement in energy efficiency for around 10% performance loss. We reduce the sample measurement time required for AMG by 90%, from 13 h to 74 min.
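To illustrate the final step of such an analysis, the sketch below filters predicted (runtime, energy) pairs down to the Pareto-optimal configurations. The configurations and numbers are invented; the paper's statistical and machine learning predictors are not reproduced here.

    # Given predicted (runtime, energy) pairs per configuration, keep only the
    # Pareto-optimal ones: points no other point beats on both axes at once.
    predictions = {                      # config -> (runtime_s, energy_J), hypothetical
        "1 thread":   (100.0,  900.0),
        "4 threads":  ( 40.0, 1500.0),  # dominated by "8 + DVFS" below
        "8 threads":  ( 30.0, 1400.0),
        "8 + DVFS":   ( 33.0, 1100.0),
        "16 threads": ( 29.0, 2000.0),
    }

    def pareto_front(points):
        front = {}
        for name, (t, e) in points.items():
            dominated = any(t2 <= t and e2 <= e and (t2, e2) != (t, e)
                            for t2, e2 in points.values())
            if not dominated:
                front[name] = (t, e)
        return front

    for name, (t, e) in sorted(pareto_front(predictions).items()):
        print(f"{name:>10}: {t:6.1f} s, {e:7.1f} J")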


Radiation ◽  
2021 ◽  
Vol 1 (2) ◽  
pp. 79-94
Author(s):  
Peter K. Rogan ◽  
Eliseos J. Mucaki ◽  
Ben C. Shirley ◽  
Yanxin Li ◽  
Ruth C. Wilkins ◽  
...  

The dicentric chromosome (DC) assay accurately quantifies exposure to radiation; however, manual and semi-automated assignment of DCs has limited its use for a potential large-scale radiation incident. The Automated Dicentric Chromosome Identifier and Dose Estimator (ADCI) software automates unattended DC detection and determines radiation exposures, fulfilling IAEA criteria for triage biodosimetry. This study evaluates the throughput of high-performance ADCI (ADCI-HT) in stratifying the exposures of populations in 15 simulated population-scale radiation exposures. ADCI-HT streamlines dose estimation on a supercomputer by optimal hierarchical scheduling of DC detection for varying numbers of samples and metaphase cell images in parallel on multiple processors. We evaluated processing times and the accuracy of estimated exposures across census-defined populations. Image processing of 1744 samples on 16,384 CPUs required 1 h 11 min 23 s, and radiation dose estimation based on DC frequencies required 32 s. Processing of 40,000 samples at 10 exposures from five laboratories required 25 h and met IAEA criteria (dose estimates were within 0.5 Gy; median = 0.07). Geostatistically interpolated radiation exposure contours of simulated nuclear incidents were defined by samples exposed to clinically relevant levels (1 and 2 Gy). Analysis of all exposed individuals with ADCI-HT required 0.6–7.4 days, depending on the population density of the simulation.
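As a loose illustration of the scheduling problem described (not ADCI-HT's actual scheduler), the sketch below distributes samples with differing image counts across processors using a greedy longest-processing-time heuristic, so that the batch makespan stays small; all numbers are invented.

    # Greedy LPT heuristic: hand the next-largest sample to the currently
    # least-loaded CPU, keeping the longest per-CPU queue (makespan) small.
    import heapq

    image_counts = [500, 480, 750, 300, 620, 410, 580, 900]  # images per sample (made up)
    SECONDS_PER_IMAGE = 0.25                                  # assumed per-image cost
    NUM_CPUS = 4

    heap = [(0.0, cpu) for cpu in range(NUM_CPUS)]            # (load, cpu id)
    heapq.heapify(heap)
    for images in sorted(image_counts, reverse=True):
        load, cpu = heapq.heappop(heap)                       # least-loaded CPU
        heapq.heappush(heap, (load + images * SECONDS_PER_IMAGE, cpu))

    makespan = max(load for load, _ in heap)
    print(f"estimated batch time on {NUM_CPUS} CPUs: {makespan:.1f} s")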


Antioxidants ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 843
Author(s):  
Tamara Ortiz ◽  
Federico Argüelles-Arias ◽  
Belén Begines ◽  
Josefa-María García-Montes ◽  
Alejandra Pereira ◽  
...  

The best conservation method for native Chilean berries was investigated, together with a large-scale maqui berry extract rich in total polyphenols and anthocyanins, for testing in intestinal epithelial and immune cells. The methanolic extract was obtained from lyophilized maqui berries and analyzed using the Folin–Ciocalteu method to quantify total polyphenol content, and the 2,2-diphenyl-1-picrylhydrazyl (DPPH), ferric reducing antioxidant power (FRAP), and oxygen radical absorbance capacity (ORAC) assays to measure antioxidant capacity. The anthocyanin profile of maqui was determined by ultra-high-performance liquid chromatography coupled to tandem mass spectrometry (UHPLC-MS/MS). Viability, cytotoxicity, and percent oxidation in colon epithelial cells (HT-29) and macrophage cells (RAW 264.7) were evaluated. Preservation studies confirmed that the properties and composition of maqui are preserved in fresh or frozen conditions, and a more efficient and convenient extraction methodology was achieved. In vitro studies on epithelial cells showed that the extract has powerful antioxidant strength with dose-dependent behavior. When macrophages were activated with lipopolysaccharide (LPS), no cytotoxic effects were observed, and a relationship between oxidative stress and the inflammatory response was demonstrated. The maqui extract and 5-aminosalicylic acid (5-ASA) showed a synergistic effect. Taken together, the data point to the use of this extract as a potential nutraceutical agent with physiological benefits for the treatment of inflammatory bowel disease (IBD).


Author(s):  
Jianglin Feng ◽  
Nathan C Sheffield

Summary: Databases of large-scale genome projects now contain thousands of genomic interval datasets. These data are a critical resource for understanding the function of DNA. However, our ability to examine and integrate interval data of this scale is limited. Here, we introduce the integrated genome database (IGD), a method and tool for searching genome interval datasets more than three orders of magnitude faster than existing approaches, while using only one hundredth of the memory. IGD uses a novel linear binning method that allows us to scale analysis to billions of genomic regions.
Availability: https://github.com/databio/IGD
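The linear binning idea can be illustrated with a simplified sketch: the genome is cut into fixed-width bins, each interval is registered in every bin it overlaps, and a query touches only the bins it spans. The real IGD operates on indexed binary files (see https://github.com/databio/IGD); the bin width and data here are assumptions.

    # Simplified linear binning for genomic interval search (illustrative).
    from collections import defaultdict

    BIN_SIZE = 16384                      # assumed fixed bin width in bases

    def build_index(intervals):
        """Register each (start, end, name) interval in every bin it overlaps."""
        index = defaultdict(list)
        for start, end, name in intervals:
            for b in range(start // BIN_SIZE, (end - 1) // BIN_SIZE + 1):
                index[b].append((start, end, name))
        return index

    def query(index, qstart, qend):
        """Scan only the bins the query spans, then verify actual overlap."""
        hits = set()
        for b in range(qstart // BIN_SIZE, (qend - 1) // BIN_SIZE + 1):
            for start, end, name in index.get(b, []):
                if start < qend and qstart < end:
                    hits.add(name)
        return hits

    idx = build_index([(100, 20000, "A"), (50000, 60000, "B"), (15000, 15500, "C")])
    print(query(idx, 14000, 16000))       # -> {'A', 'C'}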

