Extending PowerPack for Profiling and Analysis of High-Performance Accelerator-Based Systems

2014 ◽  
Vol 24 (04) ◽  
pp. 1442001
Author(s):  
Bo Li ◽  
Hung-Ching Chang ◽  
Shuaiwen Song ◽  
Chun-Yi Su ◽  
Timmy Meyer ◽  
...  

Accelerators offer a substantial increase in efficiency for high-performance systems, providing speedups for computational applications that leverage hardware support for highly parallel codes. However, the power use of some accelerators exceeds 200 watts at idle, which means their use at exascale comes at a significant increase in power at a time when we face a power ceiling of about 20 megawatts. Despite the growing dominance of accelerator-based systems in the Top500 and Green500 lists of the fastest and most efficient supercomputers, there are few detailed studies comparing the power and energy use of common accelerators. In this work, we conduct detailed experimental studies of the power usage and distribution of Xeon-Phi-based systems in comparison to the NVIDIA Tesla and an Intel Sandy Bridge multicore host processor. In contrast to previous work, we focus on separating individual component power and correlating power use with code behavior. Our results help explain the causes of power-performance scalability for a set of HPC applications.

Author(s):  
Shuaiwen Song ◽  
Rong Ge ◽  
Xizhou Feng ◽  
Kirk W. Cameron

Future high performance systems must use energy efficiently to achieve petaFLOPS computational speeds and beyond. To address this challenge, we must first understand the power and energy characteristics of high performance computing applications. In this paper, we use a power-performance profiling framework called PowerPack to study the power and energy profiles of the HPC Challenge benchmarks. We present detailed experimental results along with in-depth analysis of how each benchmark's workload characteristics affect power consumption and energy efficiency. This paper summarizes various findings using the HPC Challenge benchmarks, including but not limited to: 1) identifying application power profiles by function and component in a high performance cluster; 2) correlating applications' memory access patterns to power consumption for these benchmarks; and 3) exploring how energy consumption scales with system size and workload.


Author(s):  
Heather Hanson ◽  
Stephen W. Keckler ◽  
Karthick Rajamani ◽  
Soraya Ghiasi ◽  
Freeman Rawson ◽  
...  

Author(s):  
Nicholas Simos ◽  
Harold Kirk ◽  
Hans Ludewig ◽  
Peter Thieberger ◽  
W.-T. Weng ◽  
...  

Intense beams for muon colliders and neutrino facilities require high-performance target stations capable of handling 1–4 MW proton beams. The physics requirements for such a system push the envelope of our current knowledge of how materials behave under high-power beams, for both short and long exposure. The success of an adopted scheme that generates, captures and guides secondary particles depends on the useful life expectancy of this critical system. To address the key technical challenges around the target of these initiatives, a set of experimental studies has either been initiated or is being planned, covering (a) the response and survivability of target materials intercepting intense, energetic protons, (b) the integrity of beam windows for target enclosures, (c) the effects of irradiation on the long-term integrity of candidate target and focusing element materials, and (d) the performance of the integrated system and the assessment of its useful life. This paper presents an overview of what has been achieved during the various phases of the experimental effort, including a tentative plan to continue the effort by expanding the material matrix. The paper also attempts to interpret what the experimental results are revealing, seeks ways to extrapolate to the required intensities and anticipated levels of irradiation, and discusses the feasibility of the proposed approaches to achieving such high-performance systems. Further, it explores the connection of accelerator target systems with reactor systems in order to utilize the experience data that the nuclear reactor sector has acquired over the years.


2020 ◽  
Vol 13 (2) ◽  
pp. 105-109
Author(s):  
E. S. Dremicheva

This paper presents a method of sorption using peat for the elimination of emergency spills of crude oil and petroleum products, and the possibility of energy use of oil-saturated peat. The results of an assessment of the sorbent capacity of peat are presented, with waste motor oil and diesel fuel chosen as the petroleum products. Natural peat has been found to possess sorption properties with respect to petroleum products. The sorbent capacity of peat can be observed from the first minutes of contact with motor oil and diesel fuel, and depends significantly on their viscosity. To evaluate the thermal properties of peat saturated with petroleum products, experimental studies have been conducted to determine the moisture and ash content of the as-fired fuel. It is shown that adsorbed oil increases the moisture and ash content of peat in comparison with the initial sample. Therefore, when intended for energy use, peat saturated with petroleum products must be subjected to additional drying. Simulation of net calorific value has been performed based on the calorific values of peat and petroleum products, with different ratios of petroleum product content in peat and for a saturated peat sample. The obtained results are compared with those of experiments conducted in a calorimetric bomb and recalculated for net calorific value. A satisfactory discrepancy of about 12% is obtained. Options have been considered for the combustion of saturated peat as a fuel (burned as-is or combined with a solid fuel) and for processing it to produce liquid, gaseous and solid fuels. Peat can be used to solve the environmental problem of eliminating emergency spills of crude oil and petroleum products, and as an additional resource in solving the problem of finding affordable energy.
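The simulation of net calorific value described above can be illustrated with a simple mass-weighted mixing rule. This is a minimal sketch, not the paper's actual model: the function name and the calorific values used below are our own illustrative assumptions, not data from the study.

```python
def net_calorific_value(q_peat, q_product, w_product):
    """Mass-weighted net calorific value (MJ/kg) of peat saturated with a
    petroleum product; w_product is the product's mass fraction in the mix."""
    return (1.0 - w_product) * q_peat + w_product * q_product

# Hypothetical calorific values (MJ/kg), for illustration only
q_peat, q_oil = 10.0, 42.0
for w in (0.2, 0.4, 0.6):
    print(f"w={w:.1f}: {net_calorific_value(q_peat, q_oil, w):.1f} MJ/kg")
```

In practice, such a linear estimate would then be checked against bomb-calorimeter measurements, as the paper does, with the observed discrepancy (about 12% here) bounding the model's usefulness.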


2020 ◽  
Vol 96 (3s) ◽  
pp. 585-588
Author(s):  
С.Е. Фролова ◽  
Е.С. Янакова

Methods are proposed for building prototyping platforms for high-performance systems-on-chip for artificial intelligence tasks. The requirements for platforms of this class and the principles for modifying an SoC design for implementation in a prototype are described, as are methods for debugging designs on the prototyping platform. Results are presented for computer vision algorithms using neural network technologies running on an FPGA prototype of the ELcore semantic cores.


Author(s):  
Mark Endrei ◽  
Chao Jin ◽  
Minh Ngoc Dinh ◽  
David Abramson ◽  
Heidi Poxon ◽  
...  

Rising power costs and constraints are driving a growing focus on the energy efficiency of high performance computing systems. The unique characteristics of a particular system and workload, and their effect on performance and energy efficiency, are typically difficult for application users to assess and to control. Settings for optimum performance and energy efficiency can also diverge, so we need to identify trade-off options that guide a suitable balance between energy use and performance. We present statistical and machine learning models that require only a small number of runs to make accurate Pareto-optimal trade-off predictions using parameters that users can control. We study model training and validation using several parallel kernels and more complex workloads, including Algebraic Multigrid (AMG), the Large-scale Atomic/Molecular Massively Parallel Simulator, and Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. We demonstrate that we can train the models using as few as 12 runs, with prediction error of less than 10%. Our AMG results identify trade-off options that provide up to 45% improvement in energy efficiency for around 10% performance loss. We reduce the sample measurement time required for AMG by 90%, from 13 h to 74 min.
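The Pareto-optimal trade-offs mentioned above can be made concrete with a small sketch: given (energy, time) measurements for candidate configurations, keep only those not dominated in both objectives. This is our illustrative code, not the paper's method; the function name and sample numbers are assumptions.

```python
def pareto_front(points):
    """Return the (energy, time) points not dominated by any other point.
    A point q dominates p if q is no worse in both objectives and differs
    from p; both objectives are minimized here."""
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical runs: (energy in J, runtime in s)
runs = [(100, 10), (80, 12), (90, 11), (120, 9), (110, 14)]
print(sorted(pareto_front(runs)))
# → [(80, 12), (90, 11), (100, 10), (120, 9)]
```

A user would then pick a point on this front according to how much performance loss is acceptable for a given energy saving, which is exactly the kind of balance (e.g. 45% energy improvement for ~10% performance loss) the study quantifies.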


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4425
Author(s):  
Ana María Pineda-Reyes ◽  
María R. Herrera-Rivera ◽  
Hugo Rojas-Chávez ◽  
Heriberto Cruz-Martínez ◽  
Dora I. Medina

Monitoring and detecting carbon monoxide (CO) are critical because this gas is toxic and harmful to the ecosystem. In this respect, designing high-performance gas sensors for CO detection is necessary. Zinc oxide-based materials are promising for use as CO sensors, owing to their good sensing response, electrical performance, cost-effectiveness, long-term stability, low power consumption, ease of manufacturing, chemical stability, and non-toxicity. Nevertheless, further progress in gas sensing requires improving selectivity and sensitivity, and lowering the operating temperature. Recently, different strategies have been implemented to improve the sensitivity and selectivity of ZnO to CO, most notably the doping of ZnO. Many studies have concluded that doped ZnO demonstrates better sensing properties than undoped ZnO in detecting CO. Therefore, in this review, we analyze and discuss in detail the recent advances in doped ZnO for CO sensing applications. First, experimental studies on ZnO doped with transition metals, boron group elements, and alkaline earth metals as CO sensors are comprehensively reviewed. We then focus on theoretical and combined experimental–theoretical studies. Finally, we present conclusions and some perspectives for future investigations in the context of advancements in CO sensing using doped ZnO, including room-temperature gas sensing.


2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Benjamin H. Weinberg ◽  
Jang Hwan Cho ◽  
Yash Agarwal ◽  
N. T. Hang Pham ◽  
Leidy D. Caraballo ◽  
...  

Abstract Site-specific DNA recombinases are important genome engineering tools. Chemical- and light-inducible recombinases, in particular, enable spatiotemporal control of gene expression. However, inducible recombinases are scarce due to the challenge of engineering high performance systems, thus constraining the sophistication of genetic circuits and animal models that can be created. Here we present a library of >20 orthogonal inducible split recombinases that can be activated by small molecules, light and temperature in mammalian cells and mice. Furthermore, we engineer inducible split Cre systems with better performance than existing systems. Using our orthogonal inducible recombinases, we create a genetic switchboard that can independently regulate the expression of 3 different cytokines in the same cell, a tripartite inducible Flp, and a 4-input AND gate. We quantitatively characterize the inducible recombinases for benchmarking their performances, including computation of distinguishability of outputs. This library expands capabilities for multiplexed mammalian gene expression control.


Processes ◽  
2018 ◽  
Vol 6 (8) ◽  
pp. 124 ◽  
Author(s):  
Kevin Hinkle ◽  
Xiaoyu Wang ◽  
Xuehong Gu ◽  
Cynthia Jameson ◽  
Sohail Murad

In this report we discuss the important role of molecular modeling, especially the molecular dynamics method, in investigating transport processes in nanoporous materials such as membranes. With the availability of high performance computers, molecular modeling can now be used to study rather complex systems at a fraction of the cost or time of experimental studies. Molecular modeling techniques have the advantage of accessing spatial and temporal resolutions that are difficult to reach experimentally: sub-Angstrom spatial resolution is readily accessible, as is sub-femtosecond temporal resolution. Due to these advantages, simulation can play two important roles. Firstly, because of the increased spatial and temporal resolution, it can help explain phenomena that are not well understood. As an example, we discuss the study of reverse osmosis processes. Before simulations were used, the separation of water from salt was thought to be a purely coulombic phenomenon. By applying molecular simulation techniques, however, it was clearly demonstrated that the solvation of ions made the separation in effect a steric separation, and that it was the flux that was strongly affected by the coulombic interactions between water and the membrane surface. Secondly, because of their relatively low cost and quick turnaround (using the multiple-processor systems now increasingly available), simulations can be a useful screening tool to identify membranes for a potential application. To this end, we describe our studies in determining the most suitable zeolite membrane for redox flow battery applications. As computing facilities become more widely available and new computational methods are developed, we believe molecular modeling will become a key tool in the study of transport processes in nanoporous materials.

