Reducing energy usage in resource-intensive Java-based scientific applications via micro-benchmark based code refactorings

2019 · Vol 16 (2) · pp. 541-564
Author(s): Mathias Longo, Ana Rodriguez, Cristian Mateos, Alejandro Zunino

In-silico research has grown considerably. Today's scientific code involves long-running computer simulations, and hence powerful computing infrastructures are needed. Traditionally, research in high-performance computing has focused on executing code as fast as possible, while energy has only recently been recognized as another goal to consider. Yet, energy-driven research has mostly focused on the hardware and middleware layers; few efforts target the application level, where many energy-aware optimizations are possible. We revisit a catalog of Java primitives commonly used in OO scientific programming, or micro-benchmarks, to identify energy-friendly versions of each primitive. We then apply the micro-benchmarks to classical scientific application kernels and machine learning algorithms, for both single-thread and multi-thread implementations, on a server. Energy usage reductions at the micro-benchmark level are substantial, while the reductions obtained for applications range from 3.90% to 99.18%.
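The paper's actual micro-benchmark catalog is not reproduced in this abstract. As a minimal illustrative sketch of the kind of primitive-level refactoring involved (an assumption, not the authors' code), consider string accumulation, where the buffered variant typically performs far less allocation and garbage-collection work, and hence consumes less energy:

```java
// Illustrative sketch of an energy-friendly refactoring of a common
// Java primitive: string accumulation in a loop.
public class ConcatBenchmark {
    // Energy-unfriendly variant: each += allocates a fresh String
    // and copies the accumulated characters.
    static String concatNaive(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s += i;
        }
        return s;
    }

    // Energy-friendlier variant: a single growable buffer, far less
    // allocation and GC pressure for the same result.
    static String concatBuilder(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(i);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Both variants produce identical output; only the cost differs.
        System.out.println(concatNaive(5));   // 01234
        System.out.println(concatNaive(5).equals(concatBuilder(5)));
    }
}
```

Swapping one variant for the other preserves behavior exactly, which is what makes such refactorings safe to apply mechanically across a scientific kernel.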

2021 · Vol 2069 (1) · pp. 012153
Author(s): Rania Labib

Architects often investigate the daylighting performance of hundreds of design solutions and configurations to ensure an energy-efficient solution for their designs. To shorten the time required for daylighting simulations, architects usually reduce the number of variables or parameters of the building and facade design. This practice usually results in the elimination of design variables that could contribute to an energy-optimized design configuration. Therefore, recent research has focused on incorporating machine learning algorithms that require the execution of only a relatively small subset of the simulations to predict the daylighting and energy performance of buildings. Although machine learning has been shown to be accurate, it remains time-consuming because of the simulations that must be executed to produce training and validation data. Furthermore, to save time, designers often decide to use a small simulation subset, which leads to a poorly trained machine learning model that produces inaccurate results. Therefore, this study introduces an automated framework that utilizes high performance computing (HPC) to execute the simulations necessary for the machine learning algorithm while saving time and effort. High performance computing facilitates the execution of thousands of tasks simultaneously for a time-efficient simulation process, thereby allowing designers to increase the size of the simulation subset. Pairing high performance computing with machine learning allows for accurate and nearly instantaneous building performance predictions.
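The core pattern described here, fanning many independent simulation tasks out across workers to build an ML training set quickly, can be sketched in miniature with a thread pool (an illustrative assumption; the author's framework and simulation engine are not shown in the abstract, and `simulate` below is a hypothetical stand-in):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: run many independent "simulations" concurrently, the same
// fan-out pattern an HPC scheduler applies across nodes.
public class SimulationFanOut {
    // Hypothetical stand-in for one daylighting simulation: a cheap
    // function of the design-variant index.
    static double simulate(int designVariant) {
        return Math.sin(designVariant) * 100.0;
    }

    // Submit all variants to a worker pool and collect results in
    // submission order, ready to be used as ML training data.
    static List<Double> runAll(int variants, int workers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        List<Future<Double>> futures = new ArrayList<>();
        for (int i = 0; i < variants; i++) {
            final int v = i;
            futures.add(pool.submit(() -> simulate(v)));
        }
        List<Double> results = new ArrayList<>();
        for (Future<Double> f : futures) {
            results.add(f.get()); // blocks until that task finishes
        }
        pool.shutdown();
        return results;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runAll(8, 4).size()); // 8
    }
}
```

Because each simulation is independent, the speedup scales with the number of workers, which is exactly why enlarging the training subset becomes affordable on an HPC system.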


2021 · Vol 13(62) (2) · pp. 705-714
Author(s): Arpad Kerestely

Efficient high-performance computing for machine learning has become a necessity in the past few years. Data is growing exponentially in domains like healthcare, government, and economics, driven by the spread of IoT, smartphones, and gadgets. This volume of data needs storage space that no traditional computing system can offer, and it must be fed to machine learning algorithms so that useful information can be extracted from it. The larger the dataset fed to a machine learning algorithm, the more precise the results will be, but the time to compute those results will also increase; hence the need for efficient high-performance computing in aid of faster and better machine learning algorithms. This paper aims to unveil how one benefits from the other, what research has achieved so far, and where it is heading.


2019 · Vol 2019 · pp. 1-19
Author(s): Pawel Czarnul, Jerzy Proficz, Adam Krzywaniak

The paper presents the state of the art of energy-aware high-performance computing (HPC), in particular the identification and classification of approaches by system and device types, optimization metrics, and energy/power control methods. System types include single devices, clusters, grids, and clouds, while device types include CPUs, GPUs, multiprocessors, and hybrid systems. Optimization goals include various combinations of metrics such as execution time, energy consumption, and temperature, with consideration of imposed power limits. Control methods include scheduling; DVFS/DFS/DCT; power capping with programmatic APIs such as Intel RAPL and NVIDIA NVML; application optimizations; and hybrid methods. We discuss tools and APIs for energy/power management as well as tools and environments for prediction and/or simulation of energy/power consumption in modern HPC systems. Finally, programming examples, i.e., applications and benchmarks used in particular works, are discussed. Based on our review, we identify a set of open areas and important up-to-date problems concerning methods and tools for modern HPC systems that allow energy-aware processing.
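As a concrete anchor for the RAPL-based power and energy measurement the survey discusses, the sketch below (an assumption, not code from any surveyed work) shows how interval energy is typically derived from a RAPL-style counter. On Linux the per-package counter is exposed in microjoules at `/sys/class/powercap/intel-rapl:0/energy_uj` and wraps at `max_energy_range_uj`, so a delta between two samples must handle wraparound:

```java
// Sketch: deriving interval energy from a monotonically increasing,
// wrapping RAPL-style microjoule counter.
public class RaplDelta {
    /** Energy consumed between two counter samples, in microjoules. */
    static long energyDeltaUj(long before, long after, long maxRangeUj) {
        if (after >= before) {
            return after - before;
        }
        // The counter wrapped around between the two samples.
        return (maxRangeUj - before) + after;
    }

    public static void main(String[] args) {
        long max = 262_143_328_850L; // illustrative max_energy_range_uj
        System.out.println(energyDeltaUj(1_000, 5_000, max));   // 4000
        System.out.println(energyDeltaUj(max - 100, 400, max)); // 500
    }
}
```

Dividing such a delta by the sampling interval yields average power, which is the quantity that power-capping mechanisms like RAPL enforce limits on.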

