High-Density Computing: Efficient Versus Conventional Design

Author(s):  
Dan Comperchio ◽  
Sameer Behere

Data centers are expensive to build and operate. Large data centers cost $9–13/W to build [1] and can consume forty to more than two hundred times the energy and resources of a typical building [2], [3]. Space and energy considerations therefore need to be accounted for when evaluating competing designs for high-performance computing (HPC) installations. This paper describes the results of an incremental cost and energy savings analysis, based on data from a real-world case study, that evaluates the impact of efficient resource planning and of applying a total cost of ownership (TCO) model to the analysis of IT equipment and systems. The analysis demonstrates the advantages of using the latest technologies and IT strategies when planning the growth of new HPC installations at the enterprise level. The data also indicate that an efficient design can significantly reduce the space, power, and cooling requirements of an HPC deployment while maintaining its performance and reliability criteria.
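
A minimal sketch of how such an incremental TCO comparison might be structured, not the authors' actual model; the build cost per watt, IT loads, PUE values, electricity price, and horizon below are illustrative assumptions only:

# Hedged sketch of an incremental TCO comparison between a conventional and an
# efficient HPC design. All figures are illustrative assumptions, not the paper's data.

def tco(build_cost_per_w, it_load_kw, pue, price_per_kwh, years):
    """Simplified total cost of ownership: capital cost plus energy cost over the horizon."""
    capital = build_cost_per_w * it_load_kw * 1000            # $/W times installed watts
    annual_energy_kwh = it_load_kw * pue * 8760               # facility energy per year
    return capital + annual_energy_kwh * price_per_kwh * years

# Hypothetical scenarios: the efficient design needs less IT capacity and runs at a lower PUE
conventional = tco(build_cost_per_w=11.0, it_load_kw=1000, pue=1.8, price_per_kwh=0.10, years=10)
efficient    = tco(build_cost_per_w=11.0, it_load_kw=700,  pue=1.3, price_per_kwh=0.10, years=10)

print(f"Incremental 10-year savings: ${conventional - efficient:,.0f}")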

2021 ◽  
Author(s):  
Norah Mohammed Z. Al-Dossari ◽  
Mohamed Haouari ◽  
Mohamed Kharbeche

Multiple resource planning is a crucial undertaking for most organizations. Apart from reducing operational complexity, it facilitates efficient allocation of resources, which reduces costs by minimizing the cost of tardiness and the cost of additional capacity. The current research investigates the multiple resource loading problem (MRLP). MRLPs are prevalent in today's organizational environments and are particularly critical for organizations that handle concurrent, time-intensive, multi-resource projects. Using data obtained from the Ministry of Administrative Development, Labor and Social Affairs (ADLSA), an MRLP is formulated. The problem uses data on staff, time, equipment, and finance to ensure efficient resource allocation among competing projects. In particular, the research proposes a novel model and solution approach for the MRLP, and computational experiments are performed on the model. The results show that the model performs well, even on larger instances, attesting to the effectiveness of the proposed MRLP formulation.
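
To make the trade-off between tardiness cost and extra-capacity cost concrete, here is a toy resource-loading MILP expressed with the PuLP library. It is not the authors' model; the project data, costs, single shared resource, and period structure are all assumed for illustration:

# Illustrative sketch of a tiny multiple resource loading problem (MRLP) as a MILP.
import pulp

projects = {"P1": {"demand": 30, "due": 2}, "P2": {"demand": 40, "due": 3}}
periods = [1, 2, 3, 4]
regular_capacity = 20          # hours of the shared resource per period (assumed)
extra_cost = 5.0               # cost per extra capacity hour (assumed)
tardy_cost = 8.0               # cost per hour of work loaded after a project's due period (assumed)

prob = pulp.LpProblem("toy_mrlp", pulp.LpMinimize)

# Work assigned to each project in each period, and extra capacity bought per period
work = {(p, t): pulp.LpVariable(f"work_{p}_{t}", lowBound=0) for p in projects for t in periods}
extra = {t: pulp.LpVariable(f"extra_{t}", lowBound=0) for t in periods}

# Objective: pay for extra capacity and for work scheduled past each due period
prob += (
    pulp.lpSum(extra_cost * extra[t] for t in periods)
    + pulp.lpSum(tardy_cost * work[p, t]
                 for p, info in projects.items()
                 for t in periods if t > info["due"])
)

# Each project's demand must be fully loaded; each period respects capacity
for p, info in projects.items():
    prob += pulp.lpSum(work[p, t] for t in periods) == info["demand"]
for t in periods:
    prob += pulp.lpSum(work[p, t] for p in projects) <= regular_capacity + extra[t]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("total cost:", pulp.value(prob.objective))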


2011 ◽  
Vol 2011.21 (0) ◽  
pp. 248-251
Author(s):  
Ari YOSHII ◽  
Yosuke UDAGAWA ◽  
Masahide YANAGI ◽  
Shisei WARAGAI ◽  
Keigo MATSUO ◽  
...  

Author(s):  
Low Tang Jung ◽  
Ahmed Abba Haruna

In the computing grid environment, job scheduling is fundamentally the process of allocating computing jobs to the available resources. As grid computing systems grow in size over time, an exponential increase in energy consumption is foreseen. As such, large data centers (DCs) are embarking on green computing initiatives to address the environmental impact of IT operations. The component within a computing system that consumes the most electricity and generates the most heat is the microprocessor, and the heat generated by these high-performance microprocessors translates into a large CO2 footprint. Therefore, job scheduling with thermal considerations (thermal-aware scheduling) for the microprocessors is important in DC grid operations. This chapter proposes a job scheduling approach for reducing electricity usage (green computing) in the DC grid. The approach is the outcome of R&D work based on the DC grid environment at Universiti Teknologi PETRONAS, Malaysia.
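
A minimal sketch of what a thermal-aware dispatch policy can look like, as a simplified stand-in for the chapter's approach rather than its actual algorithm; the node list, heat estimate, and temperature ceiling are assumptions:

THERMAL_LIMIT_C = 70.0   # assumed safe CPU temperature ceiling

nodes = [
    {"name": "node-a", "temp_c": 55.0, "queued_jobs": 2},
    {"name": "node-b", "temp_c": 68.0, "queued_jobs": 0},
    {"name": "node-c", "temp_c": 61.0, "queued_jobs": 1},
]

def pick_node(job_heat_estimate_c, nodes):
    """Pick the coolest eligible node; return None if every node would exceed the limit."""
    eligible = [n for n in nodes if n["temp_c"] + job_heat_estimate_c <= THERMAL_LIMIT_C]
    if not eligible:
        return None  # defer the job instead of pushing any node past its thermal limit
    return min(eligible, key=lambda n: (n["temp_c"], n["queued_jobs"]))

chosen = pick_node(job_heat_estimate_c=5.0, nodes=nodes)
print("dispatch to:", chosen["name"] if chosen else "defer job")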


2019 ◽  
Vol 11 (21) ◽  
pp. 6150
Author(s):  
Gigliola Ausiello ◽  
Luca Di Girolamo ◽  
Antonio Marano

This paper highlights the development of strategies, using a green approach, that can be adopted to manage interventions promoting energy efficiency. It focuses on the results of a case study carried out on an existing residential building located in Naples, Italy. The green methodology adopted in this study met the needs and requests of the building owner, who asked for natural materials. Starting from the climatic characteristics of the site, we assessed the possibility of maximizing achievable thermal energy savings while improving hygrometric behavior. The first step was to evaluate the aspects related to sunshine, thermal inputs, natural lighting, natural ventilation, and prevailing winds. Subsequently, the building envelope was redesigned with the aim of minimizing energy consumption by using natural materials. Such materials added value to the project by combining high performance with consideration of the residents' health. The objective was to identify strategies for the well-being of residents in both winter and summer, reducing energy consumption and installation management costs while increasing livability.
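
A back-of-the-envelope sketch of the kind of envelope calculation behind such a retrofit: the thermal transmittance (U-value) of a wall built up from natural materials. The layer build-up, thicknesses, and conductivities are generic textbook-style assumptions, not the study's data:

# (layer name, thickness in m, thermal conductivity in W/mK) - assumed values
layers = [
    ("lime plaster",          0.02, 0.80),
    ("existing tuff wall",    0.40, 0.55),
    ("wood-fibre insulation", 0.10, 0.040),
    ("lime render",           0.02, 0.80),
]
R_SI, R_SE = 0.13, 0.04   # standard internal/external surface resistances (m2K/W)

resistance = R_SI + R_SE + sum(t / k for _, t, k in layers)   # series thermal resistance
u_value = 1.0 / resistance
print(f"U = {u_value:.3f} W/m2K")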


2017 ◽  
Author(s):  
Jan Christian Kässens

Since the advent of next-generation sequencing (NGS) technology, the amount of data from whole-genome sequencing has been rising fast. In turn, the availability of these resources has opened up whole new research fields in molecular and cellular biology, producing even more data. The available computational power, on the other hand, is only increasing linearly. In recent years, though, special-purpose high-performance devices have become prevalent in scientific data centers, namely graphics processing units (GPUs) and, to a lesser extent, field-programmable gate arrays (FPGAs). Driven by the need for performance, developers started porting regular applications to GPU frameworks and FPGA configurations to exploit the special operations only these devices can perform in a timely manner. However, applications using both accelerator technologies are still rare. Major challenges in joint GPU/FPGA application development include the deep knowledge of the associated programming paradigms that is required and the efficient communication between the two types of devices. In this work, two algorithms from bioinformatics are implemented on a custom hybrid-parallel hardware architecture and a highly concurrent software platform. It is shown that such a solution is not only feasible to develop but also able to outperform implementations on similarly sized GPU or FPGA clusters in terms of both performance and energy consumption. Both algorithms analyze case/control data from genome-wide association studies to find interactions between two or three genes, using different methods. Especially in the three-gene case, the newly available computational power and method enable analyses of large data sets for the first time without occupying whole data centers for weeks. The success of the hybrid-parallel architecture proposal led to the development of a high-end array of FPGA/GPU accelerator pairs to provide even better runtimes and more possibilities.
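
The core computation being accelerated is an exhaustive scan over gene (SNP) combinations in case/control data. Below is a plain NumPy sketch of a pairwise scan; the genotype coding, the chi-square statistic on the joint 3x3x2 table, and the random data are simplified assumptions, not the thesis's actual methods:

import itertools
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_snps = 200, 6
genotypes = rng.integers(0, 3, size=(n_samples, n_snps))    # 0/1/2 genotype codes (assumed)
phenotype = rng.integers(0, 2, size=n_samples)               # 0 = control, 1 = case

def pair_chi2(g1, g2, y):
    """Chi-square of the 3x3x2 contingency table for one SNP pair vs. case/control status."""
    table = np.zeros((3, 3, 2))
    np.add.at(table, (g1, g2, y), 1)
    expected = (table.sum(axis=2, keepdims=True)
                * table.sum(axis=(0, 1), keepdims=True) / table.sum())
    mask = expected > 0
    return float(((table - expected) ** 2 / np.where(mask, expected, 1))[mask].sum())

scores = {(i, j): pair_chi2(genotypes[:, i], genotypes[:, j], phenotype)
          for i, j in itertools.combinations(range(n_snps), 2)}
best = max(scores, key=scores.get)
print("top-ranked SNP pair:", best, "score:", round(scores[best], 2))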


Author(s):  
Ricardo Rivera-Lopez ◽  
Mark Kimber

The thermal management of existing data centers is centered on forced convection using air as the transport fluid. A large portion of the energy required for a typical data center is used to maintain reasonable operating temperatures, and many have looked to liquid cooling as a promising route to increased energy efficiency. The current work is a case study of making this transition for a single computer board. The energy savings potential is quantified and the removal of heat via liquid cooling is characterized from the chip level to the environment. A thermal solution model is developed and validated through experimentation. The experiment consists of a rack-mounted computer board used to simulate a server, with cold plates attached at several key locations for cooling. Multiple measurements are made to determine the amount of heat removed and the power consumed in the process. The results show that liquid cooling presents an improved thermal solution for data centers with a large energy savings potential, which improves power usage effectiveness because power goes mostly to data processing rather than to server cooling.
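
A short sketch of the bookkeeping behind such an assessment: heat removed by a cold plate from coolant flow rate and temperature rise (Q = m_dot * c_p * dT), and the resulting power usage effectiveness. The flow rate, temperatures, and power figures are assumptions for illustration, not the study's measurements:

RHO_WATER = 997.0      # kg/m^3
CP_WATER = 4180.0      # J/(kg K)

def heat_removed_w(flow_lpm, t_in_c, t_out_c):
    """Heat picked up by the coolant, from volumetric flow and inlet/outlet temperatures."""
    m_dot = (flow_lpm / 60.0 / 1000.0) * RHO_WATER    # L/min -> kg/s
    return m_dot * CP_WATER * (t_out_c - t_in_c)

q = heat_removed_w(flow_lpm=0.5, t_in_c=25.0, t_out_c=32.0)

it_power_w = 250.0          # assumed board power
pump_and_chiller_w = 40.0   # assumed cooling overhead
pue = (it_power_w + pump_and_chiller_w) / it_power_w

print(f"heat removed: {q:.0f} W, PUE: {pue:.2f}")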


Author(s):  
Srinivas Yarlanki ◽  
Rajarshi Das ◽  
Hendrik Hamann ◽  
Vanessa Lopez ◽  
Andrew Stepanchuk

Energy consumption has become a critical issue for data centers, triggered by the rise in energy costs, volatility in the supply and demand of energy, and the widespread proliferation of power-hungry information technology (IT) equipment. Since nearly half the energy consumed in a data center (DC) goes toward cooling, much of the effort to minimize energy consumption in DCs has focused on improving the efficiency of cooling strategies by optimally provisioning cooling power to match the heat dissipation of the entire DC. However, at a more granular level within the DC, the large range of heat densities of today's IT equipment makes provisioning cooling power at the level of individual computer room air conditioning (CRAC) units much more challenging. In this work, we employ utility functions to present a principled and flexible method for determining the optimal settings of CRACs for joint management of power and temperature objectives at a more granular level within a DC. Such provisioning of cooling power to match locally generated heat requires knowledge of thermal zones, the regions of DC space cooled by specific CRACs. We show how thermal zones can be constructed for arbitrary CRAC settings using potential flow theory. As a case study, we apply our methodology to a 10,000 sq. ft. commercial DC using actual measured conditions and evaluate the usefulness of the method by quantifying possible energy savings in this DC.
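
A toy sketch of utility-driven CRAC provisioning: score candidate setpoint combinations with a utility that trades cooling power against thermal-zone temperature violations, then pick the best. The surrogate power and temperature models, zone loads, and weights are assumptions, not the paper's measured DC model:

import itertools

zones = {"crac1": {"heat_kw": 60.0}, "crac2": {"heat_kw": 35.0}}   # heat load per thermal zone (assumed)
setpoints_c = [18.0, 20.0, 22.0]                                    # candidate supply temperatures
T_LIMIT_C = 27.0                                                    # allowed rack inlet temperature

def zone_inlet_temp(supply_c, heat_kw):
    # crude linear surrogate: hotter zones see more temperature rise above the supply air
    return supply_c + 0.12 * heat_kw

def cooling_power_kw(supply_c, heat_kw):
    # crude surrogate: colder supply air costs more cooling power for the same heat load
    return heat_kw * (0.45 - 0.01 * (supply_c - 18.0))

def utility(assignment):
    power = sum(cooling_power_kw(sp, zones[z]["heat_kw"]) for z, sp in assignment.items())
    violation = sum(max(0.0, zone_inlet_temp(sp, zones[z]["heat_kw"]) - T_LIMIT_C)
                    for z, sp in assignment.items())
    return -power - 50.0 * violation          # heavy penalty on overheating a zone

best = max(
    (dict(zip(zones, combo)) for combo in itertools.product(setpoints_c, repeat=len(zones))),
    key=utility,
)
print("chosen setpoints:", best)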


Author(s):  
Niru Kumari ◽  
Rocky Shih ◽  
Alan McReynolds ◽  
Ratnesh Sharma ◽  
Tom Christian ◽  
...  

Airside economizers that introduce outside air directly into cold aisles or at the CRAC level have recently been considered as a way to reduce the overall energy needed to cool IT equipment. However, such designs limit the operational envelope of free cooling based on the required supply air temperature to the IT equipment. More studies are needed to optimize airside economizer layouts to increase operating time and hence energy savings. This paper presents a case study of different outside-air delivery configurations, including outside air introduced in the cold aisles, in the plenum close to the supply side of the CRAC units, at the return side of the CRAC units, and in the hot aisles. The temperature and flow fields are studied numerically and compared to each other. Mixing of the cooler outside air with the hot air is studied to determine the optimal local distribution of outside air in a non-homogeneous data center, so as to maximize natural cooling. The paper also quantifies the annual average performance of the outside-air infrastructure, including the effects of seasonal variations in ambient temperature.
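
A sketch of the annual-performance bookkeeping for an airside economizer: counting the hours per year the outside air is cool enough for each delivery configuration, given a synthetic seasonal temperature profile. The temperature model and per-configuration thresholds are assumptions only, not the paper's CFD results:

import math

def ambient_temp_c(hour):
    """Synthetic hourly ambient temperature with seasonal and diurnal swings (assumed climate)."""
    day = hour / 24.0
    seasonal = 15.0 + 10.0 * math.sin(2 * math.pi * (day - 110) / 365.0)
    diurnal = 5.0 * math.sin(2 * math.pi * (hour % 24 - 9) / 24.0)
    return seasonal + diurnal

# Assumed maximum usable outside-air temperature for each delivery point; delivering into the
# hot aisles or at the CRAC return can tolerate warmer air than delivering into the cold aisles.
configs = {"cold aisle": 22.0, "CRAC supply plenum": 24.0, "CRAC return": 30.0, "hot aisle": 33.0}

hours = range(8760)
for name, limit_c in configs.items():
    free_hours = sum(1 for h in hours if ambient_temp_c(h) <= limit_c)
    print(f"{name:>20}: {free_hours} free-cooling hours/year ({free_hours / 87.60:.0f}%)")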

