Evaluation of the Impact of Direct Warm-Water Cooling of the HPC Servers on the Data Center Ecosystem

Author(s):  
Radosław Januszewski ◽  
Norbert Meyer ◽  
Joanna Nowicka


Author(s):  
Manasa Sahini ◽  
Chinmay Kshirsagar ◽  
Patrick McGinn ◽  
Dereje Agonafer

As global demand for data centers grows, so do the size of and load placed on data centers, leading to constraints on the power and space available to the operator. Cooling accounts for a major part of data center energy usage, and liquid cooling technology has emerged as a viable way to optimize the energy consumed per unit of performance. In this rack-level evaluation, 2OU (Open U) hybrid (liquid + air) cooled web servers are tested to observe the effects of warm-water cooling on server component temperatures, IT power, and cooling power. The study discusses the importance of variable-speed pumping in a centralized coolant configuration. The cooling setup includes a mini rack capable of housing up to eleven hybrid cooled web servers and two heat exchangers that reject the heat dissipated by the servers to the environment (the test rig data center room). The centralized configuration has two redundant pumps placed in series with a heat exchanger at the rack. Each server is equipped with two passive (i.e., no active pump) cold plates for cooling the CPUs, while the rest of the components are air cooled. A synthetic stress load is generated on each server using stress-testing tools. The pumps in the servers are powered separately using an external power supply, and the pump speed is proportional to the voltage across the armature [1]. The pump rpm is recorded for input voltages ranging from 11 V to 17 V. The servers are tested at inlet temperatures ranging from 25°C to 45°C, which falls within the ASHRAE liquid-cooling envelope W4 [2]. Variable pumping is achieved by applying different input voltages at the respective inlet temperatures.
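The voltage-proportional pump speed and the tested temperature envelope described above can be sketched as follows. This is a minimal illustration only: the proportionality constant `k_rpm_per_volt` and the function names are assumptions for the sketch, not measured values or code from the study.

```python
# Sketch of the study's test conditions. The pump speed is assumed strictly
# proportional to armature voltage, as the abstract states; the constant
# k_rpm_per_volt is an illustrative placeholder, not a measured value.

def pump_rpm(voltage_v, k_rpm_per_volt=300.0):
    """Estimate pump speed (rpm) for a given armature voltage (V)."""
    if not 11.0 <= voltage_v <= 17.0:
        raise ValueError("the study swept input voltages from 11 V to 17 V")
    return k_rpm_per_volt * voltage_v

def in_tested_envelope(inlet_c):
    """Check a coolant inlet temperature against the tested 25-45 degC range
    (which falls within the ASHRAE W4 liquid-cooling envelope)."""
    return 25.0 <= inlet_c <= 45.0

# Sweep the tested voltage range at a mid-range inlet temperature.
for v in (11, 13, 15, 17):
    assert in_tested_envelope(35.0)
    print(v, pump_rpm(v))
```

With the placeholder constant, the sweep simply scales linearly from 11 V to 17 V; in the experiment, the rpm at each voltage was recorded rather than modeled.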


Author(s):  
Apeksha D. Patil ◽  
Dhiraj B. Patil

Karaveera (Cerebra thevetia Linn.) is listed among the Upavisha Dravya (semi-poisonous drugs) in the classical Ayurvedic pharmacopoeias, and it is held that Shodhana (purification procedures) of the mool (root) should be carried out before its internal administration. Ayurveda describes several Shodhana methods; in this study Godugdha (cow's milk) was used as the media. The impact of Shodhana was evaluated by a physico-analytical study, which clearly demonstrates the physico-analytical changes occurring during Shodhana. Ashuddha (unpurified) Karaveera was placed on a clean white cloth and tied into a Pottali. The Pottali was suspended from the middle of a wooden rod, dipped in Godugdha in a stainless steel vessel, and subjected to mild heat in a Dolayantra. The Shuddha (purified) Karaveera thus obtained was washed with lukewarm water and dried. The toxin contained in Ashuddha Karaveera was removed by the Shodhana process: foreign matter and loss on drying were lower in Shuddha Karaveera, while, owing to Shodhana with Godugdha, total ash and acid-insoluble ash were higher than in Ashuddha Karaveera.


Author(s):  
Jiawei Huang ◽  
Shiqi Wang ◽  
Shuping Li ◽  
Shaojun Zou ◽  
Jinbin Hu ◽  
...  

Modern data center networks typically adopt multi-rooted tree topologies such as leaf-spine and fat-tree to provide high bisection bandwidth, and load balancing is critical to achieving low latency and high throughput. Although per-packet schemes such as Random Packet Spraying (RPS) can achieve high network utilization and near-optimal tail latency in symmetric topologies, they are prone to causing significant packet reordering, which degrades network performance. Some coding-based schemes have been proposed to alleviate the problems of packet reordering and loss; unfortunately, these schemes ignore the traffic characteristics of data center networks and cannot achieve good network performance. In this paper, we propose Heterogeneous Traffic-aware Partition Coding (HTPC) to eliminate the impact of packet reordering and improve the performance of both short and long flows. HTPC smoothly adjusts the number of redundant packets based on multi-path congestion information and traffic characteristics, so that the tail probability of short flows and the timeout probability of long flows are both reduced. Through a series of large-scale NS2 simulations, we demonstrate that HTPC reduces average flow completion time by up to 60% compared with state-of-the-art mechanisms.
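The core idea of traffic-aware redundancy, scaling the number of coded packets with observed path loss and with flow size, can be illustrated with a toy policy. The thresholds and the formula below are assumptions for illustration only, not the HTPC algorithm from the paper.

```python
# Toy illustration of traffic-aware redundancy: short (latency-sensitive)
# flows get extra coded packets to cut tail latency, while long flows get
# only enough to cover the expected loss, limiting bandwidth overhead.
# The threshold and formula are invented for this sketch.
import math

def redundant_packets(data_packets, avg_path_loss, short_flow_threshold=10):
    """Return how many redundant packets to add to a block of data packets."""
    expected_losses = data_packets * avg_path_loss
    if data_packets <= short_flow_threshold:
        # Generous cover for short flows: expected losses plus one spare.
        return math.ceil(expected_losses) + 1
    # Long flows: cover expected losses only.
    return math.ceil(expected_losses)

print(redundant_packets(8, 0.01))     # short flow
print(redundant_packets(1000, 0.01))  # long flow
```

Under this toy policy an 8-packet flow at 1% loss gets two redundant packets, while a 1000-packet flow gets ten, mirroring the asymmetry the paper exploits between short-flow tails and long-flow timeouts.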


Author(s):  
Uschas Chowdhury ◽  
Manasa Sahini ◽  
Ashwin Siddarth ◽  
Dereje Agonafer ◽  
Steve Branton

Modern data centers are operated at high power densities, and their operation, maintenance, and cooling account for almost 2 percent (70 billion kilowatt-hours) of the total energy consumption in the US. IT components and the cooling system occupy the major portion of this energy consumption. Although data centers are designed to perform efficiently, cooling the high-density components is still a challenge, so alternative methods of improving cooling efficiency have become the drive to reduce cooling cost. Because liquids offer higher specific heat capacity, density, and thermal conductivity than air, hybrid cooling can offer the advantage of liquid cooling for the high-heat-generating components in traditional air-cooled servers. In this experiment, a 1U server is equipped with cold plates to cool the CPUs while the rest of the components are cooled by fans. Predictive fan and pump failure analyses are performed, which also helps to explore options for redundancy and to reduce cooling cost by improving cooling efficiency; redundancy requires knowledge of planned and unplanned system failures. As the main heat-generating components are liquid cooled, warm-water cooling can be employed to observe the effects of raised inlet conditions in a hybrid cooled server under failure scenarios. The ASHRAE liquid-cooling guidance class W4 is chosen for the experiment, giving an operating range of 25°C to 45°C. The experiments are conducted separately for the pump and fan failure scenarios. Computational loads of idle, 10%, 30%, 50%, 70% and 98% are applied while powering only one pump, and the miniature dry cooler fans are controlled externally to maintain a constant coolant inlet temperature. As the remaining components such as the DIMMs and PCH are air cooled, maximum memory utilization is applied while reducing the number of fans in each case for the fan failure scenario. The component temperatures and power consumption are recorded in each case for performance analysis.
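The experiment matrix described above, computational load steps crossed with progressively fewer active fans, can be enumerated as follows. The total fan count and the minimum number of surviving fans are illustrative assumptions; the abstract does not give these numbers.

```python
# Hypothetical sketch of the fan-failure test matrix: each load level is run
# while the number of active fans is stepped down one at a time. The fan
# counts (8 total, 2 minimum) are assumptions for illustration.
from itertools import product

LOAD_LEVELS = ["idle", "10%", "30%", "50%", "70%", "98%"]

def fan_failure_matrix(total_fans=8, min_fans=2):
    """Enumerate (load, active_fans) test cases."""
    fan_counts = range(total_fans, min_fans - 1, -1)  # 8, 7, ..., 2
    return list(product(LOAD_LEVELS, fan_counts))

cases = fan_failure_matrix()
print(len(cases))  # 6 load levels x 7 fan counts = 42 cases
```

Enumerating the matrix up front makes it easy to log component temperatures and power against a fixed case identifier, as the recorded-per-case analysis in the study requires.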


2014 ◽  
Vol 95 (12) ◽  
pp. 1835-1848 ◽  
Author(s):  
Michael F. Squires ◽  
Jay H. Lawrimore ◽  
Richard R. Heim ◽  
David A. Robinson ◽  
Mathieu R. Gerbush ◽  
...  

This paper describes a new snowfall index that quantifies the impact of snowstorms within six climate regions in the United States. The regional snowfall index (RSI) is based on the spatial extent of snowfall accumulation, the amount of snowfall, and the juxtaposition of these elements with population. Including population information provides a measure of the societal susceptibility for each region. The RSI is an evolution of the Northeast snowfall impact scale (NESIS), which NOAA's National Climatic Data Center began producing operationally in 2006. While NESIS was developed for storms that had a major impact in the Northeast, it includes all snowfall during the lifetime of a storm across the United States and as such can be thought of as a quasi-national index that is calibrated to Northeast snowstorms. By contrast, the RSI is a regional index calibrated to specific regions using only the snow that falls within each region. This paper describes the methodology used to compute the RSI, which requires region-specific parameters and thresholds, and its application within six climate regions in the eastern two-thirds of the nation. The process used to select the region-specific parameters and thresholds is explained. The new index has been calculated for over 580 snowstorms that occurred between 1900 and 2013, providing a century-scale historical perspective for these snowstorms. The RSI is computed for category 1 or greater storms in near–real time, usually a day after the storm has ended.
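The three ingredients of the RSI, area affected, snowfall amount, and exposed population, can be combined in a simple normalized score to show the shape of such an index. The weights and reference values below are invented for illustration; the operational RSI uses region-specific parameters and thresholds calibrated by NOAA, as the paper describes.

```python
# Toy regional-impact score combining the three RSI ingredients. Each factor
# is normalized by a regional reference value and the results are summed.
# All reference values are illustrative placeholders, not NOAA parameters.

def regional_impact_score(area_km2, mean_snowfall_cm, population_exposed,
                          area_ref=10000.0, snow_ref=15.0, pop_ref=1e6):
    """Sum of area, snowfall, and population factors, each scaled to its
    regional reference so regions with different climatologies compare."""
    return (area_km2 / area_ref
            + mean_snowfall_cm / snow_ref
            + population_exposed / pop_ref)

# A storm covering twice the reference area, depth, and population.
print(round(regional_impact_score(20000, 30.0, 2_000_000), 2))  # 6.0
```

Normalizing per region is the key design choice the paper makes relative to NESIS: the same physical storm scores differently depending on how unusual it is for the region it hits.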


Facilities ◽  
2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Patrick T.I. Lam ◽  
Daniel Lai ◽  
Chi-Kin Leung ◽  
Wenjing Yang

Purpose – As smart cities flourish amid rapid urbanization and the development of information and communication technology, the demand for building more and more data centers is rising. This paper examines the principal issues and considerations of data center facilities from the cost and benefit dimensions, with the aim of illustrating approaches for maximizing net benefits while remaining "green."

Design/methodology/approach – A comprehensive literature review informs the costs and benefits of data center facilities, and a case study of a developer in Hong Kong demonstrates the significance of real estate costs.

Findings – Major corporations, establishments and governments need data centers as mission-critical facilities enabling countless electronic transactions to take place at any minute of the day. Their functional importance ranges from health, transport and payment all the way to entertainment activities. Some enterprises own them, whilst others use data center services on a co-location basis, in which case data centers are regarded as an investment asset. Real estate costs affect their success to a great extent, as in the case of a metropolis where land cost forms a substantial part of the overall development cost of a data center.

Research limitations/implications – As the financial information of data center projects is highly sensitive owing to the competitive status of the industry, a full set of numerical data is not available. Instead, the principles for a typical framework are established.

Originality/value – Data centers are very energy intensive, and their construction is usually fast tracked and costly, not to mention the high-value equipment housed therein. Their site locations need careful selection owing to stability and security concerns. As an essential business continuity tool, the return on investment is a complex consideration, but the potential loss caused by any disruption would certainly be huge. The life cycle cost and benefit considerations are revealed for this type of mission-critical facility. Externalities are expounded, with emphasis on sustainability issues. The impact of land shortage on data center development is also demonstrated through the case of Hong Kong.


Energy ◽  
2014 ◽  
Vol 78 ◽  
pp. 384-396 ◽  
Author(s):  
Min-Hwi Kim ◽  
Sang-Woo Ham ◽  
Jun-Seok Park ◽  
Jae-Weon Jeong

2019 ◽  
Vol 31 (15) ◽  
pp. e5156 ◽  
Author(s):  
Demis Gomes ◽  
Guto Leoni Santos ◽  
Daniel Rosendo ◽  
Glauco Gonçalves ◽  
Andre Moreira ◽  
...  

Author(s):  
Dustin W. Demetriou ◽  
Vinod Kamath ◽  
Howard Mahaney

Generation-to-generation IT performance and density demands continue to drive innovation in data center cooling technologies. For many applications, traditional chilled-air cooling can no longer deliver cooling efficiently. Water cooling has been used in data centers for more than 50 years to improve heat dissipation, boost performance, and increase efficiency. While water cooling undoubtedly carries a higher initial capital cost, it can be very cost effective when the true lifecycle cost of a water cooled data center is considered. This study addresses how one should evaluate the true total cost of ownership of water cooled data centers by considering the combined capital and operational costs of both the IT systems and the data center facility. It compares several metrics, including return on investment, for three cooling technologies: traditional air cooling, rack-level cooling using rear door heat exchangers, and direct water cooling via cold plates. The results highlight several important variables, namely IT power, data center location, site electric utility cost, and construction costs, and how each of these influences the total cost of ownership of water cooling. The study further examines implementing water cooling as part of a new data center construction project versus a retrofit or upgrade of an existing data center facility.
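The lifecycle-cost framing above, higher capital cost traded against lower operating cost, reduces to a simple comparison once capital and annual operating costs are estimated. All dollar figures below are illustrative placeholders, not results from the study, and the undiscounted sum is a deliberate simplification of a full TCO model.

```python
# Minimal sketch of the lifecycle comparison: total cost of ownership as
# capital cost plus yearly operating cost over the facility lifetime.
# All numbers are illustrative placeholders, not figures from the study.

def total_cost_of_ownership(capex, annual_opex, years=10):
    """Undiscounted TCO over the facility lifetime."""
    return capex + annual_opex * years

# Hypothetical comparison: air cooling is cheap to build but costly to run;
# direct water cooling costs more up front but less each year.
air        = total_cost_of_ownership(capex=1.0e6, annual_opex=4.0e5)
rear_door  = total_cost_of_ownership(capex=1.3e6, annual_opex=3.0e5)
cold_plate = total_cost_of_ownership(capex=1.6e6, annual_opex=2.0e5)
print(air, rear_door, cold_plate)
```

With these placeholder inputs, the crossover the study describes appears: the technology with the highest capital cost has the lowest ten-year total, and the break-even horizon shifts with utility cost, construction cost, and IT power, the variables the study identifies.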


1966 ◽  
Vol 10 ◽  
pp. 684-687
Author(s):  
R. I. Keen
