Thermodynamic Characterization of a Direct Water-Cooled Server Rack Running Synthetic and Real High Performance Computing Workloads

Author(s): Lynn Parnell, Garrison Vaughan, John Thompson, Daniel Duffy, Louis Capps, et al.

High performance computing server racks are being engineered to pack significantly more processing capability into the same computer room footprint year after year. Processor density within a single rack is becoming high enough that traditional, inefficient air-cooling of servers is inadequate to sustain HPC workloads. This paper describes experiments that characterize the performance of a direct water-cooled server rack in an operating HPC facility. Performance of the rack is reported for a range of cooling water inlet temperatures, flow rates, and workloads that include real applications as well as worst-case synthetic benchmarks. Power and temperature measurements of all processors and memory components in the rack were made while extended benchmark tests were conducted throughout the range of cooling variables allowed within an operational HPC facility. Synthetic benchmark results were compared with those obtained on a single server of the same design that had previously been characterized thermodynamically. Neither the real nor the synthetic benchmark performance was affected over the course of the experiments, varying by less than 0.13 percent. The change in rack power consumption was minimal over the entire excursion of coolant temperatures and flow rates. Establishing the characteristics of such a highly energy-efficient server rack in situ is critical to determining how the technology might be integrated into an existing heterogeneous, hybrid-cooled computing facility, i.e., a facility that includes some servers that are air-cooled as well as some that are direct water-cooled.
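As a rough companion to this abstract, the sketch below works through the first-law balance that underlies such a thermodynamic characterization, Q = ṁ·c_p·ΔT, estimating the heat a water loop removes at a given flow rate and temperature rise. The flow rate and temperatures are illustrative assumptions, not measurements from the paper.

```python
# Minimal sketch (not from the paper): first-law estimate of the heat
# a direct water-cooled rack rejects to its coolant loop.

RHO_WATER = 997.0   # kg/m^3, density of water near room temperature
CP_WATER = 4181.0   # J/(kg*K), specific heat of water

def heat_removed_kw(flow_lpm: float, t_in_c: float, t_out_c: float) -> float:
    """Heat carried away by the coolant, in kW: Q = m_dot * c_p * dT.

    flow_lpm -- volumetric flow rate in liters per minute
    t_in_c   -- coolant inlet temperature, deg C
    t_out_c  -- coolant outlet temperature, deg C
    """
    m_dot = RHO_WATER * (flow_lpm / 1000.0) / 60.0  # mass flow, kg/s
    return m_dot * CP_WATER * (t_out_c - t_in_c) / 1000.0

# Hypothetical operating point: 30 L/min with an 18 -> 28 deg C rise
# absorbs about 20.8 kW, on the order of a dense HPC rack's heat load.
print(f"{heat_removed_kw(30.0, 18.0, 28.0):.1f} kW")
```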

2018, Vol. 129 (4), pp. 1067-1077
Author(s): Sofy H. Weisenberg, Stephanie C. TerMaath, Charlotte N. Barbier, Judith C. Hill, James A. Killeffer

OBJECTIVE: Cerebrospinal fluid (CSF) shunts are the primary treatment for patients suffering from hydrocephalus. While proven effective in symptom relief, these shunt systems are plagued by high failure rates and often require repeated revision surgeries to replace malfunctioning components. One of the leading causes of CSF shunt failure is obstruction of the ventricular catheter by aggregations of cells, proteins, blood clots, or fronds of choroid plexus that occlude the catheter’s small inlet holes or even the full internal catheter lumen. Such obstructions can disrupt CSF diversion out of the ventricular system or impede it entirely. Previous studies have suggested that altering the catheter’s fluid dynamics may help to reduce the likelihood of complete ventricular catheter failure caused by obstruction. However, a systematic correlation between a ventricular catheter’s design parameters and its performance, specifically its likelihood of becoming occluded, has not yet been established. Therefore, an automated, open-source computational fluid dynamics (CFD) simulation framework was developed for use in the medical community to determine optimized ventricular catheter designs and to rapidly explore parameter influence for a given flow objective.

METHODS: The computational framework was developed by coupling a 3D CFD solver with an iterative optimization algorithm and was implemented in a high-performance computing environment. The capabilities of the framework were demonstrated by computing an optimized ventricular catheter design that provides uniform flow rates through the catheter’s inlet holes, a common design objective in the literature. The baseline computational model was validated using 3D nuclear imaging to provide flow velocities at the inlet holes and through the catheter.

RESULTS: The optimized catheter design achieved through use of the automated simulation framework improved significantly on previous attempts to reach a uniform inlet flow rate distribution using the standard catheter hole configuration as a baseline. While the standard ventricular catheter design, featuring uniform inlet hole diameters and hole spacing, has a standard deviation of 14.27% for the inlet flow rates, the optimized design has a standard deviation of 0.30%.

CONCLUSIONS: This customizable framework, paired with high-performance computing, provides a rapid method of design testing to solve complex flow problems. While a relatively simplified ventricular catheter model was used to demonstrate the framework, the computational approach is applicable to any baseline catheter model, and it is easily adapted to optimize catheters for the unique needs of different patients as well as for other fluid-based medical devices.
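The published framework couples a production 3D CFD solver with an iterative optimizer; as a hedged illustration of that coupling pattern only, the sketch below wraps scipy's Nelder-Mead optimizer around a toy analytic surrogate and minimizes the relative standard deviation of the per-hole flow rates, the uniformity metric quoted above. The hole count, positional weights, and surrogate model are illustrative assumptions, not parts of the published framework.

```python
# Minimal sketch of the optimizer/solver coupling, with a crude analytic
# surrogate standing in for the 3D CFD solver so the loop is runnable.
# Hole count, positional weights, and the surrogate are assumptions.

import numpy as np
from scipy.optimize import minimize

N_HOLES = 8
# Assumed positional weights: holes nearer the catheter's drainage end
# tend to draw disproportionately more flow.
WEIGHTS = np.linspace(2.0, 1.0, N_HOLES)

def inlet_flow_rates(diameters: np.ndarray) -> np.ndarray:
    """Surrogate 'solver': per-hole flow ~ d^4 (Poiseuille-like) times a
    positional weight, normalized to a fixed total catheter flow."""
    q = WEIGHTS * diameters ** 4
    return q / q.sum()

def objective(diameters: np.ndarray) -> float:
    """Relative standard deviation (%) of the inlet flow rates, the
    uniformity metric quoted in the abstract."""
    q = inlet_flow_rates(diameters)
    return 100.0 * q.std() / q.mean()

result = minimize(objective, x0=np.ones(N_HOLES), method="Nelder-Mead",
                  options={"xatol": 1e-9, "fatol": 1e-12, "maxiter": 50000})

print(f"uniform-diameter design: {objective(np.ones(N_HOLES)):.2f}% std")
print(f"optimized design:        {result.fun:.2f}% std")
print("relative hole diameters:", np.round(result.x / result.x.max(), 3))
```

Replacing the surrogate with a call that meshes and solves the real catheter geometry recovers the structure the abstract describes, with the optimizer treating the CFD run as a black-box objective.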


Author(s): Suchismita Sarangi, Will A. Kuhn, Scott Rider, Claude Wright, Shankar Krishnan

Efficient and compact cooling technologies play a pivotal role in determining the performance of high performance computing devices running highly parallel workloads in supercomputers. The present work evaluates different cooling technologies and elucidates their impact on the power, performance, and thermal management of Intel® Xeon Phi™ coprocessors. The scope of the study is to demonstrate enhanced cooling capabilities beyond today’s fan-driven air-cooling for use in high performance computing (HPC) technology, thereby improving the overall performance per watt in datacenters. The cooling technologies evaluated include air-cooling, liquid-cooling, and two-phase immersion-cooling. Air-cooling is evaluated by providing controlled airflow to a cluster of eight 300 W Xeon Phi coprocessors (7120P). For liquid-cooling, two different cold plate technologies are evaluated, namely formed-tube cold plates and microchannel-based cold plates. Liquid-cooling, with water as the working fluid, is evaluated on a single Xeon Phi coprocessor, using inlet conditions in accordance with ASHRAE W2 and W3 class liquid-cooled datacenter baselines. For immersion-cooling, a cluster of multiple Xeon Phi coprocessors is evaluated with three different types of integrated heat spreaders (IHS), namely a bare IHS, an IHS with a boiling enhancement coating (BEC), and an IHS with BEC-coated pin-fins. The entire cluster is immersed in a pool of Novec 649 (a 3M fluid with a boiling point of 49 °C at 1 atm), with polycarbonate spacers used to reduce the volume of fluid required and achieve a target fluid-to-power density of ∼3 L/kW. Flow visualization is performed to provide further insight into the boiling behavior during the immersion-cooling process. Performance per watt of the Xeon Phi coprocessors is characterized as a function of the cooling technology using several HPC workload benchmarks run at constant frequency, including the Intel proprietary Power Thermal Utility (PTU) and the industry-standard HPC benchmarks LINPACK, DGEMM, SGEMM, and STREAM. The major parameters measured by sensors on the coprocessor include total power to the coprocessor, CPU temperature, and memory temperature, while the calculated outputs of interest include performance per watt and equivalent thermal resistance. As expected, both liquid- and immersion-cooling show improved performance per watt and lower CPU temperatures compared to air-cooling. In addition to elucidating the performance-per-watt improvement, this work reports on the effect of the cooling technologies on the total power consumed by the Xeon Phi card as a function of coolant inlet temperature. Further, the paper discusses the form-factor advantages of liquid- and immersion-cooling and compares the technologies on a common platform. Finally, the paper concludes by discussing datacenter optimization for cooling in the context of leakage power control for Xeon Phi coprocessors.
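As a hedged illustration of the two derived metrics named above, the sketch below computes performance per watt and an equivalent (case-to-inlet) thermal resistance from the kind of telemetry the on-card sensors expose. The field names and sample values are hypothetical, not data from the study.

```python
# Minimal sketch: the two derived cooling metrics from the abstract,
# computed from illustrative (not measured) telemetry samples.

from dataclasses import dataclass

@dataclass
class Sample:
    gflops: float        # benchmark throughput (e.g., LINPACK GFLOP/s)
    card_power_w: float  # total coprocessor power, watts
    t_cpu_c: float       # CPU (die) temperature, deg C
    t_inlet_c: float     # coolant or air inlet temperature, deg C

def perf_per_watt(s: Sample) -> float:
    """Throughput delivered per watt of card power (GFLOP/s per W)."""
    return s.gflops / s.card_power_w

def thermal_resistance(s: Sample) -> float:
    """Equivalent thermal resistance, K/W: temperature rise over the
    inlet divided by the power being dissipated."""
    return (s.t_cpu_c - s.t_inlet_c) / s.card_power_w

# Hypothetical comparison: the same workload under air vs. liquid
# cooling; lower die temperature also tends to cut leakage power.
air = Sample(gflops=1000.0, card_power_w=300.0, t_cpu_c=85.0, t_inlet_c=25.0)
h2o = Sample(gflops=1000.0, card_power_w=285.0, t_cpu_c=55.0, t_inlet_c=25.0)

for name, s in [("air", air), ("liquid", h2o)]:
    print(f"{name}: {perf_per_watt(s):.2f} GFLOP/s/W, "
          f"R_th = {thermal_resistance(s):.3f} K/W")
```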


MRS Bulletin, 1997, Vol. 22 (10), pp. 5-6
Author(s): Horst D. Simon

Recent events in the high-performance computing industry have raised concern among scientists and the general public about a crisis, or a lack of leadership, in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s (Thinking Machines and Kendall Square Research) were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later.

During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten if they were to run on the next-generation, highly parallel architecture. Scientists who are not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise.

However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.


2001
Author(s): Donald J. Fabozzi, Barney II, Fugler Blaise, Koligman Joe, Jackett Mike, et al.
