High-Performance Multi-Rail Erasure Coding Library over Modern Data Center Architectures

Author(s): Haiyang Shi, Xiaoyi Lu, Dipti Shankar, Dhabaleswar K. (DK) Panda
2015, Vol 2015, pp. 1-8
Author(s): Bin Zhou, ShuDao Zhang, Ying Zhang, JiaHao Tan

To save energy and reduce total cost of ownership, green storage has become a first priority for data centers. Detecting and deleting redundant data are key to reducing CPU energy consumption, and a high-performance, stable chunking strategy provides the groundwork for detecting that redundancy. Existing chunking algorithms degrade system performance significantly when confronted with big data and waste considerable energy. This paper analyzes and discusses the factors affecting chunking performance and implements a new fingerprint signature calculation. Furthermore, it puts forward a Bit String Content Aware Chunking Strategy (BCCS), which reduces the cost of signature computation during chunking to improve system performance and cut the energy consumption of the cloud storage data center. The advantages of the chunking strategy are verified on the test scenarios and test data presented in the paper.
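The core of any such strategy is content-defined chunking: chunk boundaries are derived from a rolling fingerprint of the data rather than from fixed offsets, so duplicate regions produce identical chunks wherever they appear in a stream. Below is a minimal Python sketch of this general technique; the window size, boundary mask, multiplier, and minimum chunk size are illustrative assumptions, not the paper's exact bit-string signature scheme.

```python
import hashlib

# Minimal content-defined chunking sketch. The rolling-hash parameters below
# (window, mask, multiplier, minimum chunk size) are illustrative assumptions,
# not the exact bit-string scheme of BCCS.

WINDOW = 48           # sliding fingerprint window in bytes (assumed)
MIN_CHUNK = 2048      # lower bound on chunk size to avoid tiny chunks (assumed)
MASK = (1 << 13) - 1  # 13 boundary bits -> roughly 8 KiB average chunks (assumed)
PRIME = 31            # polynomial rolling-hash multiplier (assumed)
MOD = 1 << 32

def chunk(data: bytes):
    """Yield (offset, length) pairs for content-defined chunks of data."""
    start = 0                              # offset where the current chunk began
    fp = 0                                 # rolling fingerprint over the window
    pow_out = pow(PRIME, WINDOW - 1, MOD)  # weight of the byte leaving the window
    for i, b in enumerate(data):
        if i - start >= WINDOW:
            fp = (fp - data[i - WINDOW] * pow_out) % MOD  # drop outgoing byte
        fp = (fp * PRIME + b) % MOD                       # shift in incoming byte
        # A boundary is declared where the fingerprint's low bits match the
        # pattern, so identical content yields identical boundaries no matter
        # where it sits in the stream.
        if i + 1 - start >= MIN_CHUNK and (fp & MASK) == MASK:
            yield (start, i + 1 - start)
            start, fp = i + 1, 0
    if start < len(data):
        yield (start, len(data) - start)   # trailing partial chunk

def duplicate_bytes(data: bytes) -> int:
    """Count bytes in chunks whose signature has already been seen."""
    seen, dup = set(), 0
    for off, length in chunk(data):
        digest = hashlib.sha256(data[off:off + length]).digest()
        if digest in seen:
            dup += length
        else:
            seen.add(digest)
    return dup
```

A deduplicating store would hash each chunk (as duplicate_bytes does with SHA-256) and skip storing chunks whose signature has already been seen; cheapening this signature computation is where the CPU and energy savings come from.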


Author(s):  
Sheng Kang ◽  
Guofeng Chen ◽  
Chun Wang ◽  
Ruiquan Ding ◽  
Jiajun Zhang ◽  
...  

With the advent of big data and cloud computing solutions, enterprise demand for servers is increasing, with especially high growth for Intel-based x86 server platforms. Today's data centers are in constant pursuit of high-performance, high-availability computing coupled with low power consumption, low heat generation, and the ability to manage all of this through advanced telemetry data gathering. This paper showcases one such solution: an updated rack and server architecture that promises these improvements. Managing server and data center power consumption and cooling more completely is critical to controlling data center costs and reducing PUE.

Traditional Intel-based 1U and 2U form factor servers have existed in the data center for decades. These general-purpose x86 server designs from the major OEMs are, for all practical purposes, very similar in their power consumption and thermal output, and their power supplies and thermal designs have historically not been optimized for high efficiency. IT managers also need more information about servers in order to optimize data center cooling and power use. An improved server/rack design is therefore needed, one that takes advantage of more efficient power supplies or PDUs and cools server compute resources more efficiently than traditional internal server fans. Corporations constantly pursue such improvements in efficiency to gain a competitive advantage.

One way to optimize power consumption and improve cooling is a complete redesign of the traditional server rack: extracting the internal server power supplies and server fans and centralizing them within the rack. This design achieves an entirely new low-power target by utilizing centralized, high-efficiency PDUs that power all servers within the rack. Cooling is improved by utilizing large, efficient rack-level fans to supply airflow to all servers, and by opening up the server design to allow greater airflow across server components. The centralized power supply breaks through traditional server power limits: rack-level PDUs can be adjusted to a more optimal efficiency point, and combining online and offline modes within a single power supply, with cold backup, allows the data center to achieve optimal power efficiency. In addition, unifying the mechanical structure and thermal definitions within the rack solution for server cooling and PSU information allows IT to collect all server power and thermal information centrally, for improved ease of analysis and processing.
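As a rough illustration of the efficiency argument, the sketch below compares the wall power of a rack with distributed per-server PSUs against one with a centralized high-efficiency PDU, and recalls how PUE is defined. All load, overhead, and efficiency figures are assumptions for illustration, not measurements from this paper.

```python
# Illustrative rack power comparison: per-server PSUs vs. a centralized,
# high-efficiency rack PDU. All figures are assumptions for this sketch.

SERVERS_PER_RACK = 40
IT_LOAD_PER_SERVER_W = 400.0  # useful IT load per server (assumed)

def rack_wall_power(conversion_efficiency: float) -> float:
    """Wall power the rack draws for a given power-conversion efficiency."""
    it_load_w = SERVERS_PER_RACK * IT_LOAD_PER_SERVER_W
    return it_load_w / conversion_efficiency

legacy_w = rack_wall_power(0.85)   # typical distributed 1U/2U PSUs (assumed)
central_w = rack_wall_power(0.94)  # shared rack PDU near its sweet spot (assumed)

print(f"distributed PSUs: {legacy_w:.0f} W at the wall")
print(f"centralized PDU:  {central_w:.0f} W at the wall")
print(f"savings:          {legacy_w - central_w:.0f} W per rack "
      f"({(1 - central_w / legacy_w) * 100:.1f}%)")

# PUE = total facility power / IT equipment power. With an assumed fixed
# cooling/distribution overhead, lower conversion losses shrink the numerator
# for the same useful IT load.
def pue(total_facility_w: float, it_equipment_w: float) -> float:
    return total_facility_w / it_equipment_w

OVERHEAD_W = 3000.0  # facility cooling/distribution overhead (assumed)
it_w = SERVERS_PER_RACK * IT_LOAD_PER_SERVER_W
print(f"PUE (distributed): {pue(legacy_w + OVERHEAD_W, it_w):.2f}")
print(f"PUE (centralized): {pue(central_w + OVERHEAD_W, it_w):.2f}")
```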


2020, Vol MA2020-02 (24), pp. 1720-1720
Author(s): Pengfei Cai, Dong Pan

Author(s): Shuai Shao, Tianyi Gao, Huawei Yang, Jie Zhao, Jiajun Zhang

Abstract: Along with advancements in microelectronics packaging, the power density of processor units has steadily increased over time. Data center servers equipped for high performance computing (HPC) often use multiple central processing units (CPUs) and graphics processing units (GPUs), resulting in power densities exceeding 1 kW per U. Many data center organizations are evaluating single-phase immersion technology as a potential energy- and resource-saving cooling option. In this work, immersion cooling was studied at a power level of 2.7 kW/U with a 5U-height immersion cooling tank. Heat generated by a simulated GPU server was transferred to the secondary-loop coolant and then exchanged with the primary-loop facility coolant through a heat exchanger. The chiller supply and return temperatures and the flow rate were controlled for the primary loop. The simulated GPU server chassis was designed to provide thermal power equivalent to a high-power-density server: eight simulated power heaters, each the size of a GPU chipset, were assembled on a 4U server chassis in locations comparable to those in real IT equipment, and the simulated chassis could support up to 2700 W. Three investigations of this immersion cooling system were performed through comprehensive testing. The first identifies the key decision-making factors for evaluating the thermal performance of four hydrocarbon-based dielectric coolants, including power parametric analysis, transient analysis, power cycling tests, and fluid temperature profiling. The second develops an optimization strategy for the immersion system's thermal performance. The third verifies the capability of a 1U heat sink to support high-density processor units over 300 W per GPU in an immersion cooling solution.
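The secondary-loop behavior described above follows the steady-state heat balance Q = m_dot * cp * dT. The short sketch below estimates the coolant temperature rise across the tank at the paper's 2700 W chassis load; the fluid properties and flow rates are assumed values typical of hydrocarbon-based dielectric coolants, not figures from this study.

```python
# Steady-state heat balance for the secondary (immersion) loop:
#   Q = m_dot * cp * (T_return - T_supply)
# The 2700 W load matches the simulated GPU chassis in the abstract; the
# fluid properties and flow rates below are assumptions typical of
# hydrocarbon-based dielectric coolants, not values from the paper.

Q_W = 2700.0            # heat load from the simulated 4U GPU chassis [W]
CP_J_PER_KG_K = 2100.0  # specific heat of the dielectric coolant (assumed)
RHO_KG_PER_M3 = 800.0   # coolant density (assumed)

def temperature_rise(flow_lpm: float) -> float:
    """Coolant temperature rise [K] across the tank at a given flow [L/min]."""
    m_dot = flow_lpm / 1000.0 / 60.0 * RHO_KG_PER_M3  # L/min -> kg/s
    return Q_W / (m_dot * CP_J_PER_KG_K)

for flow in (5.0, 10.0, 20.0):
    print(f"{flow:4.1f} L/min -> dT = {temperature_rise(flow):.2f} K")
```

At an assumed 10 L/min, the rise is roughly 9.6 K, which is the kind of fluid temperature profiling the first investigation performs across the four candidate coolants.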

