Design and Application of Innovative 3DVC in AI Server System

2021 ◽  
Author(s):  
Xianguang Tan ◽  
Yongzhan He ◽  
Bin Liu ◽  
Jiang Yu ◽  
Ahuja Nishi ◽  
...  

Abstract: With the accelerated adoption of cloud computing and artificial intelligence, chip computing power and power consumption have grown sharply, posing severe challenges for heat dissipation. To address this, Baidu adopted advanced phase-change cooling technology and developed an innovative 3DVC air-cooling scheme for an AI server system. This paper presents the design, testing, and verification of the scheme in detail. The results show that it reduces GPU temperature by more than 5 °C compared with a traditional heat-pipe cooling scheme and saves over 30% of fan power consumption, achieving strong cooling and energy-saving performance.
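The abstract gives no fan operating points, but the fan affinity laws explain why better heat transfer yields outsized fan-power savings: fan power scales roughly with the cube of rotational speed, so a modest speed reduction enabled by lower GPU temperatures cuts power sharply. A minimal sketch of that arithmetic (the 30% figure is from the abstract; the speed ratio is derived from the cube law, not measured):

```python
def fan_power_ratio(speed_ratio: float) -> float:
    """Fan affinity law: power scales roughly with the cube of speed."""
    return speed_ratio ** 3

# What speed reduction corresponds to the reported 30% fan-power saving?
speed_ratio = (1 - 0.30) ** (1 / 3)           # ~0.888, i.e. ~11% slower
power_saving = 1 - fan_power_ratio(speed_ratio)
```

Under this model, an ~11% fan-speed reduction is enough to account for the reported 30% power saving.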

Energies ◽  
2020 ◽  
Vol 13 (21) ◽  
pp. 5719
Author(s):  
JiHyun Hwang ◽  
Taewon Lee

The recent expansion of the internet and rapid advances in information and communication technology are expected to drive a significant increase in power consumption and in the number of data centers. These data centers consume considerable electric power year-round, regardless of working days or holidays, so energy saving at these facilities has become essential. A disproportionate share of power consumption is concentrated in computer rooms, where air conditioners must operate throughout the year to maintain a constant indoor environment for the stable operation of computer equipment with high heat-release densities. Such rooms offer considerable energy-saving potential if an outdoor air-cooling system is installed alongside the air conditioners, since relatively cool outdoor air can be introduced to lower the indoor temperature. We therefore studied the energy-saving effect of introducing an outdoor air-cooling system in a computer room with a disorganized server arrangement and an inadequate air-conditioning system in a research complex in Korea. The findings confirmed that annual energy savings of up to approximately 40% can be achieved.
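The economizer logic behind such a system reduces to a simple test: whenever outdoor air is at or below the supply-air setpoint, it can displace mechanical cooling. A minimal sketch with a hypothetical 18 °C setpoint and synthetic hourly temperatures (the paper's 40% figure comes from its own measured data, not from this toy model):

```python
def free_cooling_fraction(hourly_temps_c, setpoint_c=18.0):
    """Fraction of hours when outdoor air alone can cool the room."""
    usable = sum(1 for t in hourly_temps_c if t <= setpoint_c)
    return usable / len(hourly_temps_c)

# Synthetic example: half of these hours fall at or below the setpoint.
temps = [12, 14, 16, 17, 19, 21, 23, 25, 22, 20, 18, 15]
fraction = free_cooling_fraction(temps)
```

In practice the decision also depends on humidity and air quality, which this sketch ignores.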


2019 ◽  
Vol 8 (4) ◽  
pp. 10089-10092

With increasing transistor counts and clock rates of microprocessors, power dissipation rises significantly. Reducing power consumption while maintaining high performance has become one of the main targets in system design for various devices. As chip multiprocessors (CMPs) integrate more cores on the die, this leads to large-scale CMP (LCMP) architectures with potentially hundreds of threads and thousands of cores on the die. We therefore propose an OS-level power-optimization approach for LCMPs that optimizes the heat-dissipation rate while increasing computing power under certain constraints. The heat-dissipation optimization is performed at the synthesis level. There are three approaches to modifying the synthetic benchmark: Singly Synthesis, Hierarchical Synthesis, and Group Synthesis. The results show that with Group Synthesis, power dissipation is distributed equally rather than loading a single processor, as happens with Hierarchical Synthesis and Singly Synthesis. We therefore conclude that Group Synthesis distributes power equally and thus optimizes heat dissipation. Future work will further optimize the synthesis-level results using thread migration, which can increase system throughput by exploiting multiple cores that vary in performance capability.
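The core claim — that Group Synthesis spreads load, and therefore heat, evenly instead of concentrating it on one processor — can be illustrated with a round-robin assignment. This is an illustrative stand-in for the comparison, not the paper's actual synthesis algorithm:

```python
def group_distribute(task_costs, n_cores):
    """Round-robin assignment so per-core load (and heat) stays balanced."""
    loads = [0.0] * n_cores
    for i, cost in enumerate(task_costs):
        loads[i % n_cores] += cost
    return loads

def singly_distribute(task_costs, n_cores):
    """Everything on core 0: the pathological hot-spot case."""
    loads = [0.0] * n_cores
    loads[0] = sum(task_costs)
    return loads

tasks = [1.0] * 8
balanced = group_distribute(tasks, 4)     # each core carries 2.0
hotspot = singly_distribute(tasks, 4)     # core 0 carries all 8.0
```

The lower peak per-core load is what translates into a lower hot-spot temperature.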


2019 ◽  
Vol 12 (1) ◽  
pp. 47-60
Author(s):  
László Kota

Artificial intelligence has undergone enormous development since its appearance in the 1950s. Computing power has grown exponentially since then, enabling artificial intelligence applications in many different areas. These applications are no longer confined to industry; they have gradually entered households as well. Their use in logistics is becoming more and more widespread; self-driving cars and trucks are obvious examples. In this paper, the author summarizes artificial intelligence applications in logistics, their development, and their impact on the field.


2020 ◽  
Vol 2 ◽  
pp. 58-61 ◽  
Author(s):  
Syed Junaid ◽  
Asad Saeed ◽  
Zeili Yang ◽  
Thomas Micic ◽  
Rajesh Botchu

The advances in deep learning algorithms, exponential growth in computing power, and unprecedented availability of digital patient data have led to a wave of interest and investment in artificial intelligence in health care. No radiology conference is complete without substantial attention to AI. Many radiology departments are keen to get involved but are unsure of where and how to begin. This short article provides a simple road map to help departments engage with the technology, demystify key concepts, and pique interest in the field. We have broken the journey down into seven steps: problem, team, data, kit, neural network, validation, and governance.
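Of the seven steps, validation is the one newcomers most often shortcut; at minimum it means holding out data the network never trains on. A stdlib-only sketch of such a holdout split (the function name and 20% fraction are illustrative choices, not from the article):

```python
import random

def train_val_split(items, val_fraction=0.2, seed=42):
    """Shuffle, then hold out a validation set the model never trains on."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_val = int(len(items) * val_fraction)
    return items[n_val:], items[:n_val]

train, val = train_val_split(range(100))
```

For imaging studies the split should usually be done per patient, not per image, to avoid leakage between the two sets.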


Energies ◽  
2021 ◽  
Vol 14 (14) ◽  
pp. 4089
Author(s):  
Kaiqiang Zhang ◽  
Dongyang Ou ◽  
Congfeng Jiang ◽  
Yeliang Qiu ◽  
Longchuan Yan

In terms of power and energy consumption, DRAM plays a key role in a modern server system, as do the processors. Although power-aware scheduling is based on the proportion of energy consumed by DRAM relative to other components, when running memory-intensive applications the energy consumption of the whole server system is significantly affected by the energy non-proportionality of DRAM. Furthermore, modern servers usually adopt the NUMA architecture in place of the original SMP architecture to increase memory bandwidth, so studying the energy efficiency of these two memory architectures is of great significance. To explore the power-consumption characteristics of servers under memory-intensive workloads, this paper evaluates the power consumption and performance of memory-intensive applications on different generations of real rack servers. Through this analysis, we find that: (1) workload intensity and the number of concurrently executing threads affect server power consumption, but a fully utilized memory system does not necessarily yield good energy-efficiency indicators; (2) even when the memory system is not fully utilized, the memory capacity per processor core has a significant impact on application performance and server power consumption; (3) when running memory-intensive applications, memory utilization is not always a good indicator of server power consumption; and (4) reasonable use of the NUMA architecture significantly improves memory energy efficiency. The experimental results show that reasonable use of NUMA can improve memory efficiency by 16% compared with SMP, while unreasonable use reduces it by 13%. These findings provide useful insights and guidance for system designers and data-center operators in energy-efficiency-aware job scheduling and energy conservation.
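One common way to express memory energy efficiency is useful work delivered per watt; the paper's 16% and 13% figures are empirical, but the direction of the effect can be illustrated with hypothetical numbers (the throughput and power values below are invented for the example, not taken from the paper):

```python
def memory_energy_efficiency(throughput, power_w):
    """Efficiency as useful work (e.g., bandwidth) delivered per watt."""
    return throughput / power_w

# Hypothetical numbers mirroring the paper's reported direction of effect:
smp         = memory_energy_efficiency(100.0, 50.0)  # SMP baseline
numa_local  = memory_energy_efficiency(116.0, 50.0)  # mostly local accesses
numa_remote = memory_energy_efficiency(87.0, 50.0)   # mostly remote accesses
```

The gap comes from access locality: local-node accesses deliver more bandwidth at the same power budget, while cross-node accesses waste energy on the interconnect.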


Author(s):  
Junshu Wang ◽  
Guoming Zhang ◽  
Wei Wang ◽  
Ka Zhang ◽  
Yehua Sheng

Abstract: With the rapid development of hospital informatization and internet medical services in recent years, most hospitals have launched online appointment-registration systems to eliminate patient queues and improve the efficiency of medical services. However, most patients lack professional medical knowledge and do not know which department to choose when registering. To guide patients in seeking medical care and registering effectively, we propose CIDRS, an intelligent self-diagnosis and department-recommendation framework based on Chinese medical Bidirectional Encoder Representations from Transformers (BERT) in a cloud computing environment. We also established a Chinese BERT model (CHMBERT) trained on a large-scale Chinese medical text corpus and used it to optimize the self-diagnosis and department-recommendation tasks. To overcome the limited computing power of terminals, we deployed the proposed framework in a cloud computing environment based on container and micro-service technologies. Real-world medical datasets from hospitals were used in the experiments, and the results showed that the proposed model outperformed traditional deep learning models and other pre-trained language models.
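Department recommendation is, at its core, text classification: map a free-text complaint to the best-matching department. A toy sketch of that interface, where a keyword-overlap score stands in for the fine-tuned CHMBERT encoder (the department names and keyword lists are invented for illustration):

```python
# Toy stand-in for CIDRS's classifier: score a complaint against
# per-department keyword sets. In the real system this scoring is done
# by a fine-tuned Chinese medical BERT, not a keyword lookup.
DEPARTMENT_KEYWORDS = {
    "cardiology":  {"chest", "palpitations", "heart"},
    "dermatology": {"rash", "itch", "skin"},
    "orthopedics": {"fracture", "joint", "knee"},
}

def recommend_department(complaint: str) -> str:
    """Return the department whose keywords best overlap the complaint."""
    words = set(complaint.lower().split())
    return max(DEPARTMENT_KEYWORDS,
               key=lambda dept: len(words & DEPARTMENT_KEYWORDS[dept]))
```

Swapping the scoring function for a learned encoder changes the quality of the ranking, not the shape of the interface, which is why the framework can be served as a stateless micro-service.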


2014 ◽  
Vol 571-572 ◽  
pp. 105-108
Author(s):  
Lin Xu

This paper proposes a new framework that combines reinforcement learning with a cloud computing digital library. Unified self-learning algorithms, which include reinforcement learning and other artificial intelligence techniques, have led to many essential advances. Given the current status of highly available models, analysts urgently desire the deployment of write-ahead logging. In this paper we examine how DNS can be applied to the investigation of superblocks, and we introduce reinforcement learning to improve the quality of the current cloud computing digital library. The experimental results show that the method performs more efficiently.

