Sensor Communication Model Using Cyber-Physical System Approach for Green Data Center

Author(s):  
Masnida Hussin ◽  
Raja Azlina Raja Mahmood ◽  
Mas Rina Mustaffa

Energy consumption in distributed computing systems has attracted considerable attention recently, as their processing capacity has become significant for business and economic operations. A comprehensive analysis of energy efficiency in a high-performance data center for distributed processing requires the ability to monitor resource utilization against energy consumption. To achieve a green data center while sustaining computational performance, a model of energy-efficient cyber-physical communication is proposed. Real-time sensor communication is used to monitor the heat emitted by processors and the room temperature. Specifically, our cyber-physical communication model dynamically identifies processing states in the data center and infers a suitable air-conditioning temperature level. This information is then used by administrators to fine-tune the room temperature according to current processing activities. Our automated triggering approach aims to improve edge computing performance with cost-effective energy consumption. Simulation experiments show that our cyber-physical communication achieves better energy consumption and resource utilization than other cooling models.
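
As a rough illustration of the triggering idea described in this abstract, the sketch below maps sensed processor temperatures and room temperature to a suggested air-conditioning setpoint. It is not the authors' model; the states, thresholds, and setpoints are hypothetical.

```python
# Minimal illustrative sketch (not the paper's implementation): a rule-based
# trigger that maps sensed processor temperatures and room temperature to a
# suggested air-conditioning setpoint. All thresholds are hypothetical.

from statistics import mean

def classify_processing_state(cpu_temps_c):
    """Classify the data center's processing state from processor temperatures."""
    avg = mean(cpu_temps_c)
    if avg >= 75:
        return "heavy"
    if avg >= 55:
        return "moderate"
    return "light"

def suggest_setpoint(cpu_temps_c, room_temp_c):
    """Suggest an air-conditioning setpoint (deg C) for the current state."""
    state = classify_processing_state(cpu_temps_c)
    base = {"heavy": 18.0, "moderate": 21.0, "light": 24.0}[state]
    # Nudge the setpoint down if the room is already warmer than desired.
    if room_temp_c > base + 3:
        base -= 1.0
    return state, base

if __name__ == "__main__":
    state, setpoint = suggest_setpoint([68, 72, 80, 77], room_temp_c=26.5)
    print(f"state={state}, suggested setpoint={setpoint:.1f} C")
```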

2015 ◽  
Vol 2015 ◽  
pp. 1-8 ◽  
Author(s):  
Bin Zhou ◽  
ShuDao Zhang ◽  
Ying Zhang ◽  
JiaHao Tan

To save energy and reduce the total cost of ownership, green storage has become a top priority for data centers. Detecting and deleting redundant data are key to reducing CPU energy consumption, and a high-performance, stable chunking strategy provides the groundwork for detecting redundant data. Existing chunking algorithms greatly reduce system performance when confronted with big data and waste a great deal of energy. This paper analyzes and discusses the factors affecting chunking performance and implements a new fingerprint signature calculation. Furthermore, a Bit String Content Aware Chunking Strategy (BCCS) is put forward. The strategy reduces the cost of signature computation in the chunking process to improve system performance and cuts down the energy consumption of the cloud storage data center. The advantages of the chunking strategy are verified on the test scenarios and test data described in the paper.
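
BCCS's own bit-string fingerprint is not reproduced in the abstract; the sketch below only shows the general shape of content-aware chunking with a rolling fingerprint, which is the kind of computation such a strategy accelerates. The window size, boundary mask, and chunk-size limits are hypothetical.

```python
# Illustrative content-defined chunking with a generic polynomial rolling
# hash (not BCCS itself). A chunk boundary is declared when the low bits of
# the window fingerprint are zero, subject to minimum/maximum chunk sizes.

import hashlib

WINDOW = 48            # bytes in the rolling window
MASK = (1 << 13) - 1   # expected average chunk size of roughly 8 KiB
MIN_CHUNK, MAX_CHUNK = 2048, 65536
BASE, MOD = 257, (1 << 61) - 1

def chunk(data: bytes):
    """Return a list of (offset, length, sha1) for content-defined chunks."""
    chunks, start, h = [], 0, 0
    power = pow(BASE, WINDOW, MOD)   # coefficient of the byte leaving the window
    for i, b in enumerate(data):
        h = (h * BASE + b) % MOD
        if i - start + 1 > WINDOW:
            h = (h - data[i - WINDOW] * power) % MOD
        size = i - start + 1
        at_boundary = (h & MASK) == 0 and size >= MIN_CHUNK
        if at_boundary or size >= MAX_CHUNK or i == len(data) - 1:
            piece = data[start:i + 1]
            chunks.append((start, len(piece), hashlib.sha1(piece).hexdigest()))
            start, h = i + 1, 0
    return chunks

if __name__ == "__main__":
    payload = bytes(range(256)) * 2000
    for off, length, digest in chunk(payload)[:5]:
        print(off, length, digest[:12])
```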


2021 ◽  
Author(s):  
Md. Alamgir Kabir ◽  
Shahadat Hossain ◽  
Mohammad Saiful Islam ◽  
Md. Mahmudul Hasan Imran ◽  
Mahmudur Rahman Akhanjee ◽  
...  

Author(s):  
Yao Wu ◽  
Long Zheng ◽  
Brian Heilig ◽  
Guang R Gao

As the attention given to big data grows, cluster computing systems for distributed processing of large data sets have become the mainstream and a critical requirement in high-performance distributed systems research. One of the most successful systems is Hadoop, which uses MapReduce as its programming/execution model and relies on disks as intermediate storage when processing huge volumes of data. Spark, an in-memory computing engine, can solve iterative and interactive problems more efficiently. However, there is currently a consensus that neither is the final answer to big data, owing to the MapReduce-like programming model, the synchronous execution model, the restriction to batch processing, and so on. A new solution, indeed a fundamental evolution, is needed to bring big data processing into a new era. In this paper, we introduce a new cluster computing system called HAMR, which supports both batch and streaming processing. To achieve better performance, HAMR integrates high-performance computing approaches, namely dataflow fundamentals, into a big data solution. More specifically, HAMR is designed entirely around in-memory computing to reduce unnecessary disk access overhead; task scheduling and memory management are fine-grained to expose more parallelism; and asynchronous execution improves the efficiency of compute resource usage and balances the workload better across the whole cluster. Experimental results show that HAMR can outperform Hadoop MapReduce and Spark by up to 19x and 7x, respectively, in the same cluster environment. Furthermore, HAMR can handle data sizes well beyond the capabilities of Spark.
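
HAMR's runtime is not shown in the abstract; the toy sketch below only illustrates the dataflow and asynchronous-execution idea it contrasts with MapReduce-style stage barriers: each task starts as soon as its own inputs are ready rather than waiting for a whole stage to finish. The task graph and names are invented for illustration.

```python
# Hedged sketch of dataflow-style asynchronous execution (not HAMR's API):
# nodes c and d each fire as soon as their own inputs complete, with no
# global barrier between "stages".

import asyncio

async def node(name, dep_tasks, fn):
    # Wait only for this node's own inputs (fine-grained, asynchronous).
    inputs = list(await asyncio.gather(*dep_tasks))
    await asyncio.sleep(0.01)          # stand-in for real computation
    result = fn(inputs)
    print(f"{name} done -> {result}")
    return result

async def main():
    a = asyncio.create_task(node("a", [], lambda _: 1))
    b = asyncio.create_task(node("b", [], lambda _: 2))
    c = asyncio.create_task(node("c", [a], lambda xs: xs[0] * 10))
    d = asyncio.create_task(node("d", [a, b], lambda xs: sum(xs)))
    e = asyncio.create_task(node("e", [c, d], lambda xs: xs[0] + xs[1]))
    print("final:", await e)

if __name__ == "__main__":
    asyncio.run(main())
```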


2019 ◽  
Vol 20 (2) ◽  
pp. 259-284 ◽  
Author(s):  
Pijush Kanti Dutta Pramanik ◽  
Saurabh Pal ◽  
Prasenjit Choudhury

The introduction of the Internet of Things (IoT) and Big Data applications has generated a massive amount of digital data. Processing and analysing these data demands proportionately vast computing resources. The major downside of producing and using computing resources in such volumes is the deterioration of the Earth's environment. The production process of electronic devices involves hazardous and toxic substances that not only harm the health of humans and other living beings but also contaminate water and soil. The large-scale production and operation of these computers also result in massive energy consumption and greenhouse gas generation. Moreover, the short use cycle of these devices produces a huge amount of e-waste that is not easy to decompose. In this light, instead of buying new devices, it is advisable to use existing resources to their fullest, which minimizes the environmental penalties of production and e-waste. This paper advocates using smartphones and smartphone crowd computing (SCC) to reduce reliance on PCs/laptops and centralized high-performance computers (HPCs) such as data centres and supercomputers. The paper aims to establish SCC as the most feasible computing solution for sustainable computing. Detailed comparisons, in terms of environmental effects (e.g., energy consumption and greenhouse gas generation), between SCC and supercomputers and other green computing initiatives such as Grid and Cloud computing are presented. The key enablers of SCC are identified and discussed, one of them being today's computationally powerful smartphones. A comprehensive statistical survey of various commercial CPUs, GPUs, and SoCs for smartphones is presented, confirming the capability of SCC as an alternative to HPC. The challenges involved in realizing SCC are also considered, a major one being the limited battery capacity of smartphones. The causes of battery drain are identified along with probable countermeasures. An exhaustive survey is presented on the present state and promising future of research on different aspects of smartphone batteries and alternative power sources, which will allow users to contribute their smartphones to SCC without worrying about the battery running out.


Author(s):  
A. A. Zatsarinny ◽  
K. I. Volovich ◽  
S. A. Denisov ◽  
Yu. S. Ionenkov ◽  
V. A. Kondrashev

This article discusses a methodology for assessing the effectiveness of a high-performance research platform. The assessment is carried out for the example of the "Informatika" Center for Collective Use (CCU) established at the Federal Research Center of the Institute of Management of the Russian Academy of Sciences for solving new materials synthesis problems. The main objective of the "Informatika" CCU is to conduct research using the software and hardware of the data center of the FRC IU RAS, including for the benefit of third-party organizations and research teams. The general characteristics of the "Informatika" CCU are presented, including the main characteristics of its scientific equipment, work organization, and capabilities. The hybrid high-performance computing cluster of the FRC CSC RAS (HHPCC) is part of the data center of the FRC IU RAS and also part of the "Informatika" CCU. The HHPCC provides computing resources in the form of cloud services, both software (SaaS) and platform (PaaS) services. With the aid of special technologies, scientific services are delivered to researchers as subject-oriented applications. Based on an analysis of the structure and operating principles of the Informatika Center, key performance indicators of the Center have been developed, taking into account its specific tasks, in order to characterize its various activity aspects (development, activities, and performance). CCU efficiency evaluation implies calculating, on the basis of the developed indicators, overall (generalized) indicators that characterize the efficiency of CCU operation in various areas. An integral indicator is also calculated showing the overall CCU efficiency. To derive the overall performance indicators and the integral performance indicator, it is suggested to use the weighted average method and the analytic hierarchy process. The procedure for determining partial performance indicators is considered. Specific features of the choice of CCU performance indicators for solving new materials synthesis problems have been identified that characterize the computing complex's capabilities in creating a virtualization environment (peak performance of the computing system, real performance of the computing system on specialized tests, equipment loading with applied tasks, and program code efficiency).
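
The aggregation step described above (weighted average plus an analytic-hierarchy-process style weighting) can be sketched as follows. The criteria, pairwise judgements, and indicator scores here are hypothetical, not the article's values.

```python
# Hedged sketch: derive weights from a pairwise-comparison matrix (AHP style,
# via power iteration toward the principal eigenvector) and combine normalized
# partial indicators into an integral indicator by weighted average.

def ahp_weights(pairwise, iters=100):
    """Approximate the principal eigenvector of a pairwise-comparison matrix."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

def integral_indicator(scores, weights):
    """Weighted average of normalized partial indicators (each in 0..1)."""
    return sum(s * w for s, w in zip(scores, weights))

if __name__ == "__main__":
    # Criteria: peak performance, real performance on tests, equipment load,
    # program code efficiency (judgement values are illustrative only).
    pairwise = [
        [1,     1 / 2, 2,     3],
        [2,     1,     3,     4],
        [1 / 2, 1 / 3, 1,     2],
        [1 / 3, 1 / 4, 1 / 2, 1],
    ]
    w = ahp_weights(pairwise)
    scores = [0.85, 0.70, 0.60, 0.75]   # hypothetical normalized indicators
    print("weights:", [round(x, 3) for x in w])
    print("integral indicator:", round(integral_indicator(scores, w), 3))
```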


Author(s):  
SIVARANJANI BALAKRISHNAN ◽  
SURENDRAN DORAISWAMY

Data centers are becoming the main backbone of and centralized repository for all cloud-accessible services in on-demand cloud computing environments. In particular, virtual data centers (VDCs) facilitate the virtualization of all data center resources such as computing, memory, storage, and networking equipment as a single unit. It is necessary to use the data center efficiently to improve its profitability. The essential factor that significantly influences efficiency is the average number of VDC requests serviced by the infrastructure provider, and the optimal allocation of requests improves the acceptance rate. In existing VDC request embedding algorithms, data center performance factors such as resource utilization rate and energy consumption are not taken into consideration. This motivated us to design a strategy for improving the resource utilization rate without increasing the energy consumption. We propose novel VDC embedding methods based on row-epitaxial and batched greedy algorithms inspired by bioinformatics. These algorithms embed new requests into the VDC while reembedding previously allocated requests. Reembedding is done to consolidate the available resources in the VDC resource pool. The experimental testbed results show that our algorithms boost the data center objectives of high resource utilization (by improving the request acceptance rate), low energy consumption, and short VDC request scheduling delay, leading to an appreciable return on investment.
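
The paper's row-epitaxial and batched greedy algorithms are not reproduced in the abstract; the sketch below only illustrates the embed-and-reembed idea in its simplest form: new requests are packed greedily, and all accepted requests are re-packed to consolidate fragmented capacity before a request is rejected. The single-dimensional "demand" model and all numbers are hypothetical.

```python
# Hedged sketch of greedy VDC embedding with re-embedding for consolidation
# (not the paper's algorithms). Hosts are modeled by a single capacity value.

def embed_all(capacities, demands):
    """First-fit-decreasing packing; demands is {request_id: demand}.
    Returns ({request_id: host}, [rejected request ids])."""
    free = list(capacities)
    placements, rejected = {}, []
    for rid, d in sorted(demands.items(), key=lambda kv: -kv[1]):
        host = next((h for h, f in enumerate(free) if f >= d), None)
        if host is None:
            rejected.append(rid)
        else:
            free[host] -= d
            placements[rid] = host
    return placements, rejected

class VdcPool:
    """Accepts a new request only if re-embedding all requests still fits."""
    def __init__(self, capacities):
        self.capacities, self.accepted = capacities, {}

    def request(self, rid, demand):
        trial = {**self.accepted, rid: demand}
        _, rejected = embed_all(self.capacities, trial)
        if rejected:
            return False            # would not fit even after re-embedding
        self.accepted = trial
        return True

if __name__ == "__main__":
    pool = VdcPool(capacities=[10, 10])
    print([pool.request(i, d) for i, d in enumerate([6, 6, 4, 4, 3])])
```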


2021 ◽  
Vol 12 (5) ◽  
pp. 246-259
Author(s):  
S. E. Popov ◽  
◽  
R. Yu. Zamaraev ◽  
N. I. Yukina ◽  
O. L. Giniyatullina ◽  
...  

The article describes a software package for calculating displacement rates and detecting displacements of the earth's surface over areas of intensive coal mining. The complex is built on the Docker Swarm microservice architecture, integrated with the Apache Spark system for massively parallel task execution, as a high-level tool for organizing container-based computations with orchestration of hardware resources. In the software package, a container serves as an element in the sequence of calculation stages of the mathematical model of interferometric processing, presented as a managed service. The service itself is built on a microkernel of the specified operating system, with support for process-identifier multitasking and network protocols. Containerizing the executor objects makes the calculations independent, both within one pool of jobs and between different pools initialized in multi-user mode. The cluster resource management system and YARN job scheduling made it possible to abstract all of the cluster's computing resources from the specific launch of jobs and to provide dispatching of distributed processing applications. Because the program code, based on the Sentinel-1 Toolbox, can store the intermediate results of the procedures in the displacement-rate calculation schemes, calculations can be carried out with various parameters, and parallelization reduces the calculation time compared with commercial software products. Combining Docker Swarm and Apache Spark technologies in one software package made it possible to implement a high-performance computing system based on open source software and the cross-platform programming languages Java and Python, using low-budget hardware blocks, including those made in Russia.
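
A minimal sketch of the parallelization pattern described above is given below: independent interferometric processing chains are distributed as Spark tasks, in the spirit of running containerized processing steps under a YARN-managed cluster. It is not the authors' code; the scene pairs and the `process_pair` stand-in are hypothetical, and the real chain would invoke Sentinel-1 Toolbox operators.

```python
# Hedged sketch: distribute per-scene-pair interferometric processing as
# independent Spark tasks (requires a PySpark installation; under YARN the
# job would be submitted with --master yarn).

from pyspark.sql import SparkSession

def process_pair(pair):
    """Stand-in for one interferometric processing chain (coregistration,
    interferogram formation, unwrapping, ...) executed on a worker."""
    master_scene, slave_scene = pair
    # ... call the Sentinel-1 Toolbox / SNAP operators here ...
    return f"velocity_map_{master_scene}_{slave_scene}"

if __name__ == "__main__":
    spark = (SparkSession.builder
             .appName("insar-displacement-rates")
             .getOrCreate())
    pairs = [("S1A_20200101", "S1A_20200113"),     # hypothetical scene ids
             ("S1A_20200113", "S1A_20200125")]
    results = spark.sparkContext.parallelize(pairs).map(process_pair).collect()
    print(results)
    spark.stop()
```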


2020 ◽  
Vol 10 (4) ◽  
pp. 32
Author(s):  
Sayed Ashraf Mamun ◽  
Alexander Gilday ◽  
Amit Kumar Singh ◽  
Amlan Ganguly ◽  
Geoff V. Merrett ◽  
...  

Servers in a data center are underutilized due to over-provisioning, which contributes heavily to the high power consumption of data centers. Recent research on optimizing the energy consumption of High Performance Computing (HPC) data centers mostly focuses on consolidation of Virtual Machines (VMs) and the use of dynamic voltage and frequency scaling (DVFS). These approaches are inherently hardware-based, are frequently unique to individual systems, and often rely on simulation due to lack of access to HPC data centers. Other approaches require profiling information on the jobs in the HPC system to be available before run-time. In this paper, we propose a reinforcement learning based approach, which jointly optimizes profit and energy in the allocation of jobs to available resources, without the need for such prior information. The approach is implemented in a software scheduler used to allocate real applications from the Princeton Application Repository for Shared-Memory Computers (PARSEC) benchmark suite to a number of hardware nodes realized with Odroid-XU3 boards. Experiments show that the proposed approach increases the profit earned by 40% while simultaneously reducing energy consumption by 20% when compared to a heuristic-based approach. We also present a network-aware server consolidation algorithm called Bandwidth-Constrained Consolidation (BCC) for HPC data centers, which can address the under-utilization problem of the servers. Our experiments show that the BCC consolidation technique can reduce the power consumption of a data center by up to 37%.
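
To make the reinforcement-learning idea concrete, the sketch below shows a tabular, epsilon-greedy learner that allocates each incoming job to a node type, with a reward trading off profit against energy cost. This is not the paper's scheduler; the node types, profit/energy numbers, and reward shape are hypothetical.

```python
# Hedged sketch of profit-and-energy-aware job allocation with tabular
# Q-learning (single-step reward, so no discounting is needed).

import random

NODES = ["big_core", "little_core"]          # e.g. big.LITTLE-style nodes
PROFIT = {"big_core": 1.0, "little_core": 0.8}
ENERGY = {"big_core": 0.7, "little_core": 0.3}
JOB_TYPES = ["cpu_bound", "memory_bound"]

Q = {(j, n): 0.0 for j in JOB_TYPES for n in NODES}
alpha, eps = 0.1, 0.1

def reward(job, node):
    # Hypothetical: CPU-bound jobs earn a bonus on the faster node.
    bonus = 0.3 if (job == "cpu_bound" and node == "big_core") else 0.0
    return PROFIT[node] + bonus - ENERGY[node]

random.seed(0)
for _ in range(5000):
    job = random.choice(JOB_TYPES)
    if random.random() < eps:
        node = random.choice(NODES)                 # explore
    else:
        node = max(NODES, key=lambda n: Q[(job, n)])  # exploit
    r = reward(job, node)
    Q[(job, node)] += alpha * (r - Q[(job, node)])

for job in JOB_TYPES:
    best = max(NODES, key=lambda n: Q[(job, n)])
    print(job, "->", best, {n: round(Q[(job, n)], 2) for n in NODES})
```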


2014 ◽  
Vol 2014 ◽  
pp. 1-13 ◽  
Author(s):  
Matteo Fiorani ◽  
Slavisa Aleksic ◽  
Maurizio Casoni

Current data center networks rely on electronic switching and point-to-point interconnects. Measured against future data center requirements, these solutions raise issues of flexibility, scalability, performance, and energy consumption. For this reason, several optical switched interconnects, which make use of optical switches and wavelength division multiplexing (WDM), have recently been proposed. However, the solutions proposed so far suffer from low flexibility and cannot provide service differentiation. In this paper we introduce a novel data center network based on hybrid optical switching (HOS). HOS combines optical circuit, burst, and packet switching on the same network, so that different data center applications can be mapped to the optical transport mechanism that best suits their traffic characteristics. Furthermore, the proposed HOS network achieves high transmission efficiency and reduced energy consumption by using two parallel optical switches. We consider the architectures of both a traditional data center network and the proposed HOS network and present a combined analytical and simulation approach for evaluating their performance and energy consumption. We demonstrate that the proposed HOS data center network achieves high performance and flexibility while considerably reducing energy consumption compared with current solutions.
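
The mapping of applications to transport mechanisms can be pictured with the small sketch below: flows are classified by their traffic characteristics and assigned to circuit, burst, or packet switching. The classification thresholds and example flows are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of mapping data center flows to an optical transport
# mechanism (circuit, burst, or packet switching) by traffic profile.

from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    duration_s: float       # expected flow duration
    avg_rate_gbps: float    # average offered load
    latency_sensitive: bool

def choose_transport(flow: Flow) -> str:
    if flow.duration_s > 1.0 and flow.avg_rate_gbps > 5.0:
        return "optical circuit switching"   # long-lived, bulky transfers
    if flow.latency_sensitive and flow.avg_rate_gbps < 1.0:
        return "optical packet switching"    # short, delay-critical traffic
    return "optical burst switching"         # everything in between

if __name__ == "__main__":
    flows = [
        Flow("vm-migration", 30.0, 9.0, False),
        Flow("web-query", 0.01, 0.2, True),
        Flow("backup-stream", 5.0, 2.0, False),
    ]
    for f in flows:
        print(f.name, "->", choose_transport(f))
```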

