Improving the energy efficiency and performance of data-intensive workflows in virtualized clouds
2018, Vol. 74 (7), pp. 2935-2955
Author(s): Xilong Qu, Peng Xiao, Lirong Huang

Energies, 2021, Vol. 14 (14), pp. 4089
Author(s): Kaiqiang Zhang, Dongyang Ou, Congfeng Jiang, Yeliang Qiu, Longchuan Yan

In terms of power and energy consumption, DRAM plays as important a role in a modern server system as the processors do. Although power-aware scheduling assumes that energy is shared proportionally between DRAM and the other components, when memory-intensive applications are running, the lack of energy proportionality in DRAM significantly affects the energy consumption of the whole server system. Furthermore, modern servers usually adopt the NUMA architecture in place of the original SMP architecture to increase memory bandwidth, so it is of great significance to study the energy efficiency of these two memory architectures. Therefore, in order to explore the power-consumption characteristics of servers under memory-intensive workloads, this paper evaluates the power consumption and performance of memory-intensive applications on different generations of real rack servers. Through this analysis, we find that: (1) Workload intensity and the number of concurrently executing threads affect server power consumption, but a fully utilized memory system does not necessarily yield good energy efficiency. (2) Even if the memory system is not fully utilized, the memory capacity available to each processor core has a significant impact on application performance and server power consumption. (3) When running memory-intensive applications, memory utilization is not always a good indicator of server power consumption. (4) Reasonable use of the NUMA architecture improves memory energy efficiency significantly. The experimental results show that reasonable use of the NUMA architecture can improve memory energy efficiency by 16% compared with the SMP architecture, whereas unreasonable use of the NUMA architecture reduces it by 13%. The findings presented in this paper provide useful insights and guidance that help system designers and data center operators with energy-efficiency-aware job scheduling and energy conservation.
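To make this kind of evaluation concrete, the sketch below shows one way a DRAM-energy measurement could be set up on a Linux rack server: the cumulative energy reported by the Intel RAPL counters under /sys/class/powercap is sampled before and after a memory-intensive workload is run under two different NUMA placements via numactl. The workload binary (./membench) and the node numbers are placeholders, and this is not the paper's measurement harness.

```python
import subprocess
import time
from pathlib import Path

# All RAPL zones (packages and sub-zones such as "dram") appear flat here.
# Reading the counters usually requires root; counter wrap-around is ignored.
RAPL_ROOT = Path("/sys/class/powercap")

def find_dram_zones():
    """Return every RAPL zone whose name contains 'dram'."""
    return [f.parent for f in RAPL_ROOT.glob("intel-rapl:*/name")
            if "dram" in f.read_text()]

def read_energy_uj(zone: Path) -> int:
    """Read one zone's cumulative energy counter in microjoules."""
    return int((zone / "energy_uj").read_text())

def run_and_measure(cmd):
    """Run a workload and report elapsed time and DRAM energy consumed."""
    zones = find_dram_zones()
    before = [read_energy_uj(z) for z in zones]
    start = time.time()
    subprocess.run(cmd, check=True)
    elapsed = time.time() - start
    after = [read_energy_uj(z) for z in zones]
    energy_j = sum(a - b for a, b in zip(after, before)) / 1e6
    return elapsed, energy_j

if __name__ == "__main__":
    workload = ["./membench"]  # placeholder memory-intensive workload
    # Local placement: keep threads and their memory on NUMA node 0.
    local = ["numactl", "--cpunodebind=0", "--membind=0"] + workload
    # Interleaved placement: spread pages across all nodes.
    spread = ["numactl", "--interleave=all"] + workload
    for label, cmd in [("local", local), ("interleave", spread)]:
        t, e = run_and_measure(cmd)
        print(f"{label}: {t:.1f} s, {e:.1f} J DRAM, {e / t:.1f} W avg")
```

Comparing joules per run (or average watts) between the local and interleaved placements gives a rough picture of how NUMA placement affects memory energy efficiency.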


Electronics, 2021, Vol. 10 (12), pp. 1471
Author(s): Jun-Yeong Lee, Moon-Hyun Kim, Syed Asif Raza Shah, Sang-Un Ahn, Heejun Yoon, ...

Data are important and ever-growing in data-intensive scientific environments, and such research data growth requires storage systems that play pivotal roles in data management and analysis for scientific discovery. Redundant Array of Independent Disks (RAID), a well-known storage technology that combines multiple disks into a single large logical volume, has been widely used for data redundancy and performance improvement. However, it requires RAID-capable hardware or software to build up a RAID-enabled disk array, and it is difficult to scale up RAID-based storage. To mitigate these problems, many distributed file systems have been developed and are actively used in various environments, especially in data-intensive computing facilities where tremendous amounts of data have to be handled. In this study, we investigated and benchmarked various distributed file systems, such as Ceph, GlusterFS, Lustre and EOS, for data-intensive environments. In our experiment, we configured the distributed file systems in a Reliable Array of Independent Nodes (RAIN) structure and a Filesystem in Userspace (FUSE) environment. Our results identify the characteristics of each file system that affect read and write performance depending on the features of the data, which have to be considered in data-intensive computing environments.
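As an illustration of the kind of measurement involved, the following sketch times a sequential write and read of a single large file on a mounted file system; pointing it at a FUSE mount of Ceph, GlusterFS, Lustre or EOS yields a rough throughput figure. The mount path and sizes are placeholders, and this is not the benchmark suite used in the study.

```python
import os
import time

def sequential_write_read(path: str, size_mb: int = 1024, block_kb: int = 1024):
    """Write then read one large file and return (write, read) MB/s."""
    block = os.urandom(block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb

    start = time.time()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())        # make sure data reaches the storage backend
    write_mbps = size_mb / (time.time() - start)

    # Note: the read may be served from the local page cache; drop caches
    # between the phases for a cold-read figure.
    start = time.time()
    with open(path, "rb") as f:
        while f.read(block_kb * 1024):
            pass
    read_mbps = size_mb / (time.time() - start)

    os.remove(path)
    return write_mbps, read_mbps

if __name__ == "__main__":
    # Placeholder: a FUSE mount point of Ceph, GlusterFS, Lustre or EOS.
    w, r = sequential_write_read("/mnt/dfs/bench.dat", size_mb=1024)
    print(f"write {w:.0f} MB/s, read {r:.0f} MB/s")
```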


2014, Vol. 22 (2), pp. 173-185
Author(s): Eli Dart, Lauren Rotman, Brian Tierney, Mary Hester, Jason Zurawski

The ever-increasing scale of scientific data has become a significant challenge for researchers who rely on networks to interact with remote computing systems and to transfer results to collaborators worldwide. Despite the availability of high-capacity connections, scientists struggle with inadequate cyberinfrastructure that cripples data transfer performance and impedes scientific progress. The Science DMZ paradigm comprises a proven set of network design patterns that collectively address these problems for scientists. We explain the Science DMZ model, including its network architecture, system configuration, cybersecurity, and performance tools, which together create an optimized network environment for science. We describe use cases from universities, supercomputing centers, and research laboratories, highlighting the effectiveness of the Science DMZ model in diverse operational settings. In all, the Science DMZ model is a solid platform that supports any science workflow and flexibly accommodates emerging network technologies. As a result, the Science DMZ vastly improves collaboration and accelerates scientific discovery.
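As a small stand-in for the model's dedicated measurement tooling, the sketch below shells out to iperf3 to measure achievable TCP throughput toward a data transfer node and parses its JSON output; the host name is a placeholder, and this is only an illustration of path-throughput testing, not the paper's toolchain.

```python
import json
import subprocess

def measure_throughput(server: str, seconds: int = 10) -> float:
    """Run an iperf3 client test against a remote server, return Gbit/s received."""
    out = subprocess.run(
        ["iperf3", "-c", server, "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    )
    result = json.loads(out.stdout)
    bps = result["end"]["sum_received"]["bits_per_second"]
    return bps / 1e9

if __name__ == "__main__":
    # Placeholder host: a data transfer node (DTN) inside the Science DMZ.
    print(f"{measure_throughput('dtn.example.edu'):.2f} Gbit/s")
```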


2021
Author(s): Ramkumar Iyer, Zhichao Chen, PS Satyanarayana, Antara Bhattacharjee, Navneet Jha, ...

Author(s): Sanjay P. Ahuja, Neha Soni

Web 2.0 applications have become ubiquitous over the past few years because they provide useful features such as a rich, responsive graphical user interface that supports interactive and dynamic content. Social networking websites, blogs, auctions, online banking, online shopping and video sharing websites are noteworthy examples of Web 2.0 applications. The market for public cloud service providers is growing rapidly, and cloud providers offer an ever-growing list of services. As a result, developers and researchers find it challenging to decide which public cloud service to use for deploying, experimenting with or testing Web 2.0 applications. This study compares the scalability and performance of a social-events calendar application on two Infrastructure as a Service (IaaS) cloud services, Amazon EC2 and HP Cloud. For three different instance configurations on each cloud service, the study captures and compares metrics such as the number of concurrent users supported (load), as well as response time and throughput (performance). Additionally, the total price of the three instance configurations is calculated and compared for each cloud service. This comparison of scalability, performance and price gives developers and researchers insight into the characteristics of the three instance configurations on each cloud service, which simplifies the process of determining which cloud service and instance configuration to use for deploying their Web 2.0 applications. The study uses CloudStone, an open-source, three-tier web application benchmarking tool that simulates Web 2.0 application activities, as a realistic workload generator and to capture the intended metrics. The comparison of the collected metrics indicates that all of the tested Amazon EC2 instance configurations provide better scalability and lower latency at a lower cost than the respective HP Cloud instance configurations; however, the tested HP Cloud instance configurations provide greater storage capacity than the Amazon EC2 instance configurations, which is an important consideration for data-intensive Web 2.0 applications.
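CloudStone drives the application with scripted user behavior; as a stripped-down illustration of how load, response time and throughput relate, the sketch below simulates concurrent users with a thread pool against a hypothetical application endpoint. The URL and user counts are placeholders, and this is not the CloudStone workload itself.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def one_request(url: str) -> float:
    """Issue a single request and return its response time in seconds."""
    start = time.time()
    with urlopen(url) as resp:
        resp.read()
    return time.time() - start

def load_test(url: str, concurrent_users: int, requests_per_user: int):
    """Simulate N concurrent users and report latency and throughput."""
    started = time.time()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [
            pool.submit(one_request, url)
            for _ in range(concurrent_users * requests_per_user)
        ]
        latencies = [f.result() for f in futures]
    elapsed = time.time() - started
    return {
        "users": concurrent_users,
        "mean_latency_s": statistics.mean(latencies),
        "p95_latency_s": sorted(latencies)[int(0.95 * len(latencies)) - 1],
        "throughput_rps": len(latencies) / elapsed,
    }

if __name__ == "__main__":
    # Placeholder URL for the deployed social-events calendar application.
    for users in (10, 50, 100):
        print(load_test("http://example-app.cloud/events", users, 20))
```

Sweeping the number of simulated users while recording response time and requests per second is the basic shape of the scalability comparison described above.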


Author(s): Miguel Bordallo López

Computer vision can be used to increase the interactivity of existing and new camera-based applications, and to build novel interaction methods and user interfaces. The computing and sensing needs of this kind of application require a careful balance between quality and performance, a practical trade-off. This chapter shows the importance of using all the available resources to hide application latency and maximize computational throughput. The experience gained while developing interactive applications is used to characterize the constraints imposed by the mobile environment and to discuss the most important design goals: high performance and low power consumption. In addition, the chapter discusses the use of heterogeneous computing via asymmetric multiprocessing to improve the throughput and energy efficiency of interactive vision-based applications.
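A minimal, CPU-only sketch of the latency-hiding idea follows: frame capture is overlapped with processing through a small bounded queue, so the camera and the vision algorithm run concurrently rather than in lockstep. It does not show GPU or DSP offloading, which the chapter covers under heterogeneous computing; the frame source and the analysis function are stand-ins.

```python
import queue
import threading

def capture_frames(frame_source, q: queue.Queue):
    """Producer: grab camera frames while the consumer is still busy."""
    for frame in frame_source:
        q.put(frame)
    q.put(None)                         # sentinel: no more frames

def process_frames(q: queue.Queue, analyse):
    """Consumer: run the vision algorithm on whichever frame is ready."""
    while (frame := q.get()) is not None:
        analyse(frame)

if __name__ == "__main__":
    frames = range(100)                 # stand-in for a camera stream
    q = queue.Queue(maxsize=4)          # small buffer hides capture latency
    producer = threading.Thread(target=capture_frames, args=(frames, q))
    producer.start()
    process_frames(q, analyse=lambda f: sum(range(10_000)))  # dummy work
    producer.join()
```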


Sensors, 2020, Vol. 20 (17), pp. 4779
Author(s): Sorin Buzura, Bogdan Iancu, Vasile Dadarlat, Adrian Peculea, Emil Cebuc

Software-defined wireless sensor networking (SDWSN) is an emerging networking architecture that is envisioned to become the main enabler for the Internet of Things (IoT). In this architecture, the sensor plane is managed by a control plane; this separation facilitates network management and improves performance in dynamic environments. One of the main issues facing a sensor environment is the limited lifetime of network devices, driven by high levels of energy consumption. The current work proposes a system design that aims to improve energy efficiency in an SDWSN by combining the concepts of content awareness and adaptive data broadcast. The purpose is to increase the sensors' lifespan by reducing the number of data packets generated in the resource-constrained sensor plane of the network. The system takes a distributed management approach, with content awareness implemented at the level of each individual programmable sensor and adaptive data broadcast performed in the control plane. Several simulations were run on historical weather data, and the results show a significant decrease in network traffic. Compared with similar work in this area, which focuses on improving energy efficiency through complex algorithms for routing, clustering, or caching, the current proposal employs simple computing procedures on each network device with a high impact on overall network performance.
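A hedged sketch of the content-awareness idea at the sensor level is shown below: a reading is transmitted only when it differs from the last transmitted value by more than a threshold that the control plane can adapt at run time. The class, threshold and data values are illustrative and do not reproduce the authors' implementation.

```python
class ContentAwareSensor:
    """Suppress packets whose payload has not changed meaningfully.

    The control plane can tune `threshold` at run time to trade
    reporting fidelity against radio and energy usage.
    """

    def __init__(self, send_fn, threshold: float):
        self.send_fn = send_fn          # stand-in for the node's transmit call
        self.threshold = threshold      # minimum change worth transmitting
        self.last_sent = None

    def update_threshold(self, new_threshold: float):
        """Called when the controller adapts the broadcast policy."""
        self.threshold = new_threshold

    def report(self, reading: float) -> bool:
        """Transmit only when the reading changed enough to matter."""
        if self.last_sent is None or abs(reading - self.last_sent) >= self.threshold:
            self.send_fn(reading)
            self.last_sent = reading
            return True                 # packet generated
        return False                    # packet suppressed


if __name__ == "__main__":
    sent = []
    sensor = ContentAwareSensor(sent.append, threshold=0.5)
    # Temperatures drifting slowly: most samples are suppressed.
    for t in [20.0, 20.1, 20.2, 20.6, 20.7, 21.3]:
        sensor.report(t)
    print(sent)   # [20.0, 20.6, 21.3]
```

The same filter can be made adaptive by having the controller lower the threshold when fresher data are needed and raise it when battery levels drop, which is the role the control plane plays in the proposed design.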

