Energy Efficient Storage Management Cooperated with Large Data Intensive Applications

Author(s):  
Norifumi Nishikawa ◽  
Miyuki Nakano ◽  
Masaru Kitsuregawa

2013 ◽  
Vol 3 (1) ◽  
pp. 13-26 ◽  
Author(s):  
Sanjay P. Ahuja ◽  
Sindhu Mani

High Performance Computing (HPC) applications are scientific applications that demand significant CPU capability; they are also data-intensive, requiring large data storage. While many researchers have examined the performance of Amazon’s EC2 platform on various HPC benchmarks, an extensive comparison of Amazon’s EC2 and Microsoft’s Windows Azure on metrics such as memory bandwidth, I/O performance, and communication and computational performance is largely missing. The purpose of this paper is to use existing benchmarks to evaluate and analyze these metrics for EC2 and Windows Azure, which span both the Infrastructure-as-a-Service and Platform-as-a-Service models. This was accomplished by running MPI versions of the STREAM, Interleaved or Random (IOR), and NAS Parallel Benchmark (NPB) suites on small and medium instance types. In addition, the newer EC2 medium instance type (m1.medium) was included in the analysis. Together, these benchmarks measure memory bandwidth, I/O performance, and communication and computational performance.
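As an illustration of the first metric, the following minimal, single-process Python/NumPy sketch estimates memory bandwidth with a STREAM-triad-style kernel. It is not the MPI STREAM code used in the study; the array size, scalar, and byte accounting are illustrative only, and NumPy's temporaries make the reported figure approximate.

```python
# Minimal, single-process sketch of a STREAM-triad-style memory-bandwidth
# estimate (illustrative only; the study ran the MPI version of STREAM).
import time
import numpy as np

N = 20_000_000                 # array length chosen to exceed CPU caches
b = np.random.rand(N)
c = np.random.rand(N)
scalar = 3.0

start = time.perf_counter()
a = b + scalar * c             # triad kernel: a[i] = b[i] + scalar * c[i]
elapsed = time.perf_counter() - start

# STREAM convention counts three arrays of 8-byte doubles of traffic
# (read b, read c, write a); NumPy temporaries make this approximate.
bytes_moved = 3 * N * 8
print(f"Approximate triad bandwidth: {bytes_moved / elapsed / 1e9:.2f} GB/s")
```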


Author(s):  
Ioan Petri ◽  
Javier Diaz-Montes ◽  
Mengsong Zou ◽  
Ali Reza Zamani ◽  
Thomas H Beach ◽  
...  

Cloud computing has emerged as an attractive platform for running data-intensive applications. However, efficient computation of such workloads requires understanding how to store, process, and analyse large volumes of data in a timely manner. Many “smart cities” applications, for instance, examine how data from building sensors can be combined to support tasks such as emergency response and energy management. Transmitting sensor data to a cloud environment for processing provides a number of benefits, such as scalability and on-demand provisioning of computational resources. In this chapter, we propose a multi-layer cloud infrastructure that distributes processing across sensing nodes, multiple intermediate/gateway nodes, and large data centres. Our solution utilises the pervasive computational capabilities located at the edge of the infrastructure and along the data path to reduce data movement to large data centres located “deep” within the infrastructure and to make more efficient use of computing and network resources.
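The sketch below illustrates the edge-side idea in the simplest possible terms: a gateway node aggregates a window of raw sensor readings locally and forwards only a compact summary to the data centre, reducing data movement along the path. The function name, window contents, and alarm threshold are hypothetical; the chapter does not prescribe a specific aggregation policy.

```python
# Illustrative sketch (not the chapter's implementation) of gateway-side
# aggregation: summarise raw sensor readings before forwarding them.
from statistics import mean
from typing import Iterable


def summarise_window(readings: Iterable[float], alarm_threshold: float = 75.0) -> dict:
    """Reduce a window of raw sensor readings to a small summary record.

    `alarm_threshold` is a hypothetical parameter used only to flag readings
    that may need a fast response (e.g. emergency management).
    """
    values = list(readings)
    return {
        "count": len(values),
        "mean": mean(values),
        "max": max(values),
        "alarm": max(values) > alarm_threshold,
    }


# A gateway processing one window of building-sensor data:
raw_window = [68.2, 70.1, 69.5, 77.8, 71.0]   # e.g. temperature samples
summary = summarise_window(raw_window)
print(summary)  # only this summary, not the raw stream, travels to the data centre
```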


2015 ◽  
Vol 23 (6) ◽  
pp. 1005-1016 ◽  
Author(s):  
Somnath Paul ◽  
Aswin Krishna ◽  
Wenchao Qian ◽  
Robert Karam ◽  
Swarup Bhunia
