Advances in Systems Analysis, Software Engineering, and High Performance Computing: Emerging Research Surrounding Power Consumption and Performance Issues in Utility Computing

Published by IGI Global
ISBN 9781466688537, 9781466688544
Total documents: 20 · H-index: 1

Author(s):  
Robin Singh Bhadoria ◽  
Chandrakant Patil

In this chapter, the authors elaborate on how such a system could be accommodated using a mobile interface for delivering services to business as well as enterprise applications. Strategically, the problem in today's industry is the lack of an established framework for adopting change or effectively utilizing IT services in any enterprise with a mobile computing architecture. This chapter focuses on a Mobile Interface Architecture that demonstrates the ubiquitous nature of today's computing environment.


Author(s):  
Subhadarshini Mohanty ◽  
Prashant Kumar Patra ◽  
Subasish Mohapatra

Load balancing is one of the major issues in cloud computing; it helps achieve maximum resource utilization and user satisfaction. The mechanism transparently transfers load from heavily loaded to lightly loaded processes. In this chapter, the authors propose a hybrid technique for solving the task-assignment problem on a cloud platform. A PSO-based heuristic is developed to schedule random tasks across heterogeneous data centres. Variants of Particle Swarm Optimization (PSO) are also used, which give better results than standard PSO and other heuristics for load balancing in a cloud computing environment.
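The chapter does not reproduce its heuristic here, but the general shape of a PSO-based task scheduler can be sketched as follows. This is a minimal illustration, not the authors' algorithm: particle positions are continuous vectors, each dimension rounded to a VM index, and fitness is the makespan (completion time of the most loaded VM). The coefficient values and the `vm_speed` model are assumptions for the sketch.

```python
import random

def makespan(assignment, task_len, vm_speed):
    """Fitness: completion time of the most loaded VM (lower is better)."""
    load = [0.0] * len(vm_speed)
    for task, vm in enumerate(assignment):
        load[vm] += task_len[task] / vm_speed[vm]
    return max(load)

def pso_schedule(task_len, vm_speed, particles=20, iters=100, seed=1):
    rng = random.Random(seed)
    n_tasks, n_vms = len(task_len), len(vm_speed)
    # Continuous positions; dimension d is rounded to a VM index for task d.
    decode = lambda p: [min(n_vms - 1, max(0, int(round(x)))) for x in p]
    pos = [[rng.uniform(0, n_vms - 1) for _ in range(n_tasks)]
           for _ in range(particles)]
    vel = [[0.0] * n_tasks for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [makespan(decode(p), task_len, vm_speed) for p in pos]
    g = pbest_fit.index(min(pbest_fit))
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients (typical values)
    for _ in range(iters):
        for i in range(particles):
            for d in range(n_tasks):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fit = makespan(decode(pos[i]), task_len, vm_speed)
            if fit < pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], fit
                if fit < gbest_fit:
                    gbest, gbest_fit = pos[i][:], fit
    return decode(gbest), gbest_fit
```

Calling `pso_schedule([4.0, 2.0, 7.0, 3.0, 5.0], [1.0, 2.0])` returns a task-to-VM assignment and its makespan, which should beat placing every task on one VM.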


Author(s):  
Ram Prasad Mohanty ◽  
Ashok Kumar Turuk ◽  
Bibhudatta Sahoo

The growing number of cores increases the demand for a powerful memory subsystem, which leads to larger caches in multicore processors. Caches give the processing elements a faster, higher-bandwidth local memory to work with. In this chapter, an attempt is made to analyze the impact of cache size on the performance of multicore processors by varying the L1 and L2 cache sizes on a multicore processor with an internal network (MPIN), referenced from the Niagara architecture. As the number of cores increases, traditional on-chip interconnects such as the bus and crossbar prove to be inefficient and to scale poorly. To overcome the scalability and efficiency issues of these conventional interconnects, a ring-based design has been proposed. The effect of the interconnect on the performance of multicore processors is analyzed, and a novel scalable on-chip interconnection mechanism (INoC) for multicore processors is proposed. Benchmark results, obtained with a full-system simulator, show that with the proposed INoC the execution time is significantly reduced compared with the MPIN.
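One way to see why interconnect topology matters as core counts grow is to compare average hop counts. The sketch below is an illustration of the general scaling argument, not the chapter's INoC design: on a bidirectional ring, the average shortest path between cores grows with the core count, whereas a crossbar is always one hop (at the cost of wiring that grows quadratically).

```python
def avg_ring_hops(n):
    """Average shortest-path hop count between distinct cores on a
    bidirectional ring of n cores (a crossbar would always be 1 hop)."""
    total = pairs = 0
    for i in range(n):
        for j in range(i + 1, n):
            d = abs(i - j)
            total += min(d, n - d)  # go whichever way around the ring is shorter
            pairs += 1
    return total / pairs
```

For 4 cores the ring averages about 1.33 hops; at 16 cores the average latency has grown noticeably, which is why plain rings and buses are eventually replaced by richer on-chip networks.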


Author(s):  
Mainak Adhikari ◽  
Sukhendu Kar

A graphics processing unit (GPU) typically handles computation only for computer graphics, but any GPU providing a functionally complete set of operations on arbitrary bits can compute any computable value. Additionally, the use of multiple graphics cards in one computer, or large numbers of graphics chips, further parallelizes the already parallel nature of graphics processing. CUDA (Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented on its graphics processing units (GPUs). CUDA gives program developers direct access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs. This chapter first discusses some features and challenges of GPU programming and the effort to address some of those challenges when building and running GPU programs in a high performance computing (HPC) environment. Finally, the chapter points out the importance and standards of the CUDA architecture.
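The core idea of the CUDA programming model is that the programmer writes a kernel, the body of work for one thread, and the hardware runs it for thousands of thread indices at once. A rough Python sketch of that model (sequential here, purely for illustration; real CUDA launches the kernel across a grid of blocks in parallel) for the classic SAXPY operation:

```python
def saxpy_kernel(tid, a, x, y, out):
    """Body executed once per 'thread', as a CUDA kernel is per thread index."""
    if tid < len(x):          # bounds check, just as a real CUDA kernel guards
        out[tid] = a * x[tid] + y[tid]

def launch(kernel, n_threads, *args):
    """Sequential stand-in for a CUDA grid launch: kernel<<<blocks, threads>>>(...)."""
    for tid in range(n_threads):
        kernel(tid, *args)
```

Launching `saxpy_kernel` over 4 "threads" on 3-element inputs computes `out[i] = a*x[i] + y[i]` for each valid index and skips the out-of-range thread, mirroring how CUDA grids are usually rounded up past the data size.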


Author(s):  
Kuldeep Singh Jadon ◽  
Praveen Mudgal ◽  
Robin Singh Bhadoria

In this modern era of computing, we are surrounded, directly or indirectly, by computer resources and services: several programming languages, different database management systems such as RDBMSs, the compilers and editors each language requires, and, most importantly, storage, whether in the form of primary or secondary space. Industries such as banking, health, and education are growing with a rapidly increasing demand for resources. Reducing the load of resource consumption while improving capacity and performance is therefore the major focus of this chapter. This can be achieved through policy-based assignment of resources and adaptive self-learning with virtualization of resources for optimization. Such approaches and methods help deliver quality of service with higher availability, greater performance, and improved recoverability.


Author(s):  
Mohd Omar ◽  
Khaleel Ahmad ◽  
M.A. Rizvi

In a world of virtualization, a large pool of images and descriptions is available to the modern world, served on demand from stored information, data centers, or the cloud to a large audience; at the same time, the rising number of images requires good tools to store and retrieve the data. Quick search and retrieval tools for these growing image collections are in high demand so that information can be retrieved quickly and accurately. Automated or computer-assisted classification, query, and retrieval methods are needed to access huge image databases, because such methods can overcome the high cost of manual classification and retrieval of relevant images. There is therefore scope for researchers to develop automated methods for indexing and retrieving images by texture, feature, and color.
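The simplest of the color-based retrieval methods the abstract alludes to is histogram matching: summarize each image as a quantized color histogram and rank candidates by histogram similarity. A minimal sketch, assuming images are given as lists of `(r, g, b)` tuples in 0–255 (real content-based retrieval systems combine color with texture and shape features):

```python
def color_histogram(pixels, bins=4):
    """Quantize each RGB channel into `bins` levels and count joint occurrences,
    normalized so the histogram sums to 1."""
    hist = [0] * (bins ** 3)
    for r, g, b in pixels:
        idx = ((r * bins // 256) * bins * bins
               + (g * bins // 256) * bins
               + (b * bins // 256))
        hist[idx] += 1
    total = len(pixels)
    return [h / total for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical color distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

An all-red image and an all-blue image score 0.0 against each other, while any image scores 1.0 against itself; a query is answered by ranking the database on this score.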


Author(s):  
Mahesh Satish Khadtare

This chapter deals with a performance analysis of a CUDA implementation of an image quality assessment tool based on the structural similarity index (SSI). Since it was first created at the University of Texas in 2002, the Structural SIMilarity (SSIM) image assessment algorithm has become a valuable tool for still image and video processing analysis. SSIM provided a big advance over MSE (Mean Square Error) and PSNR (Peak Signal to Noise Ratio) techniques because it aligns far more closely with the results that would be obtained with subjective testing. For objective image analysis, this new technique represents as significant an advancement over SSIM as the advancement that SSIM provided over PSNR. The method is computationally intensive, which poses issues wherever real-time quality assessment is desired. We develop a CUDA implementation of this technique that offers a speedup of approximately 30× on an Nvidia GTX 275 and 80× on a C2050 over an Intel single-core processor.
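The computation being accelerated follows the standard SSIM formula, SSIM(x, y) = ((2μxμy + C1)(2σxy + C2)) / ((μx² + μy² + C1)(σx² + σy² + C2)). The sketch below computes it over a whole image as a single window, which shows where the cost comes from but simplifies the published method; real implementations (and presumably the chapter's CUDA kernel) slide an 11×11 Gaussian-weighted window over the image, making the workload per pixel and thus highly parallel.

```python
def ssim_global(x, y, data_range=255.0):
    """Simplified single-window SSIM over flat pixel lists x and y.
    C1, C2 use the standard constants K1=0.01, K2=0.03."""
    n = len(x)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

An image compared with itself scores 1.0; any distortion lowers the covariance term and pushes the score down, which is what tracks subjective quality better than a raw pixel-error measure like MSE.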


Author(s):  
Mainak Adhikari ◽  
Debapriya Roy

Green computing considers the use of computers and related resources in an eco-friendly manner, such as implementing energy efficiency in servers, peripherals, etc. In recent years, companies in the computer industry have realized that going green is in their best interest, both in terms of public relations and reduced costs. The principle behind energy-efficient coding is to save power by getting software to make less use of the hardware, rather than continuing to run the same code on hardware that uses less power. This chapter first discusses the features, challenges, and impacts of green computing. Finally, the chapter points out the standards and recommendations of green computing with suitable examples.


Author(s):  
Mainak Adhikari ◽  
Aditi Das ◽  
Akash Mukherjee

Utility computing is envisioned to be the next generation of Information Technology (IT) evolution, depicting how the computing needs of users can be fulfilled in the future IT industry. Its analogy is drawn from the real world, where service providers maintain and supply utility services such as electrical power, gas, and water to consumers, and consumers pay providers based on their usage. The underlying design of utility computing is therefore a service provisioning model in which consumers pay providers for computing power only when they need it. This chapter first discusses some features, challenges, and impacts of utility computing. Finally, the chapter points out the importance, standards, and recommendations of utility computing on a cloud platform with a suitable example.
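The pay-per-use model described above can be made concrete with a toy usage meter. This is a hypothetical illustration, not a scheme from the chapter: resource names, rates, and consumer labels are all made up; the point is only that billing is driven by metered consumption rather than fixed capacity.

```python
class UsageMeter:
    """Minimal pay-per-use billing: the provider meters consumption per
    consumer and resource, and charges a per-unit rate, like a power utility."""

    def __init__(self, rates):
        self.rates = rates          # e.g. {"cpu_hours": 0.05, "gb_stored": 0.02}
        self.usage = {}             # (consumer, resource) -> amount consumed

    def record(self, consumer, resource, amount):
        key = (consumer, resource)
        self.usage[key] = self.usage.get(key, 0.0) + amount

    def bill(self, consumer):
        return sum(amount * self.rates[res]
                   for (c, res), amount in self.usage.items() if c == consumer)
```

A consumer who used 10 CPU-hours and stored 100 GB at the rates above owes 10 × 0.05 + 100 × 0.02 = 2.50, while a consumer who used nothing owes nothing, which is exactly the utility analogy.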


Author(s):  
Alexander Alling ◽  
Nathaniel R Powers ◽  
Tolga Soyata

Face recognition is a sophisticated problem requiring a significant commitment of computer resources. A modern GPU architecture provides a practical platform for performing face recognition in real time. The majority of the calculations of an eigenpicture implementation of face recognition are matrix multiplications. For this type of computation, a conventional computer GPU is capable of computing in tens of milliseconds data that a CPU requires thousands of milliseconds to process. In this chapter, we outline and examine the different components and computational requirements of a face recognition scheme implementing the Viola-Jones Face Detection Framework and an eigenpicture face recognition model. Face recognition can be separated into three distinct parts: face detection, eigenvector projection, and database search. For each, we provide a detailed explanation of the exact process along with an analysis of the computational requirements and scalability of the operation.
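The last two stages, eigenvector projection and database search, reduce to the matrix multiplications the abstract identifies as GPU-friendly. A minimal sketch under simplifying assumptions (faces as flat pixel lists, precomputed mean face and eigenvectors, plain Euclidean nearest-neighbour search; the chapter's pipeline also includes Viola-Jones detection, which is omitted here):

```python
def project(face, mean, eigvecs):
    """Project a mean-centred face onto each eigenvector; each weight is a
    dot product, so a batch of faces becomes one matrix multiplication."""
    centred = [p - m for p, m in zip(face, mean)]
    return [sum(c * e for c, e in zip(centred, vec)) for vec in eigvecs]

def nearest(weights, database):
    """Return the database label whose stored weight vector is closest
    (squared Euclidean distance) to the query weights."""
    def dist(w):
        return sum((a - b) ** 2 for a, b in zip(weights, w))
    return min(database, key=lambda label: dist(database[label]))
```

On a GPU the projection of many detected faces against many eigenvectors is a single dense matrix product, which is why the tens-versus-thousands-of-milliseconds gap quoted above appears.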

