Host Utilization Algorithm to Reduce Energy Consumption in Cloud Computing

The utilization of cloud computing data centers is growing rapidly to satisfy the large increase in demand for networking, High-Performance Computing (HPC), and storage resources for executing business and scientific applications. Virtual Machine (VM) consolidation migrates VMs so that fewer physical servers are needed, allowing more servers to be shut down or placed into low-power mode, which improves energy efficiency, operating cost, and CO2 emissions. A crucial step in VM consolidation is host overload detection, which attempts to predict whether a physical server is about to be oversubscribed with VMs. In contrast to earlier studies that used CPU utilization as the sole indicator of host overload, a recent study proposed a multiple-correlation host overload detection algorithm that takes several factors into account. This work introduces a better load balance model for the public cloud based on the concept of cloud partitioning, together with a switch mechanism that selects different strategies under different scenarios. The IP address is generally shared by a real server and the load balancer: the load balancer exposes an interface on that IP address, accepts request packets, and directs them to the selected servers. To improve efficiency in the public cloud environment, the algorithm employs game theory in the load balancing strategy.
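
The abstract does not spell out the multiple-correlation detector, so the sketch below only illustrates the general idea of judging host overload from several utilization metrics rather than CPU alone; the HostMetrics fields, weights, and threshold are illustrative assumptions, not values from the study.

```python
# Hypothetical sketch: flag a host as overloaded using several utilization
# metrics instead of CPU alone. Weights and threshold are illustrative only.

from dataclasses import dataclass

@dataclass
class HostMetrics:
    cpu: float      # CPU utilization, 0.0 - 1.0
    memory: float   # memory utilization, 0.0 - 1.0
    net_bw: float   # network bandwidth utilization, 0.0 - 1.0

def is_overloaded(m: HostMetrics,
                  weights=(0.5, 0.3, 0.2),
                  threshold=0.85) -> bool:
    """Return True when the weighted multi-metric load exceeds the threshold."""
    score = weights[0] * m.cpu + weights[1] * m.memory + weights[2] * m.net_bw
    return score > threshold

# Example: a host busy on CPU and memory but idle on the network.
print(is_overloaded(HostMetrics(cpu=0.95, memory=0.90, net_bw=0.10)))  # False here
```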

Author(s):  
Manoj Himmatrao Devare

Scientists, engineers, and researchers rely heavily on high-performance computing (HPC) services for executing energy, engineering, environmental science, weather, and life science simulations. The virtual machine (VM) or Docker-enabled HPC Cloud service provides the advantages of consolidation and support for multiple users in a public cloud environment. Adding a hypervisor on top of bare-metal hardware brings challenges such as computational overhead due to virtualization, especially in an HPC environment. This chapter discusses the challenges, solutions, and opportunities arising from input-output and VMM overheads, interconnection overheads, VM migration problems, and scalability problems in the HPC Cloud. It portrays the HPC Cloud as a highly complex distributed environment comprising heterogeneous architectures with different processor designs and interconnection techniques, and the problems of shared-memory, distributed-memory, and hybrid architectures in distributed computing, such as resilience, scalability, checkpointing, and fault tolerance.


Sensors
2020
Vol 20 (24)
pp. 7342
Author(s):  
Bhavya Alankar
Gaurav Sharma
Harleen Kaur
Raul Valverde
Victor Chang

Cloud computing has emerged as the primary choice for developers in developing applications that require high-performance computing. Virtualization technology has helped in the distribution of resources to multiple users. Increased use of cloud infrastructure has led to the challenge of developing a load balancing mechanism to provide optimized use of resources and better performance. Round robin and least connections load balancing algorithms have been developed to allocate user requests across a cluster of servers in the cloud in a time-bound manner. In this paper, we have applied the round robin and least connections approaches of load balancing to HAProxy, virtual machine clusters, and web servers. The experimental results are visualized and summarized using Apache JMeter, and a further comparative study of round robin and least connections is also presented. The experimental setup and results show that the round robin algorithm performs better than the least connections algorithm on all load balancer metrics measured in this paper.
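
Round robin and least connections are standard HAProxy balancing policies (balance roundrobin and balance leastconn). Purely as an illustration of how the two policies differ in choosing a backend, here is a minimal Python sketch; the server names and connection counts are hypothetical.

```python
import itertools

servers = ["web1", "web2", "web3"]                       # hypothetical backend pool
active_connections = {"web1": 4, "web2": 1, "web3": 7}   # illustrative current state

# Round robin: cycle through the backends in a fixed order,
# ignoring how busy each one currently is.
_rr = itertools.cycle(servers)
def round_robin() -> str:
    return next(_rr)

# Least connections: always pick the backend that currently has
# the fewest active connections.
def least_connections() -> str:
    return min(servers, key=lambda s: active_connections[s])

if __name__ == "__main__":
    print([round_robin() for _ in range(4)])   # ['web1', 'web2', 'web3', 'web1']
    print(least_connections())                 # 'web2'
```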


2018
Author(s):  
Gabriel B. Moro
Lucas Mello Schnorr

Performance and energy consumption are fundamental requirements in computer systems. A frequent challenge is to combine both aspects, maintaining high computing performance while consuming less energy. There are many techniques to reduce energy consumption, but in general they rely on features of modern processors or require specific knowledge about the application and platform used. In this paper, we propose a library that dynamically changes the processor frequency according to the application's computing behavior, using a prior analysis of its Memory-Bound regions. The results show a reduction of 1.89% in energy consumption for the LULESH application with an increase of 0.09% in runtime when we compare our approach against the ondemand governor of the Linux operating system.
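
The paper's library adjusts the frequency automatically based on a prior analysis of Memory-Bound regions. Purely as an illustration of the underlying mechanism, the sketch below caps the CPU frequency around a region using the Linux cpufreq sysfs interface; it assumes root privileges, and the target frequency and run_memory_bound_kernel call are illustrative placeholders, not part of the paper's library.

```python
# Minimal sketch of the idea: lower the CPU frequency cap while a
# memory-bound region executes, then restore it afterwards.
# Assumes the Linux cpufreq sysfs interface and permission to write to it.

from contextlib import contextmanager
from pathlib import Path

CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def _read(name: str) -> str:
    return (CPUFREQ / name).read_text().strip()

def _write(name: str, value: str) -> None:
    (CPUFREQ / name).write_text(value)

@contextmanager
def reduced_frequency(target_khz: int):
    """Temporarily cap cpu0's maximum frequency around a memory-bound region."""
    original = _read("scaling_max_freq")
    _write("scaling_max_freq", str(target_khz))
    try:
        yield
    finally:
        _write("scaling_max_freq", original)

# Hypothetical usage around a memory-bound loop:
# with reduced_frequency(1_200_000):   # 1.2 GHz, illustrative value
#     run_memory_bound_kernel()
```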


Author(s):  
Mark H. Ellisman

The increased availability of High Performance Computing and Communications (HPCC) offers scientists and students the potential for effective remote interactive use of centralized, specialized, and expensive instrumentation and computers. Examples of instruments capable of remote operation that may be usefully controlled from a distance are increasing. Some in current use include telescopes, networks of remote geophysical sensing devices, and, more recently, the intermediate high-voltage electron microscope developed at the San Diego Microscopy and Imaging Resource (SDMIR) in La Jolla. In this presentation the imaging capabilities of a specially designed JEOL 4000EX IVEM will be described. This instrument was developed mainly to facilitate the extraction of 3-dimensional information from thick sections. In addition, progress will be described on a project now underway to develop a more advanced version of the Telemicroscopy software we previously demonstrated as a tool for providing remote access to this IVEM (Mercurio et al., 1992; Fan et al., 1992).


MRS Bulletin
1997
Vol 22 (10)
pp. 5-6
Author(s):  
Horst D. Simon

Recent events in the high-performance computing industry have concerned scientists and the general public regarding a crisis or a lack of leadership in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s—Thinking Machines and Kendall Square Research—were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later. During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten if they were to run on the next-generation, highly parallel architecture. Scientists who are not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise. However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.

