Multi-criteria Checkpointing Strategies: Response-Time versus Resource Utilization

Author(s):  
Aurelien Bouteiller ◽  
Franck Cappello ◽  
Jack Dongarra ◽  
Amina Guermouche ◽  
Thomas Hérault ◽  
...


Author(s):
Minakshi Sharma ◽  
Rajneesh Kumar ◽  
Anurag Jain

Cloud load balancing is performed to sustain services in the cloud environment while meeting quality of service (QoS) parameters. An efficient load balancing algorithm should be based on better optimization of these QoS parameters, which results in efficient scheduling. Most existing load balancing algorithms consider either response time or resource utilization constraints, but an efficient algorithm must consider both perspectives: the user's and the cloud service provider's. This article presents a load balancing strategy that efficiently allocates tasks to virtualized resources to achieve maximum resource utilization with minimum response time. The proposed approach, join minimum loaded queue (JMLQ), is based on the existing join idle queue (JIQ) model, modified by replacing the idle servers in the I-queues with servers that have exactly one task in their execution list. Simulation results in CloudSim verify that the proposed approach efficiently maximizes resource utilization while reducing response time in comparison to its other variants.
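
The abstract's modification to JIQ can be pictured with a short sketch. The following Python fragment is a minimal illustration, not the paper's CloudSim implementation; the `Server` and `JMLQDispatcher` names and the fallback to the globally least-loaded server are assumptions made for the example.

```python
from collections import deque

class Server:
    """Toy server model: tracks how many tasks are in its execution list."""
    def __init__(self, sid):
        self.sid = sid
        self.active_tasks = 0

class JMLQDispatcher:
    """Join-minimum-loaded-queue sketch: the I-queue holds servers with one
    task in execution, rather than idle servers as in plain JIQ."""
    def __init__(self, servers):
        self.servers = servers
        self.i_queue = deque()

    def report(self, server):
        # A server reports itself when its load drops to a single task.
        if server.active_tasks == 1:
            self.i_queue.append(server)

    def assign(self):
        # Prefer a lightly loaded server from the I-queue; otherwise fall
        # back to the globally least-loaded server (an assumption here).
        while self.i_queue:
            s = self.i_queue.popleft()
            if s.active_tasks <= 1:
                s.active_tasks += 1
                return s
        s = min(self.servers, key=lambda x: x.active_tasks)
        s.active_tasks += 1
        return s

servers = [Server(i) for i in range(4)]
dispatcher = JMLQDispatcher(servers)
for _ in range(10):
    dispatcher.assign()
print([s.active_tasks for s in servers])  # loads stay roughly even
```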


Author(s):  
Sakshi Chhabra ◽  
Ashutosh Kumar Singh

The cloud datacenter has numerous hosts as well as application requests, and its resources are dynamic. The demands placed on resource allocation are diverse. These factors can lead to load imbalances that hurt scheduling efficiency and resource utilization. A scheduling method called Dynamic Resource Allocation for Load Balancing (DRALB) is proposed. The proposed solution consists of two steps: first, the load manager analyzes resource requirements such as CPU, memory, energy, and bandwidth usage and allocates an appropriate number of VMs for each application; second, the resource information is collected and updated, and resources are sorted into four queues according to their loads, i.e., CPU intensive, memory intensive, energy intensive, and bandwidth intensive. We demonstrate that SLA-aware scheduling not only serves cloud consumers through resource availability and improved throughput and response time, but also maximizes cloud profits through lower resource utilization and fewer SLA (Service Level Agreement) violation penalties. The method is based on the diversity of clients' applications and on searching for the optimal resources for a particular deployment. Experiments were carried out on the following parameters: average response time, resource utilization, SLA violation rate, and load balancing. The experimental results demonstrate that the method reduces the wastage of resources and cuts network traffic by up to 44.89% and 58.49%.
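
Step two of the scheme, sorting workloads into four intensity queues, can be sketched briefly. This is a hypothetical illustration of the idea, not the paper's code; the request tuples and the dominant-demand rule are assumptions.

```python
from collections import defaultdict

# Hypothetical requests: (app_id, cpu, memory, energy, bandwidth) demands,
# each normalized to [0, 1]; the records are made up for illustration.
requests = [
    ("app1", 0.8, 0.2, 0.1, 0.3),
    ("app2", 0.1, 0.9, 0.2, 0.2),
    ("app3", 0.2, 0.3, 0.7, 0.1),
]

def classify(requests):
    """Sort requests into four queues by their dominant resource demand."""
    labels = ("cpu", "memory", "energy", "bandwidth")
    queues = defaultdict(list)
    for app, *demands in requests:
        dominant = labels[max(range(4), key=lambda i: demands[i])]
        queues[dominant].append(app)
    return queues

# app1 lands in the CPU queue, app2 in memory, app3 in energy.
print(dict(classify(requests)))
```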


2021 ◽  
Vol 2111 (1) ◽  
pp. 012054
Author(s):  
M.A. Hamid ◽  
S.A. Rahman ◽  
I.A. Darmawan ◽  
M. Fatkhurrokhman ◽  
M. Nurtanto

Abstract Testing of the performance efficiency aspect was carried out to evaluate the Unity 3D and Blender-based virtual laboratory media used during the COVID-19 pandemic at the Electrical Engineering Vocational Laboratory. The test evaluates the performance of the media that has been created; the aspects tested are access speed, process speed, and simulation speed when run. Tests measured processor and memory consumption through real-time monitoring with MSI Afterburner and were divided into two stages: time behavior and resource utilization. Time behavior concerns how long the media or software takes to respond to an action from a given function. Resource utilization is the degree to which the software uses resources when performing its work under stated conditions.
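
The two stages map naturally onto a small measurement harness. The sketch below is a rough analogue in Python, assuming the third-party `psutil` package; it is not the authors' MSI Afterburner setup, and the sampled action is a stand-in.

```python
import time
import psutil  # third-party: pip install psutil

def time_behavior(action, *args):
    """Time-behavior stage: how long an action takes to respond."""
    start = time.perf_counter()
    result = action(*args)
    return result, time.perf_counter() - start

def resource_utilization(interval=1.0):
    """Resource-utilization stage: one sample of CPU and memory usage,
    analogous to one tick of real-time monitoring."""
    cpu = psutil.cpu_percent(interval=interval)
    mem = psutil.virtual_memory().percent
    return cpu, mem

_, elapsed = time_behavior(sum, range(10**6))  # stand-in for a media action
cpu, mem = resource_utilization()
print(f"action: {elapsed:.4f} s; CPU {cpu:.1f}%, RAM {mem:.1f}%")
```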


Author(s):  
Yi Cheng Ren ◽  
Junichi Suzuki ◽  
Shingo Omura ◽  
Ryuichi Hosoya

This paper proposes and evaluates a multi-objective evolutionary game theoretic framework for adaptive and stable application deployment in clouds that support dynamic voltage and frequency scaling (DVFS) for CPUs. The proposed algorithm, called AGEGT, aids cloud operators in adapting resource allocation to applications and their locations according to the operational conditions in a cloud (e.g., workload and resource availability) with respect to multiple conflicting objectives such as response time, resource utilization, and power consumption. In AGEGT, evolutionary multi-objective games are performed on application deployment strategies (i.e., solution candidates) with the aid of guided local search. AGEGT theoretically guarantees that each application performs an evolutionarily stable deployment strategy, which is an equilibrium solution under given operational conditions. Simulation results verify this theoretical analysis: applications seek equilibria and perform adaptive and evolutionarily stable deployment strategies. AGEGT allows applications to successfully leverage DVFS to balance their response time, resource utilization, and power consumption. AGEGT gains further performance improvement via guided local search and outperforms existing heuristics such as the first-fit and best-fit algorithms (FFA and BFA) as well as NSGA-II.
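
At the core of any such multi-objective comparison is a Pareto-dominance test over the conflicting objectives. The sketch below illustrates only that generic test, not AGEGT's evolutionary game mechanics; the objective tuple layout and the candidate values are invented for the example.

```python
def dominates(a, b):
    """True if a is no worse than b in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Each candidate deployment strategy scored as
# (response_time_ms, 1 - resource_utilization, power_watts).
candidates = [(120, 0.35, 90), (100, 0.40, 95), (100, 0.30, 85)]
pareto = [c for c in candidates
          if not any(dominates(o, c) for o in candidates if o != c)]
print(pareto)  # the non-dominated deployment strategies
```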


2019 ◽  
Vol 8 (2S11) ◽  
pp. 4071-4075

Cloud computing is defined as resources that can be delivered to, or accessed by, a local host from a remote server via the internet. Cloud providers typically use a "pay-as-you-go" model. The evolution of cloud computing has shaped the modern computing environment thanks to the abundance and advancement of computing and communication infrastructure. As user requests arrive and system responses are generated, load is assigned across the cloud, and individual servers may become over- or under-loaded. Heavy load creates power consumption and energy management problems, which can cause system failures and data loss. An efficient load balancing method is therefore essential. The objective of this work is to develop a metaheuristic load balancing algorithm that migrates load across multiple servers, with machine learning techniques used to increase cloud resource utilization and minimize the makespan of tasks. Using an unsupervised machine learning technique, the correct response time and waiting time of the servers can be predicted from prior knowledge about the virtual machines and their clusters. This work also measures the accuracy of the Round-Robin load balancing algorithm and compares it with the proposed algorithm. As a result, response time and waiting time are minimized, resource utilization is increased, and task makespan is reduced.
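
One plausible reading of the unsupervised step is to cluster VMs by their load metrics and route new tasks to the lighter cluster. The sketch below uses scikit-learn's k-means as a stand-in; the feature choice (CPU load, queue length) and all the numbers are assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans  # third-party: pip install scikit-learn

# Hypothetical per-VM features: (cpu_load, queue_length).
vm_metrics = np.array([
    [0.9, 12], [0.8, 10], [0.2, 1], [0.3, 2], [0.5, 5], [0.1, 0],
])

# Cluster VMs into heavy and light groups, then pick the cluster whose
# center has the shorter queue, so expected waiting time stays low.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vm_metrics)
light = min(range(2), key=lambda c: km.cluster_centers_[c][1])
eligible = np.where(km.labels_ == light)[0]
print("route new tasks to VMs:", eligible.tolist())
```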


2016 ◽  
Vol 15 (4) ◽  
pp. 6681-6685
Author(s):  
Parveen Kaur ◽  
Monika Sachdeva

Nowadays every organization is migrating towards cloud computing, as it is considered more flexible and scalable than other technologies. The technology simply means providing computing resources and services through a network. This paper discusses existing scheduling algorithms that can maintain load balancing and presents improved strategies through efficient job scheduling and modified resource allocation techniques. The load can be CPU load, memory capacity, delay, or network load. Load balancing is the process of distributing the load among the nodes of a distributed system to improve both resource utilization and job response time, while avoiding situations where some nodes are heavily loaded and others are idle or doing very little work. Load balancing ensures that every processor in the system, or every node in the network, does approximately the same amount of work at any instant of time.
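
The definition can be made concrete with a toy example. The following sketch, with made-up node names and job costs, assigns each job to the currently least-loaded node and then reports how evenly the load ended up spread.

```python
import statistics

loads = {"node-a": 0, "node-b": 0, "node-c": 0}  # hypothetical nodes
jobs = [5, 3, 8, 2, 7, 4, 6, 1]                  # job costs, arbitrary units

for cost in jobs:
    target = min(loads, key=loads.get)  # pick the least-loaded node
    loads[target] += cost

print(loads)
print("load spread (stdev):", statistics.pstdev(loads.values()))
```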


Author(s):  
Deni Marta ◽  
M. Angga Eka Putra ◽  
Guntoro Barovih

Cloud computing provides convenience and comfort in every service. Infrastructure as a Service is one of the cloud computing services chosen by many users, so it is very important to know the performance of each available platform in order to get the maximum result for a given need. This study tested three cloud computing virtualization platforms, VMware ESXi, XenServer, and Proxmox, using action research methods. Performance measurements were analyzed and compared against minimum and maximum limits. The tested indicators were response time, throughput, and resource utilization, as a comparison of server virtualization performance. In the resource utilization test during operating-system installation, the Proxmox platform showed the lowest CPU usage at 10.72% and the lowest RAM usage at 53.32%. In the idle-state resource utilization test, Proxmox again showed the lowest CPU usage at 5.78%, while VMware ESXi showed the lowest RAM usage at 57.25%. On average, the resource utilization tests indicate that the Proxmox platform is better. In the throughput tests, XenServer performed best on upload at 1.37 MB/s, while VMware ESXi performed best on download at 1.39 MB/s. In the response time test, VMware ESXi was fastest at 0.180 s.
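
For reference, a single response-time and throughput probe of the kind used to rank the platforms might look like the sketch below; the `probe` helper and the placeholder address are illustrative, not part of the study.

```python
import time
import urllib.request

def probe(url, timeout=10):
    """One download probe: returns (response time in s, throughput in MB/s),
    mirroring two of the three indicators tested above."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        data = resp.read()
    elapsed = time.perf_counter() - start
    return elapsed, len(data) / (1024 * 1024) / elapsed

# Example call against a VM under test (placeholder address):
# rt, mbps = probe("http://192.0.2.10/testfile.bin")
```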


Computers ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 138
Author(s):  
Armin Lawi ◽  
Benny L. E. Panggabean ◽  
Takaichi Yoshida

Currently, most middleware application developers have two choices when designing or implementing Application Programming Interface (API) services: they can either stick with Representational State Transfer (REST) or explore the emerging GraphQL technology. Although REST is widely regarded as the standard method for API development, GraphQL is believed to be revolutionary in overcoming the main drawbacks of REST, especially data-fetching issues. Nevertheless, doubts remain, as there have been no investigations with convincing results evaluating the performance of the two services. This paper proposes a new research methodology to evaluate the performance of REST and GraphQL API services, with two main novelties. First, the two services are evaluated on a real, ongoing management information system, where massive and intensive query transactions take place on a complex database with many relationships. Second, fair and independent performance evaluation results are obtained by distributing client requests and synchronizing service responses over two virtually separated parallel execution paths, one for each API service. The performance evaluation used basic QoS (Quality of Service) measures: response time, throughput, CPU load, and memory usage. We use the term efficiency when comparing the evaluation results to capture differences in these performance measures. A statistical hypothesis test using a two-tailed paired t-test, together with boxplot visualization, confirms the significance of the comparison results. The results show that REST is still up to 50.50% faster in response time and 37.16% higher in throughput, while GraphQL is more efficient in resource utilization, by 37.26% for CPU load and 39.74% for memory. Therefore, GraphQL is the right choice when data requirements change frequently and resource utilization is the most important consideration; REST is preferable when certain data are frequently accessed by multiple requests.
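
The abstract's significance check, a two-tailed paired t-test over paired measurements of the two services, can be reproduced in a few lines with SciPy. The response-time samples below are invented for illustration; only the test itself reflects the paper's method.

```python
from scipy import stats  # third-party: pip install scipy

# Hypothetical paired response-time samples (ms) for the same request
# mix sent to both API services.
rest_ms    = [41, 38, 45, 40, 39, 44, 42, 37]
graphql_ms = [63, 58, 70, 61, 59, 66, 64, 55]

# ttest_rel is two-tailed by default, matching the abstract's test.
t_stat, p_value = stats.ttest_rel(rest_ms, graphql_ms)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value means the response-time gap is unlikely to be noise.
```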

