Optimization Approach for Resource Allocation on Cloud Computing for IoT

2016 · Vol 12 (3) · pp. 3479247
Author(s): Yeongho Choi, Yujin Lim

Author(s): Meenakshi Garg, Amandeep Kaur, Gaurav Dhiman

In cloud computing systems, existing works do not address database failure rates and recovery techniques. In this chapter, a priority-based resource allocation and scheduling technique is proposed using the spotted hyena optimizer (SHO), a metaheuristic optimization approach. Initially, the emperor penguin optimizer predicts the workload of the user server and its resource requirements. The expected completion time of each server is estimated from this predicted workload. The resource activities are then classified according to deadline and asset criteria. Further, the employed servers are classified based on their workload and estimated completion time. The proposed approach is compared with existing resource utilization techniques in terms of the percentage of resources allocated, missed deadlines, and average server workload.
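
The abstract outlines a pipeline of workload prediction, completion-time estimation, and deadline-based classification. The Python sketch below illustrates that flow under stated assumptions: `Task`, `Server`, `expected_completion`, and `allocate` are hypothetical names, the completion-time estimate is a simple workload/capacity ratio, and the SHO metaheuristic search itself is not reproduced.

```python
# Illustrative sketch of priority-based allocation: predict per-server
# completion times and assign deadline-urgent tasks first. All names and
# the estimation formula are assumptions, not taken from the chapter.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline: float      # seconds until the task must finish
    demand: float        # predicted resource requirement (abstract units)

@dataclass
class Server:
    name: str
    capacity: float      # resource units processed per second
    workload: float = 0.0  # predicted demand already queued

def expected_completion(server: Server, task: Task) -> float:
    """Estimate completion time from predicted workload plus the new demand."""
    return (server.workload + task.demand) / server.capacity

def allocate(tasks: list[Task], servers: list[Server]) -> dict[str, str]:
    """Assign the most urgent tasks first, each to the server with the
    earliest expected completion time."""
    assignment = {}
    for task in sorted(tasks, key=lambda t: t.deadline):  # urgent first
        best = min(servers, key=lambda s: expected_completion(s, task))
        best.workload += task.demand
        assignment[task.name] = best.name
    return assignment

if __name__ == "__main__":
    tasks = [Task("t1", deadline=5.0, demand=4.0), Task("t2", deadline=2.0, demand=1.0)]
    servers = [Server("s1", capacity=2.0), Server("s2", capacity=1.0)]
    print(allocate(tasks, servers))  # -> {'t2': 's1', 't1': 's1'}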


Author(s): Gurpreet Singh, Manish Mahajan, Rajni Mohana

BACKGROUND: Cloud computing is regarded as an on-demand service that provides applications from the data center on a pay-per-use basis. Allocating resources appropriately to satisfy user needs requires an effective and reliable resource allocation method. Owing to rising user demand, resource allocation is now considered a complex and challenging task: when a physical machine is overloaded, virtual machines share its load by utilizing the physical machine's resources. Previous studies manage energy consumption and time poorly because they keep virtual machines on different servers in a powered-on state.
AIM AND OBJECTIVE: The main aim of this research work is to propose an effective resource allocation scheme for allocating virtual machines from an ad hoc sub-server that hosts the virtual machines.
EXECUTION MODEL: The research is executed in two stages: first, the virtual machines and the physical machine are located on the server, and subsequently the allocation is cross-validated. The Modified Best Fit Decreasing algorithm sorts the virtual machines, and Multi-Machine Job Scheduling places jobs on an appropriate host. An Artificial Neural Network acts as the classifier that allocates jobs to hosts. Service Level Agreement violation and energy consumption serve as the evaluation measures, and the results show a 37.7% reduction in energy consumption and a 15% improvement in Service Level Agreement violation.
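
For the placement step, here is a minimal sketch of classic Best Fit Decreasing bin-packing, the idea underlying the Modified Best Fit Decreasing algorithm named above; the paper's specific modification, the Multi-Machine Job Scheduling step, and the ANN classifier are not reproduced, and all names are illustrative.

```python
# Best Fit Decreasing sketch: sort VMs by demand (largest first), then
# place each VM on the open host with the least remaining capacity that
# still fits it; power on a new host only when nothing fits.
def best_fit_decreasing(vm_demands: list[float], host_capacity: float) -> list[list[float]]:
    hosts: list[float] = []            # remaining capacity per open host
    placement: list[list[float]] = []  # VM demands assigned to each host
    for demand in sorted(vm_demands, reverse=True):
        # candidate hosts that can still accommodate this VM
        fits = [i for i, free in enumerate(hosts) if free >= demand]
        if fits:
            i = min(fits, key=lambda i: hosts[i])  # tightest fit
            hosts[i] -= demand
            placement[i].append(demand)
        else:
            hosts.append(host_capacity - demand)   # power on a new host
            placement.append([demand])
    return placement

print(best_fit_decreasing([0.6, 0.3, 0.5, 0.2], host_capacity=1.0))
# -> [[0.6, 0.3], [0.5, 0.2]]: two hosts instead of four
```

Packing VMs onto as few powered-on hosts as possible is what drives the energy reduction the abstract reports.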


2020 · Vol 11 (1) · pp. 149
Author(s): Wu-Chun Chung, Tsung-Lin Wu, Yi-Hsuan Lee, Kuo-Chan Huang, Hung-Chang Hsiao, ...

Resource allocation is vital for improving system performance in big data processing. The resource demands of various applications can be heterogeneous in cloud computing. Therefore, a resource gap occurs when some resource capacities are exhausted while other resource capacities on the same server remain available. This phenomenon is more apparent when the computing resources are more heterogeneous. Previous resource-allocation algorithms paid limited attention to this situation: applied to a server with heterogeneous resources, they may cause considerable wastage of available but unused resources. To reduce this wastage, a resource-allocation algorithm for heterogeneous resources, called the minimizing resource gap (MRG) algorithm, is proposed in this study. MRG considers the gap between the resource usages of each server in the cloud and the resource demands of the various applications. When an application is launched, MRG calculates resource usage and allocates resources to the server with the minimized usage gap, reducing the amount of available but unused resources. To demonstrate its performance, the MRG algorithm was implemented in Apache Spark, with CPU- and memory-intensive applications of differing resource demands used as benchmarks. Experimental results demonstrated the superiority of the proposed MRG approach, which improves system utilization and reduces the overall completion time by up to 24.7% for heterogeneous servers in cloud computing.
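
As a rough sketch of the selection rule just described: compute, for each candidate server, the spread between per-resource utilization ratios after a tentative placement, and pick the server with the smallest spread. The abstract does not give the exact MRG formula, so the `usage_gap` measure and the `pick_server` helper below are illustrative assumptions, not the authors' implementation.

```python
# Gap-minimizing server selection over heterogeneous resources (CPU, RAM).
# The gap measure here (max - min utilization ratio) is an assumption.
def usage_gap(used: tuple, cap: tuple, demand: tuple) -> float:
    """Spread between per-resource utilization ratios after allocation;
    a large spread means one resource is nearly exhausted while another idles."""
    ratios = [(u + d) / c for u, d, c in zip(used, demand, cap)]
    if any(r > 1.0 for r in ratios):
        return float("inf")  # demand does not fit on this server
    return max(ratios) - min(ratios)

def pick_server(servers: dict, demand: tuple) -> str:
    """servers maps name -> (used, capacity); returns the min-gap server."""
    return min(servers, key=lambda s: usage_gap(servers[s][0], servers[s][1], demand))

servers = {
    "s1": ((12, 8), (16, 64)),  # CPU nearly full, memory mostly idle
    "s2": ((4, 8), (16, 32)),   # both resources moderately used
}
print(pick_server(servers, demand=(2, 8)))  # -> "s2" (smaller utilization gap)
```

A small gap means CPU and memory fill up at similar rates on the chosen server, which is exactly the waste-avoiding behavior the abstract describes.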

