Resource Allocation in the Integration of IoT, Fog, and Cloud Computing: State-of-the-Art and Open Challenges

Author(s):  
Baseem Al-athwari ◽  
Hossain Md Azam
2022 ◽  
pp. 1-22
Author(s):  
Vhatkar Kapil Netaji ◽  
G.P. Bhole

The efficient allocation of resources in the cloud environment is vital, as it directly impacts versatility and operational expenses. Containers, a lightweight virtualization technology, are gaining popularity due to their portability and low overhead compared to traditional virtual machines. Resource allocation methodologies in the containerized cloud are intended to dynamically or statically assign the available pool of resources, such as CPU, memory, and disk, to users. Despite the enormous popularity of containers in cloud computing, no systematic survey of container scheduling techniques exists. This survey outlines the present works on resource allocation in the containerized cloud. In this work, 64 research papers are reviewed for a better understanding of resource allocation, management, and scheduling. To add further value, the collected papers are analyzed in terms of various performance measures. Along with this, the weaknesses of existing resource allocation algorithms are identified, pointing researchers toward novel algorithms and techniques.


Author(s):  
Gurpreet Singh ◽  
Manish Mahajan ◽  
Rajni Mohana

BACKGROUND: Cloud computing is considered an on-demand service that provisions data-center resources to applications on a pay-per-use basis. To allocate resources that satisfy user needs, an effective and reliable resource allocation method is required. Because of growing user demand, resource allocation is now considered a complex and challenging task: when a physical machine is overloaded, Virtual Machines share its load by utilizing the resources of other physical machines. Previous studies suffer from poor energy consumption and time management because Virtual Machines on different servers are kept in a turned-on state. AIM AND OBJECTIVE: The main aim of this research work is to propose an effective resource allocation scheme for allocating Virtual Machines from an ad hoc sub-server hosting Virtual Machines. EXECUTION MODEL: The research is carried out in two stages: first, the Virtual Machines and Physical Machines are placed on the server, and subsequently the allocation is cross-validated. The Modified Best Fit Decreasing algorithm is used to sort Virtual Machines, and Multi-Machine Job Scheduling is used during the placement of jobs onto an appropriate host. An Artificial Neural Network, used as a classifier, allocates jobs to the hosts. Measures such as Service Level Agreement violation and energy consumption are considered, and fruitful results are obtained, with a 37.7% reduction in energy consumption and a 15% improvement in Service Level Agreement violation.
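The following is a minimal sketch of a Modified Best Fit Decreasing placement heuristic of the kind named in the abstract, assuming a single CPU dimension and a simple linear power model; the names (Vm, Host, mbfd_place) and the cost metric are illustrative assumptions, not details taken from the paper, and the ANN-based classification stage is not shown.

```python
# Hedged sketch: simplified Modified Best Fit Decreasing (MBFD) placement.
# VMs are sorted by decreasing demand; each VM goes to the feasible host
# whose estimated power increase is smallest.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Vm:
    vm_id: str
    cpu_demand: float  # e.g., required MIPS

@dataclass
class Host:
    host_id: str
    cpu_capacity: float
    cpu_used: float = 0.0
    vms: List[Vm] = field(default_factory=list)

    def power_increase_if_placed(self, vm: Vm) -> float:
        # Assumed linear power model: cost grows with added utilization.
        before = self.cpu_used / self.cpu_capacity
        after = (self.cpu_used + vm.cpu_demand) / self.cpu_capacity
        return after - before

def mbfd_place(vms: List[Vm], hosts: List[Host]) -> dict:
    """Place each VM (largest first) on the host with the lowest added cost."""
    placement = {}
    for vm in sorted(vms, key=lambda v: v.cpu_demand, reverse=True):
        best: Optional[Host] = None
        best_cost = float("inf")
        for host in hosts:
            if host.cpu_used + vm.cpu_demand <= host.cpu_capacity:
                cost = host.power_increase_if_placed(vm)
                if cost < best_cost:
                    best, best_cost = host, cost
        if best is not None:
            best.cpu_used += vm.cpu_demand
            best.vms.append(vm)
            placement[vm.vm_id] = best.host_id
    return placement
```

Sorting in decreasing order of demand is what distinguishes this family of bin-packing heuristics: large VMs are placed while hosts still have contiguous headroom, which tends to reduce the number of powered-on machines.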


2020 ◽  
Vol 11 (1) ◽  
pp. 149
Author(s):  
Wu-Chun Chung ◽  
Tsung-Lin Wu ◽  
Yi-Hsuan Lee ◽  
Kuo-Chan Huang ◽  
Hung-Chang Hsiao ◽  
...  

Resource allocation is vital for improving system performance in big data processing. The resource demands of various applications in cloud computing can be heterogeneous. Therefore, a resource gap occurs when some resource capacities are exhausted while other resource capacities on the same server are still available. This phenomenon becomes more apparent as the computing resources become more heterogeneous. Previous resource-allocation algorithms paid limited attention to this situation; when such an algorithm is applied to a server with heterogeneous resources, the allocation may leave considerable resources available but unused. To reduce this wastage, a resource-allocation algorithm for heterogeneous resources, called the minimizing resource gap (MRG) algorithm, is proposed in this study. MRG considers the gap between resource usages on each server in cloud computing and the resource demands of the various applications. When an application is launched, MRG calculates the resulting resource usage and allocates resources to the server with the smallest usage gap, reducing the amount of available but unused resources. To demonstrate its performance, the MRG algorithm was implemented in Apache Spark, with CPU- and memory-intensive applications used as benchmarks with different resource demands. Experimental results demonstrated the superiority of the proposed MRG approach in improving system utilization, reducing the overall completion time by up to 24.7% for heterogeneous servers in cloud computing.
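Below is a minimal sketch of the gap-minimization idea described in the abstract, assuming two resource dimensions (CPU and memory) and using the absolute difference between post-placement CPU and memory utilization ratios as the gap metric; this metric and the names (Server, allocate_min_gap) are assumptions for illustration and may differ from the paper's exact formulation.

```python
# Hedged sketch: allocate an application to the feasible server whose
# post-placement CPU/memory utilization gap is smallest, so that one
# resource is not exhausted while the other sits idle.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Server:
    name: str
    cpu_total: float
    mem_total: float
    cpu_used: float = 0.0
    mem_used: float = 0.0

    def fits(self, cpu: float, mem: float) -> bool:
        return (self.cpu_used + cpu <= self.cpu_total and
                self.mem_used + mem <= self.mem_total)

    def gap_after(self, cpu: float, mem: float) -> float:
        # Assumed gap metric: imbalance between CPU and memory utilization.
        cpu_ratio = (self.cpu_used + cpu) / self.cpu_total
        mem_ratio = (self.mem_used + mem) / self.mem_total
        return abs(cpu_ratio - mem_ratio)

def allocate_min_gap(servers: List[Server], cpu: float, mem: float) -> Optional[Server]:
    """Pick the feasible server with the smallest resulting usage gap."""
    candidates = [s for s in servers if s.fits(cpu, mem)]
    if not candidates:
        return None
    chosen = min(candidates, key=lambda s: s.gap_after(cpu, mem))
    chosen.cpu_used += cpu
    chosen.mem_used += mem
    return chosen
```

The design intent is that a CPU-heavy application is steered toward a server whose memory is already relatively full (and vice versa), keeping the utilization of the two dimensions balanced and leaving less stranded capacity.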

